40 Comments
Mar 8, 2021 · Liked by Mark Saroufim

Great article, Mark.

I agree that machine learning has to reinvent itself somehow to advance past graduate student descent (love this one). Neural networks are just too good at solving a range of problems that weren’t possible a few years ago, so it’s hard to let go of something that keeps getting better (even if incrementally) and that keeps bringing loads of funding. Machine learning is just very funding-genic, which means good research papers get mixed in with a lot of impressive results that are just iterations of previous ones.

My impression is that most advances in machine learning nowadays come from combining non-interpretable algorithms like deep nets with expert-informed hard constraints to solve previously intractable problems. Making computational fluid dynamics simulators 10 times faster is an example. Discovering interpretable differential equations from simulations or experimental data is another. There’s a range of nonlinear coordinate transformation problems that could be solved with networks but haven’t been addressed yet because mathematicians think deep nets are too unreliable, which I understand. There’s also a lot of room in bridging physical scales with deep networks (coarse-graining, upscaling, downscaling, etc.). But, as you said, all of these are applications of ML rather than theory within ML. There’s a new field of physics-informed machine learning that is radically changing how we do science and motivating a lot of new problems in machine learning.
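
To make the second example concrete, here is a minimal sketch of what discovering an interpretable differential equation from data can look like, assuming a SINDy-style sparse regression over a hand-picked library of candidate terms (the data, library, and threshold here are illustrative assumptions, not anyone’s actual pipeline):

```python
# Minimal SINDy-style sketch: recover dx/dt = -2x from sampled data
# by linear regression over a small candidate library, then
# hard-thresholding small coefficients to keep the result sparse.
import numpy as np

t = np.linspace(0, 2, 200)
x = np.exp(-2 * t)            # samples from the true system dx/dt = -2x
dxdt = np.gradient(x, t)      # numerical estimate of the derivative

# Candidate library of terms the hidden equation might contain
library = np.column_stack([np.ones_like(x), x, x**2, x**3])
names = ["1", "x", "x^2", "x^3"]

# Least-squares fit, then zero out small coefficients (sparsity step;
# a full SINDy implementation would iterate fit-and-threshold)
coeffs, *_ = np.linalg.lstsq(library, dxdt, rcond=None)
coeffs[np.abs(coeffs) < 0.1] = 0.0

print("dx/dt =", " + ".join(f"{c:.2f}*{n}" for c, n in zip(coeffs, names) if c))
# Prints approximately: dx/dt = -2.00*x, an interpretable equation
```

The point is the output: a human-readable equation rather than a black-box predictor, which is exactly what makes these hybrid methods attractive to domain scientists.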

The root problem with graduate student descent is that machine learning solves ill-posed problems, so it’s sometimes hard to know when to stop trying. I wouldn’t say academics in general have descended into mere recycling of papers. It’s just that the quick advancements we’ve witnessed with deep nets in the last decade are tapering off. That’s normal with any new method: people get excited at first, it keeps getting better, until it doesn’t; but with ML it’s hard to know when it stops getting better because of all the benchmark overfitting you mentioned.

Ultimately, I think that ML should always go back to neuroscience and robotics for inspiration. Human intelligence has always been the motivation for machine intelligence. For example, humans are a reminder that intelligence comes with embodiment: a combination of actions and perceptions. I like the approach taken in embodied AI and artificial life: good old-fashioned scientists trying to figure out intelligence by bridging physics, neuroscience, and AI.

Anyway, keep writing!

Jan 14, 2021 · Liked by Mark Saroufim

nice shitpost, go back to /sci/

Mar 8, 2021 · Liked by Mark Saroufim

The dust of DL is starting to settle...

Currently I'm doing my MSc and have already had a taste of, as you call it, "stagnation". Only one in ten DL papers has any actual value. It's especially funny when you see layers upon layers of architecture and some convoluted loss function, and next to it an old classical ML method with better or similar results. But, yeah... something is definitely wrong, and people have gotten really lazy.

Jan 18, 2021 · Liked by Mark Saroufim

There's lots of insight in this post, and I would definitely want newcomers to the field to be acquainted with some of the realities behind the glitzy reputation. I would not, however, downplay field experts as mere masters of experimentation time. Insight and intuition are what make science move forward, and in that sense DL is no different; it's just that a lot of the perfectly reasonable intuition is wrong, and only a tiny fraction actually gains recognition as an advancement.

Jan 16, 2021 · Liked by Mark Saroufim

Great article - I particularly appreciated your open-minded approach to Keras and fastai.


Thank you, this was a joy to read.

Jan 14, 2021 · Liked by Mark Saroufim

As a game designer, my humble contribution to the AI space is a minimalist MOBA (Dota-like) game designed to be ML-friendly, incorporating advanced bot AIs as part of the meta-game.

https://github.com/amethyst/shotcaller

https://www.notion.so/erlendsh/Bots-AI-b59f2f75c5f34a7aae3edfe1de564c14

Jan 14, 2021 · Liked by Mark Saroufim

thank you Mark, enjoyed the read


Deep learning is alchemy, not science. Until we have a theory that explains how to interpret each and every node in a deep learning model, it will never be science. DL is like Damascus steel: it made beautiful swords, but no one understood why until physicists, chemists, and metallurgists used science to understand atoms.


Very enjoyable reading your thoughts so far. Noticed a small error in the 4th paragraph, last sentence: "where the likelihood of success if secondary". I think you meant "IS secondary".

Thank you and please keep up the good work.


Mark, how much longer are you going to refuse to reply to my January 30 post?


I see, Mark, that everyone appreciates your frankness and your open-mindedness, although you have just demonstrated that there is no artificial intelligence in machine learning, just computation, faster and faster. You regret that there is no innovation among these researchers, no emergence of new concepts.

But will you be open-minded if I tell you that artificial intelligence has been operational in France since the 1980s? Expert systems installed in companies, working by reasoning, capable of dialoguing in the Turing sense, and of programming without recourse to any programming language?

How many times have I approached researchers this way! They say, "oh yes, very interesting, tell me more". I tell them more and then, nothing! They no longer respond. When I managed (twice) to give them a demonstration via Skype, they were amazed to see that it is operational. One of them even wrote a rave report (Dean Horak). Two years later he has "forgotten" everything and declares that real AI does not exist.

Do you want to know (actually) more?

Jean-Philippe de Lespinay


And what if I tell you that such software exists, is a real disruptor, and will change the computing world for good? The water will run again!


Typo: "is often guilty of guilty of this"


Wow, you must be seriously bored. I am too - I don't blame you. Fucking pandemic. Serious shitpost tho. Got bored 1/3rd of the way through. Skimmed to the end to see if it got better: "Language models in particular like GPT-3 are starting to feel a lot like the Large Hadron Collider." The LHC is *amazing*. Yes, it's a multi-billion dollar monstrosity, but it's probing what reality *is* at the microscopic scale. And GPT-3? From how I've played with it (through my friend's API access), it's a pretty amazing language model. Sure, it has the sophistication of a 3yo, but a 3yo who's memorized Wikipedia and Google News over the past decade. Which, by itself, is pretty amazing.


@Vapnik Damn dude, chill. At least post your criticism in a way that can lead to a discussion.
