40 Comments
Mar 8, 2021 · Liked by Mark Saroufim

Great article, Mark.

I agree that machine learning has to reinvent itself somehow to advance past graduate student descent (love this one). Neural networks are just too good at solving a range of problems that were out of reach a few years ago, so it’s hard to let go of something that keeps getting better (even if incrementally) and that keeps bringing in loads of funding. Machine learning is just very funding-genic, which mixes good research papers with a lot of impressive results that are just iterations of previous ones.

My impression is that most advances in machine learning nowadays come from combining non-interpretable algorithms like deep nets with expert-informed hard constraints to solve previously intractable problems. Making computational fluid dynamics simulators 10 times faster is an example. Discovering interpretable differential equations from simulations or experimental data is another. There’s a range of nonlinear coordinate transformation problems that networks could solve but that haven’t been addressed yet, because mathematicians think deep nets are too unreliable, which I understand. There’s also a lot of room in bridging physical scales with deep networks (coarse-graining, upscaling, downscaling, etc.). But, as you said, all these are applications of ML rather than theory within ML. There’s a new field of physics-informed machine learning that is radically changing how we do science and motivating a lot of new problems in machine learning.
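
For concreteness, the expert-informed-constraint idea can be sketched in a few lines. This is a minimal, purely illustrative PyTorch example - the Burgers-style PDE, the network shape, and every name in it are assumptions for the sketch, not anything from this thread - where a network u(t, x) is penalized wherever it violates a known equation.

```python
import torch
import torch.nn as nn

# A network u(t, x) that should both fit data and satisfy a known PDE.
net = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)

def pde_residual(t, x, nu=0.01):
    # Residual of u_t + u * u_x - nu * u_xx (Burgers-like, chosen only
    # for illustration), computed with autograd instead of a mesh.
    u = net(torch.stack([t, x], dim=-1)).squeeze(-1)
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t + u * u_x - nu * u_xx

# Random collocation points where the PDE acts as a soft constraint.
t = torch.rand(256, requires_grad=True)
x = torch.rand(256, requires_grad=True)
physics_loss = pde_residual(t, x).pow(2).mean()
# Training then minimizes data_loss + lam * physics_loss as usual.
```

The hard constraint is exactly the `physics_loss` term: the net is free to interpolate the data but pays a penalty wherever it breaks the equation.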

The root problem with graduate student descent is that machine learning solves ill-posed problems, so it’s sometimes hard to know when to stop trying. I wouldn’t say academics in general have descended into mere recycling of papers. It’s just that the rapid progress we’ve witnessed with deep nets over the last decade is tapering off. That’s normal with any new method: people get excited at first, it keeps getting better, until it doesn’t. But with ML it’s hard to know when it stops getting better because of all the benchmark overfitting you mentioned.

Ultimately, I think that ML should always go back to neuroscience and robotics for inspiration. Human intelligence has always been the motivation for machine intelligence. For example, humans are a reminder that intelligence comes with embodiment: a combination of actions and perceptions. I like the approach taken in embodied AI and artificial life: good old-fashioned scientists trying to understand intelligence by bridging physics, neuroscience, and AI.

Anyway, keep writing!

author

I really enjoyed reading this. Part of the reason I'm so excited about Julia is that it makes it easy to express physical problems and solve them with fewer parameters using ML. I think this is very exciting and potentially solves the scaling issues that naive reinforcement learning runs into. Especially when it comes to solving energy, physics, or biology problems, I see the intersection with ML staying fruitful and enlightening for a while longer.
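
For what it's worth, the "fewer parameters by encoding the physics" pattern this reply gestures at can be sketched roughly as follows. The comment is about Julia (e.g. the SciML ecosystem); this sketch uses PyTorch instead, and the damping term, network size, and synthetic data are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Gray-box dynamics: hard-code the physics you trust, learn only the residual.
residual = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def model_dynamics(x, k=1.0):
    return -k * x + residual(x)  # known linear damping + learned correction

def rollout(f, x0, dt=0.01, steps=50):
    # Plain Euler integration; a real setup would hand f to an ODE solver.
    xs = [x0]
    for _ in range(steps):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return torch.stack(xs)

# Synthetic "measurements" from a system with an extra nonlinear term.
x0 = torch.randn(8, 1)
with torch.no_grad():
    observed = rollout(lambda x: -x + 0.5 * torch.sin(x), x0)

# Fit the residual so simulated trajectories match the observations.
opt = torch.optim.Adam(residual.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = (rollout(model_dynamics, x0) - observed).pow(2).mean()
    loss.backward()
    opt.step()
```

Because the trusted term is written down directly, the network only has to capture the small unknown correction, which is where the parameter savings come from.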

Perhaps there's no real point in calling a stagnation early, since the switching cost in this case isn't that severe: most of the excess work is lost capital, attention, and libraries - nothing as severe as investing in the wrong energy infrastructure. People will adapt in due time.

Jan 14, 2021 · Liked by Mark Saroufim

nice shitpost, go back to /sci/

Mar 8, 2021 · Liked by Mark Saroufim

The dust of DL is starting to settle...

Currently I'm doing my MSc and have already had a taste of, as you call it, "stagnation". Only one in ten DL papers has any actual value. It's especially funny when you see layers upon layers of architecture and some convoluted loss function, and next to it an old classical ML method with similar or better results. But, yeah... Something is definitely wrong, and people have gotten really lazy.

Jan 18, 2021 · Liked by Mark Saroufim

There's lots of insight in this post, and I would definitely want newcomers to the field to be acquainted with some of the realities behind the glitzy reputation. I would not, however, downplay field experts as mere masters of experimentation time. Insight and intuition are what make science move forward, and in that sense DL is no different - it's just that a lot of the perfectly reasonable intuition is wrong, and only a tiny fraction actually gains recognition as an advancement.

Jan 16, 2021 · Liked by Mark Saroufim

Great article - I particularly appreciated your open-minded approach to Keras and fastai.


Thank you, this was a joy to read.

author

So was this comment - thank you!

Jan 14, 2021 · Liked by Mark Saroufim

As a game designer, my humble contribution to the AI space is a minimalist MOBA (Dota-like) game that is designed to be ML-friendly and incorporates advanced Bot AIs as part of the meta-game.

https://github.com/amethyst/shotcaller

https://www.notion.so/erlendsh/Bots-AI-b59f2f75c5f34a7aae3edfe1de564c14

author

I'm actually already a fan of the amethyst project - the simulator is often the bottleneck in RL, so having it be really fast is huge. I've messed around with Rust a bunch over the last few years but haven't been able to find a small useful side project I can just sink my teeth into. But I'm happy to help out if I can be useful in any way for game dev and Rust projects.

Jan 14, 2021 · Liked by Mark Saroufim

thank you Mark, enjoyed the read


Deep learning is alchemy, not science. Until we have a theory that explains how to interpret each and every node in a deep learning result, it will never be science. DL is like Damascus steel: it made beautiful swords, but no one understood why until physicists, chemists, and metallurgists used science to understand atoms.

author

If Deep Learning is alchemy why am I not a billionaire yet?


Why would you think alchemy would ever make you a billionaire? Chasing the philosopher's stone was a waste of time and talent. Today we understand via real science why that endeavor failed. I suspect that 100 years from now we will understand something similar about today's ML.


Very enjoyable reading your thoughts so far. Noticed a small error, 4th paragraph, last sentence: "where the likelihood of success if secondary". I think you meant "IS secondary".

Thank you and please keep up the good work.


Mark, how much longer are you going to refuse to reply to my January 30 post?

author

Hey Jean, I'm not refusing to answer - I got many responses to this post and it has been a struggle to answer all of them. The link you shared wasn't opening for me, so I'm not sure how I can help. In-person or phone calls will be a struggle for me, but I'm happy to chat here.


Why don't you tell me honestly that you will never respond to my message of February 10th?


Mark, I'm still waiting for your answer! If what I wrote to you does not interest you, please tell me and I will stop bothering you. We will both save time.

Comment deleted

Phew! I'm glad to see you're not running away from me.

Indeed, my links, when they are PDFs, do not seem to display in Chrome. I don’t know why. But they work in Firefox, Edge, etc.

Here is the description by Dean Horak, a known AI researcher, on LinkedIn, after a demonstration I gave him via Skype:

https://www.tree-logic.com/articles/Analyse-IA-raisonnante-par-Dean-Horak_-09.2013.html

In fact, he is far from having understood everything, but this is the one case of a computer scientist who dared to test an AI made for non-computer scientists and speak well of it. He did not see, for example, that with this AI, which has been fully operational for a long time, anyone can program much faster and better than a team of computer scientists.

I got my Awards.Ai prize in a category created especially for me in 2017: “AI Achievement”, with the comment: “Tree Logic presents a computer technology, “La Maieutique”, which will drive world data processing into a new era: the era of the computer becoming “human”, communicative, intelligent and knowledge-hungry. Plus these key abilities we have been waiting for from it since its inception: helpful, never forgetting new knowledge, and user friendly".

This category subsequently disappeared because the competitors, that is to say 99.99% of the candidates, computer scientists unable to present an "achieved" AI, strongly protested!

Thanks to this prize, I had the right to publish an article about my invention on the Awards.Ai site: "A mass-market ARTIFICIAL INTELLIGENCE"

https://www.tree-logic.com/articles/awards.ai-article-IA-JP-revu-Michou-et-H%C3%A9lo%C3%AFse.html

But it received no comment and was silently withdrawn after a few months without any warning or explanation.

For 4 years no one in the US has ever talked about this award, and only two media outlets in France have. Computer and AI specialists keep this invention hidden from the public even though it is Kurzweil's Singularity! Computing and the planet's economy are therefore 30 years behind.

I know I'm talking to a computer scientist and this thread is read only by computer scientists but, sorry! I have to say this for the sake of mankind: the computer scientist is the worst enemy of AI and therefore of humans. Guess why.

Will you have the courage to break the omerta?


It took me two hours to manage to display my links here - I had to convert them all into HTML, and I am not a computer scientist!


I see, Mark, that everyone appreciates your frankness and your open-mindedness, although you have just demonstrated that there is no artificial intelligence in machine learning, just computation, faster and faster. You regret that there is no innovation among these researchers, no emergence of new concepts.

But will you be open-minded if I tell you that artificial intelligence has been operational in France since the 1980s? Expert systems installed in companies, working by reasoning, capable of dialogue in the Turing sense, and programming without recourse to any programming language?

How many times have I approached researchers this way! They say, "oh yes, very interesting, tell me more". I tell them more and then, nothing! They no longer respond. Twice I managed to give them a demonstration via Skype, and they were amazed to see that it is operational. One of them even wrote a rave report (Dean Horak). Two years later he has "forgotten" everything and declares that real AI does not exist.

Do you want to know (actually) more?

Jean-Philippe de Lespinay

author

This feels like the beginning of a good blog post - yeah, I'd love to learn more. I believe there are two biases at play: the first is that companies tend to converge on the same techniques, i.e. you're not going to get fired for running a new transformer, and the second is that the reason transformers are so popular in the first place is that they seem to outcompete a lot of the classical techniques. So if the second point is wrong, I'm most definitely curious.


I told you: "I tell them more and then, nothing! They no longer respond." That's what you just did. As soon as you discovered Reasoning Artificial Intelligence, which makes IT developers useless because it programs in real time, you refused to speak to me. I was wrong about "your frankness and open-mindedness".

Decidedly, there is an astonishing cowardice among computer scientists, and no desire to take into account the interests of humanity.


I have no more news from you - what's going on?


I received the Awards.Ai prize in 2017 in the category The Special Award for AI Achievement: "Tree Logic presents a computer technology, “La Maieutique”, which will drive world data processing into a new era: the era of the computer becoming “human”, communicative, intelligent and knowledge-hungry. Plus these key abilities we have been waiting for from it since its inception: helpful, never forgetting new knowledge, and user friendly."

This category was created especially for me that year and then disappeared. Awards.Ai no longer wants to reward AI Achievement because it drives IT people crazy!


I didn't quite understand your answer - the word "transformer", among others. Here is the report made by Dean Horak following a Skype demonstration of my reasoning AI: https://www.tree-logic.com/articles/Analyse%20IA%20raisonnante%20par%20Dean%20Horak,%2009.2013.pdf

I don't agree with everything he says, but the essentials are there.

We could talk by Skype, but not by phone, because I speak English very badly. Here is my nickname: Q2mitt (or Jean-Philippe de Lespinay).


And what if I tell you that such software exists, is a real disruptor, and will change the computing world for good? The water will run again!


Typo: "is often guilty of guilty of this"

author

fixed - thank you


wow, you must be seriously bored. I am too - I don't blame you. Fucking pandemic. Serious shitpost tho. Got bored 1/3rd of the way through. Skimmed to the end to see if it got better "Language models in particular like GPT-3 are starting to feel a lot like the Large Hadron Collider." The LHC is *amazing*. Yes, it's a multi-billion dollar monstrosity, but it's probing what reality *is* at the microscopic scale. And GPT-3? From how I've played with it (through my friend's API access), it's a pretty amazing language model. Sure, the sophistication of a 3yo, but a 3yo who's memorized wikipedia and google news over the past decade. Which by itself, is pretty amazing.

author

it's ok hmu if u wanna play video games


what would you want it to do, to "impress" you?

author

I am impressed, I'm just interested in working on different things as well


@Vapnik Damn dude, chill. At least post your criticism in a way that can lead to a discussion.
