Discussion about this post

Joe Bak:

Great article, Mark.

I agree that machine learning has to reinvent itself somehow to advance past graduate student descent (love this one). Neural networks are just too good at solving a range of problems that couldn't be solved a few years ago, so it's hard to let go of something that keeps getting better (even if only incrementally) and that keeps bringing in loads of funding. Machine learning is just very funding-genic, so good research papers get mixed in with a lot of impressive results that are merely iterations of previous ones.

My impression is that most advances in machine learning nowadays come from combining non-interpretable algorithms like deep nets with expert-informed hard constraints to solve previously intractable problems. Making computational fluid dynamics simulators ten times faster is one example; discovering interpretable differential equations from simulations or experimental data is another. There's a range of nonlinear coordinate-transformation problems that networks could solve but that haven't been addressed yet because mathematicians consider deep nets too unreliable, which I understand. There's also a lot of room in bridging physical scales with deep networks (coarse-graining, upscaling, downscaling, etc.). But, as you said, all of these are applications of ML rather than theory within ML. There's a new field of physics-informed machine learning that is radically changing how we do science and motivating a lot of new problems in machine learning.
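To make the "hard constraints" idea concrete, here is a minimal sketch (my own illustration, not from the article or this comment; PyTorch and a toy 1D Poisson problem are assumed): the boundary conditions are built into the network by construction, while the PDE residual is penalized through automatic differentiation.

```python
# Minimal physics-informed sketch (assumes PyTorch; toy problem chosen for
# illustration): solve u''(x) = -pi^2 * sin(pi * x) on (0, 1) with
# u(0) = u(1) = 0, whose exact solution is u(x) = sin(pi * x).
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

def u(x):
    # Hard constraint: the ansatz x * (1 - x) * net(x) satisfies the
    # boundary conditions exactly, for any network parameters.
    return x * (1.0 - x) * net(x)

def pde_residual(x):
    x = x.requires_grad_(True)
    y = u(x)
    du = torch.autograd.grad(y, x, torch.ones_like(y), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    f = -(math.pi ** 2) * torch.sin(math.pi * x)  # source term of u'' = f
    return d2u - f

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    x = torch.rand(128, 1)                # random collocation points in (0, 1)
    loss = pde_residual(x).pow(2).mean()  # physics loss; no labeled data needed
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the ansatz is that the boundary conditions hold exactly rather than being encouraged by a penalty, which is what separates a hard constraint from the more common soft one; softer variants simply add the residual (and any data misfit) as extra loss terms.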

The root problem with graduate student descent is that machine learning solves ill-posed problems, so it's sometimes hard to know when to stop trying. I wouldn't say academics in general have descended into mere recycling of papers. It's just that the rapid progress we've witnessed with deep nets over the last decade is tapering off. That's normal with any new method: people get excited at first, and it keeps getting better until it doesn't. But with ML it's hard to tell when it has stopped getting better because of all the benchmark overfitting you mentioned.

Ultimately, I think ML should always go back to neuroscience and robotics for inspiration. Human intelligence has always been the motivation for machine intelligence. For example, humans are a reminder that intelligence comes with embodiment: a combination of actions and perceptions. I like the approach taken in embodied AI and artificial life: good old-fashioned scientists trying to understand intelligence by bridging physics, neuroscience, and AI.

Anyway, keep writing!

beezwax:

nice shitpost, go back to /sci/

