The Lindahl Letter

Is ML destroying engineering colleges?

Considering if overcrowding has occurred within the field of ML

Welcome to a more investigative-journalism-style issue of The Lindahl Letter this week. This one really made me think. It's a provocative question for sure. Emotional reactions to the premise of destruction aside, the question of overcrowding within the field of ML has really caught my attention. One piece of prose stood out on this topic, and I'm not the only one to react to it. Published back on January 22, 2022, on LinkedIn of all places, was a post titled "How a False Love for AI/ML is Destroying our Engineering Colleges" [1]. The post, by Smruti Sarangi, who is the Usha Hasteer Chair Professor at IIT Delhi, reads like an article in waiting or a research note of some kind that was intended to be widely shared. It has certainly caused a lot of discussion, with over 500 comments and around 10,000 interactions.

Obviously, at this point, I needed to do a bit more research on this one beyond just reading a post shared by a professor on LinkedIn. I ended up looking all over and found a few tweets adjacent to the topic. Yes, I fell deeply into the rabbit hole of reading exchanges on Twitter that were long stale, left online like the remnants of previous sunrises. Sure, they happened, but outside of the photographic evidence, or in this case the tweets themselves, everyone has generally moved on to whatever is next on the agenda.

One of them was from Yaroslav Bulatov, who is on the PyTorch team over at Meta/Facebook.

Yaroslav Bulatov (@yaroslavvb): "Table 2 of arxiv.org/pdf/2007.01547… shows what's wrong with ML research. Papers got in by providing a theorem (checked by reviewers) and 'significant improvement' (not checked). Significant improvement disappeared when tested by third party"

That tweet argued that papers in ML research were getting accepted by providing a theorem, which reviewers checked, alongside a claimed "significant improvement," which reviewers did not check, and that those improvements disappeared when tested by a third party. The referenced paper was from Schmidt, Schneider, and Hennig back in 2021, called "Descending through a Crowded Valley – Benchmarking Deep Learning Optimizers" [2]. It's about selecting an optimizer in the deep learning space, and it is an interesting read. The authors essentially set out to empirically test optimizers and share the results, which is a worthwhile effort in terms of answering questions about which optimizers to actually use. Within the conclusion the authors shared a truly interesting observation: "Perhaps the most important takeaway from our study is hidden in plain sight: the field is in danger of being drowned by noise." I don't normally include quotes in my weekly research notes, but this one stood out and needed to be read aloud for the true impact to be appreciated. My real interest here is the deeper question of whether overcrowding within the ML space is causing negative externalities.
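
To make the idea of empirically testing optimizers concrete, here is a minimal Python sketch, assuming PyTorch is installed. This is a toy illustration of my own, not the paper's actual benchmark suite: it trains the same small model on the same synthetic task under SGD and Adam and averages the final loss over a few seeds, since single-run differences between optimizers are often just the kind of noise the paper warns about.

```python
# Toy illustration of an empirical optimizer comparison (not the paper's
# benchmark suite): same model, same synthetic task, different optimizers.
import torch

def final_loss(optimizer_name: str, seed: int, steps: int = 200) -> float:
    torch.manual_seed(seed)
    # Synthetic linear regression task with a little label noise.
    X = torch.randn(256, 10)
    y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)
    model = torch.nn.Sequential(
        torch.nn.Linear(10, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
    )
    # Learning rates here are arbitrary picks; the paper shows that the
    # tuning budget heavily influences which optimizer appears to "win".
    if optimizer_name == "sgd":
        opt = torch.optim.SGD(model.parameters(), lr=0.05)
    else:
        opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = torch.nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for name in ("sgd", "adam"):
    # Average over several seeds; single runs are dominated by noise.
    losses = [final_loss(name, seed) for seed in range(3)]
    print(f"{name}: mean final loss {sum(losses) / len(losses):.4f}")
```

Even on a toy problem like this, the winner can flip depending on the learning rates and seeds chosen, which is a small-scale version of the paper's argument about noisy comparisons.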

Generally I avoid doing research by reading Twitter posts, but I thought this thread from Tom Goldstein, focused on the history of AI winters, was worth sharing. It's from a National Science Foundation town hall talk, "ML Needs Science," distilled on Twitter to be easily consumed.

A counterargument was shared by Melanie Mitchell on Twitter as well. Melanie has also written an easy-to-read paper touching on AI winters called "Why AI is Harder Than We Think" [3].

Melanie Mitchell (@MelMitchell1): "I have also written about some reasons for AI Winters (arxiv.org/abs/2104.12871) and have a different perspective from Tom Goldstein (TG). TG hypothesizes AI winters happened because AI tried to go from 'zero to Turing Test' by programs that implemented logical reasoning. (2/10)"

One of the things in all of that back and forth that caught my attention was Richard Feynman's 1974 "Cargo Cult Science" commencement address, delivered at the California Institute of Technology [4] and later reprinted in Surely You're Joking, Mr. Feynman! [5].

I could not find enough content to really complete a full post on this topic, so not only did I reach out to Gary Marcus on Twitter, but I also sent a note over to Smruti Sarangi on LinkedIn to see if anything additional had been written or published. It turns out that Smruti was just sharing a few thoughts in that LinkedIn post back on January 22, 2022 and was not preparing any academic papers on the core question of this Substack post: "Is ML destroying engineering colleges?" I ended up wondering about an overcrowding effect within engineering colleges where ML is sucking up all the oxygen and research focus of a generation. That could be reframed into a testable hypothesis by grabbing the last 10 years of publications in some of the top engineering journals to see if publications related to ML are crowding out other academic contributions; a rough sketch of what that analysis could look like appears below. I spent a bit of time trying to figure out if somebody had done academic work on ML creating overcrowding within engineering fields, but I have not found anything directly addressing the elements that caught my attention.
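
Here is a minimal Python sketch of that hypothesis test, under some loud assumptions: counting publications in top engineering journals would need a bibliographic database like Scopus or Web of Science, so as a rough stand-in this queries the public arXiv API (via the feedparser package) and compares yearly submission counts in an ML category (cs.LG) against a non-ML engineering control category (cs.SY, systems and control). The category choices and the ratio metric are my own illustrative picks, not anything from Sarangi's post.

```python
# A rough stand-in for the journal analysis: yearly arXiv submission counts
# for an ML category versus a non-ML engineering control category.
import time
import urllib.parse
import urllib.request

import feedparser  # pip install feedparser

API = "http://export.arxiv.org/api/query"

def yearly_count(category: str, year: int) -> int:
    # submittedDate ranges use YYYYMMDDHHMM; max_results=0 asks for the
    # opensearch totalResults count without downloading any entries.
    query = f"cat:{category} AND submittedDate:[{year}01010000 TO {year}12312359]"
    params = urllib.parse.urlencode({"search_query": query, "max_results": 0})
    with urllib.request.urlopen(f"{API}?{params}") as response:
        feed = feedparser.parse(response.read())
    return int(feed.feed.opensearch_totalresults)

for year in range(2013, 2023):
    ml = yearly_count("cs.LG", year)       # machine learning
    control = yearly_count("cs.SY", year)  # systems and control
    print(f"{year}  ml: {ml:>6}  control: {control:>6}  "
          f"ratio: {ml / max(control, 1):.2f}")
    time.sleep(3)  # be polite to the shared arXiv API
```

If the ratio climbs steadily across the decade, that would at least be consistent with the crowding story, though arXiv submission counts are only a proxy for what journal publication patterns would show.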

My Twitter interactions this week have been interesting and lively. Normally my tweets and other research links shared during the week don't get such a high level of interaction. This is a topic that people seem to be passionate about, and a lot of different points of view exist on this one. Research is certainly happening and will continue to happen. My honest guess is that all the efforts to publish within the general scope of ML will reach a peak and people will branch off into other things. My take is that, for most researchers, veering off into new areas of academic exploration will be a healthy thing for the academy after a period of intellectual overcrowding.

Footnotes:

[1] https://www.linkedin.com/pulse/how-false-love-aiml-destroying-our-engineering-colleges-sarangi/ 

[2] https://arxiv.org/pdf/2007.01547.pdf 

[3] https://arxiv.org/pdf/2104.12871.pdf 

[4] https://calteches.library.caltech.edu/51/2/CargoCult.htm 

[5] https://www.barnesandnoble.com/w/surely-youre-joking-mr-feynman-richard-phillips-feynman/1112142471

What’s next for The Lindahl Letter?

  • Week 76: What is post theory science?

  • Week 77: What is GPT-NeoX-20B?

  • Week 78: A history of machine learning acquisitions

  • Week 79: Bayesian optimization

  • Week 80: Deep learning

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. Thank you and enjoy the day!
