The Lindahl Letter

Language models revisited

Maybe revisiting large language models should have been saved for a few weeks from now, but we are going to begin that journey into the foundations of machine learning anyway. My opening question within this chautauqua should be about how large language models in the machine learning space will change society. To that end it might be good to read a post from the Stanford Institute for Human-Centered Artificial Intelligence (HAI), “How Large Language Models Will Transform Science, Society, and AI,” by Alex Tamkin and Deep Ganguli [1]. That institute has a mission of “Advancing AI research, education, policy, and practice to improve the human condition.” While that sounds like an interesting mission statement to attempt to fulfill, it probably ignores the darker possibilities of what could happen. I went out and read the 8-page paper that Alex and Deep shared in that post, “Understanding the Capabilities, Limitations, and Societal Impact of Large Language Models” [2]. That research considers how large language models could impact the economy and how they might be misused, which made it a very solid place to start my analysis for this week.

Some really large collections of machine learning content exist online. The amount of written work being shared related to machine learning is growing exponentially. It is seriously beyond what anybody can reasonably track anymore. One of those collections that caught my attention this week was the ML Compendium by Dr. Ori Cohen [3]. First, this pretty deep work made me wonder how GitBook works and what other content might be on that platform. Second, it made me wonder how interactive delivery formats might change the future of textbooks in college settings. A quick search for language models in that collection of links and other content took me to a section on “attention” that included BERT, GPT-2, and GPT-3 [4]. It was not really what I was looking to read about this week, and my attention quickly turned elsewhere.
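As a quick aside, if you have never poked at one of those attention-based models directly, a minimal sketch of sampling text from GPT-2 in Python looks something like the following. This is my own illustrative example, not something pulled from the compendium, and it assumes you have the Hugging Face transformers package and a backend like PyTorch installed.

from transformers import pipeline

# Build a text-generation pipeline backed by the pretrained GPT-2 weights.
generator = pipeline("text-generation", model="gpt2")

# Sample a short continuation; max_new_tokens caps how much new text is generated.
result = generator("Large language models will", max_new_tokens=25)
print(result[0]["generated_text"])

Swapping in another model name from the Hugging Face hub, like “distilgpt2,” is all it takes to compare outputs across models.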

What I was expecting to dig into was the paper on foundation models from a bunch of Stanford University related contributors, noted as “On the Opportunities and Risks of Foundation Models: A new publication from the Center for Research on Foundation Models (CRFM) at Stanford University, with contributions by Shelby Grossman and others from the Stanford Internet Observatory” [5]. You can get the full 212-page paper over on arXiv [6]. By this time in our journey together you have probably downloaded that paper a couple of times. Yannic Kilcher theorized that the paper will end up being a key reference work due to the number of contributors and the volume of things covered. I can see it becoming a part of curricula for years to come, as it collects so much reference material in one place and it is free to download.

I’m going to backtrack for a minute here and let you know that, after a bit of review, it appears GitBook was designed to provide living documentation [7]. Teams use it to maintain and share technical documentation for software and APIs. It is also used for projects like the compendium shared above. I really do think that type of content curation is probably the future of academic publishing for coursework. Really large static textbooks will be replaced by interactive content that could survive in the metaverse. Students’ expectations for content delivery will fundamentally change in the next 10 years, and courses that demand a rigid chapter-by-chapter reading of a textbook will fall out of favor.

Links and thoughts:

Top 6 Tweets of the week:

Footnotes:

[1] https://hai.stanford.edu/news/how-large-language-models-will-transform-science-society-and-ai

[2] https://arxiv.org/abs/2102.02503 

[3] https://mlcompendium.gitbook.io/machine-and-deep-learning-compendium/ 

[4] https://mlcompendium.gitbook.io/machine-and-deep-learning-compendium/deep-learning/deep-neural-nets#attention 

[5] https://fsi.stanford.edu/publication/opportunities-and-risks-foundation-models 

[6] https://arxiv.org/pdf/2108.07258.pdf 

[7] https://docs.gitbook.com/

What’s next for The Lindahl Letter?

  • Week 65: Ethics in machine learning

  • Week 66: Does a digital divide in machine learning exist?

  • Week 67: Who still does ML tooling by hand?

  • Week 68: Publishing a model or selling the API?

  • Week 69: A machine learning cookbook?

I’ll try to keep the what’s next list for The Lindahl Letter forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. Thank you and enjoy the week ahead.
