The Lindahl Letter

What are ensemble ML models?

Substack Week 91

For those of you who keep track of these types of things, we are now publishing in real time based on my schedule. Over the course of the next few weeks no backlog exists as we make the run to 104 consecutive Substack posts spanning 2 years of content creation on this platform. It’s week 91 right now in the publishing schedule, which means that only 13 blocks of super exciting writing about machine learning stand between you and the completion of that 104-week tasking. To that end, I’ll be working without a sizeable backlog that would prevent procrastination or a loss of focus from breaking the streak. At this very moment, I’m probably writing about that consideration to help refocus my efforts on completing this last stretch. You may recall that after the 2-year mark I’m going to mix things up a bit and switch focus from machine learning to artificial intelligence in general. The format might change a bit as well, but you will have to stay tuned to see what ends up showing up every Friday.

My physical notebook, where I write things down by hand with a Parker Sonnet fountain pen, includes a few different sketches related to building a universal request handler. A lot of voice assistants receive queries that need to be handed off to some type of ML model to solve. Deciding which model to apply, and then how to manage and sort those results back into the general knowledge graph, is an interesting problem to solve. Generally, one solution is to just fire off the request to a series of API channels; the one that reports back a probable answer in the shortest time is what gets served up by the voice assistant. All of that happens so quickly that you don’t really notice a delay. We as people handle super complex reasoning tasks and work on things with extreme depth without even questioning the solution selection process.
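A minimal sketch of that fan-out-and-race pattern might look like the following. Everything here is a stand-in assumption: the backend names, delays, and confidence scores are invented, and a real handler would be making network calls to actual model APIs.

```python
import asyncio

# Hypothetical backend; in practice this would be a network call to a model API.
async def query_model(name: str, delay: float, answer: str, confidence: float):
    await asyncio.sleep(delay)  # stand-in for the round-trip latency
    return name, answer, confidence

async def handle_request(query: str, threshold: float = 0.8):
    # Fan the same query out to several model backends at once.
    # (The query itself goes unused here because the backends are fakes.)
    tasks = [
        asyncio.create_task(query_model("intent_model", 0.10, "weather", 0.92)),
        asyncio.create_task(query_model("kg_lookup", 0.05, "unknown", 0.30)),
        asyncio.create_task(query_model("llm_fallback", 0.40, "weather", 0.85)),
    ]
    # Serve the first answer that comes back above the confidence bar.
    for finished in asyncio.as_completed(tasks):
        name, answer, confidence = await finished
        if confidence >= threshold:
            for t in tasks:
                t.cancel()  # drop the slower backends
            return name, answer
    return None, "no confident answer"

print(asyncio.run(handle_request("what is the weather like today?")))
```

In this toy run the fast knowledge graph lookup reports first but with low confidence, so the handler waits for the next result and the intent model wins the race.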

Perhaps as a subset of that general question of which model to use, something else props up from researchers and practitioners. One of the things people involved with work in the ML model space ask from time to time is why you cannot just combine all the ML models together and make a super model. One of the ways people are working to bring models together involves the ensemble method for machine learning models. This methodology involves training a few models and then combining them to improve results. It is not a method for stacking random models and trying to make it work. The ensemble method is a technique based on working with the same dataset and combining, for example, a bunch of favorable random forests or some other set of similar models to form an ensemble. From what I have been able to tell from reading articles in this space, it’s not a super solution that brings all machine learning models together in one unified model theory.

Dietterich, T. G. (2000, June). Ensemble methods in machine learning. In International Workshop on Multiple Classifier Systems (pp. 1-15). Springer, Berlin, Heidelberg. https://web.engr.oregonstate.edu/~tgd/publications/mcs-ensembles.pdf

Dietterich, T. G. (2002). Ensemble learning. The Handbook of Brain Theory and Neural Networks, 2(1), 110-125. https://courses.cs.washington.edu/courses/cse446/12wi/tgd-ensembles.pdf
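To make that concrete, here is a minimal sketch of one classic construction covered in those papers: train several decision trees on bootstrap samples of the same dataset and combine their predictions by majority vote. The dataset, model type, and member count below are placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder dataset; any classification dataset would do here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
members = []
for _ in range(25):
    # Bootstrap sample: draw rows with replacement from the same training set.
    idx = rng.integers(0, len(X_train), size=len(X_train))
    members.append(DecisionTreeClassifier().fit(X_train[idx], y_train[idx]))

# Majority vote across the ensemble members for each test example.
votes = np.stack([m.predict(X_test) for m in members])
ensemble_pred = np.apply_along_axis(
    lambda v: np.bincount(v.astype(int)).argmax(), 0, votes
)

print(f"single tree: {members[0].score(X_test, y_test):.3f}")
print(f"ensemble of 25: {(ensemble_pred == y_test).mean():.3f}")
```

The combined vote usually beats any single tree on held-out data, which is the whole pitch for ensembles of similar models over one monolithic super model.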

This one is out and in use in the wild. You can utilize ensemble ML models from systems like scikit-learn, where the sklearn.ensemble module ships ready-made implementations [1]. You can also pretty quickly implement ensemble models with the Functional API that is part of TensorFlow core [2]. Either way, you can get up to speed quickly and use this technique in notebooks or other places.
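On the TensorFlow side, the Functional API makes model averaging nearly a one-liner: build a few member models, feed a shared input through each, and average the outputs. A minimal sketch follows; the layer sizes and member count are arbitrary assumptions, and training of the individual members is omitted.

```python
from tensorflow import keras

def build_member() -> keras.Model:
    # A small placeholder classifier; swap in whatever architecture you need.
    inputs = keras.Input(shape=(128,))
    x = keras.layers.Dense(64, activation="relu")(inputs)
    outputs = keras.layers.Dense(10, activation="softmax")(x)
    return keras.Model(inputs=inputs, outputs=outputs)

# Each member would be trained separately before being ensembled.
members = [build_member() for _ in range(3)]

# Wire one shared input through every member and average the predictions.
inputs = keras.Input(shape=(128,))
outputs = keras.layers.Average()([m(inputs) for m in members])
ensemble = keras.Model(inputs=inputs, outputs=outputs)
ensemble.summary()
```

On the scikit-learn side, classes like VotingClassifier and RandomForestClassifier in sklearn.ensemble cover the same ground without any custom wiring.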

Links and thoughts:

Lex Fridman Podcast “#324 – Daniel Negreanu: Poker”

Lex Fridman Podcast “#315 – Magnus Carlsen: Greatest Chess Player of All Time”

“Microsoft's Surface event, Pixel 7 and Pixel Watch reviews, and Meta Connect 2022”

“Mark Zuckerberg on the Quest Pro, future of the metaverse, and more”

Top 5 Tweets of the week:

Yi Ma @YiMaTweets
youtube.com/watch?v=23YpCw… Around our recent position paper, a talk and roundtable about "the Origin & Nature of Intelligence" with Prof. Doris Tsao of UC Berkeley, Prof. Karl Friston of Univ. College London, Prof. Michael Wooldridge at Univ. of Oxford, Mr. Jeff Hawkins at Numenta.

Footnotes:

[1] https://scikit-learn.org/stable/modules/ensemble.html
[2] https://www.tensorflow.org/guide/keras/functional

What’s next for The Lindahl Letter?

●     Week 92: National AI strategies revisited

●     Week 93: Papers critical of ML

●     Week 94: AI hardware (RISC-V AI Chips)

●     Week 95: Quantum machine learning

●     Week 96: Where are large language models going?

I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
