The Lindahl Letter
Symbolic machine learning


Within my staging Google document for Substack posts, I reached the end of the originally planned posts for this series. Earlier this morning I expanded the staging shell post outlines to week 104, which, as you can imagine, is a significant point in the publication lifecycle. Two years of writing Substack posts will be here before you know it. I have enough content in the backlog for this Substack series to get to week 120. At the two-year mark I’m planning on moving away from machine learning posts into generally covering artificial intelligence and producing research notes related to a planned set of academic articles.

That means weeks of ongoing coverage of a topic being worked into a future academic article could be distributed during year 3 of this Substack series. That is probably a good method for digging deep into a few topics along the way. One of the things I have worked pretty hard to avoid is covering the same topic over and over again. One of the things I have noticed in the last few months is that I may have reached conceptual exhaustion within the machine learning topics at around one hundred different concepts. At this point, I should probably go look at all the general conceptual models of the machine learning space and see how close I am to comprehensive coverage. I jumped over to Google Trends and took a look at what topics are bubbling to the surface [0].

With that very meta aside about the future of The Lindahl Letter complete, let’s jump into the topic at hand for today. Most of the time you will see people calling out symbolic AI vs. symbolic machine learning. If you are interested in trying to build an artificial intelligence system that learns the way the human brain works with explicit concepts and rules, then you are going to run into the idea of symbolic AI. It stands in contrast to sub-symbolic approaches like deep learning, Bayesian networks, or evolutionary algorithms. What I was curious about this week was how often people try to evaluate symbolic machine learning as a concept. Explicitly searching Google Scholar for “symbolic machine learning” will yield just over 2,000 results [1]. Some of the academic coverage on this topic goes back to the 1990s, which is obviously where my reading started. Typically I try to rewind back to where articles were sparse and the content was more focused; recently the volume of content has exploded, but a good portion of it is derivative. I ended up reading an article from Harries and Horn called “Detecting Concept Drift in Financial Time Series Prediction using Symbolic Machine Learning,” published back in 1995 [2]. Sometimes I just enjoy reading about forecasting-related concepts, as forecasting is grounded in a field of study that has always made sense to me. Within that space of consideration, a copy of Armstrong’s Principles of Forecasting (2001) is sitting on my bookshelf just a couple of feet away. I don’t plan on letting go of that weighty tome any time soon. Oddly enough, the article seemed to focus on the potential promise of symbolic machine learning in the future. The phrase only occurs four times in the article, and that includes the title and abstract. I’m wondering if maybe it was added after the article was written.
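To make the “symbolic” part concrete, here is a minimal sketch of one classic symbolic learner, Holte’s 1R algorithm, which induces human-readable IF-THEN rules from tabular data rather than learned weights. The weather-style dataset is a made-up illustration, not drawn from any of the papers above:

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Holte's 1R: pick the single attribute whose per-value
    majority-class rule makes the fewest training errors."""
    attrs = [k for k in rows[0] if k != target]
    best = None
    for attr in attrs:
        # Count target classes for each value of this attribute.
        by_value = defaultdict(Counter)
        for row in rows:
            by_value[row[attr]][row[target]] += 1
        # Rule: each attribute value predicts its majority class.
        rules = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(n for v, c in by_value.items()
                     for cls, n in c.items() if cls != rules[v])
        if best is None or errors < best[2]:
            best = (attr, rules, errors)
    return best  # (attribute, {value: predicted class}, training errors)

# Toy, hypothetical dataset for illustration only.
data = [
    {"outlook": "sunny",  "windy": "no",  "play": "no"},
    {"outlook": "sunny",  "windy": "yes", "play": "no"},
    {"outlook": "rainy",  "windy": "no",  "play": "yes"},
    {"outlook": "rainy",  "windy": "yes", "play": "no"},
    {"outlook": "cloudy", "windy": "no",  "play": "yes"},
    {"outlook": "cloudy", "windy": "yes", "play": "yes"},
]
attr, rules, errors = one_r(data, "play")
for value, label in sorted(rules.items()):
    print(f"IF {attr} == {value} THEN play = {label}")
```

The appeal, and the reason people keep returning to symbolic approaches, is that the learned model is the rule set itself: you can read it, audit it, and argue with it in a way you cannot with a weight matrix.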

I was reading a few academic articles and wondering what exactly people are doing in terms of practical applications of symbolic machine learning. Google Scholar indicates that six related searches stand out: remote sensing, algorithms, neural networks, classifiers, reactive control systems, and European settlement maps. Obviously, I was super curious what symbolic machine learning had to do with settlement maps. I found a letter on IEEE Xplore called “Application of the Symbolic Machine Learning to Copernicus VHR Imagery: The European Settlement Map” [3]. It’s pretty much exactly what you might think it would be about: a literal mapping of settlements. The scale of the data being processed in this one seems really interesting. The letter did mention symbolic machine learning within the body of the work related to model innovations. It’s a pretty dense publication in terms of concepts being blended together without a lot of explanation. That is probably a byproduct of the authors trying to keep it to 5 pages vs. around 20 pages where that additional commentary would be fleshed out.

Links and thoughts:

“Computex 2022 laptops 💻 Elon vs Twitter bots 🤖 Apple ‘testing’ foldable E Ink display 📱”

“See Where Your Electric Car’s Battery Will Go One Day”

“My Investment Pays Off - WAN Show May 20, 2022”

Top 5 Tweets of the week:

Footnotes:

[0] https://trends.google.com/trends/explore?q=machine%20learning 

[1] https://scholar.google.com/scholar?q=%22symbolic+machine+learning%22&hl=en&as_sdt=0&as_vis=1&oi=scholart 

[2] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.51.5260&rep=rep1&type=pdf 

[3] https://ieeexplore.ieee.org/abstract/document/8941071 

What’s next for The Lindahl Letter?

  • Week 74: ML content automation

  • Week 75: Is ML destroying engineering colleges?

  • Week 76: What is post theory science?

  • Week 77: What is GPT-NeoX-20B?

  • Week 78: A history of machine learning acquisitions

I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.

Thoughts about technology (AI/ML) in newsletter form every Friday