The Lindahl Letter

Neural networks (ML syllabus edition 5/8)

Part 5 of 8 in the ML syllabus series

You may find in the literature that this topic of neural networks is sometimes called the zoo or, more specifically, “the neural network zoo.” The articles that use this name include a wonderful graphic that shows a ton of different neural networks and can really give you a sense of how they work at the most fundamental level. Two papers that make this reference and include that graphic come from researchers at the Asimov Institute, published in 2016 and 2020. Both of those papers are great places to start learning about neural networks. 

Van Veen, F., & Leijnen, S. (2016). The neural network zoo. The Asimov Institute. https://www.asimovinstitute.org/neural-network-zoo/ 

Leijnen, S., & Van Veen, F. (2020). The neural network zoo. Multidisciplinary Digital Publishing Institute Proceedings, 47(1), 9. https://www.mdpi.com/2504-3900/47/1/9 

That brief introduction aside, we are now going to focus on specific types of neural networks; next week our focus will shift to the topic of neuroscience. I have separated the two topics on purpose. I briefly considered combining them into one set of content, but it would have become unwieldy to present a distinct point of view on both. Digging into neural networks is really about digging into deep learning and trying to understand it as a subfield of machine learning. Keep in mind that while machine learning is exciting, it is just a small part of the broader field of artificial intelligence. I’m going to provide a brief introduction and some links to scholarly articles for 9 types of neural networks that you might run into. This list is in no way comprehensive; it is built and ordered based on my interests as a researcher. A lot of specialty models and methods exist, and one of them could end up displacing something on the list if it proves highly effective. I’m open, of course, to suggestions for different models or even a different order of explanation.

  1. Artificial Neural Networks (ANN)

  2. Simulated Neural Networks (SNN)

  3. Recurrent Neural Networks (RNN)

  4. Generative Adversarial Network (GAN) 

  5. Convolutional Neural Network (CNN)

  6. Deep Belief Networks (DBN)

  7. Self Organizing Neural Network (SONN)

  8. Deeply Quantized Neural Networks (DQNN)

  9. Modular Neural Network (MNN)

Artificial Neural Networks (ANN) - This is the model that is generally shortened to just “neural networks,” and the title is very literal. An ANN is a computational model designed to mimic, in hardware or software, the kind of neural network found in a biological brain. You can treat this model as fundamental to any consideration of neural networks, but you will quickly want to dig into other, more targeted models based on your specific use case. What you are trying to accomplish will help you focus on the model or method that best meets that need. In the abstract, though, people will keep considering how to build ANNs and what they could be used for as the technology progresses.
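
To make that concrete, here is a minimal sketch of a feedforward ANN, assuming PyTorch is available; the layer sizes and random inputs are purely illustrative, not tied to any particular paper.

```python
import torch
import torch.nn as nn

# A minimal feedforward ANN: layers of weighted sums passed
# through nonlinear activations, loosely inspired by neurons.
model = nn.Sequential(
    nn.Linear(4, 16),  # 4 input features -> 16 hidden units
    nn.ReLU(),
    nn.Linear(16, 3),  # 16 hidden units -> 3 output classes
)

x = torch.randn(8, 4)   # a batch of 8 illustrative inputs
logits = model(x)       # forward pass through the network
print(logits.shape)     # torch.Size([8, 3])
```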

Jain, A. K., Mao, J., & Mohiuddin, K. M. (1996). Artificial neural networks: A tutorial. Computer, 29(3), 31-44. https://www.cse.msu.edu/~jain/ArtificialNeuralNetworksATutorial.pdf 

Hassoun, M. H. (1995). Fundamentals of artificial neural networks. MIT press. https://www.researchgate.net/profile/Terrence-Fine/publication/3078997_Fundamentals_of_Artificial_Neural_Networks-Book_Reviews/links/56ebf73a08aee4707a3849a6/Fundamentals-of-Artificial-Neural-Networks-Book-Reviews.pdf 

Simulated Neural Networks (SNN) - As you work along your journey in the deep learning space and really start to dig into neural networks, you will run into ANNs and, very quickly, an adjacent type of model called the simulated neural network. Creating a neural network that truly mimics the depth and capacity of the brain is still an aspiration, so it makes sense that work is being done to simulate the best representation we can currently achieve, or to target a special use case that limits the scope of the simulation. These SNNs use models that generate a simulation from complex sets of mathematics, and they are being applied to some surprising use cases. One of the papers shared below, for example, uses them to predict the shelf life of processed cheese. 
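
As a toy illustration of the simulation idea (and not the method from either paper below), here is a tiny coupled excitatory/inhibitory population model in Python; every constant in it is made up for illustration.

```python
import numpy as np

# A toy simulation of coupled excitatory (E) and inhibitory (I)
# neural populations, loosely in the spirit of the excitation/
# inhibition study cited below. All constants are illustrative.
def step(E, I, dt=0.1):
    dE = (-E + np.tanh(1.5 * E - 1.0 * I + 0.5)) * dt
    dI = (-I + np.tanh(1.2 * E - 0.3 * I)) * dt
    return E + dE, I + dI

E, I = 0.1, 0.0
for t in range(500):       # iterate the dynamics forward in time
    E, I = step(E, I)
print(f"settled activity: E={E:.3f}, I={I:.3f}")
```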

Kudela, P., Franaszczuk, P. J., & Bergey, G. K. (2003). Changing excitation and inhibition in simulated neural networks: effects on induced bursting behavior. Biological cybernetics, 88(4), 276-285. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.57.9281&rep=rep1&type=pdf 

Goyal, S., & Goyal, G. K. (2012). Application of simulated neural networks as non-linear modular modeling method for predicting shelf life of processed cheese. Jurnal Intelek, 7(2), 48-54. https://ir.uitm.edu.my/id/eprint/34381/1/34381.pdf 

Recurrent Neural Networks (RNN) - At some point you will want to move from simulating and modeling to the hard work of applied machine learning for a specific use case. One family of models you will see in active use is recurrent neural networks, in variations and direct implementations. In this type of model, the network maintains a hidden state that carries information about earlier items in a sequence, which it uses to predict the most likely next item. This is a useful approach for speech recognition or handwriting analysis. You have probably run into an RNN at some point today through your smartphone or a connected home speaker. A lot of very interesting applied use cases exist for RNNs.
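
Here is a minimal sketch of that idea, assuming PyTorch; the sequence dimensions are illustrative, and the hidden state is what carries the pattern forward from step to step.

```python
import torch
import torch.nn as nn

# A minimal RNN: the hidden state carries information about
# earlier steps so the network can predict what comes next
# in a sequence. Sizes here are illustrative.
rnn = nn.RNN(input_size=10, hidden_size=32, batch_first=True)
head = nn.Linear(32, 10)  # map hidden state to next-step scores

x = torch.randn(4, 20, 10)    # batch of 4 sequences, 20 steps each
out, h_n = rnn(x)             # out: (4, 20, 32); h_n: final state
next_step = head(out[:, -1])  # prediction from the last time step
print(next_step.shape)        # torch.Size([4, 10])
```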

Lipton, Z. C., Berkowitz, J., & Elkan, C. (2015). A critical review of recurrent neural networks for sequence learning. arXiv preprint arXiv:1506.00019. https://arxiv.org/pdf/1506.00019.pdf 

Yin, C., Zhu, Y., Fei, J., & He, X. (2017). A deep learning approach for intrusion detection using recurrent neural networks. IEEE Access, 5, 21954-21961. https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8066291 

Generative Adversarial Network (GAN) - For me personally, this is where things get interesting. Instead of looking at one neural network, the GAN setup pits two networks against each other in an adversarial way: a generator that produces candidate samples and a discriminator that tries to tell those samples apart from real data. Training them in direct competition pushes the generator toward better and better output. I think this is a very interesting methodology and one that could yield very interesting future results. You can read a lot about this and see the early code published about 8 years ago by Ian Goodfellow over on GitHub [1].
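
Below is a minimal sketch of one adversarial training step, assuming PyTorch; the tiny networks and the stand-in “real” data are illustrative and not Goodfellow’s original setup.

```python
import torch
import torch.nn as nn

# A minimal GAN step: a generator G turns noise into samples,
# while a discriminator D learns to tell them apart from real
# data. All sizes and the data here are illustrative.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(64, 2) + 3.0  # stand-in "real" data cluster
noise = torch.randn(64, 8)

# Discriminator step: push real toward 1, fakes toward 0.
fake = G(noise).detach()
d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
          loss_fn(D(fake), torch.zeros(64, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to make D label fakes as real.
g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
print(f"d_loss={d_loss.item():.3f} g_loss={g_loss.item():.3f}")
```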

Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., ... & Bengio, Y. (2014). Generative adversarial nets. Advances in neural information processing systems, 27. https://proceedings.neurips.cc/paper/2014/file/5ca3e9b122f61f8f06494c97b1afccf3-Paper.pdf 

Yi, X., Walia, E., & Babyn, P. (2019). Generative adversarial network in medical imaging: A review. Medical image analysis, 58, 101552. https://arxiv.org/pdf/1809.07294.pdf

Aggarwal, A., Mittal, M., & Battineni, G. (2021). Generative adversarial network: An overview of theory and applications. International Journal of Information Management Data Insights, 1(1), 100004. https://www.sciencedirect.com/science/article/pii/S2667096820300045 

Convolutional Neural Network (CNN) - You will run into use cases where you want to dig into visual imagery, and that is where CNNs will probably pop up very quickly. You are building a model whose learned weights and biases form small filters that slide across an image (or potentially other content) to pick out features. The way the layers stack, with each one building more abstract features from the last, is a very interesting process of abstraction. 
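
Here is a minimal CNN sketch, assuming PyTorch; the filter counts and the 28x28 input size are illustrative choices.

```python
import torch
import torch.nn as nn

# A minimal CNN: small learned filters slide over the image,
# building more abstract features layer by layer before a
# final classification head. Sizes are illustrative.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # 8 low-level filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 28 -> 14
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # 16 higher-level filters
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample 14 -> 7
    nn.Flatten(),
    nn.Linear(16 * 7 * 7, 10),                   # scores for 10 classes
)

images = torch.randn(4, 1, 28, 28)  # batch of 4 grayscale images
print(cnn(images).shape)            # torch.Size([4, 10])
```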

Albawi, S., Mohammed, T. A., & Al-Zawi, S. (2017, August). Understanding of a convolutional neural network. In 2017 international conference on engineering and technology (ICET) (pp. 1-6). IEEE. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6197001/pdf/CIN2018-6973103.pdf 

O'Shea, K., & Nash, R. (2015). An introduction to convolutional neural networks. arXiv preprint arXiv:1511.08458. https://arxiv.org/pdf/1511.08458 

Deep Belief Networks (DBN) - You may have run into this one in the news recently with all the coverage related to drug discovery. DBNs are frequently described as graphical in nature and generative. The reason this works for something as influential and interesting as drug discovery is that a generative model can propose the space of plausible candidate compounds for a use case, and those results can then be evaluated. This is an area where I think the things being produced will be extremely beneficial, assuming the methodology is used in positive ways. 
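
DBNs are typically built by stacking restricted Boltzmann machines (RBMs) trained layer by layer. As a rough sketch of that building block, here is a toy CD-1 (contrastive divergence) update for a single RBM, assuming PyTorch; biases are omitted and all sizes and data are illustrative.

```python
import torch

# A toy CD-1 update for one RBM, the building block a DBN stacks.
# Biases are omitted for brevity; data and sizes are illustrative.
n_visible, n_hidden = 6, 4
W = torch.randn(n_visible, n_hidden) * 0.1
v0 = torch.bernoulli(torch.full((16, n_visible), 0.5))  # fake binary data

h0 = torch.bernoulli(torch.sigmoid(v0 @ W))      # sample hidden given visible
v1 = torch.bernoulli(torch.sigmoid(h0 @ W.t()))  # reconstruct visible
h1 = torch.sigmoid(v1 @ W)                       # hidden probabilities

lr = 0.05
W += lr * (v0.t() @ h0 - v1.t() @ h1) / v0.shape[0]  # CD-1 weight update
print(f"reconstruction error: {(v0 - v1).abs().mean():.3f}")
```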

Salakhutdinov, R., & Murray, I. (2008, July). On the quantitative analysis of deep belief networks. In Proceedings of the 25th international conference on Machine learning (pp. 872-879). https://era.ed.ac.uk/bitstream/handle/1842/4588/MurrayI_On%20the%20Quantitative%20Analysis.pdf?sequence=1&isAllowed=y 

Hinton, G. E. (2009). Deep belief networks. Scholarpedia, 4(5), 5947. http://scholarpedia.org/article/Deep_belief_networks 

Self Organizing Neural Network (SONN) - Imagine a neural network model based on feature maps, or Kohonen maps, that is unsupervised and self-organizing. That description is essentially the self-organizing neural network model. It can be used for adaptive pattern recognition or just regular pattern recognition. The two references shared below spell out how this works in more detail if you are interested. 
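
As a quick sketch of the Kohonen map idea, here is a toy self-organizing map in Python with NumPy; the grid size, learning rate schedule, and neighborhood constant are all illustrative.

```python
import numpy as np

# A toy Kohonen self-organizing map: each input pulls its
# best-matching unit (and its neighbors) toward it, so the
# grid organizes itself without any labels.
rng = np.random.default_rng(0)
grid = rng.random((10, 10, 3))  # 10x10 map of 3-d weight vectors
data = rng.random((500, 3))     # unlabeled 3-d inputs (e.g. colors)

for t, x in enumerate(data):
    dist = np.linalg.norm(grid - x, axis=2)
    bi, bj = np.unravel_index(dist.argmin(), dist.shape)  # best match
    lr = 0.5 * (1 - t / len(data))  # decaying learning rate
    for i in range(10):
        for j in range(10):
            d2 = (i - bi) ** 2 + (j - bj) ** 2
            h = np.exp(-d2 / 4.0)   # neighborhood falloff
            grid[i, j] += lr * h * (x - grid[i, j])
print("trained SOM grid shape:", grid.shape)
```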

Carpenter, G. A., & Grossberg, S. (1988). The ART of adaptive pattern recognition by a self-organizing neural network. Computer, 21(3), 77-88. https://search.iczhiku.com/paper/bELWExDU1wAMpDkP.pdf 

Carpenter, G. A., & Grossberg, S. (Eds.). (1991). Pattern recognition by self-organizing neural networks. MIT Press. https://books.google.com/books?id=2u1fH0mxfz0C&lpg=PP19&ots=d_sdwFOQk3&dq=%22Self%20Organizing%20Neural%20Network%22%20machine%20learning&lr&pg=PP19#v=onepage&q=%22Self%20Organizing%20Neural%20Network%22%20machine%20learning&f=false 

Deeply Quantized Neural Networks (DQNN) - Within a neural network model, when you are creating weights you could elect to represent them with very low precision, from 1 to 8 bits, and to that end you would be on your way to a deeply quantized neural network. Development tools exist for this type of effort, like the Google team’s QKeras [2] and Larq [3]. Getting open access to papers on this topic is a little harder than for some of the others, but you can pretty quickly get to the code on how to implement this type of neural network. 
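
As a sketch of the core idea (and not the actual QKeras or Larq APIs), here is a toy uniform weight quantizer in Python; the symmetric scheme and bit widths shown are illustrative.

```python
import numpy as np

# A toy symmetric uniform quantizer: snap float weights onto a
# small grid of k-bit integer levels, the core idea behind
# deeply quantized networks. Illustrative, not QKeras or Larq.
def quantize(w, bits=2):
    qmax = 2 ** (bits - 1) - 1  # e.g. 1 for 2 bits, 127 for 8 bits
    wmax = np.abs(w).max()
    scale = wmax / qmax if wmax > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)  # integer levels
    return q * scale                               # dequantized values

w = np.random.randn(5)
print("float32:", np.round(w, 3))
print("2-bit:  ", np.round(quantize(w, bits=2), 3))
```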

Loro, F., Pau, D., & Tomaselli, V. (2021). A QKeras neural network zoo for deeply quantized imaging. 2021 IEEE 6th International Forum on Research and Technology for Society and Industry (RTSI), 165-170. https://doi.org/10.1109/RTSI50628.2021.9597341

Dogaru, R., & Dogaru, I. (2021). LB-CNN: An Open Source Framework for Fast Training of Light Binary Convolutional Neural Networks using Chainer and Cupy. arXiv preprint arXiv:2106.15350. https://arxiv.org/ftp/arxiv/papers/2106/2106.15350.pdf 

Modular Neural Network (MNN) - Within this model you create independent neural networks and moderate between them. Each independent network is a module of the whole, handling a piece of the overall task, with an intermediary combining their outputs. This one always makes me think of building blocks for some reason, but that is a simplistic picture given the moderation required to make it work. 
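
Here is a toy sketch of the modular idea, assuming PyTorch: two independent modules with a small gating network acting as the moderator. The specific module and gate definitions are illustrative.

```python
import torch
import torch.nn as nn

# A toy modular network: two independent expert modules, with a
# small gating network deciding how to weight their outputs.
class ModularNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.module_a = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
        self.module_b = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
        self.gate = nn.Sequential(nn.Linear(4, 2), nn.Softmax(dim=1))

    def forward(self, x):
        w = self.gate(x)  # per-input weights over the two modules
        out_a, out_b = self.module_a(x), self.module_b(x)
        return w[:, 0:1] * out_a + w[:, 1:2] * out_b

net = ModularNet()
print(net(torch.randn(3, 4)).shape)  # torch.Size([3, 2])
```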

Devin, C., Gupta, A., Darrell, T., Abbeel, P., & Levine, S. (2017, May). Learning modular neural network policies for multi-task and multi-robot transfer. In 2017 IEEE international conference on robotics and automation (ICRA) (pp. 2169-2176). IEEE. https://arxiv.org/pdf/1609.07088.pdf 

Happel, B. L., & Murre, J. M. (1994). Design and evolution of modular neural network architectures. Neural networks, 7(6-7), 985-1004. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54.8248&rep=rep1&type=pdf 

Conclusion - This is an intense way to start digging into neural networks, and you will very quickly see that use cases outside of pure machine learning or artificial intelligence are driving this field forward. A lot of these use cases sit within the medical field or health care in general; they are super interesting and somewhat related to neuroscience, which is where the next lecture in this series will head. Discussion will move from specific types of neural networks and the research associated with them to the broader topic of neuroscience and how it relates to machine learning. 

Links and thoughts:

“Types of Neural Network Architectures”

“[ML News] AI models that write code (Copilot, CodeWhisperer, Pangu-Coder, etc.)”

“Trust Me Bro - WAN Show August 12, 2022”

Top 5 Tweets of the week:

Jürgen Schmidhuber @SchmidhuberAI
Yesterday @nnaisense released EvoTorch (evotorch.ai), a state-of-the-art evolutionary algorithm library built on @PyTorch, with GPU-acceleration and easy training on huge compute clusters using @raydistributed. (1/2)

Footnotes:

[1] https://github.com/goodfeli/adversarial 

[2] https://github.com/google/qkeras 

[3] https://github.com/larq/larq 

Research Note:

You can find the files for the syllabus being built on GitHub. The latest version of the draft is shared via exports whenever changes are made. https://github.com/nelslindahlx/Introduction-to-machine-learning-syllabus-2022

What’s next for The Lindahl Letter?

  • Week 85: Neuroscience (ML syllabus edition 6/8)

  • Week 86: Ethics, fairness, bias, and privacy (ML syllabus edition 7/8)

  • Week 87: MLOps (ML syllabus edition 8/8)

  • Week 88: The future of publishing

  • Week 89: Understanding data quality

I’ll try to keep the what’s next list forward looking with at least five weeks of posts in planning or review. If you enjoyed this content, then please take a moment and share it with a friend. If you are new to The Lindahl Letter, then please consider subscribing. New editions arrive every Friday. Thank you and enjoy the week ahead.
