Jun 24 • 5M

ML content automation: Am I the prompt?

At this point, I started to ask myself if I’m the prompt of an exceedingly large language model.

Thoughts about technology (AI/ML) in newsletter form every Friday

Thank you for tuning in to this audio-only podcast presentation. This is week 74 of The Lindahl Letter publication. A new edition arrives every Friday. This week the topic under consideration for The Lindahl Letter is, “ML content automation.”

At this point, I started to ask myself if I’m the prompt of an exceedingly large language model. Perhaps a better trained and curated model than any foundational model. Prompt engineering is a wonderfully interesting part of ML content automation in terms of generative models. You have to know how to prime the pump, or in this case the prompt, to get the right content flowing from something like GPT-3 or one of the larger foundational models now being floated around. Previously I have warned about content flooding and the potential for the entire internet to be astroturfed with nearly endless content if the wrong version of Web3 comes into being.

Machine learning elements can certainly be used for content generation, and that functionality extends naturally into the practice of content automation. You can set up a workflow that automatically generates content. It could be a bot implementation that generates content in response to people. That is an easy method of feeding the model prompts, since the only way to interact with a chatbot is to send it something, which is itself a prompt-based activity. At that point, a prompt has been opened and something is being exchanged with the model, which causes the generation of content. The use case could be extended nearly indefinitely from there.
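The workflow described above can be sketched in a few lines. This is a minimal illustration rather than any real system: the `generate()` function is a hypothetical stand-in for a large-language-model API call (something like a GPT-3 completion endpoint), stubbed out here so the automation loop itself is runnable end to end.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a model API call.

    A real implementation would send the prompt to a hosted model
    and return its completion; this stub just echoes the prompt.
    """
    return f"[model output for: {prompt}]"


def chatbot_turn(user_message: str, topic: str = "ML content automation") -> str:
    """One chatbot exchange: the user's message becomes part of the
    prompt, and the model's completion becomes the bot's reply."""
    prompt = f"Topic: {topic}\nUser: {user_message}\nBot:"
    return generate(prompt)


def automated_feed(seed_prompts):
    """Fully automated workflow: walk a list of seed prompts and
    generate a piece of content for each, with no human in the loop."""
    return [generate(p) for p in seed_prompts]


# Either mode works from the same primitive: a prompt in, content out.
reply = chatbot_turn("What is prompt engineering?")
feed = automated_feed(["Write a headline about MLOps",
                       "Summarize prompt engineering"])
```

The point of the sketch is that the chatbot and the fully automated feed differ only in where the prompt comes from; once the prompt source is a script instead of a person, content generation scales without limit.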

Let’s go beyond the chatbot use case and jump into the complexity of “automated journalism,” or more generally the use of machine learning or artificial intelligence to generate articles for the purpose of reporting news. This is where I worry about the use case being expanded from newsrooms to a general effort of flooding or astroturfing topics. Bad actors could create such a flood of content that figuring out what is real and what is synthetic becomes the greatest challenge facing the internet. Truth could be swept away in a totality of coverage spanning every potential prompt, given that synthetically generated content may have no association with reality whatsoever. My concern about this path started with the advent of GPT-3 and the potential for the synthetic creation of believable prose, like the article published in The Guardian in 2020 [1]. A lot of the content I run into while reading the news during the day could very well be synthetically created by a large language or foundational model. We are already seeing Microsoft reduce news staff at MSN to replace them with automation [2].

We are now starting to see models like DALL-E 2 from OpenAI that can make realistic images from a prompt [3]. That takes a prose-based use case for content creation to a much wider range of use cases. I’m sure these models will go from images to videos at some point, and that type of model would be a legitimate game changer for content creation. Automating the ability of a machine learning model to create video from a prompt would be a huge advancement within the world of content automation. You could open and stage an art gallery with live prompt-based installations; I think that would actually be an interesting way for OpenAI to demonstrate the potential of the model. Bad actors within this space could also decide to create endless images and videos to flood the online world with content dedicated to a specific topic or general theme.

I have spent a lot of time worrying about how to deal with or manage the problems content flooding could create for society in general. The very fabric that binds civil society together may already have seen the breakdown of a curated common thread based on shared experiences. We may have created such curated content bubbles that any shared experience is limited to commercials and knowledge of products. Accepting that assumption might explain a lot of what is happening within society at the moment as we face the intersection of technology and modernity.

Links and thoughts:

The WAN Show was full of wild Linus stories about home automation today. “Story Time! - WAN Show May 27, 2022”

“Stanford Seminar - Leveraging Human Input to Enable Robust AI Systems”

“Ask the Experts: Scaling responsible MLOps with Azure Machine Learning | CATE21”

Top 5 Tweets of the week:

Footnotes:

[1] https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3 

[2] https://www.seattletimes.com/business/local-business/microsoft-is-cutting-dozens-of-msn-news-production-workers-and-replacing-them-with-artificial-intelligence/ 

[3] https://openai.com/dall-e-2/ 

What’s next for The Lindahl Letter?

  • Week 75: Is ML destroying engineering colleges?

  • Week 76: What is post theory science?

  • Week 77: What is GPT-NeoX-20B?

  • Week 78: A history of machine learning acquisitions

  • Week 79: Bayesian optimization

I’ll try to keep the what’s next list forward-looking with at least five weeks of posts in planning or review. If you enjoyed reading this content, then please take a moment and share it with a friend.