Morten described a possible future, maybe even a present reality, in which AI-generated content is rampant. But when we start to employ machine learning for content creation, we start to regard content as a means more than an end. In that process, won't we lose what's worth caring about?
In his post, Morten explains the three types of AI-generated content he sees emerging. The first two, AI-curated content (where AI assembles content and surfaces what it deems most relevant) and AI-assisted content creation (where AI contributes to the writing process), are already a thing. The third, AI-synthesised content, will likely become a thing in the future. Morten's post gives a great overview of what to expect.
It reminded me of a project I did in university about automating the arts. My conclusion: we can write code to generate creative works, but code and models can't capture intentions, experiences or beliefs. Those require human input; therefore, my reasoning went, creating art (or content) requires human input. There are nuanced differences between AI, machine learning, big data and bots, but I won't go into them in this post.
When I want to find a recipe for pizza dough on the web, I would consider myself lucky to get ahold of a blog post from someone who cares passionately about the right kind of dough, who maybe ran an artisan pizza kitchen in Naples for the past 30 years or has a background in baking. ‘Dream on’, you think. Well, these people exist on the web, and the web is awesome for being an open platform that anyone with a passion can write on. I don't want to find text produced just because someone noticed “pizza dough” is a common search phrase with top-result ad money to be extracted. The passion driving such writers isn't the pizza dough; that's fine in itself, but it makes their content less relevant to me. Similarly, I don't want to find text generated by a machine learning model. It can't possibly bring the knowledge and experience I'm hoping for.
When I write an email or a reply, I try to put what I want to convey into words that I choose. I might include an inside joke that the recipient and I share, fit in an appropriate cultural reference, be extremely polite, or terribly rude. I mean, my intentions and attitude are in that interaction. I don't want Google or LinkedIn or others to suggest what to reply, to echo back the historical content they trained their machine learning models on. It dehumanises my conversation. Their suggestion may or may not align with my intentions.
When I listen to music, I can be touched by the experiences and world views that inspired the artist. Whether you're into the Eagles, Eels or Ella Fitzgerald, their songs capture things that machine learning systems can't because the artists have attitudes. Robots don't love and they don't have opinions. Maybe they can come up with interesting rhythms and melodies, or utter sentences like “I love you”, but the association of their work with intentions and memories needs humans.
When I read a newspaper, the order of the pages, the focus a layout provides and the choice of photography are all decided by human beings with a lot of experience. People who work as journalists after being professional sports players for decades. People who have followed politics for decades and therefore understand which scandal is worth extra attention. People who can make bold choices based on their world views. Bots don't have world views. Algorithmic prioritisation of content isn't as good as prioritisation by humans, even if it gets close. See also algorithmic timelines on social media versus human-curated lists and contextualisation.
When I have a consumer issue, I want to talk to a human representative of the company. Someone who has the authority to make decisions. Someone who can take us down the shortest path to a mutually satisfactory solution. Did you ever see a chat bot provide more than a repeat of the data it has been fed? Did you ever see a chat bot make enquiries with empathy? Lack of empathy isn't a bug in bots that we just haven't fixed yet; arguably, it isn't empathy at all if it isn't human-to-human (ok, maybe animals can be part of this equation).
All these examples lead me to think: the human side of data isn't measurable or computable. The human side of art, content or communication is not just a different side of the same coin; it's a bigger coin. There is more to reality than data can capture, like lived experiences from actual people, intentions and beliefs. Propositional attitudes that robots can only pretend to have.
Basically, I'm worried about overestimating how many human capacities machine learning can take over. At the same time, I don't think machine learning is useless. Quite the opposite: it is fantastic. I love that computers are getting better at automated captions, translation, or even generating images based on prompts. The latter may create a whole new level of art, where artists use it as a material for their creations (see also AI is your new design material by Josh Clark). Medical applications where machine learning notices abnormalities that a human might miss. Audio recognition engines that tell you what song is playing. Email spam filters that save us all a lot of time. It's all super valuable and genuinely impressive. And a lot of these use cases don't need lived experiences, intentions or beliefs to be incredibly useful.
Originally posted as Re: AI for content creation on Hidde's blog.