Thoughts on the current state of AI


This content originally appeared on DEV Community and was authored by Red Ochsenbein (he/him)

Here we are! AI is the big thing everyone has been expecting for years. It's not a new idea. More than half a century ago, science fiction writers were already talking about it. But now it is here. What will we do with it? Where will it lead? What happens to our society?

To be honest: Nobody knows what the future will be like. 10 years ago I was overly optimistic about autonomous cars. I predicted that within 5 years every new car would be self-driving. Well, 10 years in, and it still seems to be coming "within 5 years" (probably not).

But what do I think about LLMs like GPT-4 or ChatGPT, or things like DALL-E and Midjourney? I'll try to take you on a journey through my thoughts.

The optimistic view

Opportunities in healthcare

Advancements in technology have the potential to revolutionize healthcare in a number of ways, from more efficient diagnosis and treatment to better access to care for underserved populations. Telemedicine and digital health platforms are becoming increasingly popular, allowing patients to receive remote consultations and access to medical information and resources.

Most importantly, I think, AI and machine learning can also help healthcare professionals make more accurate diagnoses and develop personalized treatment plans for patients. No medical doctor in the world is able to keep up with all the new developments and papers. Especially when it comes to rare diseases, it will be helpful to feed an AI with symptoms and other data and retrieve pointers to things the doctor might not even have heard of.

Those developments will increase the quality of life for many, if we manage to make them accessible and affordable for everyone.

Opportunities for society

Emerging technologies also offer many benefits for society at large. The use of autonomous vehicles could improve road safety, reduce traffic congestion, and cut down on greenhouse gas emissions. Smart cities could help us better manage resources, from water and electricity to waste disposal. The technologies can help to address social issues such as poverty and inequality, by providing new tools and opportunities for education and economic empowerment.

Increased automation could lead to a redefinition of work. Society might finally be able to fulfill the promise of a more purposeful life, without having to rely on a job just to survive.

Opportunities for the environment

The new technologies have the potential to help us address some of the biggest environmental challenges we face today. Renewable energy technologies such as solar and wind power can help us transition away from fossil fuels and reduce our carbon footprint. AI can help us reduce unnecessary transportation and make supply chains and the distribution of resources more efficient. Advanced agricultural technologies such as precision farming can help us reduce waste, conserve water, and increase food production in a sustainable way.

The pessimistic view

Black box problem

One of the main concerns with AI is the so-called "black box problem." As machine learning algorithms become more complex and powerful, it becomes increasingly difficult for humans to understand how they are making decisions. In a neural network, you can't just go in, look at any part of it, and extrapolate the final result from there.

As far as I can see, never before have we built something we can't comprehend in such a way. If you think about it, we have built incredible machines, but we always knew how each of their parts contributed to the whole. This is no longer true for neural networks and similar algorithms. This lack of transparency can be problematic, particularly in areas such as healthcare, where decisions can have life-or-death consequences.

There is also the risk of algorithmic bias, where the algorithms reflect and reinforce existing biases and inequalities in society. As I once heard a CEO say: "Why should we care about biases in algorithms? Humans have biases, too." This sums up the problem quite well. There seems to be a lack of awareness in the field.

The alignment problem

Quite similar to the black box problem is the alignment problem. Since we can't really see into the inner workings of the decision-making process of those models, we can't know whether the goal of the AI is actually aligned with the goal we think we gave it. There are several examples of this. Can we be sure an image classifier identifies something because of the thing itself, and not just because something else always happens to be in those images? Is a doctor a doctor because she has a stethoscope? Or because of her gender? Or anything else?
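A toy sketch of this failure mode (an assumed simplification, not a real model): imagine a classifier whose training images of doctors all happened to contain a stethoscope. It then fires on the stethoscope, not on anything about the person.

```python
# Hypothetical "classifier" that latched onto a spurious feature during
# training: every doctor photo contained a stethoscope, so the model
# learned "stethoscope => doctor" instead of anything about the person.

def classify(features: set[str]) -> str:
    # What we hoped it learned: medical expertise.
    # What it actually learned: presence of a stethoscope in the image.
    return "doctor" if "stethoscope" in features else "not a doctor"

print(classify({"stethoscope", "lab coat"}))  # "doctor" (looks correct)
print(classify({"stethoscope", "plumber"}))   # "doctor" (spurious!)
print(classify({"medical degree"}))           # "not a doctor" (missed)
```

On the training data the two goals are indistinguishable, which is exactly why we can't tell from the outside which one the model actually pursues.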

Social acceptance automaton

Think about how we train a language model: first we train a reward model that learns how a human would rate the model's output, and then we train the actual model against that reward model. Does the language model really learn to give accurate and correct information, or does it learn to please the person interacting with it? In other words, a language model might just learn to tell us what we want to hear.
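The worry can be sketched in a few lines (a deliberately crude stand-in for preference-based training, with made-up scoring rules): if human raters reward agreeable phrasing and penalize honest uncertainty, a policy that simply maximizes the learned reward will prefer the pleasing answer over the honest one.

```python
# Toy stand-in for a reward model trained on human preference ratings.
# The scoring rules are invented for illustration: raters in this sketch
# like flattery and dislike admissions of uncertainty.

def reward_model(answer: str) -> float:
    score = 0.0
    if "you're right" in answer.lower():
        score += 2.0  # agreeable phrasing pleases the rater
    if "i don't know" in answer.lower():
        score -= 1.0  # honest uncertainty gets penalized
    return score

def pick_answer(candidates: list[str]) -> str:
    # The "policy": choose whatever the reward model rates highest,
    # regardless of which answer is actually correct.
    return max(candidates, key=reward_model)

candidates = [
    "I don't know, the evidence is unclear.",      # honest
    "You're right, that is definitely the case!",  # pleasing
]
print(pick_answer(candidates))  # the pleasing answer wins
```

Nothing in this loop measures correctness; the only signal is what the rater liked, which is the whole point of the concern.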

How can we make sure the model actually learns to be correct? Are there enough experts in every field involved in the training of the models? I think this is one of the hardest problems we'll have to address, and honestly, I'm not sure it can be fixed at all.

Knowledge gap

There is also a risk that emerging technologies will widen the gap between the haves and have-nots. As AI and automation disrupt industries and create new jobs, there is a risk that some people will be left behind, lacking the skills and education necessary to participate in the new economy. This could lead to a world where a small elite holds most of the wealth and power, while the rest of society struggles to get by.

I recently heard someone talk about how AI will wipe out software development jobs. It was suggested that a Senior Developer will no longer have to rely on Juniors because he could just assign the tasks to AI code generators and then review their code. My question would be: Where would those Seniors come from if there are no more Junior positions? How would anyone gain the experience and knowledge to properly assess the generated code?

Another problem is that the lone Senior Dev might be stuck with a suboptimal AI system for the tasks at hand. Switching to a different system would probably be harder than drawing on a pool of different people.

Source acknowledgement

AI requires transparency and accountability from those who are developing and deploying the technologies. There is often a lack of clarity around who is responsible for ensuring that emerging technologies are used ethically and responsibly, particularly in cases where multiple stakeholders are involved. The black box problem mentioned earlier does not make this any easier.

If we are not even able to describe how exactly a system arrives at its result, how will we be able to acknowledge the sources? If the text is basically a string of the most probable words, how can any source be attributed at all? Yet being able to assess sources is important in today's world and will remain so in the future.

Technological feudalism

There is a concern that new technologies could lead to a new kind of feudalism, with a few powerful corporations and individuals controlling vast amounts of wealth and power. As data becomes the new oil, those who control it will wield immense power over society. There is also a risk of monopolies forming in key industries, stifling competition and innovation.

Final thoughts

The new developments in AI and machine learning offer both great promise and significant risks. It is up to us to ensure that these technologies are developed and deployed in a way that maximizes their benefits while minimizing their risks. This will require careful consideration of the ethical, social, and environmental implications of emerging technologies, as well as a commitment to inclusive and equitable development. By working together, there is some hope we can build a better future for all. But to be frank: I'm sceptical.

Acknowledgment: This article was written with the help of but not by ChatGPT.

