This content originally appeared on DEV Community and was authored by LemonBoy
A few years ago, AI was an abstract concept. But that has changed since the boom of generative AI apps such as ChatGPT, Bard, Bing Chat, and more. Now, we can interact with AI chatbots and see their true powers in action.
AI is evolving rapidly, and development in Large Language Models (LLMs) shows no signs of slowing. Like every emerging technology, this growth brings both pros and cons. For businesses, it offers the following advantages:
- Reducing the time taken to perform repetitive tasks.
- Communicating with customers through chatbots.
- Mass-market potential since it is deployable in many industries.
- Making faster and smarter decisions.
Despite its potential to make groundbreaking changes in our lives, AI can also be detrimental. It was the key talking point at Google I/O 2023, where advancements in language models continue to push the AI agenda. Let us delve into scenarios of harmful use.
Scenarios of Bad AI Use
Social Surveillance
The Chinese government uses AI-powered facial recognition technology to track citizens' movements. The data reveal patterns such as:
- Places you visit
- Political views
- Personal relationships
Collecting such data further breaches citizens' privacy.
In the US, AI predicts crime hotspots based on arrest rates, opening Pandora's box of bias. The prediction algorithms skew towards areas with minority communities, and recent reports claim that facial recognition technology misidentifies Black faces at disproportionately high rates. What does that mean? Racial profiling.
Deep Fakes
Deep fakes are AI-generated media that use deep learning to fabricate images and videos of events that never happened. The technology maps faces onto other people's bodies, syncs lip movements, and matches voices. In February 2023, a deep fake video of US President Joe Biden responding to journalists appeared online.
Later, photos of Donald Trump's imagined arrest also surfaced. Such content spreads misinformation to people who may never discover whether these videos or images are real. Many more deep fake videos may emerge in the lead-up to the 2024 US general elections.
Drawbacks of AI
Wait, there are more drawbacks to AI.
Job Losses
Machines equipped with AI perform some tasks faster and more efficiently than humans. AI will create new jobs, but it will also take some away.
Unfair Bias
Humans (who develop AI) are naturally biased. The algorithms learn from data chosen by humans and can therefore return biased results.
Voice Phishing
Through machine learning, AI can clone a person's voice. In 2023, an array of AI-generated "collaborations" that never actually happened have been released. In the wrong hands, this capability enables voice phishing.
Impersonation
With voice cloning, impersonation becomes a haven for criminals, who can use it to ask for favours, authorize transactions, and more. Recently, there have been reports of AI voice call scams.
Misinformation and Disinformation
In April 2023, a realistic photo of a moon landing surfaced. As convincing as it looked, it was an AI-generated image. AI-generated images and videos can spread misinformation at scale.
Environmental Impact
According to reports, training and running Large Language Models is resource-intensive and produces high emissions. Experts estimate that a medium-sized data centre uses about 360,000 gallons (roughly 1.36 million litres) of water daily for cooling.
What AI Labs Are Doing to Ensure Responsible AI
- Developing systems that are hard to fool with AI-generated voices.
- Raising user awareness, encouraging scepticism towards audio or video whose source cannot be verified.
- The National Cyber Security Alliance has published a guide on spotting AI-generated voice scams and understanding their risks.
- Creating responsible AI teams to ensure that AI products and services are developed and used responsibly.
- AI ethics guidelines to ensure the ethical use of AI products and services.
Researchers are also working on systems that can detect AI-generated audio; a University of Washington team has developed one with 94% accuracy.
Google's Principles that Guide AI Development and Adoption
On June 7, 2018, Google laid out seven principles to guide the development and assessment of AI applications.
Be Socially Beneficial
AI development should consider users and benefit their needs while minimizing risks. High-quality, accurate information should be made available through AI while respecting the cultural, social, and legal norms of the countries where Google operates.
Avoid Creating or Reinforcing Unfair Bias
Google seeks to avoid unjust impacts on people, particularly biases related to race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
Be Built and Tested For Safety
AI should be developed to avoid unintended results that create risks of harm. Testing systems before release and monitoring their operation after deployment are vital.
Be Accountable to People
Google will design AI systems that provide appropriate opportunities for user feedback. Its AI technologies will be subject to appropriate human direction and control.
Incorporate Privacy Design Principles
User privacy is at the forefront: Google will give notice, obtain user consent, and build in privacy safeguards, providing appropriate transparency and control over the use of data.
Uphold High Standards of Scientific Excellence
AI tools should be developed with high standards of scientific excellence in mind. AI knowledge will be shared through publishing educational materials, best practices, and research.
Be Made Available For Uses That Accord With These Principles
Google will restrict harmful or abusive applications. Before deployment, applications are evaluated based on:
- Primary purpose and use
- Nature and uniqueness
- Scale and nature of Google's involvement
The Steps Google is Taking to Ensure Responsible AI
At Google I/O 2023, James Manyika spoke about Google's bold and responsible approach to AI.
Misinformation has prompted the development of tools to help people evaluate information:
- The "About this image" feature will show where and when similar images have appeared in Google image search, e.g., on social media or in news articles.
- Google has tools to help people verify the authenticity of audio and video, e.g., the "Heart Voice Assistant" can help people verify the authenticity of audio recordings.
- Image metadata: creators can add metadata to images to mark them as AI-generated, and Google Images will surface this to users.
- Watermarking images to show AI-generated images.
- Guard rails to help prevent misuse of the universal translator, which could otherwise be used to create deep fakes.
- Google provides authorized access to partners who wish to use the universal translator.
- Automated adversarial testing: Large Language Models use the Perspective API to detect toxicity in their outputs.
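The Perspective API mentioned above is a publicly documented Google service. As a rough sketch of how a model's outputs could be screened for toxicity, a minimal client might look like the following. This is an assumption-laden illustration based on the public API docs, not Google's internal tooling; you would need your own Google Cloud API key, and the exact response fields may evolve.

```python
import json
import urllib.request

# Public Perspective API endpoint (per its documentation).
API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_toxicity_request(text: str) -> bytes:
    """Build the JSON body asking Perspective to score TOXICITY."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    return json.dumps(payload).encode("utf-8")

def toxicity_score(api_key: str, text: str) -> float:
    """POST the request and return the summary toxicity score (0.0 to 1.0).

    api_key is your own Google Cloud API key (hypothetical here).
    """
    req = urllib.request.Request(
        f"{API_URL}?key={api_key}",
        data=build_toxicity_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```

A lab could then gate a model's output on a threshold, e.g. refuse to emit text when `toxicity_score(key, text) > 0.8`. The threshold and the gating policy are design choices, not part of the API.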
Conclusion
While AI development will continue, we should remember that every coin has two sides. AI is now part of our daily lives, so we must embrace it and adapt so we don't get replaced.
Google and other AI labs are working on principled solutions to safeguard users and curb misinformation, which preys on the gullible. Through criminal misuse, anyone could lose their money, identity, and more, so we should all stay vigilant in the AI sea.
Thank you for reading 🥳. You can support me by buying me a cup of coffee here ☕. For business, reach me through my email here 📧.
See you at the next one
Peace ☮️✌️
LemonBoy | Sciencx (2023-05-20T20:50:05+00:00) What Google Is Doing About AI, Deep Fakes, and Impersonation. Retrieved from https://www.scien.cx/2023/05/20/what-google-is-doing-about-ai-deep-fakes-and-impersonation/