Is AI Safe?

This article examines the good and bad of AI, highlighting its advancements alongside its potential dangers. Explore the paradox of AI's potential and risks for humanity's future.

The Paradox of Progress: Is AI Safe for Humanity’s Future?

With the advent of AI, the impossible started to seem possible. I may not be someone who has sat in the engine room building AI, but here are my two cents on this buzzword shaping our future. Why on earth should you care about my opinion on AI? Maybe you shouldn't. However, if you ever check out MagicStudio.com (by the way, you definitely should), which I have been building with a stellar team, know that AI has been the backbone of our services. More than a million people use them, and I can assure you our behavior analytics show that a large share of those customers keep coming back.

I am not going to tell you that AI is amazing for anything and everything (it is not). As someone who has incorporated AI into tailoring products and services for a wide range of people and professionals in the photo-editing space, I am here to throw some light on both sides of the coin.

What do the experts say about AI safety?

I could tell you about a hundred different opinions from experts in a hundred different domains, but the overlapping concern of all these opinions centers on only one thing: safety. Integrating AI into human livelihood was meant to enhance human well-being, not to pose an existential risk to the human species itself. Existential risk? For the entire human species? Quite possibly. But hey, do not get carried away with gloom about what AI could doom. In his speech at the recent world summit on Artificial Intelligence in the UK, Rajeev Chandrasekhar, India's Electronics and Information Technology Minister, used the term 'techno-optimism' for AI and the larger emerging-tech space. According to him, India looks at AI through a 'prism of openness, safety, trust and accountability'.

Review of high-profile opinions of AI’s risks

Geoffrey Hinton, known as the 'Godfather of AI' for his pioneering work on machine learning and neural network algorithms, has solemnly acknowledged the dangers AI poses. With some of the biggest names in the tech industry, such as Elon Musk and Sam Altman, calling for boundaries and regulatory practices to come into effect, it becomes quite evident that the potential risks of AI are hard to overstate.

On 1 November 2023, the very first global summit on AI safety took place. The forum succeeded in bringing together leaders, researchers, and relevant authorities from across the world to discuss AI and the risks associated with it.

While almost all government leaders focused on bringing in strict regulations on the use of AI in their respective countries, Chandrasekhar made clear that for India, the boom in the digital economy through AI is of paramount importance. He emphasized that India weighs 'startup innovation' and 'safety of AI' equally and hence cannot make a 'binary choice' of one over the other.


AI’s Pros and Cons

It was 1955 when John McCarthy coined the term 'Artificial Intelligence'. Let me take you to the core of what AI is. A machine develops insights from the data and algorithms fed into it, then performs tasks by analyzing and contextualizing that data. The machine's ability to perform is an acquired one, designed and built by humans themselves. AI is this emulation of human thought and human tasks, capable of outcomes beyond human capability. Through automation, AI is indeed a savior for carrying out regular, repetitive, mundane tasks faster and more efficiently.
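To make that loop concrete — data in, pattern learned, task performed — here is a deliberately toy sketch in plain Python. The data and the "learning" rule are entirely hypothetical, chosen only to show that there is no magic: the machine's ability comes from patterns humans feed it.

```python
# Toy "learning": find a weight threshold that separates cats from dogs
# in a tiny labeled dataset, then use it to classify new animals.
training = [(1.2, "cat"), (1.5, "cat"), (8.0, "dog"), (9.1, "dog")]  # (kg, label)

cats = [w for w, label in training if label == "cat"]
dogs = [w for w, label in training if label == "dog"]

# The "insight" extracted from the data: the midpoint between group averages.
threshold = (sum(cats) / len(cats) + sum(dogs) / len(dogs)) / 2

def predict(weight):
    """Perform the task using the learned threshold."""
    return "cat" if weight < threshold else "dog"

print(predict(1.0))   # a light animal is classified as a cat
print(predict(10.0))  # a heavy animal is classified as a dog
```

Change the training data and the learned threshold changes with it — which is also, in miniature, why biased data produces biased AI, a point we will return to later.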

Applying AI reduces, if not completely eliminates, the chances of human error. You and I need vacations. AI doesn't. It doesn't even sleep, which means it can work 24/7. Unlike a human mind that enters a never-ending loop of overthinking, AI can help make quicker decisions (with no emotional quotient involved), thereby increasing efficiency in almost any task.

However, applying AI to real-world problems can yield counterproductive results. Faster and more efficient delivery of services has already contributed to massive layoffs and unemployment. Algorithmic biases built into AI products have reproduced existing socio-economic biases. Violation of personal-data privacy continues to grow as one of the biggest threats. And the possibility of AI becoming ever more self-aware, with eventual developments in artificial general intelligence and artificial superintelligence, could very well lead to a global AI arms race among nations.

AI and Privacy on Social Media

Social media is undoubtedly where AI's implications play out most visibly (albeit negatively). The recent deepfake video of actress Rashmika Mandanna, which garnered millions of views on Instagram and X, is a perfect example. A simple video of a British-Indian woman walking out of an elevator was morphed and fabricated using generative AI, with the actress's face swapped in. Thankfully, this caused a massive stir among citizens and people of influence. With similar instances on the rise lately, legal regulations to moderate and control the extent of AI's usage, and to prevent identity theft and scarier repercussions, are the need of the hour.

Debating AI

Let me take you through some more nuanced real-world solutions built using AI. Ama Krush AI, launched in Odisha, happens to be one of India's first AI-enabled platforms in the agricultural sector. Through a chatbot, users can send queries and receive up-to-date agricultural insights, in both text-to-speech and speech-to-text formats. Because this AI-leveraged technology is used primarily by farmers, it reduces the information asymmetry that middlemen exploit.

Similarly, Zomato AI, launched in September, helps customers explore and decide what to order. But the question remains: where do we draw the boundaries in entitling AI to make decisions for humans? The ultimate risk of AI is whether it could kill humans. Remember when I spoke about the existential crisis earlier in this article? Opinions on AI safety continue to vary among researchers, tech founders, and governments. If sentience ever becomes a key AI feature, the outcomes could spiral beyond control.


The Known Risks of AI

I think of a cross-body bag, and the next thing I see on my Facebook feed is ads for various cross-body bags. Magic? No. The sheer play of algorithms. I cannot recommend enough that you watch "The Social Dilemma."

With our personal data being consistently harvested by social media sites, privacy has become a myth, and identity theft follows as a no-brainer. In 2023 alone, job losses attributed primarily to AI replacing workers reportedly surged 315% compared to the previous year.

While tech tops the job-displacement charts, other fields have laid off employees too. The content industry rushed to ChatGPT to replace writers: AI-enabled applications can write faster from brief prompts and deliver well-crafted articles in seconds.


The Potential Pitfalls of AI

In the writing sector, for example, when ChatGPT entered the market, content writers, especially freelancers, lost gigs and a regular inflow of work. ChatGPT sounded polished and acted as a major cost- and time-effective alternative for clients. Human biases have made their way into AI as well. These biases may be algorithmic or cognitive, but they can amplify the social, cultural, political, and economic biases already present in a country.

In independent research at Carnegie Mellon University in Pittsburgh, one of Google's online advertising systems was found to show ads for higher-paying positions to men more often than to women. This is not just about portraying income disparity. It shows how gender bias gets reinforced when machine algorithms learn from biased sets of data.


The Hidden Dangers of AI

The benefits of AI are promising, but so are its attendant threats. We stand in the era of fake news, and AI is now the backbone of misinformation in the digital space. (Hint: think of the deepfake video.) We are already living in a world of distorted reality, and trust me, this is only going to get worse if not strictly regulated. AI has exposed the human world to an endless loop of cybersecurity vulnerabilities. Over-dependence on AI systems is likely to erode human connection in the days to come. And if tomorrow AI is endowed with EQ, we might just be warming up to welcome our own destruction.

Insights of AI Risks from Research

According to Dr. Christian Witt, a postdoctoral research associate in AI at the University of Oxford, it is crucial that AI-safety and cybersecurity researchers come together and 'develop a holistic blueprint for AI safety'. Amid all the debate around existential safety in AI, Dr. Carissa Veliz of the Institute for Ethics in AI at Oxford believes that 'the current threat on people and democracy' is often overlooked when the emphasis falls on 'existential threat'.

Prof Martyn of Gresham College had a refreshing take on AI's existential threat. He asked, "If a large number of super-intelligent aliens had access to the internet, might they be able to devise a way to eliminate humans?" and "If some leader of an end-of-the-world cult were able to control and assist such super-intelligent aliens, would that increase the threat?" I am thinking what you are thinking. AI might overrule us. Oops.


Building AI Responsibly and Keeping AI in Check

I have deliberately kept returning to the topic of regulating AI so that we can enjoy the best of it. Now let's rewind to the AI summit I mentioned earlier. Organized in the UK, the first of its kind, it produced the Bletchley Declaration, which acknowledged the risks of AI, particularly frontier AI. Leaders from around the world signed the declaration in an attempt to bring in regulatory practices to combat the ill effects of deploying AI.

Apart from the summit, building and consuming AI responsibly has to be a conscious effort for individuals and communities alike. Biases need to be identified and addressed. AI-powered products should comply with the legal and ethical policies of a nation. But most importantly, a strict boundary on how much power we hand over to AI has to be reinforced.

Balancing AI Development

How do we enjoy the drink but prevent the hangover? Moderation and regulation. Yes. Restrictive governance to deal with malicious actors who leverage AI to threaten cybersecurity. Yes. AI should be promoted as a supplementary tool that makes everyday hassles easier, not as a replacement for human labor or human connection. Siri is cool but can never replace your sister. Right? Sister and Siri together sound even cooler! AI is algorithm-intensive, and not everyone can decode machine language. Hence, a system of transparency has to be brought to the surface so that ordinary people can understand AI, partake in discussions around it, and increase citizen engagement.


Policy and AI Safety

No amount of AI awareness is enough without concentrating on policy formulation and implementation. Rigorous policies need to be developed by both governments and tech founders to ensure ethical compliance and human safety. It was only after the AI-generated fake video of actress Mandanna went viral that the Indian government became vocal about 'detection, prevention, strengthening of reporting mechanisms, and raising user awareness'.

We are yet to see the challenges and gaps in implementing the Digital India Act, 2023, which means the whole safety picture regarding AI usage and AI-specific regulation still looks pretty blurred. However, India expressed solidarity at the global AI summit on establishing regulatory bodies at the domestic and international levels. At the global level, the Bletchley Declaration was signed by 28 signatories, including the US, India, the European Union, and even (to everyone's surprise) China!


Conclusion


If I am asked to answer bluntly whether AI is safe, I'd say yes. But. There's always a but, right? But only when AI is designed to enhance human well-being and lifestyle without becoming an added threat to human existence. Leveraging AI in tech has given birth to multiple innovations. I'd shamelessly vouch for MagicStudio.com, where you can easily edit photos without unnecessary fuss.

Among all sectors, health has the potential to benefit the most from integrating AI. Predictive AI has already helped diagnose diseases like cancer at earlier stages, allowing treatment to start sooner. Apparently, it is not AI itself that is harmful; it is the hands that design it. Man remains man's greatest enemy. In order to win the power-dynamic struggle, may we, as a civilization, not give away our power to AI and bring about our own doom. With robust regulatory practices and compliance with ethical standards, AI's truest potential can be safely tapped.

