
What does it mean to regulate AI?

BY Nilanjana Basu


Over the last few months, Artificial Intelligence (AI) has developed and gained recognition at a rapid pace. Industries all over the world are trying to make the best use of the technology, and that has prompted authorities to examine the dangers that come with it.

Along with data protection issues, the fast-paced growth of AI has compelled authorities around the world to take action to regulate it. China last week released draft regulations requiring its tech companies to register generative AI products with China’s cyberspace agency and submit them for a security assessment before releasing them to the public. The European Union has come up with an AI Act that will govern anyone who provides a product that uses AI, and EU lawmakers have also called on world leaders to control the development of AI systems like ChatGPT. France and Italy have voiced concerns about privacy violations related to OpenAI’s ChatGPT. The United States, although it has not officially set out a regulation, is seeking public comments on potential accountability measures for AI systems.


India, however, has said that it is not looking at any sort of regulation for AI for now. Regulating a technology as powerful as AI can be tricky. Experts in the field discuss what it means to regulate AI, the complex issues surrounding it, and whether it is a matter that needs urgent attention.


Regulation of AI can be challenging

Niraj Ruparel, Head of Mobile & Emerging Tech at GroupM India and Emerging Tech Lead at WPP India, feels that before regulating AI, the authorities need to consider its potential benefits and risks, along with the perspectives and concerns of various stakeholders. “The regulation of AI is a complex and multifaceted issue that involves many different stakeholders, including governments, businesses, researchers and the general public. On the one hand, AI has the potential to bring significant benefits in various fields, from healthcare and transportation to manufacturing and finance. On the other hand, there are concerns about the potential negative consequences of AI, such as bias, job displacement and privacy violations.”

“One key challenge in regulating AI is defining what exactly AI is and what types of AI applications should be subject to regulation. Some experts argue that AI systems that pose significant risks to human safety or autonomy, such as self-driving cars and facial recognition technology, should be subject to stricter regulation than other AI systems.

Another challenge is ensuring that AI regulation keeps pace with the rapidly evolving technology. As AI continues to develop and become more sophisticated, new risks and ethical concerns are likely to emerge, requiring ongoing evaluation and adaptation of regulations,” shares Ruparel.

Talking about how authorities can regulate AI while still enabling innovation, Ashray Malhotra, CEO of Rephrase.ai, says, “No one knows exactly what they’re setting out to build. They get down to it and, along the way, figure out everything that a particular model can do. So there’s no way to push the brakes on a particular niche development. What can be done, perhaps, is a government body that works with the same or comparable agility as AI companies, which can help regulate what’s good and bad. The key, again, would be to educate the members of this body not just about the intricacies of public policy but also about the immense positive impact that AI can have on humankind. That’s when such a body can bring balance and cater to both regulation and innovation.”

Need for mandates?

Speaking on the safety measures recently published by OpenAI and whether companies need to give disclosures, Ashray Malhotra agrees that there should be a government-mandated structure specifying the categories of disclosures that companies need to make.

He says, “This includes providing warnings and safety notes to the public about the platform beforehand, particularly in cases where the technology could have significant impacts on individuals or society as a whole. By being transparent and accountable, companies can help to build trust in their technology and ensure that it is used in a responsible and ethical manner. At the same time, user awareness that content can be manipulated is very important because it empowers individuals to make informed decisions about how they interact with the technology. So they can protect themselves, and make choices that align with their values. This, in turn, builds trust in the technology and helps prevent unintended consequences that may arise from a lack of understanding or awareness.”

Rajat Ojha, CEO of Gamitronics, also believes AI must be regulated, but in a way that doesn’t slow down research and instead encourages more responsible use.

“We have already started seeing the signs of how AI is changing everything around us and also how Italy and other countries in Europe are banning particular AI tools. The problem is that advancement in AI is happening so rapidly that people are not able to quickly assess the opportunities and risks, so outright bans are happening.”

“AI must be regulated by listing the high-risk areas, setting expectations for AI in those high-risk areas and wrapping it all up in a governance structure. Similarly, for any new AI system to be deployed, the journey from assessment to governance structure must be charted. AI safety is as important as (or more important than) AI research and the amazing use cases it offers. I’m personally excited about the future of AI, but I also understand that regulations should be enforced, and in such a way that AI research doesn’t slow down or create panic but simply helps tech companies be more responsible,” he says.
