Should We Regulate “Dangerous” AI Now or Wait?

by Bernard Marr, Independent Advisor

June 25, 2018

Artificial intelligence has greatly improved business processes, and while many experts see its benefits, some are concerned about the risks it could create. Around the world, business experts are beginning to discuss what regulating artificial intelligence might involve.

Now that artificial intelligence (AI) is no longer just a what-if scenario that gets tech gurus frenzied with the possibilities, but is in use and impacting our everyday lives, there is renewed discussion about how this exciting (yet powerful and potentially problematic) technology should be regulated. On one side of the issue are those who feel it's premature to begin the discussion; at the other end of the spectrum are those who feel the discussion is dreadfully behind.

No matter where you stand on the issue, we can all agree that AI is here to stay. So regardless of how challenging the regulatory rabbit hole may be, it is now time to start seriously considering what needs to be in place at the national and international levels to regulate AI.

Why Do We Need AI Regulation?

Proponents of AI regulation such as Stephen Hawking fear that AI could destroy humanity if we aren't proactive in avoiding the risks of unfettered AI, such as "powerful autonomous weapons, or new ways for the few to oppress the many." He sees regulation as the key to allowing AI and humankind to coexist in a peaceful and productive future. Bill Gates is also "concerned about super intelligence" and doesn't "understand why some people are not concerned." The trifecta is complete with Elon Musk, who says we should regulate artificial intelligence "before it's too late." In 2015, 20,000 people, including robotics and AI researchers, intellectuals, activists, Hawking, and Musk, signed an open letter, presented at the International Joint Conference on Artificial Intelligence, that called for the United Nations to ban further development of weaponized AI that could operate "beyond meaningful human control."

Our society is already affected by the explosion of AI algorithms deployed by financial institutions, employers, government agencies, police departments, and more. These algorithms can (and do) make decisions that create significant and serious problems in people's lives. A school teacher who had previously received rave performance reviews was fired after her district implemented an algorithm to assess teacher performance; the district couldn't explain the result except to suggest that others were "gaming" the system. An anti-terrorism facial recognition program cost an innocent man his driver's license when it mistakenly identified him as another driver.

AI is maturing quickly, while government and regulatory decisions move very slowly. Musk and others believe the time to start debating how and what AI regulation will look like is now, so that we aren't too far behind when regulation is actually passed. If nothing else, regulatory bodies and oversight agencies should be formed even if regulation isn't yet instituted, so that they can become properly informed and be prepared to make decisions when necessary.

Why Is It Premature to Regulate AI?

Many people feel it's premature to talk about AI regulation because nothing specific requires regulation yet. Despite tremendous innovations from the AI world, the field is very much in its infancy. Regulation could stifle innovation in an exploding industry, and Trouva cofounder Alex Loizou believes we need to understand AI's full potential before it's regulated.

One study from Stanford University found “that attempts to regulate AI in general would be misguided, since there is no clear definition of AI (it isn’t any one thing), and the risks and considerations are very different in different domains.”

What Could AI Regulation Look Like?

In 2018, Britain and other European Union member states are taking their first foray into AI legislation by allowing automated decisions to be challenged. The General Data Protection Regulation (GDPR) represents an initial step toward creating laws around how AI decisions can be challenged, and toward preventing the perils of profiling and discrimination. GDPR gives people the right to find out what logic was involved in decisions made about them, known as the "right to explanation." There is also a realization that if any standard or guideline is to have any power, a governing body will need to be assigned oversight to ensure the guidelines are followed.
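The "right to explanation" is easiest to picture with a concrete sketch. The following toy example is purely illustrative and not drawn from any real institution's system: every feature name, weight, and threshold is invented. It shows one simple way a transparent scoring model could report back to an affected person which factors drove its decision.

```python
# Hypothetical linear scoring model with per-feature explanations.
# All feature names, weights, and the threshold are invented for illustration.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
THRESHOLD = 0.5

def score(applicant):
    """Weighted sum of the applicant's features."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

def explain(applicant):
    """Return the decision plus each feature's contribution, largest impact first."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    decision = "approved" if score(applicant) >= THRESHOLD else "denied"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return decision, ranked

decision, reasons = explain({"income": 3.0, "debt_ratio": 1.0, "years_employed": 3.0})
print(decision)  # "approved" (score = 1.2 - 0.6 + 0.6 = 1.2)
for feature, contribution in reasons:
    print(f"{feature}: {contribution:+.2f}")
```

Real systems based on deep learning are far harder to explain than this linear toy, which is precisely why the "right to explanation" is contentious; but the principle is the same: the logic behind a decision must be surfaced in terms the affected person can challenge.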

Currently, those who are debating what AI regulations might look like have considered some of the following:

  • AI should not be weaponized.
  • There should be an impenetrable “off-switch” that can be deployed by humans.
  • AI should be governed under the same rules as humans.
  • Manufacturers should agree to abide by general ethical guidelines mandated by international regulation.
  • There should be understanding of how AI logic and decisions are made.
  • It should be clear who is liable when AI goes wrong.

These are complex questions with no easy answers.

Bernard Marr

Bernard Marr is an internationally best-selling business author, keynote speaker, and strategic advisor to companies and governments. He is one of the world's most highly respected voices on data in business and has been recognized by LinkedIn as one of the world's top 5 business influencers. You can join Bernard's network on LinkedIn, explore his website, or follow him on Twitter @bernardmarr.


More from SAPinsider

