As with all things, there are two schools of thought on this question. One group sees Artificial Intelligence (AI) as the cause of a dystopian future in which computers eventually take over and use us as slaves to power them, as in The Matrix. The other side says that AI should not be regulated at all, so that research can proceed and problems can be solved without restriction, and our future will be filled with unicorns and rainbows. As usual, the issue is not black or white, and there needs to be a balance between these two schools of thought. Georgetown University Professor Mark MacCarthy says it best: “The path forward is not deregulation or prohibitions, but smart, proactive regulation that establishes a framework for both public protection and innovation growth.”
The main issue dividing the two sides is the fuel for AI: data. AI requires data to do its job, and the more data it has, and the better that data's quality, the better AI performs the various tasks asked of it. Companies continue to make use of their customers' data, with or without their knowledge. There are two issues here: first, the successful use of AI is predicated on public trust regarding the storage and use of that data; second, the public wants to know how their data is being used and how that use will benefit them. Regulation should be focused on addressing both of these issues.
Michael Hayes, Senior Manager of Government Affairs at the Consumer Technology Association (CTA), a tech industry trade group, acknowledges an additional complicating factor: “What might be acceptable use for data in the United States might not be acceptable to those in Europe or Asia.”
In an interview, Dr. Alex Wissner-Gross, who holds a number of appointments at MIT and Harvard, raised a serious issue for regulation that even the industry is wrestling with: “there is no commonly accepted definition of AI. In fact, we can’t even define intelligence itself.” Adding these questions to the regulatory environment complicates things greatly.
Unfortunately, the direction in which the US government is headed starts from the wrong standpoint. The White House recently released draft guidance for the regulation of AI applications instructing agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth.” Where this falters is in the framing of the argument: it supposes that regulation is a cost, hindrance, delay, or even a barrier, to be reluctantly accepted as a last resort and only if absolutely necessary. MacCarthy notes that the idea that “measures such as transparency, accountability, and fairness might promote AI growth and innovation is foreign to this framework. But in today’s world, the real task for AI regulators is to create a rules structure that both protects the public and promotes industry innovation—not to trade off one against the other.”
An example of this is the use of AI to prevent fraudulent transactions. The public and the retailer are protected from loss, as is the bank, which is able to identify and block the transaction before it completes. Had banks not been required to do this, such innovation in outlier detection might never have happened, and the cost would have fallen on one of the three parties involved, none of which was at fault.
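To make the idea concrete, here is a minimal sketch, not any bank's actual system, of how outlier detection might flag suspicious transactions. It uses scikit-learn's IsolationForest on synthetic data; the features, amounts, and contamination rate are illustrative assumptions only.

```python
# A toy sketch of transaction outlier detection. Assumes numpy and
# scikit-learn are installed; all data here is synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Synthetic "normal" transactions: amount (USD) and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # typical small amounts
    rng.normal(loc=14, scale=4, size=1000),         # mostly daytime activity
])

# A few anomalous transactions: large amounts at unusual hours.
suspicious = np.array([[5000.0, 3.0], [8200.0, 2.5], [6100.0, 4.0]])

X = np.vstack([normal, suspicious])

# IsolationForest isolates outliers via short random-partition paths.
model = IsolationForest(contamination=0.005, random_state=0)
model.fit(X)

# predict() returns -1 for outliers, 1 for inliers.
print(model.predict(suspicious))  # expected: mostly -1, i.e. flagged for review
```

A real system would use far richer features (merchant, geography, velocity of spending) and route flagged transactions to human review rather than blocking them outright.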
A unique solution to regulation has been proposed by Gillian Hadfield, director of the Schwartz Reisman Institute for Technology and Society at the University of Toronto. With self-regulation or purely private regulation, private regulators would try to outbid each other to be as lenient as possible, so Hadfield recommends private regulators that are licensed by the government: “A private regulator would require a license to compete, and could only get and maintain that license if it continues to demonstrate it is achieving the required goals. The wisdom of the approach rests on this hard government oversight; private regulators have to fear losing their licenses, if they cheat, get hijacked by the tech companies they regulate, or simply do a bad job.” This way the government sets the goals, such as “How many accidents are acceptable with self-driving cars?”, and the private regulators invent ways to streamline the achievement of those goals, for example by “building apps that detect when another app is violating its own privacy policies.”
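That last idea can be sketched in a few lines. The following is a hypothetical illustration, not a real tool: it compares an app's declared privacy policy (here invented as a simple mapping of data fields to permitted destinations) against observed outbound data flows and reports any mismatch.

```python
# A toy sketch of a "policing app": check observed data flows against an
# app's own declared privacy policy. The policy format, field names, and
# destinations are hypothetical, invented purely for illustration.
from dataclasses import dataclass

@dataclass
class DataFlow:
    field: str        # e.g. "location"
    destination: str  # e.g. "ads.example.com"

# The app's declared policy: which fields may be sent, and to whom.
declared_policy = {
    "email": {"api.example.com"},
    "location": {"maps.example.com"},
}

# Observed outbound traffic (in practice, captured by instrumentation).
observed = [
    DataFlow("email", "api.example.com"),
    DataFlow("location", "ads.example.com"),  # not permitted by the policy
]

def violations(policy, flows):
    """Return flows that send a field somewhere the policy does not allow."""
    return [f for f in flows
            if f.destination not in policy.get(f.field, set())]

for v in violations(declared_policy, observed):
    print(f"Policy violation: '{v.field}' sent to {v.destination}")
```

The hard engineering problems, capturing real traffic and parsing real policy documents, are exactly where a licensed private regulator could innovate.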
The future of AI depends on striking a balance between the promise of more intelligent machines helping us in our daily lives and the concerns about potential disruption. If the incentive is for innovation that benefits society as a whole, then regulation can be valuable to AI’s continued success.
Disclaimer: The author of this text, Robin Trehan, has an undergraduate degree in Economics, a Master’s in international business and finance, and an MBA in electronic business. Trehan is Senior VP at Deltec International, www.deltecbank.com. The views, thoughts, and opinions expressed in this text are solely those of the author and do not necessarily reflect the views of Deltec International Group, its subsidiaries, and/or employees.