Kalev Leetaru - Contributor
As the public becomes increasingly aware of the dangers of algorithmic bias in AI and concerned over surveillance and militaristic applications of deep learning, calls for AI regulation have grown. Whether through new laws governing AI fairness or policies constraining the use of autonomous weapons systems, the challenge confronting policymakers is that AI is very much like encryption: it is not a single controlled algorithm that can be regulated; it is a portfolio of techniques that no single country controls and that researchers all across the world advance every day.
The almost unimaginably rapid spread of deep learning over the past half-decade into every corner of modern life has raised profound questions about how to ensure the accurate, fair and beneficial use of this rapidly evolving technology.
When it comes to biased algorithms, the fundamental fairness of current AI systems has largely been left to market forces. In turn, basic economics has ensured that free but heavily biased data wins out over costly but minimally biased data. How precisely could legislators mandate “fair” AI systems in a way that could be quantitatively tested for compliance? Mandating that systems cannot exhibit differing accuracy across demographics is one possibility, but it leaves the door open to myriad other ways in which algorithms can discriminate.
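To make the compliance question concrete, a test of the "equal accuracy across demographics" mandate could in principle look like the sketch below. The group labels, sample data and the 5-percentage-point tolerance are all hypothetical illustrations, not any established legal standard.

```python
# Hypothetical compliance check: does a model's accuracy differ across
# demographic groups by more than a chosen tolerance?
# Groups, data and TOLERANCE below are illustrative assumptions.

def accuracy_disparity(y_true, y_pred, groups):
    """Return (gap between best and worst per-group accuracy, per-group accuracies)."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        correct = sum(1 for i in idx if y_true[i] == y_pred[i])
        per_group[g] = correct / len(idx)
    return max(per_group.values()) - min(per_group.values()), per_group

# Toy labels and predictions for two demographic groups "a" and "b".
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, per_group = accuracy_disparity(y_true, y_pred, groups)
TOLERANCE = 0.05  # hypothetical regulatory threshold
print(per_group, "compliant" if gap <= TOLERANCE else "non-compliant")
```

Here group "a" scores 75% accuracy and group "b" only 50%, so the 25-point gap would fail the hypothetical threshold; the point of the sketch is that even this crude test requires a regulator to pick a metric, a grouping, and a tolerance, each of which is contestable.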
As AI systems encroach on regulated industries like the financial and housing sectors, existing anti-discrimination laws will likely begin to influence AI design. A mortgage lending algorithm that systematically denies applications based on protected characteristics like race will run afoul of the law without any further legislative action. As companies adapt their algorithms to these regulated fields, the compliance practices and development workflows they build are likely to find their way into the rest of the AI landscape.
Regulating the surveillance uses of AI will encounter far more obstacles. As biometric authentication becomes more popular in consumer devices like smartphones and as global security conditions push biometrics into wider service as a counter-terrorism tool, societies will face mounting pressure against tighter regulation. Even the governments that have moved to regulate facial recognition in some way have produced laws that still explicitly or indirectly permit its use for a wide swath of applications.
Any attempt to regulate military use of AI will run headfirst into the simple fact that governments across the world are already building autonomous capabilities into their weaponry. A nation that passes legislation absolutely outlawing nuclear weapons today has no impact on adversaries that possess, or are working to possess, such weapons. Similarly, countries that restrict military use of AI will find themselves at an existential disadvantage should conflict arise.
In reality, most harmful applications of AI are merely inverses of beneficial ones. The same police facial recognition that surveils dissidents also prevents those police from unlocking dissidents’ phones when their owners are not present. The same autonomous flight and targeting controls that guide a lethal drone allow a mapping drone to catalog the devastation after a natural disaster and direct emergency aid.
Most importantly, however, like encryption, deep learning does not refer to any single algorithm or process. The field encompasses a broad swath of approaches, algorithms and myriad variants, developed by countless individuals across many different countries. Attempts to regulate AI in one country will have no impact on others. Moreover, the economic forces propelling AI development mean that even if legislation restricted governmental funding, private funding would more than make up the difference.
In the end, the simple fact is that AI cannot be regulated. Legislation can help curb specific applications in regulated industries like housing and finance within a single country, but globally, AI will simply evolve wherever market and military forces take it.