How to prevent malevolent and dangerous AI through standards and legislations, and in turn promote AI-driven economic development


Roman Yampolskiy and Federico Pistono recently wrote a paper entitled Unethical Research: How to Create a Malevolent Artificial Intelligence, which poses the question of powerful actors intentionally creating malevolent AI, and/or stealing and deploying it.

This is a crucial problem to be reckoned with, in addition to the main problem occupying the AI safety community: the runaway AI problem, in which an AI escapes the control of its creators and causes great damage to humanity.

Since human intentions can almost never be ascertained – except very partially via lie detector systems, in the limited cases in which those can be applied – the two problems are really one big problem.

As evidence builds on which technical, socio-technical, oversight and governance safeguards are most likely to be conducive to safer narrow and general AI, and which are not, one approach would be to use a combination of an authoritative international standards body and national legislation to make it illegal to research, develop or use AI in certain ways, regardless of the entity’s intention.

One thing that works in our favour is that today – given the complexity of high-performance ICT and the pervasive vulnerabilities planted or sanctioned by governments – none of these actors would be able to remain beyond the surveillance capabilities of the Law Enforcement Agencies (LEAs) or Intelligence Agencies (IAs) of some democratic nations.

If only we had the right national laws, and possibly highly binding international treaties through EUROPOL or INTERPOL, then when LEAs or IAs independently acquire evidence that some actor is “creating dangerous AI” and/or “creating or deploying AI dangerously”, they could ask for judicial authorisation following due legal process – if they cannot already go ahead without one – to hack into the suspects’ systems in order to inspect their communications and the software and hardware of the AI systems they are building or deploying.

But we would need very clear legislation that defines exactly what constitutes “creating dangerous AI” and “creating or deploying AI dangerously”. This would mean having clear and measurable technical, socio-technical, governance and oversight criteria that inspire very stringent and comprehensive standards, standards-setting processes and voluntary certification processes, which would continuously certify a given AI research or development program as “safe enough” for society.

National legislation should then make such certification either mandatory or a requirement for receiving public research funds. AI systems could be regulated like ICT devices that handle state-secret information: in all EU states, these must be certified according to high-assurance international standards (mostly Common Criteria EAL 5+) and to other national requirements, coordinated in the EU by SOG-IS, including specific Protection Profiles and Security Problem Definitions. The same concept could be applied to AI, albeit in a much more consistent and comprehensive manner.

Some initiatives are already working towards international standards for AI safety, and related certifications, such as the newly established IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. Another is the Trustless Computing Association and the related proposed Trustless Computing Certification Body promoted by the Free and Safe in Cyberspace workshop series. EUROPOL, in Appendix A.3 of its 2015 Internet Organised Crime Threat Assessment report, goes as far as suggesting new criminal laws. The Future of Humanity Institute has been exploring these prospects in its Director’s book Superintelligence, as well as in other papers.

Self-regulating industry bodies or light governmental recommendation commissions, such as Ryan Calo’s proposed Federal Robotics Commission, could possibly hurt the economic prospects of AI more than a stringent, comprehensive and mandatory regulation would. The latter, in fact, may well be needed to reasonably reduce the chance of a catastrophic failure that would stop large-scale AI deployments in their tracks, or preempt large-scale market uptake and sustainable legal authorisation for the use of autonomous systems in certain domains, as we suggested in the author’s blog post What is the current market demand for certifications of narrow AI systems with direct influence on human physical. After all, the creation in 1926 of the first US federal aviation regulator (the predecessor of today’s Federal Aviation Administration), which set very stringent requirements for any commercial airliner wanting to offer service in the US, was responsible for the growth in commercial airline passengers from 5,782 to 172,405 between the year of its establishment and 1929.
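For perspective, and using only the passenger figures cited above, that amounts to roughly a thirty-fold increase in three years, or approximately a tripling every year:

\[
\frac{172{,}405}{5{,}782} \approx 29.8,
\qquad
\left( \frac{172{,}405}{5{,}782} \right)^{1/3} \approx 3.1
\]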

We’ll be talking about this and more at the next US and EU editions of the Free and Safe in Cyberspace workshop series, so come join us.

ADDED June 27th:
Policy options to regulate robotics and AI are also included in a Draft Report by the EU Parliament Committee on Legal Affairs with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
Also, legislation in this area may initially want to focus primarily on the low-level deterministic sub-systems containing the most critical functions of advanced AI, for the reasons described in the author’s blog post IT security research needs for artificial intelligence and machine super-intelligence. In turn, applying ultra-high assurance levels to such subsystems may make the work of LEAs and IAs very difficult or impossible, and therefore mechanisms for lawful access request compliance may need to be devised that protect the AI systems under development from criminal abuse, while preserving access to those systems by LEAs and IAs acting with legal authorisation.
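To make that last point a little more concrete, here is a minimal, purely hypothetical sketch in Python of what a lawful access request compliance mechanism could look like: access to a protected AI subsystem is granted only when a request carries approvals from a quorum of independent oversight roles, and every decision is appended to an audit log. The role names, quorum size and request fields below are illustrative assumptions, not part of any existing standard, proposal or law.

```python
"""Hypothetical dual-control gate for lawful access to a protected AI subsystem."""
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative oversight roles; in this sketch all of them must approve.
REQUIRED_ROLES = {"judicial_authority", "certification_body", "system_operator"}
QUORUM = 3

@dataclass
class AccessRequest:
    requesting_agency: str        # e.g. a LEA or IA identifier (assumed field)
    target_subsystem: str         # the low-level deterministic subsystem
    legal_authorisation_id: str   # reference to the judicial authorisation
    approvals: set = field(default_factory=set)  # roles that have signed off

@dataclass
class AuditEntry:
    timestamp: str
    request: AccessRequest
    granted: bool

# Append-only record of every access decision, for later independent review.
audit_log: list[AuditEntry] = []

def evaluate_request(request: AccessRequest) -> bool:
    """Grant access only if the approving roles meet the required quorum."""
    granted = (
        request.approvals.issubset(REQUIRED_ROLES)
        and len(request.approvals & REQUIRED_ROLES) >= QUORUM
    )
    audit_log.append(
        AuditEntry(datetime.now(timezone.utc).isoformat(), request, granted)
    )
    return granted

if __name__ == "__main__":
    req = AccessRequest(
        requesting_agency="example-LEA",
        target_subsystem="ai-control-subsystem-01",
        legal_authorisation_id="warrant-2016-0042",
        approvals={"judicial_authority", "certification_body", "system_operator"},
    )
    print("Access granted:", evaluate_request(req))
```

Requiring several independent approvals, rather than a single back-door key, is one possible way to reconcile the two goals named above: protecting the systems from criminal abuse, and preserving access for LEAs and IAs that hold legal authorisation.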
