The creation of radically competent, enforceable and accountable standardization and certification processes for those narrow AI systems that directly influence human physical environments – such as robots, autonomous drones and vehicles – may have a huge impact on the growth rate and sustainability of the market for such systems, as well as reduce the risk of harm to humans. Even more importantly, perhaps, it may also provide part of the socio-technical and governance basis for future international standards or treaties to promote the safety of systems approaching machine superintelligence.

In a recent panel discussion, Stuart Russell, one of the most widely recognised AI experts, illustrated (at minute 14:50) the prospect that a domestic robot in the near future may misinterpret orders from its owners and purposely kill a domestic animal or a human. He therefore concluded that:

“There’s an enormously strong economic incentive for companies that are building AI to take these questions very seriously. Otherwise any company, any startup company, that doesn’t pay attention to this could ruin it for everybody else. So they’re going to have to figure out how to make machines behave ethically … avoid doing things; even if they are told to do something by their human master, they have to know what’s right or wrong, so that they don’t do something catastrophic”

The AI sector may need to do what the aviation industry did so successfully in the late 1920s. There seems to be a huge market need to establish radically reliable and enforceable standards and certification processes – through transparency, oversight and accountability – for all those AI systems that can cause direct human harm – such as robots, autonomous drones and vehicles – in much the same way as was done for civil aviation, when the Air Commerce Act of 1926 established federal certification of aircraft and pilots.

On March 24th, 2015, Germanwings flight 9525 crashed in the French Alps. The crash was most probably caused by pilot suicide, a cause that accounts for roughly 10 of the 323 average yearly deaths from airline accidents, out of some 3 billion passengers who fly every year. These deaths, while still a tragedy, are incredibly few relative to that volume of traffic. Such a low death rate has its origins in the United States of the 1920s, when leading European and US socio-technical scientists, working through federal aviation authorities and some airlines, developed breakthrough socio-technical systems: a mix of fail-safes, oversight technologies, certification, procedures and organizations. It was this socio-technical innovation, rather than any breakthrough in aviation technology, that increased the safety of commercial flight to levels previously deemed inconceivable or impossible, producing a consequent economic and aviation research boom. In fact, between 1926 and 1929, as the federal certification standards were issued, passengers in US civilian aviation skyrocketed from 5,782 to 172,405.

The requirements of the certification process for such AI systems will need to be substantially more stringent than those of the Federal Aviation Administration, because they will need to protect against:

  • High-level algorithms that may result in unwanted actions that physically harm humans or cause other great damage
  • Failures in the low-level technical and organizational infrastructure for the end-to-end provisioning and life-cycle of the certified AI system
  • Catastrophic failures in the confidentiality or integrity of AI operations, which can go undetected by their victims for years or even decades. (You can’t hide a plane that goes down, but you can hide for years the extensive hacking or failure of an AI system designed to protect the US stock market.)

Furthermore, the resulting institutional capital and expertise in AI systems assurance, assurance certification, and certification governance processes could prove of great use for similar standards, certification processes, or international treaties dealing with more advanced projects aiming at the realization of machine superintelligence.