CHALLENGE: Ultra-high assurance IT standards for Artificial Intelligence safety?
Can the early adoption by the EU, or by a critical mass of nations, of ultra-high assurance low-level IT certifications and related governance models jump-start an actionable path, from the short to the long term, to: (1) restore meaningful digital sovereignty to citizens, businesses and institutions, (2) cement their economic and civil leadership in the most security-critical IT and narrow Artificial Intelligence sectors, and (3) substantially increase the chances of utopian rather than dystopian long-term artificial intelligence prospects?

DEBATE ANSWERS:
(1) YES. Here, in detail, are the actions that would lead us to such an outcome.
(2) NO. Here is why.

BACKGROUNDER

In recent years, rapid developments in AI-specific components and applications, theoretical research advances, high-profile acquisitions by hegemonic global IT giants, and heartfelt declarations about the dangers of future AI advances from leading global scientists and entrepreneurs have brought AI to the fore as both (A) the key to private and public economic dominance in IT and other sectors in the short-to-medium term, and (B) the leading long-term existential risk (and opportunity) for humanity, due to the likely-inevitable “machine-intelligence explosion” once an AI project reaches human-level general intelligence.

Google, in its largest EU acquisition this year, acquired DeepMind, a global AI leader, for 400M€; DeepMind had already received investment from Peter Thiel, one of Facebook’s primary initial investors, and from Elon Musk. Private investment in AI has been increasing by 62% a year, while the level of secret investment by agencies of powerful nations, such as the NSA, is not known – but is presumably very large and growing fast – in a winner-take-all race to machine super-intelligence among public and private actors, a race which may well have already started.

A recent survey of AI experts estimates that there is a 50% chance of achieving human-level general intelligence by 2040 or 2050, while not excluding significant possibilities that it could be reached sooner. Such estimates may even be biased towards later dates because: (A) those that are by far the largest investors in AI – global IT giants and the US government – have an intrinsic interest in avoiding a major public-opinion backlash against AI that could curtail their grand solo plans; and (B) it is plausible, or even probable, that substantial advances in AI capabilities and programs have already happened but have been successfully kept hidden for many years or even decades, despite involving large numbers of people, as has happened with the surveillance programs and technologies of the NSA and Five Eyes countries.

Some of the world’s most recognized scientists and most successful technology entrepreneurs believe that progress beyond that point may become extremely rapid, in a sort of “intelligence explosion”, posing grave questions about humans’ ability to control it at all (see Nick Bostrom’s TED presentation). Clear and repeated statements by Stephen Hawking (the most famous scientist alive), Bill Gates, Elon Musk (the main global icon of enlightened tech entrepreneurship) and Steve Wozniak (co-founder of Apple) agree on the exceptionally grave risks posed by uncontrolled machine super-intelligence.

Elon Musk, shortly after having invested in DeepMind, even declared, in a since-deleted but never retracted comment:

“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand. I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognise the danger, but believe that they can shape and control the digital super-intelligences and prevent bad ones from escaping into the Internet. That remains to be seen…”

Some may ask why extreme IT security in support of AI safety is needed now, if its gravest consequences may be far away. One clear and imminent danger is posed by self-driving and autonomous vehicles (aerial and terrestrial) – which rely on increasingly broad narrow-AI systems – and the ease with which they can be “weaponized” at scale. Hijacking the controls of a large number of drones or vehicles could potentially cause hundreds of deaths or more, or enable hard-to-attribute hacks that provoke grave, unjustified escalations of military confrontation.

Stephen Hawking summarised it most clearly when he said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”. Control relies on IT assurance, to ensure that whoever formally controls a system is also who actually controls it, and possibly on international IT assurance certification governance, which may provide a governance model for international efforts to regulate advanced AI projects or, better, to guide international democratic projects to develop “friendly AI” before unfriendly AI reaches human-level general intelligence.

On Jan 23rd 2015, nearly the entire “who’s who” of artificial intelligence – the leading researchers, research centers and companies, in addition to what are possibly the world’s leading scientists and IT entrepreneurs – signed the Open Letter “Research priorities for robust and beneficial artificial intelligence”, with an attached detailed paper.

The Open Letter, although a greatly welcome and much-needed general document, overestimates both the trustworthiness and comparability of existing and planned high-assurance IT standards and the at-scale costs of assurance levels that are “high enough for AI”, and it focuses on security research as a way to make AI “more robust”.

The Open Letter emphasizes the need for “more robust AI”. Yet a very insufficiently secure AI system may be highly “robust” in the sense of business continuity, business risk management and resilience, and still be extremely weak in safety or reliability of control. That outcome may sometimes be aligned with the goals of the AI sponsor/owner – and with those of other third parties, such as state security agencies, publicly or covertly involved – but gravely misaligned with the chances of maintaining meaningful democratic and transparent control, i.e. transparent reliability about what the system is actually set out to do and who actually controls it.
More than “robustness”, sufficiently extreme security assurance levels may constitute the most crucial foundation for AI safety in both the short and the long term; they increase transparency about who is actually in control, and they are a precondition for verification and validity.

As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of the cyber-attack surface. It is also virtually certain that AI and machine-learning techniques will themselves be used in cyber-attacks. There is a large amount of evidence that many advanced AI techniques have long been, and are currently being, used by the intelligence agencies of the most powerful states to attack – often in breach of national or international norms – end-users and IT systems, including IT systems that use AI. As noted above, while the level of investment by secretive state agencies of powerful nations, such as the NSA, is not known, it is presumably very large and growing fast, in a race among public and private actors that may have already started and that could in the near future accelerate into a winner-take-all race. The distribution of such funding by secretive state agencies between offensive R&D and defensive R&D (i.e. AI safety) will most likely follow the current ratio of tens of times more resources for offensive R&D.

The above Open Letter states that “Robustness against exploitation at the low-level is closely tied to verifiability and freedom from bugs”. This is correct, but perhaps only partially so, especially for critical and ultra-critical use cases, which will become more and more dominant. It is better to speak of auditability, so as not to confuse it with (formal) IT verification. It is crucial and unavoidable to have complete auditability – and extremely diverse, competent and well-meaning actual auditing – of all critical hardware, software and procedural components involved in an AI system’s life-cycle, from certification standards setting, to CPU design, to fabrication oversight. (Such auditing may need to happen in secrecy, because public auditability of software and hardware designs may pose a problem insofar as projects pursuing “unfriendly super-intelligence” could gain an advantage in a winner-take-all race.) Since 2005, the US Defense Science Board has highlighted that “Trust cannot be added to integrated circuits after fabrication”, as vulnerabilities introduced during fabrication can be impossible to verify afterwards. Bruce Schneier, Steve Blank and Adi Shamir, among others, have clearly said that there is no reason to trust CPUs and SoCs (in either their design or fabrication phases). No end-to-end IT system or standard exists today that provides such complete auditability of critical components.
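As a purely illustrative aside on what “complete auditability” could mean in practice at the software level, the following minimal Python sketch cross-checks build-artifact digests independently reproduced by several auditors against a vendor’s reference digest. The auditor names, digests and quorum threshold are hypothetical assumptions for illustration only, not part of any existing standard or of the certification approach discussed here.

    # Minimal sketch (hypothetical): cross-checking independently reproduced
    # build artifacts, in the spirit of the "complete auditability" argued above.
    # Auditor names, digests and the quorum threshold are illustrative only.
    import hashlib

    def sha256_of(path: str) -> str:
        """Return the SHA-256 hex digest of a build artifact on disk."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def quorum_verified(reference_digest: str, auditor_digests: dict, quorum: int = 3) -> bool:
        """True if at least `quorum` independent auditors reproduced exactly
        the same artifact digest as the reference build."""
        matches = [name for name, d in auditor_digests.items() if d == reference_digest]
        return len(matches) >= quorum

    if __name__ == "__main__":
        reference = "3f2a..."  # digest of the vendor's reference build (placeholder)
        reported = {"auditor_a": "3f2a...", "auditor_b": "3f2a...", "auditor_c": "9c1d..."}
        print("quorum reached:", quorum_verified(reference, reported, quorum=2))

Comparable, and far harder, cross-checks would be needed for hardware, where, as noted above, trust cannot be added after fabrication.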

It is impossible, and will most probably remain so, to guarantee perfectly against critical vulnerabilities, given the complexity of IT socio-technical systems, even if those systems were simplified by a factor of 10 or 100 and subjected to radically higher levels of auditing relative to their complexity.

Nonetheless, it remains crucial that adequate research devise ways to achieve sufficiently extreme confidence in “freedom from critical vulnerabilities” through new paradigms. We may need to achieve sufficient user-trustworthiness that sufficient intensity and competency of engineering, and of “auditing effort relative to complexity”, have been applied to every critical software and hardware component. No system or standard exists today to systematically and comparatively assess such target levels of assurance for a given end-to-end computing service and its related life-cycle and supply chain.
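Purely to make the idea of assessing “auditing effort relative to complexity” more concrete, here is a deliberately toy sketch of what such a comparative check across the components of an end-to-end service might look like. The metric (audit person-days per thousand lines of critical code), the threshold and the figures are invented assumptions; as noted above, no such system or standard actually exists today.

    # Toy sketch only: a made-up "audit effort relative to complexity" check.
    # Metric, threshold and figures are illustrative assumptions, not a standard.

    def audit_intensity(audit_person_days: float, complexity_kloc: float) -> float:
        """Person-days of independent audit per thousand lines of critical code."""
        return audit_person_days / complexity_kloc

    def meets_target(components: dict, target: float = 30.0) -> dict:
        """Check each critical component of an end-to-end service against a
        (hypothetical) minimum audit-intensity target."""
        return {name: audit_intensity(days, kloc) >= target
                for name, (days, kloc) in components.items()}

    if __name__ == "__main__":
        # (audit person-days, complexity in KLOC) per component -- illustrative numbers
        service = {"microkernel": (900.0, 10.0),
                   "crypto_lib": (300.0, 15.0),
                   "boot_firmware": (120.0, 4.0)}
        print(meets_target(service))  # e.g. {'microkernel': True, 'crypto_lib': False, ...}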

As stated above, all AI systems in critical use cases – and even more crucially those advanced AI systems that will soon be increasingly approaching machine super-intelligence – will need to be so robust in security terms that they can resist multiple extremely skilled attackers willing to devote, cumulatively, even tens or hundreds of millions of euros to compromising at least one critical component of the supply chain or life-cycle, through legal and illegal subversion of all kinds, including economic pressure, while enjoying a high level of plausible deniability, a low risk of attribution and (for some state actors) minimal risk of legal consequences if caught.

In order to substantially reduce these enormous pressures, it may be very useful to research socio-technical paradigms through which a sufficiently extreme level of AI-system user-trustworthiness can be achieved while, at the same time, transparently enabling cyber-investigation and crime prevention under due legal process. Resolving this dichotomy would reduce the pressure on states to subvert secure high-assurance IT systems in general, and could – through mandatory or voluntary international lawful-access standards – improve humanity’s ability to conduct cyber-investigations into the most advanced private and public AI R&D programs. Such cyber-investigation may be crucial for investigating criminal activities aimed at jeopardizing AI safety efforts.

There is a need to avoid relying for guidance on high-assurance low-level system standards and platform projects from the defense agencies of powerful nations – such as DARPA SAFE, NIST standards, the NSA Trusted Foundry Program and the DARPA Trust in Integrated Circuits Program – when it is widely proven that their intelligence agencies (such as the NSA) have gone to great lengths to surreptitiously corrupt technologies and standards, even those overwhelmingly used internally in relatively high-assurance scenarios.

The cost of radically more trustworthy low-level systems for AI could be made very comparable to that of the commercial systems mostly used as standard in AI-system development. The cost differential could possibly be reduced to insignificance through production at scale and through open innovation models that drive down royalty costs. For example, hardware parallelization of secure systems, combined with lower unit costs (due to lower royalties), could allow adequately secure systems to compete with, or even out-compete, those generic systems in cost and performance.
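As a hedged back-of-the-envelope illustration of that scale argument, the sketch below amortizes fixed engineering and certification costs, plus per-unit royalties, over production volume. Every figure is invented for illustration and does not reflect real cost data.

    # Back-of-the-envelope sketch of the scale argument above.
    # All figures (NRE, royalties, marginal costs, volumes) are invented.

    def unit_cost(nre_eur: float, royalty_eur: float, marginal_eur: float, volume: int) -> float:
        """Per-unit cost once fixed (NRE) costs are amortised over the volume."""
        return nre_eur / volume + royalty_eur + marginal_eur

    if __name__ == "__main__":
        for volume in (10_000, 1_000_000, 100_000_000):
            # high-assurance part: higher fixed costs, open-innovation model -> low royalty
            secure = unit_cost(nre_eur=50e6, royalty_eur=0.5, marginal_eur=4.0, volume=volume)
            # commodity part: lower fixed costs, but higher per-unit royalty
            commodity = unit_cost(nre_eur=10e6, royalty_eur=2.0, marginal_eur=3.5, volume=volume)
            print(f"{volume:>11,} units  secure: {secure:10.2f} EUR  commodity: {commodity:10.2f} EUR")

Under these invented numbers the per-unit gap shrinks as volume grows and eventually reverses, which is the intuition behind the claim that royalty and scale effects, not intrinsic engineering cost, dominate the differential.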

There is a lot of evidence to show that R&D investment in solutions that defend devices from the inside (i.e. that assume inevitable failure of intrusion prevention) could end up increasing the attack surface if those solutions’ life-cycles are not themselves subject to the same extreme security standards as the low-level systems on which they rely – much as antivirus tools, password-storing applications and other security tools are often exploited as direct routes to a user’s or end-point’s most crucial data. The recent NSA, Hacking Team and JPMorgan scandals show the ability of hackers to move inside extremely crucial systems without being detected, possibly for years. DARPA’s high-assurance programs highlight that about 30% of vulnerabilities in high-assurance systems are introduced by the security products deployed within them.

Ultimately, it may be argued that IT assurance high enough for critical scenarios like advanced AI comes down to the competency and citizen-accountability of the organizational processes critically involved across the entire life-cycle, and to the intrinsic constraints and incentives bearing on the critically involved individuals within those organizations.

Perhaps the dire short-term societal need and market demand for radically more trustworthy IT systems – for citizens’ privacy and security and for the protection of society’s critical assets – can be aligned in a grand strategic vision for EU cyberspace that satisfies, in the medium and long term, both the huge societal need and the great economic opportunity of creating large-scale ecosystems able to produce AI systems that are high-performing, low-cost and still provide adequately extreme levels of security for the most critical AI usage scenarios.

It is worth considering, therefore, whether the short-term and long-term R&D needs of artificial intelligence (“AI”) and information technology (“IT”) – in terms of security for all critical scenarios – may become synergistic elements of a common “short-to-long-term” vision, producing huge societal benefits and shared business opportunities, by aligning the market demand described above with the medium-term demand for, and societal need of, such large-scale secure-AI ecosystems.