The 4 Challenges
The FSC event series is sharply focused on solving the following 4 Challenges, more or less in sequence:
Can ultra-high assurance IT and related certification governance models radically increase the security, privacy or safety of complex and critical IT, AI and cyber-physical systems, such as self-driving cars, robo-advisors, or even Facebook?
The World is rapidly turning into a Hacker Republic, where economic and political power increasingly accrues to those state and non-state actors with the most informational and hacking superiority in critical political, enterprise, financial and autonomous IT systems.
The situation is nothing short of catastrophic: even the most secure IT systems can be undetectably compromised, often by mid-level attackers – increasingly so, even those used by top executives, presidential candidates, and critical civilian and military infrastructure. Though cybersecurity spending has grown 30-fold in the last 10 years, to $120 billion per year, the cost of cybercrime is skyrocketing, projected to reach $8 trillion by 2022. Not to mention the cost to ordinary and active citizens’ rights and to democratic institutions, which seem held to ransom by state and non-state groups, each accusing the other.
A recent PwC survey highlighted that “investors see cyberthreat as the main obstacle to enterprise growth”. The CEO of IBM stated that “Cybersecurity has become the greatest threat to any company in the World”.
Most enterprises are spending more and more on the security of their critical IT systems, and awareness is fast emerging – via scandals like Spectre and Meltdown and CIA Vault 7 – of how their most critical systems are scalably vulnerable even to non-state mid-level attackers who too easily acquire access to state-grade hacking tools. While most internal hacks have remained undisclosed, the new GDPR regulation will mandate, from May 25th, their disclosure within 72 hours, posing great reputational and share-price risks.
Last year, the German Minister of Defense identified cybersecurity as the “single greatest threat to global stability”. This is not surprising given the increasing vulnerability, complexity and lack of adequate standards of critical civilian and military systems, which make hacking attribution inherently very difficult. The inadequate standards, obscurity, hyper-complexity, and forensic-unfriendliness of even the most critical systems and processes, in fact, render cyber-incidents very difficult to attribute in an internationally recognized way, as the International Atomic Energy Agency and the International Criminal Court have enabled, at least partly, for nuclear incidents and war crimes.
Recent rulings of the European Court of Justice have raised substantial doubts that most current Western legislation, even when it nominally respects citizens’ rights, is supported by implementation regulations or by external standards or certification processes (e.g. Common Criteria, SOGIS, eIDAS, ETSI-LI, etc.) that provide sufficient transparency, accountability and oversight safeguards to give users reasonable confidence of compliance with the European Charter of Fundamental Rights.
Meanwhile, financial institutions are ever more victims of fraud and privacy abuse, even more than their customers, with mounting cash and reputational costs. Their historical role as providers of core trustworthy financial services is being gravely threatened by cryptocurrencies and blockchains – perceived as potentially safer and cheaper long-term stores of value – and by small and large competitors, unleashed by the EU Directive PSD2, who will be able to offer e-services “over the top” while claiming as much or higher trustworthiness than banks.
Are key assets and capabilities of nations’ law enforcement, defense and intelligence themselves highly vulnerable to attackers – foreign, domestic and internal – due to the lack of sufficiently comprehensive, translucent and accountable socio-technical standards, such as in IT facility access, device fabrication or assembly?
Our democracies and politicians appear increasingly held to ransom by the best and most-resourced threat actors. Hacking of electoral and primary democratic processes, critical autonomous systems, and social media is fast becoming the military weapon of choice of nations willing to subvert, subjugate and destabilize other nations. Military systems are often no less vulnerable, but less is publicly known since the most serious hacks become state secrets when they happen.
But then, paradoxically, even though they can remotely compromise any computing device, public security agencies are often unable to make such evidence stand in court, given the inherent corruptibility of its means of acquisition – as jointly declared by the Ministers of Interior of Germany and France. Furthermore, the tools they use for targeted interception suffer many of the same vulnerabilities as secure enterprise solutions – as highlighted by the Hacking Team and the Inslaw Promis scandals.
How vulnerable are the systems that security-critical Artificial Intelligence providers are promising to deploy at large scale in the near future, movable and otherwise, to attacks via their critical socio-technical low-level subsystems? We may not even have a chance of achieving levels of safety and security assurance sufficient for sustainable wide-market deployment of advanced and critical AI systems, such as autonomous movable vehicles, unless their most critical low-level components and chips are subject to radically unprecedented levels of assurance.
Although it is increasingly clear that those who entrench their superiority in AI will rule the World, as Vladimir Putin stated, the hacking prowess recently demonstrated by certain nations and groups, and the astonishing vulnerability of the most critical systems and processes of the most powerful nation in the World, lead one to believe that informal control of advanced AI systems through hacking may turn out to be even more relevant than formal control through ownership and design.
Root Causes of Cyber-insecurity of Critical Systems
The root causes of this sorry state are mostly two: (1) hyper-complexity, driven by market demand for ever richer digital experiences; and (2) powerful nations’ inability to prevent internal and external abuse of the critical vulnerabilities and back-doors that they insert, discover, purchase, tolerate, and stockpile. For starters, the speed of IT for everyday computing requires complexity that is hopelessly incompatible with ultra-high assurance* IT security and privacy. There is nothing we can do about that; democracies will need to adapt their rules around it, and we are ready to accept it for 99% of our computing. But then there is the 1% of sensitive or critical computing for which we all have a great need and demand for ultra-high, constitutionally-meaningful levels of trustworthiness, even if it requires a great sacrifice in speed, features, and cost.
Powerful nations understandably felt the need for every IT system to be promptly hackable at all times – in an era of rampant terrorism, algorithmically unbreakable encryption and a lack of formal back-doors, i.e. built-in mandated remote access mechanisms. These nations have resorted to stockpiling discovered vulnerabilities instead of fixing them, promoting inadequate and flawed standards, and outright inserting back-doors all the way down to the CPU and chip fabrication.
The combination of market forces driving the increasing complexity of IT systems and life-cycles – far beyond any meaningful verifiability – and the huge investments being made by states to ensure access to all IT systems for cyber-investigations at all costs, have caused nearly all IT systems to become remotely and scalably exploitable by many nations and other actors. All the while, a lucky few have access to technologies that are impregnable, or nearly so, resulting in a huge asymmetry of informational power, and therefore of societal and economic power.
While “security through obscurity” and related private and cozy certification processes have time and again proven inadequate, some critical components of every IT system are still not available in the market in a publicly verifiable form, let alone verified intensively enough. No general-purpose CPU even exists in the EU – and arguably in the US – that is publicly verifiable in both hardware and software.
“Trust cannot be added to integrated circuits after fabrication”, wrote the US Defense Science Board back in 2005. Nonetheless, even the US Department of Defense is forced to renounce sufficiently comprehensive fabrication-process oversight when it procures its typically small quantities of medium- and high-performance chips, since these are available today only in foreign megafabs. This would not be the case if, in the near future, ultra-high assurance fabrication oversight processes were required for mass-market civilian chips in diverse sub-domains, in the tens of millions of units or more. But then, for that to become a reality, prevention of the malevolent use of such chips will need to be ensured.
Since Snowden, all mainstream commercial ICT providers – like Apple, Samsung, Google, IBM – are under increasing market pressure to provide levels of assurance that are simply impossible for chips with modern performance and richness of features. A possible future wide-market availability of ICT with constitutionally-meaningful levels of assurance – necessarily minimally-featured and low-performance, but highly user-friendly – could relieve such pressure to deliver the impossible, and create huge new market opportunities for them and others to satisfy a huge unmet demand.
Enterprises and citizens alike are drowning in a sea of disinformation, hypocrisy, confusion, Utopian cyber-libertarianism, and outright deception. It is often hard to distinguish friend from foe.
There is a vast, natural, uncoordinated alignment of interests in wildly overstating the security of secure apps and devices, even in the face of incredible revelations. This includes secure IT providers (from Apple, to defense contractors, to open-source Signal and Tor); journalists looking for news; and security agencies very happy to push less expert criminals to use broken techs. Military technologies are often just as broken, but their failures mostly become state secrets by default.
On one side, a club of leading digital rights NGOs and self-appointed Big Tech “privacy crusaders” (like Apple) are fighting against the introduction of state-mandated back-doors – a very implausible, socio-technically infeasible solution that would reduce both civil freedoms and public safety, and that serves mostly as bait by LEAs.
On the other side, paradoxically, all computing devices – including those developed or promoted by such crusaders, perhaps hypocritically or delusionally – are scalably hackable through state-sanctioned back-doors, i.e. critical vulnerabilities beyond the point of encryption – down to the OS, CPU, fabrication and standard setting – that nations develop, discover, purchase, rent, tolerate and stockpile. And most importantly, nations keep failing to keep these out of the hands of large numbers of even non-state mid-level attackers.
Back-doors are already everywhere. We have nothing to protect, no meaningful capability to preserve. Everything was taken away long ago, soon after algorithmically unbreakable encryption was made widely available in the 1990s and nations resorted to breaking everything beyond it, lower in the technical stack.
So what have those crusaders proposed to solve this much bigger and more present problem? Big Tech and new startups reap great profits by riding the wild overstatements of most nations claiming they are “going dark” because of end-to-end apps or Tor. Meanwhile, leading NGOs propose virtually nothing to solve this, except paradoxically to intensify and regulate lawful hacking (the “Going Bright” proposal) to bypass secure apps and anonymization tools.
Towards Solutions or Meaningful Advances
We are told daily by nearly all privacy experts and government officials that we must choose between meaningful personal privacy and legitimately authorized cyber-investigations. But both are essential to democracy and freedom. What if it were not an “either-or” choice, a zero-sum game, but instead primarily a “both-or-neither” challenge, yet to be proven unfeasible?
So the real challenge is: can we build the World’s first IT systems that can be plausibly expected to be without back-doors? Can we then create technologies and certifications – for at least some IT systems – that radically increase, by orders of magnitude, the expected resistance against such state-sanctioned back-doors? But then, even if we succeed, these would of course be used to facilitate grave crimes, unless democratic nations could cyber-investigate designated suspects. So is it an “either-or” choice or a “both-or-neither” challenge?
Is meaningful personal freedom and effective public safety in cyberspace an “either-or” choice? Or is it instead a “both-or-neither” challenge, one that can be overcome by applying to the security of IT and lawful access processes the same uncompromisingly trustless certification and oversight governance models that proved so successful in safeguarding paper-based democratic elections, nuclear site safety, and commercial aviation?
Can ultra-high assurance IT be transparently reconciled with lawful access, so that it can be made available to our institutions, enterprises, and citizens without creating a public safety risk? Can we be both free and safe in cyberspace, or do we have to choose? Can we even choose, really, or is it a “both-or-neither” challenge?
Can new cybersecurity standards, certifications, and compliant open comprehensive ecosystems – radically uncompromising down to the CPU, fabrication oversight and standard setting, and complementary to ENISA, SOGIS and eIDAS initiatives – deliver both freedom and safety in critical IT and AI, and therefore become the basis of a wide economic competitive advantage and ethical leadership for a group of leading stakeholders?
How can we build and certify IT systems that are radically more secure than state-of-the-art?
Can new voluntary international standards and certifications – within the EU Charter and most constitutional frameworks – provide ordinary citizens access to affordable and user-friendly end-to-end IT with constitutionally-meaningful* levels of trustworthiness, data security and privacy, as a supplement to their every-day computing services?
No standards or certifications exist today that are comprehensive enough to even remotely allow a user to meaningfully assess the trustworthiness of a given end-to-end IT service. Nor is any end-to-end IT service or system available that does not contain multiple “black boxes”, i.e. critical technical or organizational components that require blind trust in something or someone.
Constitutionally-meaningful?! “While perfect assurance is impossible, we will say that a given end-to-end IT service and its related life-cycle have constitutionally-meaningful levels of trustworthiness when their levels of confidentiality, authenticity, integrity and non-repudiation are sufficiently high to make their use, in ordinary user scenarios, rationally compatible with the full and effective Internet-connected exercise of users’ core civil rights, except for voting in governmental elections. In more concrete terms, it is an end-to-end ICT service and life-cycle that warrants extremely well-placed confidence that an attacker aiming at continuous or pervasive compromise would incur costs and risks that exceed the following: (1) for compromise of the entire life-cycle, tens of millions of euros, significant discoverability and unlikely liability, such as are typically sustained by well-financed and advanced public and private actors, for high-value supply chains, through legal and illegal subversions of all kinds, including economic pressures; or (2) for compromise of a single user, tens of thousands of euros and significant discoverability, such as those associated with enacting similar levels of abuse through on-site, proximity-based user surveillance, or non-scalable remote endpoint techniques, such as those of NSA TAO”.
State of Security for commercial and high-assurance IT systems
Nowadays, IT security and privacy are a complete debacle from all points of view: from the ordinary citizen to the prime minister, from baby monitors to critical infrastructure, connected cars and drones.
A lack of sufficiently extreme and comprehensive standards for critical computing, and the decisive covert action of states to preserve pre-Internet lawful access capabilities, have ensured that, while unbreakable encryption is everywhere, nearly everything is broken; and that, while state-mandated or state-sanctioned back-doors are nearly everywhere, the most skilled or well-financed criminals communicate unchecked.
Nearly all endpoints, both ordinary commercial systems and high-trustworthiness IT systems, are broken beyond the point of encryption, and exploitable at scale by powerful nations and a relatively large number of other mid- and high-level threat actors.
EU citizens, businesses and elected state officials have no access, even at high cost, to IT and “trust services” that are NOT remotely, undetectably and cheaply compromisable by a large number of medium- and high-level threat actors. The most well-financed criminal entities avoid accountability through effective use of ultra-secure IT technologies, or by relying mostly on advanced non-digital operational security (OpSec) techniques. National defenses are increasingly vulnerable to large-scale attacks on “critical infrastructure” by state and non-state actors, who are increasingly capable of causing substantial human and economic harm.
The critical vulnerabilities that leave nearly everything broken are almost always either state-mandated or state-sanctioned back-doors, because the state has either created, acquired or discovered them, while keeping that knowledge hidden, legally or illegally.
Nearly all critical computing services include at least some critical components whose complexity is far beyond adequate verifiability. The design or fabrication of critical components or processes (CPU, SoC fabrication, etc.) is not publicly verifiable, and there is no reason to trust providers’ carefulness and intent when plausible deniability is very easy, liability is almost non-existent, and state pressure to “accidentally” leave a door open is extremely high.
Everything is broken mostly because of two structural, and highly interlinked, problems:
The lack of sufficiently extreme and comprehensive standards for high-assurance IT services that provide meaningful confidence to end-users that the entire life-cycles of their critical components are subject to oversight and auditing processes that are comprehensive, user-accountable, publicly assessed, and adequately intensive relative to complexity.
The decisive actions taken by state security agencies to maintain pre-Internet lawful access capabilities – since the popularization of algorithmically-unbreakable software encryption in the 1990s – through huge sustained investments in the discovery and creation of critical vulnerabilities that permeate the life-cycle and supply chain of virtually all ordinary and high-assurance IT technologies. Furthermore, the covert nature of such programs has allowed such agencies (and other advanced actors) for decades to remotely and cheaply break into virtually all endpoints thought to be safe by their users – with extremely vague accountability – as well as to covertly overextend their preventive surveillance capacities.
After Snowden, nearly all IT privacy activists are up in arms to fight what they see as a second round of the 1990s’ Crypto Wars, to prevent back-doors in IT systems from being mandated by nations in the wake of “terrorism threats”. Meanwhile, they overwhelmingly refrain from proposing anything about what we should do about the state back-doors and critical vulnerabilities that already exist everywhere. Almost no one challenges state security agencies’ pretense of “going dark”, trumpeted to claim that they lack capabilities to enable lawful access, when they overwhelmingly do not – not even for scalable targeted attacks. Most activists are focused on:
pushing existing Free/Open Source Software privacy tools to the masses, while making them more user-friendly and incrementally safer with small grants (no matter if from the US government); and
fighting a full-out second Crypto War to prevent governments from mandating a state backdoor.
First off, the Crypto Wars of the 1990s were won in appearance, but utterly lost in essence. While the US and other governments backtracked on their proposals for ill-conceived mandatory backdoors (such as the Clipper Chip) and algorithmically-unbreakable encryption became accessible to anyone, the most powerful states’ security agencies won many times over. Over the following two decades:
powerful state security agencies surreptitiously and undetectably placed back-doors nearly everywhere, with nonexistent or very insufficient due-process oversight, compared even to the already inadequate oversight of lawful interception systems;
many valuable targets, even at the very top, kept using IT devices that they thought lacked back-doors, but which had been snooped upon for years or decades without their knowledge;
the general perception that “free crypto for all” had won prevented even the emergence of demand for meaningfully trustworthy IT devices, which could minimize, isolate, simplify and do away with the need to trust the very many untrustworthy actors along critical phases of the device life-cycle.
A mix of self-serving government disinformation and self-interested over-representation of the strength of current FLOSS solutions has brought many to believe that a well-configured FLOSS device setup is not scalably exploitable by advanced attackers.
Many such IT privacy activists and experts over-rely on NSA documents’ references to the fact that some advanced attackers take care not to overuse certain exploitation techniques, to avoid burning them. But careful analysis of the capabilities of NSA Turbine, NSA FoxAcid and Hacking Team RCS documents shows how advanced endpoint exploitation techniques allow attackers to scale highly while preventing such “burning” risk, by using exploits beyond the target’s ability to discover, among other techniques. More recently, one leader of such activists, Jacob Appelbaum, said clearly (min 37.15-38.15): “It does seem to indicate to me that cryptography does stop them … I have seen that the Tor browser stops them from doing passive monitoring, and they have to switch to targeted. And that’s good. We want them to go from bulk, or mass, surveillance to targeted stuff. Now, the targeted stuff, because it is automated, is not different in scale but just different in methodology, .. actually. And usually they work together”.
All the while, the media success of such security agencies in wildly overstating the “going dark” problem has enabled them to gather substantial political and public opinion consensus for:
unconstitutional surveillance practices gravely affecting non-suspect citizens, and the setting up of multiple redundant legal authorities;
convincing politicians and public opinion of the need to “outlaw” encryption and/or to extend inadequate lawful access mandates, traditionally reserved to telephone operators, to all digital communications.
All the while, by breaking everything, they expose the US government, US private interests, and law enforcement and intelligence agencies themselves to grave state-security and espionage damage from foreign states and non-state actors.
IMPACT ON IT SECURITY BUSINESS
EU IT security/privacy service and system providers are increasingly unable to compete and innovate sustainably, as they cannot differentiate on the basis of meaningful and comprehensive benchmarks. They are also increasingly unable to convince users to invest in fixing vulnerabilities in one part of their systems when, most surely, many vulnerabilities remain in other critical parts, known to the same kinds of threat actors. In a post-Snowden world, even high-assurance cyber-security systems increasingly amount to “security theatre”, because even the highest-assurance systems in the civilian market contain at least one critical vulnerability, accessible in a scalable way by even mid-level threat actors, with very low risk of discoverability and attribution.
It is therefore almost impossible to measure and sustain the overall security added value of any new security service, and of related risk-management strategies, even before assessing the increase in attack surface and vulnerabilities that any new product entails. The only reliable measure of the effectiveness of a high-assurance IT security provider, private or public, is its “closeness” to major stockpiles of vulnerabilities – mostly those of a few large powerful states – creating pervasive intelligence network effects that gravely undermine societies’ sovereignty, freedoms and competitiveness.
As blogger Quinn Norton said, everything is broken. Revelations about systems and programs like NSA Turbine, NSA FoxAcid and Hacking Team have shown the huge scalability – in terms of low risk and cost – of completely compromising endpoint devices, achievable by numerous public and private actors, and by even more numerous actors that do or could trade, lend or steal such capabilities. It has become clear that no IT system that requires trust in any one person or organization – and there are none today that do not – can be considered meaningfully trustworthy.
The exception to this rule is that some people in the world – top criminals, billionaires, or the highest state officials – do have access to devices that are most likely not compromised by external entities. This results in a huge asymmetry of power between them and all others, i.e. two classes of citizens.
This situation will not be changed by any nation’s law or international treaty. Stockpiling of zero-day vulnerabilities – through investment in discovery, creation and purchase by powerful state and non-state actors – will keep accelerating, and no law or international treaty has any chance of significantly slowing it. Non-proliferation of IT weapons is very different from that of other weapons, such as biological and nuclear: their nature makes them easier to hide and reproduce, and they are used and spread daily by powerful actors pursuing their cyber-investigation goals.
In summary, end-to-end IT services with meaningfully high trustworthiness levels are widely unavailable, both to citizens and to lawful access schemes.
THE PROBLEM WITH CURRENT CERTIFICATIONS AND CERTIFICATIONS GOVERNANCE
Over the last decades, in addition to sanctioning back-doors everywhere, states have repeatedly proven utterly incapable of socio-technically designing, legally managing, or issuing proper technical and organizational certification requirements for lawful access compliance. Fittingly, states have been similarly unable to create voluntary or mandatory IT security standards that are anything like sufficiently extreme and comprehensive.
A recent ENISA report highlights: “At the time of writing, there is no single, continuous ‘line of standards’ related to cyber security, but rather a number of discrete areas which are the subject of standardisation”. The current EU Cybersecurity Strategy (2013) calls for new standards and certification schemes – including supply chains and hardware – with “a renewed emphasis on dialogue with third countries, with a special focus on like-minded partners that share EU values”, to “possibly establish voluntary EU-wide certification schemes building on existing schemes in the EU and internationally”. It recognizes that “the need for requirements for transparency, accountability and security is becoming more and more prominent”. Nonetheless, despite ECSEL and EDA, no “EU computing base” exists in most IT domains that is even publicly verifiable in all its critical parts.
Consensus-based decision-making processes at the core of EU institutions – and of international public and mixed standard-setting bodies (such as ETSI) – have made it impossible to resist the firm will of even a single powerful country to corrupt or dilute to meaninglessness the standard-setting process.
Industry-driven standards are no better: standards bodies like the Trusted Computing Group and GlobalPlatform have focused on increasing user convenience and interoperability, and on reducing the overall costs of violations of content copyright and of the integrity of financial transactions, while paying lip service to those security and privacy demands of end-users that were at odds with state security agencies.
Therefore, the governance of such paradigms and certifications may need to be primarily independent, international, highly-competent and citizen-accountable, and the role of the national, and international governmental institutions (EU, UN, etc) – and major global IT industry players – can only be that of recognizers, adopters, and minority stakeholders. A process similar to that of the World Wide Web Consortium could be followed, but with much wider user- or citizen-accountability to keep companies from having too much control.
What is the role of the Free/Open Source Software movement in the prospects of wide availability of computing with meaningful user control?
Over the last thirty years, a huge amount of volunteer and paid work has been devoted to developing Free Software with the aim of promoting users’ civil freedom in computing.
Why then, to date, is no end-user computing device available at any cost that would give the user meaningful confidence that its computing is not completely and undetectably compromised at insignificant cost and risk?
Why is no end-user device available today that does NOT contain at least some “critical” software/firmware components that (a) are insufficiently verified relative to their complexity, or (b) are not verifiable in source code without an NDA, or are outright proprietary?
What should be the priorities of the free software community, and its short- and long-term objectives, in a post-Snowden World?
Free/Open Source Software, while providing important civil freedoms, does not directly improve the trustworthiness of a software application or stack in comparison to one whose source code is merely publicly verifiable without an NDA. At times, on the contrary, it has constrained available business models in ways that prevented the sustainable attraction of the very large resources necessary to guarantee sufficiently extreme auditing relative to complexity.
Nonetheless, an adequate new standard may need to mandate Free/Open Source Software and firmware very strictly, with few and/or temporary exceptions for non-critical parts, because such a mandate strongly promotes incentives for open innovation communities, volunteer expert auditing, and overall decentralisation of ecosystem governance.
These, in turn, substantially contribute to the actual and perceived security of IT, and promote an ecosystem that is highly resilient to very strong economic pressures, as well as to short- and long-term changes in technological, legislative and societal contexts.
Most importantly, without the highly active and well-meaning participation (paid and unpaid) of many of the world’s best IT security experts and “communities”, it would not be possible to achieve the sufficiently extreme auditing intensity and quality, relative to complexity, that is needed to achieve the project aims. Without such participation, even a project with a budget of hundreds of millions of euros could not reasonably expect to prevent successful remote attacks by the numerous and varied entities with access to remote vulnerabilities – regularly devised, commissioned, acquired, purchased or discovered by actors that are extremely well-financed, have unprecedented accumulated skill-sets, and often face low or nonexistent actual liability.
If we can solve Challenge A, how can we concurrently and solidly ensure legitimate lawful access?
(B.1) Can new international non-governmental certification processes for end-to-end IT service providers – with sufficiently-extreme transparency, accountability, and oversight safeguards, such as multi-jurisdiction offline oversight processes based on peer-jury or peer-witness – ensure unprecedented and constitutionally-meaningful* levels of trustworthiness, effective onsite in-person lawful access, and prevent malevolent use?
(B.2) Similarly, can extreme third-party safeguards – forcibly adopted by states for their use of remote endpoint lawful access schemes (i.e., lawful hacking) – reduce, to acceptable levels, the risk of both grave compromises of investigative processes and of highly-scalable abuse of innocent citizens?
Answers to both of these challenges mostly fall in two classes:
NO, it is impossible or nearly impossible. It is safer not to discuss or substantially research the option. Here is in detail why: …
YES, It is largely or substantially probable. Very extensive joint discussions, research, and concrete proposals are direly needed. Here are the binding high-level requirements of such standard-setting and certification processes.
Citizens are continuously asked by states and by digital rights activists: “Would you rather be free or safe?”. We suspect digital privacy and public safety are not an ‘either-or’ question, but instead a ‘both or neither’ challenge. While acknowledging that solving one or both of the challenges may not be possible, we believe extensive intellectual and monetary resources should be devoted to such attempts. Refusing to make counter-proposals that acknowledge the crucial need for constitutional lawful access and attempt to take it into account – as most digital rights activist organizations have done to date – backfires for digital civil rights, by ensuring that governments go ahead maintaining the status quo or implement “sub-optimal” standards and laws.
The EU Cybersecurity Strategy (2013) states that “The same laws and norms that apply in other areas of our day-to-day lives apply also in the cyber domain”, and that “Fundamental rights, democracy and the rule of law need to be protected in cyberspace. But freedom online requires safety and security too”.
ASSURANCE OF CURRENT LAWFUL ACCESS SCHEMES
Today, state-mandated back-doors – hidden or public, like telephone interception systems – or state-sanctioned back-doors – such as undisclosed critical vulnerabilities created, acquired, discovered or used, legally or illegally – exist in nearly all IT devices.
Over the last decade, in addition to sanctioning back-doors everywhere, individual nation states have repeatedly proven utterly incapable of designing, legislating or setting adequate technical and organizational standards for state lawful access. Nonetheless, the dire need to reconcile privacy and cyber-investigation remains as crucial as ever.
The US and most western states already have in place plenty of legislation, and legally-authorized intelligence programs, that enable them to access a suspect’s communications following due-process legal authorization, including mandatory key disclosure, lawful hacking laws, national security letters, and other laws.
Powerful states invest tens of millions of dollars every year in pressures of all kinds to ensure that IT systems of meaningfully high trustworthiness are not available to the civilian market and, indirectly, to nearly all of the internal intelligence, military and lawful access systems markets. Such pressures take the form of the creation and discovery of symmetrical backdoors; onsite subversion of various kinds; economic pressures (CIA venture capital, procurement pressures, etc.); patenting pressures (NSA secret patents); legal pressures (crypto export controls); and strong pressures to establish high-trustworthiness IT standards that are incomplete (Common Criteria, FIPS, etc.) or compromised (Dual_EC_DRBG). That is in addition to similar activities by other powerful states, and to tens of millions of Euros of investments by zero-day market companies.
Nonetheless, a few of the most knowledgeable and well-funded criminals, state and non-state, regularly use – and could use – custom-made end-to-end IT infrastructures that avoid components with critical vulnerabilities known to powerful states. On the other hand, commercial vendors like Apple – having uniquely full control of their life-cycle, and not being mandated to store a master key – are in theory positioned to render their future systems inaccessible to lawful access. That is very unlikely, however, because of: the huge relative complexity of their systems and life-cycle, which inherently exposes them to the creation of weaknesses via legal or illegal subversion by powerful state actors, as well as to the independent discovery of vulnerabilities; and the high level of plausible deniability in a scenario in which Apple may purposely be leaving highly-safeguarded and asymmetrical back-doors for a few states. The same arguments hold for current high-assurance IT systems, which in all known cases additionally lack control over a number of critical life-cycle phases.
CURRENT BEST PRACTICES, APPROACHES AND PROPOSALS
Dominant strategy of mainstream digital civil rights activists
Almost all citizens and many activists recognize the benefits of enabling due-process lawful access for criminal investigation, but grave incompetence and abuse by states have brought most experts to believe that such access cannot be ensured without unacceptable risks to citizens’ liberty.
The IT security industry is creating solutions that are either based on, or add to, systems which are non-verifiable in critical parts, and whose complexity is far beyond what can ever be sufficiently audited. Meanwhile, IT privacy activists push similarly inadequate existing Free and Open Source privacy tools to the masses, merely increasing usability, or at best seeking small grants for very inadequate complexity reduction and increases in isolation and auditing.
Proposals “hinted at” by US/UK governments
In recent statements, the NSA, Europol, UK Prime Minister Cameron, President Obama, the US Department of Justice, and the FBI have proposed to solve the “going dark” problem by mandating some kind of backdoor into all IT systems. The FBI has more specifically proposed “legislation that will assure that when we get the appropriate court order . . . companies . . . served . . . have the capability and the capacity to respond”, while the NSA has been generically referring to organizational or technical safeguards ensuring backdoor access authorization approval by multiple state agencies 5, and Obama to a possible safeguard role of non-state entities 6.
The Snowden and Hacking Team revelations have made clear that – in addition to covertly introducing, purchasing and sanctioning symmetric back-doors everywhere – most western nations have consistently proven incapable or unwilling to design, standardize, legally oversee or certify lawful access by LEAs or intelligence agencies, both for traditional phone wiretaps and for IT systems. Current schemes and systems have very poor or no citizen or legislative-branch accountability, for lack of both legal mechanisms and adequately accountable socio-technical systems.
Such precedents and a number of technical facts suggest that such a solution would most likely prove ineffective against the most serious criminals, while creating great risks of civil liberties abuse 7. Among the infeasibilities is the fact that – short of mandating a complete and impossibly draconian control over all connected IT devices through unbreakable remote attestation – no master key for lawful access in IT products can prevent a suspect from encrypting his messages a second time, possibly through steganography, rendering the master key useless for reading the plain text or audio, and even making it hard to prove that the suspect sent an unlawfully encrypted message.
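The super-encryption objection can be sketched in a few lines. The snippet below is an illustrative toy (a one-time-pad XOR cipher, not a production scheme): even if an escrowed master key unlocks the outer, mandated layer, lawful access recovers only the suspect’s inner ciphertext, never the plaintext.

```python
import secrets

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # One-time-pad XOR: toy cipher for illustration only; XOR is its own inverse.
    return bytes(k ^ d for k, d in zip(key, data))

message = b"meet at dawn"

# Inner layer: the suspect's private key, never escrowed anywhere.
inner_key = secrets.token_bytes(len(message))
inner_ct = xor_cipher(inner_key, message)

# Outer layer: the only key reachable via the lawful-access master key.
escrowed_key = secrets.token_bytes(len(inner_ct))
outer_ct = xor_cipher(escrowed_key, inner_ct)

# Lawful access strips the outer layer and recovers only ciphertext.
recovered = xor_cipher(escrowed_key, outer_ct)
assert recovered == inner_ct   # outer layer removed
assert recovered != message    # plaintext still unreadable
```

The same asymmetry holds for any real cipher: the escrow scheme can only ever expose the layers it was built into.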
Proposals for recommendations by EU Parliament
A new report commissioned by the EU Parliament seems to advise that “lawful cracking” lawful access systems, “when they are used in Europe with the appropriate oversight and safeguards could have legitimate purposes”. Currently, however, such systems appear to be unconstitutional in Italy and Germany (except for intelligence purposes), for example, though they are legal in the US.
“No new state-mandated backdoor” proposal, by 14 leading US/UK IT security experts (1997-2014)
In Keys Under Doormats, an open letter published on July 6th, 2015, 14 of the most renowned US computer security experts made a detailed case against the introduction of new national legislation in the US and elsewhere, possibly as part of international agreements. They also list questions that any such proposal should answer so that the public and experts can assess the foreseeable risks of grave civil liberties abuses. The document is intended as an update, in light of recent developments, of a very extensive and influential similar proposal from the 1990s, The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption.
Proposal for formalization and regulation of “lawful cracking” by Bellovin, Blaze, Landau and Clark
Even some IT security experts who have for decades been among the staunchest opponents of lawful access solutions for IP communications have acknowledged that some “going dark” problems exist or could potentially exist, and that – regardless of quite varying opinions about their gravity – a solution will need to be found as political pressures keep mounting 8.
Three of the most prominent among the 14 experts mentioned above, together with Sandy Clark, have proposed [through Going Bright, 2013, and Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet, 2014] an alternative solution that requires the state to “exploit the rich supply of security vulnerabilities already existing in virtually every operating system and application to obtain access to communications of the targets of wiretap orders”, and to properly regulate this practice. It basically proposes to formalize and strictly regulate the state’s ability to hack citizens pursuant to a court order, with very extensive measures and safeguards to mitigate the consequent negative effects, including:
Creation of new vulnerabilities is not allowed; only the discovery of existing vulnerabilities and the creation of exploits for them.
Mandatory reporting of vulnerabilities to IT vendors upon discovery or acquisition, with some exceptions. This counts on the fact that new vulnerabilities will keep being found, and that it takes time for reported ones to be patched.
Limitation of lawful access software to only the authorized access actions (whether intercept, search, or other).
They propose to formalize and regulate the use of “lawful cracking” techniques as a way to enable the state to pursue cyber-investigation:
“We propose an alternative to the FBI’s proposal: Instead of building wiretapping capabilities into communications infrastructure and applications, government wiretappers can behave like the bad guys. That is, they can exploit the rich supply of security vulnerabilities already existing in virtually every operating system and application to obtain access to communications of the targets of wiretap orders.
We are not advocating the creation of new security holes, but rather observing that exploiting those that already exist represents a viable—and significantly better—alternative to the FBI’s proposals for mandating infrastructure insecurity. Put simply, the choice is between formalizing (and thereby constraining) the ability of law enforcement to occasionally use existing security vulnerabilities—something the FBI and other law enforcement agencies already do when necessary without much public or legal scrutiny—or living with those vulnerabilities and intentionally and systematically creating a set of predictable new vulnerabilities that despite best efforts will be exploitable by everyone.”
Some emerging innovative solutions and best practices
The Brazilian state IT agency SERPRO has internal regulations that require 4 state officials from 4 different public agencies to be physically present in a specific hosting room, and to consent, in order to allow access to the emails of a state employee based on a court order. More recently, the agency has been increasing the assurance of its solution, for both citizens and law enforcement, with additional server-side safeguards, through Kryptus solutions. Leading Brazilian IT security expert Roberto Gallo, CEO of Kryptus, suggests similar solutions.
Another best practice is law enforcement access to a user’s keys for digital passports in Austria, which currently requires 3 officials from different state agencies to take part in in-person secret-sharing and “threshold secret” processes.
Such approaches, however, still do not deal adequately with the assurance of several other potential vulnerabilities in the life-cycle, such as: client-device hardware and software, other critical software and hardware stacks on the server side, the systems used by law enforcement to manipulate and store the acquired information, and the fabrication of critical hardware components.
In addition to much higher and more comprehensive assurance requirements – given the low level of citizen trust in government – citizen-witnesses or citizen-juries may need to be added to the officials from different state agencies, in order to provide an additional layer of guarantee.
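The “threshold secret” processes mentioned above are typically built on threshold secret sharing, such as Shamir’s scheme, in which any k of n custodians (officials, and possibly citizen-witnesses) can jointly reconstruct a key, while any k−1 of them learn nothing. A minimal sketch follows; the prime field and the 3-of-4 parameters are illustrative assumptions, not drawn from the SERPRO or Austrian schemes.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; the finite field for the shares (illustrative choice)

def split(secret: int, k: int, n: int):
    """Shamir secret sharing: any k of the n shares recover the secret."""
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = secrets.randbelow(P)
shares = split(key, k=3, n=4)   # e.g. any 3 of 4 custodians must convene
assert recover(shares[:3]) == key
assert recover(shares[1:4]) == key
```

The k-of-n parameterization lets a scheme tolerate an absent or obstructing custodian while still requiring a physical quorum, which is exactly the property an in-person lawful access safeguard needs.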
What are the effects on public safety and well-being of the current unavailability of IT devices that are reliably resistant to undetected remote compromises by mid- or high-threat entities, legal or illegal?
What are the foreseeable effects on public safety and well-being of the wide availability of such IT devices, therefore resistant even to remote access by public security agencies with legal due process?
Could new lawful access schemes for high-assurance IT services and systems rely, not on states, but on provider-managed voluntary “key recovery” schemes certified by “trustworthy 3rd parties”, such as radically citizen-accountable, independent and competent international bodies?
Could the inevitable added risk be essentially shifted from technical systems to on-site organizational processes?
Can ultra-high assurance IT and related certification governance models radically increase the security, privacy or safety of complex and critical IT, AI and cyber-physical systems, such as for example self-driving cars, robo-advisors, or even Facebook?
Can the early adoption by EU or a critical mass of nations of ultra-high assurance low-level IT certifications and related governance models jump-start an actionable path, from the short to the long-term, to:
- restore meaningful digital sovereignty to citizens, businesses and institutions,
- cement their economic and civil leadership in the most security-critical IT and narrow Artificial Intelligence sectors, and
- substantially increase the chances of utopian rather than dystopian long-term artificial intelligence prospects?
In recent years, rapid developments in AI-specific components and applications, theoretical research advances, high-profile acquisitions by hegemonic global IT giants, and heartfelt declarations about the dangers of future AI advances from leading global scientists and entrepreneurs have brought AI to the fore as both (A) the key to private and public economic dominance in IT and other sectors in the short-to-medium term, and (B) the leading long-term existential risk (and opportunity) for humanity, due to the likely-inevitable “machine-intelligence explosion” once an AI project reaches human-level general intelligence.
Google, in its largest EU acquisition this year, acquired global AI leader DeepMind for 400M€ – a company in which Peter Thiel, Facebook’s primary initial investor, and Elon Musk had already invested. Private investment in AI has been increasing 62% a year, while the level of secret investment by multiple secretive agencies of powerful nations, such as the NSA, is not known – but presumably very large and fast increasing – in a winner-take-all race to machine super-intelligence among public and private actors, which may well already have started.
A recent survey of AI experts estimates a 50% chance of achieving human-level general intelligence by 2040 or 2050, while not excluding significant possibilities that it could be reached sooner. Such estimates may even be biased towards later dates because: (A) those that are by far the largest investors in AI – global IT giants and the US government – have an intrinsic interest in avoiding a major public-opinion backlash on AI that could curtail their grand solo plans; (B) it is plausible or even probable that substantial advancements in AI capabilities and programs have already happened but have been successfully kept hidden for years or decades, even while involving large numbers of people, as happened with the surveillance programs and technologies of the NSA and Five Eyes countries.
Some of the world’s most recognized scientists and most successful technology entrepreneurs believe that progress beyond that point may become extremely rapid, in a sort of “intelligence explosion”, posing grave questions about humans’ ability to control it at all (see Nick Bostrom’s TED presentation). Very clear and repeated statements by Stephen Hawking (the most famous scientist alive), Bill Gates, Elon Musk (the main global icon of enlightened tech entrepreneurship), and Steve Wozniak (co-founder of Apple) agree on the exceptionally grave risks posed by uncontrolled machine super-intelligence.
Elon Musk, shortly after having invested in DeepMind, even declared, in an erased but not retracted comment:
“The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast – it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe. 10 years at most. This is not a case of crying wolf about something I don’t understand. I am not alone in thinking we should be worried. The leading AI companies have taken great steps to ensure safety. They recognise the danger, but believe that they can shape and control the digital super-intelligences and prevent bad ones from escaping into the Internet. That remains to be seen…”
Some may ask why extreme IT security in support of AI safety is needed now, if its consequences may be far away. One clear and imminent danger is posed by self-driving and autonomous vehicles (aerial and terrestrial) – which utilize increasingly capable narrow AI systems – and the ease with which they can be “weaponized” at scale. Hijacking the control of a large number of drones or vehicles could potentially cause hundreds of deaths or more, or produce hardly-attributable hacks that trigger grave and unjustified escalations of military confrontation.
Stephen Hawking summarised it most clearly when he said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all”. Control relies on IT assurance to ensure that whoever formally controls a system is also whoever actually controls it, and possibly on international IT assurance certification governance, which may provide a governance model for international efforts to regulate advanced AI projects or, better, to guide international democratic projects to develop “friendly AI” before unfriendly AI reaches human-level general intelligence.
On Jan 23rd, 2015, nearly the entire “who’s who” of artificial intelligence – the leading researchers, research centers, companies and IT entrepreneurs, joined by possibly the world’s leading scientists and technology entrepreneurs – signed the open letter Research Priorities for Robust and Beneficial Artificial Intelligence, with an attached detailed paper.
The open letter, although a greatly welcome and needed general document, overestimates the levels of trustworthiness and comparability of existing and planned high-assurance IT standards, as well as the at-scale costs of “high-enough for AI” assurance levels, and focuses on security research as a way to make AI “more robust”.
The letter emphasizes the need for “more robust AI”. Yet a very insufficiently-secure AI system may be greatly “robust” in the sense of business continuity, business risk management and resilience, and still be extremely weak in safety or reliability of control. This outcome may sometimes be aligned with the goals of the AI sponsor/owner – and with those of other third parties, such as state security agencies, publicly or covertly involved – but gravely misaligned with the chances of maintaining meaningful democratic and transparent control, i.e. having transparent reliability about what the system, in actuality, is set out to do and who, in actuality, controls it.
More important than “robustness”, sufficiently-extreme security assurance levels may constitute the most crucial foundation for AI safety in the short and long terms, serve to increase transparency about who is actually in control, and act as a precondition for verification and validation.
As AI systems are used in an increasing number of critical roles, they will take up an increasing proportion of the cyber-attack surface. It is also virtually certain that AI and machine learning techniques will themselves be used in cyber-attacks. There is ample evidence that many advanced AI techniques have long been, and currently are, used by the intelligence agencies of the most powerful states to attack – often in contrast with national or international norms – end-users and IT systems, including IT systems using AI. As said above, while the levels of investment by public agencies of powerful nations, such as the NSA, are not known, they are presumably very large and fast increasing, in a possibly already-started race among public and private actors – a race that could in the near future accelerate into a winner-take-all race. The distribution of such funding by secretive state agencies between offensive R&D and defensive R&D (i.e. AI safety) will most likely follow the current ratio of tens of times more resources for offensive R&D.
The above open letter states that “Robustness against exploitation at the low-level is closely tied to verifiability and freedom from bugs”. This is correct, but may be only partially so, especially for critical and ultra-critical use cases, which will become more and more dominant. It is better to speak of auditability, so as not to get confused with (formal) IT verification. It is crucial and unavoidable to have complete auditability – and extremely diverse, competent and well-meaning actual auditing – of all critical hardware, software and procedural components involved in an AI system’s life-cycle, from certification standards-setting, to CPU design, to fabrication oversight. (Such auditing may need to happen in secrecy, because public auditability of software and hardware designs may pose a problem insofar as projects pursuing “unfriendly super-intelligence” could gain an advantage in a winner-take-all race.) Since 2005 the US Defense Science Board has highlighted that “Trust cannot be added to integrated circuits after fabrication”, as vulnerabilities introduced during fabrication can be impossible to verify afterwards. Bruce Schneier, Steve Blank, and Adi Shamir, among others, have clearly said there is no reason to trust CPUs and SoCs (design and fabrication phases). No end-2-end IT system or standard exists today that provides such complete auditability of critical components.
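For software and firmware artifacts, one small building block of such auditability can be mechanized: an independent auditor publishes the cryptographic digest of the artifact it reviewed, and any deployer can later check that the shipped artifact matches byte for byte. The sketch below uses placeholder firmware bytes; by contrast, it also illustrates the Defense Science Board’s point, since a fabricated chip offers no comparable digest to verify after the fact.

```python
import hashlib

def sha256(data: bytes) -> str:
    # Hex digest of the artifact; any single-byte change alters it completely.
    return hashlib.sha256(data).hexdigest()

# An auditor publishes the digest of the firmware image it actually reviewed.
reviewed_firmware = b"\x7fELF...trusted build"  # placeholder artifact bytes
published_digest = sha256(reviewed_firmware)

# A deployer later verifies that the shipped artifact is the reviewed one.
shipped_firmware = b"\x7fELF...trusted build"
assert sha256(shipped_firmware) == published_digest

# A single flipped byte (e.g. supply-chain tampering) is immediately detected.
tampered = b"\x7fELF...trusteD build"
assert sha256(tampered) != published_digest
```

The check only transfers trust from the artifact to the auditing process that produced the digest, which is why the competence and accountability of that process remain the crux of the argument above.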
It is impossible, and most probably will remain so, to guarantee the absence of critical vulnerabilities – given the socio-technical complexity of IT systems – even if those systems were simplified by 10 or 100 times and audited at radically higher levels relative to complexity.
Nonetheless, it remains extremely crucial that adequate research devise ways to achieve sufficiently-extreme confidence about “freedom from critical vulnerabilities” through new paradigms. We may need to achieve sufficient user-trustworthiness that sufficient intensity and competency of engineering and auditing effort, relative to complexity, have been applied to all critical software and hardware components. No system or standard exists today to systematically and comparatively assess such target levels of assurance for a given end-2-end computing service and its related life-cycle and supply chain.
As stated above, all AI systems in critical use cases – and even more crucially those in advanced AI systems increasingly approaching machine super-intelligence – will need to be so robust, in security terms, that they resist multiple extremely-skilled attackers willing to devote cumulatively even tens or hundreds of millions of Euros to compromising at least one critical component of the supply chain or life-cycle, through legal and illegal subversion of all kinds, including economic pressures – attackers enjoying a high level of plausible deniability, a low risk of attribution, and (for some state actors) minimal risk of legal consequences if caught.
To reduce these enormous pressures substantially, it may be very useful to research socio-technical paradigms by which sufficiently-extreme levels of AI-system user-trustworthiness can be achieved while transparently enabling due-process cyber-investigation and crime prevention. Resolving this dichotomy would reduce the pressure on states to subvert secure high-assurance IT systems in general, and possibly – through mandatory or voluntary international lawful access standards – improve humanity’s ability to conduct cyber-investigations into the most advanced private and public AI R&D programs. Cyber-investigation may be crucial for investigating criminal activities aimed at jeopardizing AI safety efforts.
There is a need to avoid the risk of relying for guidance on high-assurance low-level system standards/platform projects from the defense agencies of powerful nations – such as the mentioned DARPA SAFE, NIST, the NSA Trusted Foundry Program, and the DARPA Trust in Integrated Circuits Program – when it is widely proven that their intelligence agencies (such as the NSA) have gone to great lengths to surreptitiously corrupt technologies and standards, even those overwhelmingly used internally in relatively high-assurance scenarios.
The cost of radically more trustworthy low-level systems for AI could be made comparable to that of the commercial systems mostly used as standard in AI development. Those cost differentials could be reduced to insignificance through production at scale and open innovation models that drive down royalty costs. For example, hardware parallelization of secure systems and lower unit costs (due to lower royalties) could enable adequately secure systems to compete with, or even out-compete, generic systems in cost and performance.
There is much evidence that R&D investment in solutions that defend devices from the inside (assuming inevitable failure of intrusion prevention) can end up increasing the attack surface, if those systems’ life-cycles are not themselves subject to the same extreme security standards as the low-level systems on which they rely – much as antivirus tools, password-storing applications and other security tools are often used as ways to get directly at a user’s or endpoint’s most crucial data. The recent NSA, Hacking Team and JPMorgan scandals show the ability of hackers to move inside extremely crucial systems without being detected, possibly for years. DARPA high-assurance programs highlight that about 30% of vulnerabilities in high-assurance systems are introduced by internal security products.
Ultimately, it may be argued that IT assurance that is high enough for critical scenarios like advanced AI is about the competency and citizen-accountability of the organizational processes critically involved in the entire life-cycle, and the intrinsic constraints and incentives bearing on critically-involved individuals within such organizations.
It is worth considering whether the short-term and long-term R&D needs of artificial intelligence (“AI”) and information technology (“IT”) – in terms of security for all critical scenarios – may become synergistic elements of a common “short to long term” vision, producing huge societal benefits and shared business opportunities. The dire short-term societal need and market demand for radically more trustworthy IT systems, for citizens’ privacy and security and for the protection of critical societal assets, can very much align – in a grand strategic EU vision for AI and IT in cyberspace – with the medium- and long-term market demand and societal need for large-scale ecosystems capable of producing AI systems that are high-performing, low-cost and still provide adequately-extreme levels of security for AI’s most critical usage scenarios.
What governance models can best maximize the trustworthiness and resilience of an ultra-high assurance certifications body in critical IT and AI domains? Can such certifications and their governance models provide the basis for radically more trustworthy, trusted, resilient and enforceable national policies and international treaties?
Although the creation of new or improved international standardization and certification bodies for critical end-2-end societal ICT systems can be highly effective within current legislative and constitutional frameworks and treaties – i.e. without government recognition or legislative changes – such bodies could also provide the crucial socio-technical basis of oversight, standardization and certification needed to ensure the meaningful enforceability of their recognition or adoption, for certain classes of services, by single nations, or by intergovernmental agreements and treaties.
Examples of such international agreements or treaties could be the Geneva-Convention-like treaty proposed by the UN Special Rapporteur on the Right to Privacy, the proposed Snowden Treaty, or the standards bodies called for by Bruce Schneier for the so-called World-Sized Web. Constituent processes for the creation of such intergovernmental treaties could draw inspiration from those of the Coalition for the International Criminal Court, successfully co-led by the World Federalist Movement, or from a proposed constituent process based on UN Caucuses, approved by the World Federalist Movement 2008 Congress (post).