Table of Contents
- PANEL 1 – CHALLENGE A:
- PANEL 2 – The role of Free Software
- What is the role of the Free/Open Source Software movement for the prospects of wide availability of computing with meaningful user control
- PANEL 3 – CHALLENGE B
- Certification Governance
- PANEL 4 – High-assurance IT and AI
- PANEL 5 – Solving Challenges A and B
- Linking Challenges A and Challenge B
- NEW PROPOSALS
PANEL 1 – CHALLENGE A:
Is it feasible to provide ordinary citizens access to affordable and user-friendly end-to-end IT services with constitutionally-meaningful levels of user-trustworthiness, as a supplement to their everyday computing devices? If so, how? What scale of investment is needed? What standards and certifications can enable a user to reliably distinguish such services from others?
IT security and privacy are a complete debacle from all points of view. EU citizens, businesses and elected state officials have no access, even at high cost, to IT and “trust services” that are NOT remotely, undetectably and cheaply compromisable by a large number of medium- and high-level threat actors. The most well-financed criminal entities avoid accountability through effective use of ultra-secure IT technologies, or by relying mostly on advanced non-digital operational security techniques (OpSec). National defenses are increasingly vulnerable to large-scale attacks on “critical infrastructure” by state and non-state actors, which are increasingly capable of causing substantial human and economic harm.
The critical vulnerabilities that leave everything broken are nearly always state-mandated or state-sanctioned backdoors: the state has either created, acquired or discovered them, while keeping that knowledge hidden, legally or illegally.
After Snowden, nearly all IT privacy experts and NGOs are up in arms to fight a second round of the 1990s Crypto Wars, to prevent backdoors in IT systems from being mandated by nations in the wake of “terrorism threats”. Most are focused on (a) pushing existing Free/Open Source Software privacy tools to the masses, while making them more user-friendly and incrementally safer with small grants, and (b) fighting this second round of the Crypto Wars to prevent governments from creating official backdoors.
Most IT privacy experts and activists have not noticed that they are fighting on a faraway, imaginary frontline while their cities are occupied by the “enemy”, undefended. Meanwhile, they overwhelmingly refrain from proposing anything about what we should do about the state backdoors and critical vulnerabilities that already exist everywhere. Almost no one challenges state security agencies’ pretence of “going dark”, trumpeted to suggest how much lawful access capability they are missing, when they overwhelmingly are not missing it, not even for scalable targeted attacks.
First off, the Crypto Wars of the 1990s were won in appearance, but utterly lost in essence. While the US and other governments backtracked on their proposals for ill-conceived mandatory backdoors (such as the Clipper Chip) and algorithmically-unbreakable encryption became accessible to anyone, the most powerful states’ security agencies won many times over. Over the following two decades: (1) powerful state security agencies surreptitiously and undetectably placed backdoors nearly everywhere, with nonexistent or very insufficient due process oversight, even compared to the already inadequate oversight of lawful interception systems; (2) many valuable targets, even very “up there”, kept using IT devices that they thought lacked backdoors, but which had been snooped upon for years or decades without their knowledge; (3) the general perception that “free crypto for all” had won prevented even a demand for meaningful IT devices, which could minimize, isolate, simplify and do away with the need for trust in scores of untrustworthy actors along critical phases of the device life-cycle1.
Business. EU IT security/privacy businesses are increasingly unable to compete and innovate sustainably, as they cannot differentiate themselves on the basis of meaningful and comprehensive benchmarks. They are also increasingly unable to convince users to invest in fixing vulnerabilities in one part of their systems when, most surely, many vulnerabilities remain in other critical parts, known to the same kinds of threat actors. In a post-Snowden world, the success of even high-assurance cyber-security systems is increasingly “security theatre”, because even the highest-assurance systems in the civilian market contain at least one critical vulnerability, accessible in a scalable way by even mid-level threat actors, with very low risk of discovery and attribution. It is therefore almost impossible to measure and sustain the overall security added value of any new security service, and of related risk management strategies, even before assessing the increase in attack surface and vulnerabilities that any new product entails.
All the while, such security agencies’ media success in wildly overstating the “going dark” problem has enabled them to gather substantial political and public opinion consensus for: (1) unconstitutional surveillance practices gravely affecting non-suspect citizens, often set up under multiple redundant legal authorities; (2) convincing politicians and the public of the need to “outlaw” encryption and/or extend inadequate lawful access mandates, traditionally reserved to telephone operators, to all digital communications.
All the while, by breaking everything, they expose the US government, US private interests, and law enforcement and intelligence agencies to grave state security and espionage damage by foreign states and non-state actors.
All or nearly all end-points, both ordinary commercial systems and high-trustworthiness IT systems, are broken beyond the point of encryption, and are scalably exploitable by powerful nations and by a relatively large number of other mid- or high-level threat actors.
A lack of sufficiently extreme and comprehensive standards for critical computing, and the decisive covert action of states to preserve pre-Internet lawful access capabilities, have ensured that, while unbreakable encryption is everywhere, everything is broken; and that, while state-mandated or state-sanctioned backdoors are nearly everywhere2, the most skilled or well-financed criminals communicate unchecked.
Nearly all critical computing services include at least some critical components whose complexity is far beyond adequate verifiability. The design and fabrication of critical components and processes (CPU, SoC fabrication, etc.) are not publicly verifiable, and there are no reasons to trust providers’ carefulness and intent, when plausible deniability is very easy, liability is almost non-existent, and state pressures to “accidentally” leave a door open are extremely high.
As blogger Quinn Norton says, everything is broken. Revelations about systems and programs like NSA Turbine, NSA FoxAcid and Hacking Team have shown the huge scalability – in terms of low risk and cost – of completely compromising end-point devices, available to numerous public and private actors, and to even more numerous actors that do or could trade, lend or steal such capabilities. It has become clear that no IT system that assumes the need for trust in any one person or organization – and there are none today – can be considered meaningfully trustworthy. The exception to this rule is that some people in the world – top criminals, billionaires, or the highest state officials – do have access to devices that are most likely not compromised by external entities. This results in a huge asymmetry of power between them and all others, i.e. two classes of citizens.
Everything is broken mostly because of two structural, and highly interlinked, problems:
- The lack of sufficiently extreme and comprehensive standards for high-assurance IT services that provide meaningful confidence to end-users that the entire life-cycles of their critical components are subject to oversight and auditing processes that are comprehensive, user-accountable, publicly-assessable, and adequately intensive relative to complexity. A recent ENISA report, in line with many other expert reports, highlights: “At the time of writing, there is no single, continuous ‘line of standards’ related to cyber security, but rather a number of discrete areas which are the subject of standardisation”
- The decisive actions by state security agencies to maintain pre-Internet lawful access capabilities – since the popularization of algorithmically-unbreakable software encryption in the 1990s – through huge sustained investments in the discovery and creation of critical vulnerabilities throughout the life-cycle and supply chain of virtually all ordinary and high-assurance IT technologies. Furthermore, the covert nature of such programs has for decades allowed such agencies (and other advanced actors) to remotely and cheaply break into virtually all end-points thought safe by their users – with extremely vague accountability – as well as to covertly overextend their preventive surveillance capacities.
The only reliable measure of the effectiveness of an IT security provider, private or public, is its “closeness” to the major stockpilers of vulnerabilities, mostly a few large powerful states. This creates perverse intelligence network effects3, gravely undermining societies’ sovereignty, freedoms and competitiveness.
In sum, there is a wide unavailability, both for citizens and for lawful access schemes, of end-to-end IT services with meaningfully high trustworthiness levels.
This situation will not be changed by any nation’s law or international treaty. The stockpiling of zero-day vulnerabilities, through investment in discovery, creation and purchase by powerful state and non-state actors, will keep accelerating, and there is no chance any law or international treaty can significantly prevent that. Non-proliferation of IT weapons is very different from that of biological or nuclear weapons: their nature makes them easier to hide and reproduce, and they are used and spread daily by powerful actors in pursuit of their cyber-investigation goals.
The legislation affecting high-assurance IT systems and lawful access systems consists of national laws, whose regulatory implementations are mildly influenced by voluntary international public and/or private standards (Common Criteria, ETSI, NIST, etc.).
High-assurance IT services are regulated with the overriding aim of preventing malevolent use, and regulation therefore focuses on limiting the export (crypto export laws) and use of certain technologies, and increasingly their research (as in the ongoing national implementations of the Wassenaar Arrangement).
Lawful access processes, in both state security and civilian scenarios, are instead subject to very limited or nonexistent technical regulation of the security of their technical infrastructure, whether against abuse by state agencies of their own citizens or against attacks on such state agencies by external actors. They are subject to articulated, though largely inadequate, organizational and socio-technical regulations and oversight procedures. Multiple “mutual legal assistance” treaties regulate the often-crucial international cooperation in the pursuit of cyber-crimes. Such arrangements are so insecure and slow, often taking months, that most times workarounds “at the edge of legality” are deployed4, which are overwhelmingly unregulated.
Are wide investments in Challenge A realistic or sustainable in the absence of a concurrent solution to Challenge B?
It has emerged that almost all western nations, including the US and most EU countries, have one or more lawful ways under which state security agencies can intrude on the privacy of millions of citizens without a court order, including some sort of mandatory key disclosure legislation. Although their existence was long hidden, these are currently politically based on two justifications: (1) preserving or restoring the traditional lawful intercept capabilities that have been lost in cyberspace; (2) performing dragnet or large-scale targeted surveillance in order to prevent or prosecute grave crimes. The surprising public acceptance of the second justification arguably depends on the fact that it is currently the only way to achieve the first. The hoped-for wide-market uptake of new high-assurance ICT standards and solutions is a challenge that may need to be solved concurrently with that of devising ways to restore legitimate criminal investigation capabilities in cyberspace. It has become clear that citizens will choose perceived physical safety over cyber-privacy if given a stark choice. In fact, political and public opinion pressures to outlaw such technologies and standards, or to allow their subversion, would be huge, as they have been for decades, and would surely become unbearable after major terrorist attacks attributed, largely, to the use of such technologies. Such grave risks to legal sustainability are a major obstacle to private investment and public commitments in the wide deployment of such high-assurance technologies and standards for the civilian market.
Over the last decades, in addition to sanctioning backdoors everywhere, states have repeatedly proven utterly incapable of socio-technically designing, legally managing, or issuing proper technical and organizational certification requirements for lawful access compliance.
Similarly, states have been unable to create voluntary or mandatory IT security standards that come anywhere near being sufficiently extreme and comprehensive.
Consensus-based decision-making processes at the core of EU institutions – and of international public and mixed standard-setting bodies (such as ETSI) – have made it impossible to resist the firm will of even a single powerful country to corrupt, or dilute to meaninglessness, the standard-setting process.
Industry-driven standards are no better: standards bodies like the Trusted Computing Group and GlobalPlatform have focused on increasing user convenience and interoperability, and on reducing the overall costs of violations of content copyright and of the integrity of financial transactions, while paying mere lip service to those security and privacy demands of end-users that were at odds with state security agencies.
Therefore, the governance of such paradigms and certifications may need to be primarily independent, international, highly-competent and citizen-accountable, with national and international governmental institutions (EU, UN, etc.) – and major global IT industry players – limited to the role of recognizers, adopters, and minority stakeholders. A process similar to that of the World Wide Web Consortium could be followed, but with much wider user- or citizen-accountability, to avoid giving companies too much control.
What is the role of the Free/Open Source Software movement for the prospects of wide availability of computing with meaningful user control
Over the last thirty years, a huge amount of volunteer and paid work has been devoted to developing Free Software with the aim of promoting users’ civil freedom in computing.
Why then, to date, is no end-user computing device available at any cost which gives the user meaningful confidence that his or her computing is not completely compromised, undetectably and at insignificant cost and risk?
Why is no end-user device available that does NOT contain some “critical” software/firmware components that are (1) non-verifiable in source code without an NDA, or even proprietary, and/or (2) not nearly sufficiently verified relative to their complexity?
What should be the free software community’s priorities and short- and long-term objectives in a post-Snowden world?
Provided that Challenge A can be met, can new voluntary international IT certifications – within some nations’ current legislative frameworks – provide safeguards that are sufficiently extreme to reconcile meaningful personal privacy, effective lawful access and the prevention of malevolent use? If so, what are the core paradigms of such certification processes? Could such processes rely, not on states, but on provider-managed voluntary “key recovery” schemes certified by radically citizen-accountable, independent and competent international bodies? Could the inevitable added risk be essentially shifted from technical systems to on-site organizational processes?
Today, state-mandated backdoors – hidden or public like the telephone interception systems – or state-sanctioned backdoors – such as undisclosed critical vulnerabilities created, acquired, discovered or used, legally or illegally – are in nearly all IT devices.
Over the last decades, in addition to sanctioning backdoors everywhere, individual states have repeatedly proven utterly incapable of socio-technically designing, legally managing, or setting proper technical and organizational requirements for state lawful access compliance. Nonetheless, the dire need to reconcile privacy and cyber-investigation remains as crucial as ever.
The US and most western states have in place plenty of legislation, and legally authorized intelligence programs, that enable them to access a suspect’s communications following due process authorization, including mandatory key disclosure, lawful hacking laws, national security letters, and other instruments.
Powerful states invest tens of millions of dollars every year in pressures of all kinds in order to ensure that IT systems of meaningfully high trustworthiness levels are not available to the civilian market and, indirectly, to nearly all of the internal intelligence, military and lawful access systems markets. Such pressures take the form of the creation and discovery of symmetric backdoors; on-site subversion of various kinds; economic pressures (CIA venture capital, procurement pressures, etc.); patenting (NSA secret patents); legal pressures (crypto export controls); and strong pressures to establish high-trustworthiness IT standards that are incomplete (Common Criteria, FIPS, etc.) or compromised (Dual_EC_DRBG). That is in addition to similar activities by other powerful states, and to tens of millions of euros of investments by zero-day market companies.
Nonetheless, a few of the most knowledgeable and well-funded criminals, state and non-state, regularly do or could use custom-made end-to-end IT infrastructures that manage to avoid components with critical vulnerabilities known to powerful states5. On the other hand, commercial vendors like Apple – having uniquely full control of their life-cycle, and not being mandated to store a master key – are in theory positioned to render their future systems inaccessible to lawful access; but that is very unlikely, because of the huge relative complexity of their systems and life-cycle, which makes them inherently prone to the creation of weaknesses via subversion, legal or illegal, by powerful state actors, as well as to independent discovery of vulnerabilities; and because of the high level of plausible deniability in a scenario in which Apple may be purposely leaving highly-safeguarded and asymmetric backdoors for a few states. The same arguments hold for current high-assurance IT systems, which in all known cases add a lack of control over a number of critical life-cycle phases.
Almost all citizens and many activists recognize the benefits of enabling due process lawful access for criminal investigation, but the grave incompetence of and abuse by states have brought most experts to believe that such access cannot be ensured without unacceptable risks to citizens’ liberty.
The IT security industry is creating solutions that are either based on, or added to, systems which are non-verifiable in critical parts, and whose complexity is far beyond what can ever be sufficiently audited. Meanwhile, IT privacy activists push similarly inadequate existing Free and Open Source privacy tools to the masses, merely increasing usability, or at best seeking small grants for very inadequate complexity reduction and increases in isolation and auditing.
In recent statements, the NSA, Europol, UK Prime Minister Cameron, President Obama, the US Department of Justice and the FBI have proposed to solve the “going dark” problem by mandating some kind of backdoor in all IT systems. The FBI has more specifically proposed “legislation that will assure that when we get the appropriate court order . . . companies . . . served . . . have the capability and the capacity to respond”6; the NSA has generically referred to organizational or technical safeguards ensuring that backdoor access is authorized by multiple state agencies7; and Obama has referred to a possible safeguard role for non-state entities8.
The Snowden and Hacking Team revelations have made clear that – in addition to covertly introducing, purchasing and sanctioning symmetric backdoors everywhere – most western nations have consistently proven incapable or unwilling to design, standardize, legally oversee or certify lawful access by LEAs or intelligence agencies, both for traditional phone wiretaps and for IT systems. Current schemes and systems have very poor or no citizen or legislative-branch accountability, for lack of both legal mechanisms and adequately accountable socio-technical systems.
Such precedents, and a number of technical facts, mean that such a solution would most likely prove ineffective against the most serious criminals while creating great risks of civil liberties abuse9. Among the infeasibilities is the fact that – short of mandating a complete and impossibly draconian control over every connected IT device through unbreakable remote attestation – no master key for lawful access in IT products can prevent a suspect from encrypting his messages a second time, possibly through steganography, rendering the master key useless for reading the plaintext or audio, and even making it hard to prove that the suspect sent an unlawfully encrypted message.
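The double-encryption objection can be sketched concretely. The toy code below is illustrative only: one-time-pad XOR stands in for any strong cipher, and all names are hypothetical. It shows that a “master key” to the provider’s layer recovers only the suspect’s inner ciphertext, never the plaintext:

```python
# Toy sketch (NOT real cryptography): a suspect pre-encrypts with a key
# the provider never sees, so lawful access to the provider's layer
# yields only random-looking bytes.
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """XOR two equal-length byte strings (one-time-pad stand-in)."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at dawn"
suspect_key = secrets.token_bytes(len(message))   # known only to the suspect
provider_key = secrets.token_bytes(len(message))  # escrowed for lawful access

inner = xor(message, suspect_key)   # suspect's private layer
wire = xor(inner, provider_key)     # provider's (escrowed) layer

recovered = xor(wire, provider_key)          # lawful access with the master key
assert recovered == inner                    # only the inner ciphertext emerges
assert xor(recovered, suspect_key) == message  # plaintext needs the suspect's key
```

The escrow holder cannot even prove that `recovered` hides a second encryption layer rather than random data, which is the evidentiary problem noted above.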
Proposal against any legislation mandating the capability of all IT providers to comply with lawful access requests
In an open letter, Keys Under Doormats, published on July 6th, 2015, 14 of the most renowned US computer security experts made a detailed case against the introduction of new national legislation, in the US and elsewhere, and possibly as part of international agreements. They also list questions that any such proposal should answer in order for the public and experts to assess the foreseeable risks of grave civil liberties abuses.
Even some IT security experts who have for decades been the staunchest opponents of lawful access solutions for IP communications acknowledge that some “going dark” problem exists or could potentially exist and that, regardless of quite varying opinions about its gravity, a solution will need to be found as political pressures keep mounting10.
Therefore, three of the most prominent among the 14 experts mentioned above, together with Sandy Clark, have proposed a regulated scheme in which:
- The creation of new vulnerabilities is not allowed; only the discovery of existing vulnerabilities and the creation of exploits for them;
- Vulnerabilities must be reported to IT vendors on discovery or acquisition, with some exceptions. The scheme counts on the fact that new vulnerabilities will keep being found and that it takes time for them to be patched;
- Lawful access software is limited to only the authorized access actions (whether intercept, search, or other).
They propose to formalize and regulate the use of “lawful hacking” techniques as a way to enable the state to pursue cyber-investigations:
“We propose an alternative to the FBI’s proposal: Instead of building wiretapping capabilities into communications infrastructure and applications, government wiretappers can behave like the bad guys. That is, they can exploit the rich supply of security vulnerabilities already existing in virtually every operating system and application to obtain access to communications of the targets of wiretap orders.
We are not advocating the creation of new security holes, but rather observing that exploiting those that already exist represents a viable—and significantly better—alternative to the FBI’s proposals for mandating infrastructure insecurity. Put simply, the choice is between formalizing (and thereby constraining) the ability of law enforcement to occasionally use existing security vulnerabilities—something the FBI and other law enforcement agencies already do when necessary without much public or legal scrutiny—or living with those vulnerabilities and intentionally and systematically creating a set of predictable new vulnerabilities that despite best efforts will be exploitable by everyone.”
Can standards for radically more trustworthy IT define an actionable European path, from the short to the long term, to: (1) restore meaningful digital sovereignty to EU citizens, businesses and institutions; (2) cement an EU leadership in the most security-sensitive IT and Artificial Intelligence sectors (such as autonomous vehicles, surveillance, etc.); and (3) substantially increase the chances of utopian rather than dystopian long-term artificial intelligence prospects?
Joint definition of paradigms, and high-level certification requirements and processes, for constitutionally-meaningful computing services and lawful access systems and processes
Although solving Challenge A would provide substantial societal benefits, the concurrent solution of Challenge A and Challenge B would provide substantially higher and more sustainable societal benefits, because of: the interdependency of constitutional mandates for public safety and constitutional rights to personal privacy; the need to reduce the chance of abuse of Challenge A solutions by criminals, including third state actors; and the need for effective provisioning of, and investment in, Challenge A to remain legally sustainable, even when grave public safety crimes are substantially aided, allegedly or actually, by the use of Challenge A solutions.
Most law enforcement agencies (LEAs), in their (possibly self-serving) public claims, and most e-privacy experts believe that a solution to Challenge A is already available to ordinary citizens willing to sacrifice substantial money and/or usability. Almost all privacy experts believe Challenge B is completely impossible, and that all discussions or proposals to find a solution are either nonsense, insincere or both.
Many believe that Challenge A is impossible or very uneconomical.
Most privacy experts, even those privately admitting there may be some way to solve Challenge B right, believe that such a possibility is so remote that we should not publicly investigate it, as doing so would increase the risk that states deploy the wrong solutions.
A few experts believe that Challenge A is possible, or economical, but that it will never be, or should never be, sustainably widely available unless Challenge B is also substantially solved. Almost all LEAs and a very few privacy experts believe Challenge B may be feasible through deep exploration of innovative socio-technical paradigms, relying on concepts such as: secret-sharing, multi-party computation across different jurisdictions, secret-sharing relying on on-site processes rather than IT, provider management, independent standardization and oversight, citizen-witness processes, and more.
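One of the concepts listed above, secret-sharing, can be made concrete with a minimal sketch of Shamir’s classic k-of-n scheme (illustrative, not production cryptography; the key value and parameters are hypothetical): a sensitive key is split into n shares, for example held by custodians in different jurisdictions, and any k shares, but no fewer, reconstruct it.

```python
# Minimal Shamir secret sharing over a prime field (illustrative sketch).
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 128-bit-sized secret

def make_shares(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = 0x2B7E151628AED2A6ABF7158809CF4F3C  # hypothetical 128-bit key
shares = make_shares(key, k=3, n=5)       # e.g. 5 custodians, threshold 3
assert recover(shares[:3]) == key         # any 3 shares suffice
assert recover(shares[2:]) == key
```

Fewer than k shares reveal nothing about the key, which is what makes such schemes a candidate building block for multi-jurisdiction oversight.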
- Could the solution to such challenges – through the creation of international certification processes and open and resilient ecosystems – cement a future global leadership of EU values and EU industry in the most security- and privacy-sensitive areas of IT, such as personal communications, state security and defense, IoT and advanced artificial intelligence? How much of the new paradigms needed to solve Challenge A can help solve Challenge B?
- Can solving Challenge A be legally sustainable unless we solve Challenge B? Can the wide-scale investments needed to bring meaningful privacy to all be secured if they are legally unsustainable over time?
- Can the difficulty of solving such problems be dramatically reduced by aiming at computing services that are supplementary to current ordinary commercial devices? A sort of meaningfully private sphere, though feature-wise limited, alongside a digital public sphere for more general computing?
- What are the effects on public safety of the current wide unavailability of meaningfully secure IT devices? What would be the effects on public safety and the public interest of a possible future wide availability of meaningfully secure IT, resistant therefore to scalable remote access even by public security agencies acting on lawful access requests?
- Can independent citizen-accountable, citizen-witness or citizen-jury organizational processes – from standard setting to fabrication and key recovery oversight – substantially or radically increase the actual and perceived trustworthiness of standard-setting and of the critical lifecycle phases of IT devices?
- Can a critical mass of international actors lead the creation of independent, citizen-accountable new standards, platforms and ecosystems for trustworthy IT that can underpin the global leadership of EU values and EU businesses in security- and privacy-critical computing?
- Can an actionable path for Europe be envisioned, from the short to the long term, to radically restore citizens’ and businesses’ access to private civic communications, to safeguard critical defense infrastructure, and to provide a unique competitive advantage, and long-term safety, for the most critical EU Artificial Intelligence services?
What can be the basic paradigms and research priorities in attempting to devise sufficiently-extreme technical and organizational safeguards that would constitute an acceptable risk? Could the added risk and complexity be radically reduced by shifting from technical setups to the same on-site organizational processes that are needed to solve Challenge A? Could such processes rely not on states but on provider-managed key recovery schemes, certified by radically citizen-accountable and technically-proficient international bodies? Should these be inter-governmental, non-governmental or mixed?
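One technical primitive a provider-managed key recovery scheme shifted onto on-site organizational processes might rest on is an n-of-n key split: every custodian must take part in a witnessed, on-site ceremony before anything can be disclosed. The sketch below is hypothetical (names and the XOR-splitting choice are assumptions; a threshold scheme could tolerate absent custodians):

```python
# Hypothetical n-of-n key split for a provider-managed recovery ceremony:
# each on-site custodian holds one share; recovery needs every share.
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int) -> list:
    """XOR-split `key` into n shares; all n are needed to recover it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = reduce(xor_bytes, shares, key)  # key ^ s1 ^ ... ^ s(n-1)
    return shares + [last]

def recover_key(shares: list) -> bytes:
    """XOR all shares back together to reconstruct the key."""
    return reduce(xor_bytes, shares)

recovery_key = secrets.token_bytes(16)          # the provider's escrowed key
custodian_shares = split_key(recovery_key, n=4)  # e.g. 4 on-site custodians
assert recover_key(custodian_shares) == recovery_key
```

Any single missing or withheld share leaves the key fully hidden, so abuse requires subverting every custodian at once, shifting the residual risk from remote technical attack to the on-site organizational process, as asked above.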
In this proposal, we argue that the establishment of a new international non-governmental standard and certification body for highest-assurance IT services and for lawful access schemes12 – operating mostly within current legislative and constitutional frameworks of liberal nations – may play a decisive role in concurrently promoting the wide availability of IT systems of meaningfully high trustworthiness13 levels, and in increasing the trustworthiness, intra-governmental oversight and citizen-accountability of existing lawful access schemes, both state-managed and provider-managed. We use “trustless” in its primary meaning of “untrusting” or “distrustful”, i.e. lacking the need or assumption of trust in anything or anyone, as is the case in democratic election systems or in certain WMD safety socio-technical systems. It stands in contrast with the root untrustworthiness of the Trusted Computing concept1415.
Significant expected outcomes of such wide adoption would be to inspire possible law changes by states that mandate or highly incentivize its internal use – including adequate formalization of existing lawful access authorities – and, independently, the economically and legally sustainable emergence of providers of IT systems of meaningfully high trustworthiness16 levels, including providers that decide to voluntarily offer provider-managed processing of lawful access requests, under certifiably-extreme safeguards.
To be clear, this proposal opposes any law proposing a “state backdoor”, i.e. any mandatory state requirement that IT providers provide access to their users’ communications in plain text pursuant to a legal authorization. It also opposes the introduction, legal formalization and regulation of state lawful hacking authorization legislation, unless it is regulated by an international certification body of sufficiently-extreme technical proficiency, ethical standing and citizen-accountability, which is primarily non-governmental.
As opposed to biological and nuclear weapons, however, state and non-state advanced IT weapons are completely useless against an infrastructure that does not contain a critical vulnerability known to the attacker. The perfect absence of vulnerabilities is impossible, but it is acknowledged that IT systems could receive 10 or 100 times more expert auditing relative to complexity – across all their critical life-cycle components – and therefore be made tens of times more resistant to the most advanced threats. Such systems could have a chance of making the entire lifecycle of critical systems verifiably resistant to sustained attacks – in the order of tens of millions of euros in “symmetric backdoors” and economic pressures – by skilled, covert and largely legally-unaccountable actors.
The following draft high-level socio-technical paradigms are defined in the form of high-level certification requirements for IT service providers which, in their final version and detailed specifications, will define a compliant service. They will constitute the terms that any provider must respect to be certified by the certification body (and that any active participant must respect to participate in a compliant ecosystem). They are therefore intended to guide not only the establishment of a certification standard, but also to ensure and sustain a suitable open ecosystem that is fully coherent with such standards.
Below is a draft of the high-level paradigms for certification requirements of IT systems of meaningfully high trustworthiness, applicable to both citizen communications and lawful access systems.
A compliant computing service by a given provider will therefore be described as one which:
aims at constitutionally-meaningful levels of actual and perceived trustworthiness to the end-user of the privacy, anonymity, integrity and authenticity of data and metadata of his/her entire connected computing experience, and not mere substantial improvements;
extends these terms to all software, hardware and organizational processes critically involved during the entire lifecycle at endpoints, as well as to the overall architecture of midpoints relevant to ensuring metadata privacy;
assumes that extremely skilled attackers are willing to devote even tens of millions of dollars to compromising the supply chain or lifecycle, through legal and illegal subversion of all kinds, including economic pressures, to the extent that the foreseeable cost and risks for such a party to perform continuous or pervasive remote targeted surveillance of any users, through compromise or tampering, are several times smaller than the cost of typical continuous proximity-based surveillance techniques;
assumes an active and complete lack of trust in anyone or anything, except in the assessable technical barriers and cumulative disincentives against decisive attacks on all organizational processes critically involved in the entire lifecycle, from standard setting to fabrication oversight;
provides extreme user-accountability, independence and technical proficiency of all organizations and processes critically involved in the computing service lifecycle and operation, ultimately relying on one or more international independent standard and certification bodies;
provides extreme intensity and competency of engineering and auditing efforts deployed, relative to complexity, for all critical software and hardware components, including through extreme software and hardware compartmentation;
includes an extreme level of cumulative liability – contractual/economic and legal – for all individuals and organizations critically involved, for failing to strictly follow procedures or for willingly compromising the life-cycle;
includes only highly-redundant hardware and/or software cryptosystems, whose protocols, algorithms and implementations are open, long-standing, standards-based, and extensively verified and endorsed by recognized ethical security experts – albeit with lesser performance – and widely recognized for their post-quantum resistance levels, aiming at a migration to post-quantum cryptography in the next 5-10 years;
integrates and develops only software and firmware whose source code and compiler allows for auditing without non-disclosure agreement (“NDA”), and which is developed openly and publicly in all its iterations;
strongly minimizes the inclusion of non-Free Software, including updatable and non-updatable firmware; makes extensive reuse of existing Free/Open Source Software components – through extreme stripping-down, hardening and re-writing – and strongly aims at realising a computing device with the least possible amount of non-free software and firmware in security-critical hardware components;
includes only critical hardware components whose firmware (and microcode) and full hardware designs are publicly auditable, at all times and without NDA, in an open, public, structured format. In the case of processors, this includes code, hardware description source files (such as VHDL or Verilog files), Spin interpreters and similar, programming tools, and compilers;
allows for complete auditability of the fabrication and assembly of all critical hardware components, and for extremely user-accountable and effective oversight of their manufacturing processes;
ensures the availability of one or more mirror physical copies of the complete client and midpoint server-side hosting-room setups, to enable easy independent testing by anyone, who is charged only the marginal cost of providing such access;
includes effective and exhaustive first-time in-person training for users, to ensure knowledge of basic operational security (OpSec) and of risk management for self and others;
ensures that current legislation and state-agency practices, in the countries of origin and/or location of all critical processes and components of the service, are consistent with constitutional/lawful and feasible compliance with these standards;
includes only technologies and innovations with clear and low long-term royalties – from patenting and licensing fees – to prevent undue pressure from intellectual-property right holders, lock-ins and patent vetoes, and to ensure an open platform with sustainably low costs, affordable to most Western citizens.
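The cryptosystem-redundancy requirement above can be illustrated with a minimal cascade-encryption sketch. This is a toy, assuming hypothetical helper names and a hash-based keystream for self-containment; a certified system would instead compose vetted, standards-based ciphers (e.g. AES-GCM layered with ChaCha20-Poly1305), so that a break of any single algorithm, or compromise of any single key, does not expose the data:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy SHA-256 counter-mode keystream -- for illustration only, not a
    # vetted cipher as the requirement above demands.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes, nonce: bytes) -> bytes:
    # One encryption layer: XOR the data with a key-dependent keystream.
    ks = keystream(key, nonce, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def cascade_encrypt(plaintext: bytes, key_a: bytes, key_b: bytes, nonce: bytes) -> bytes:
    # Two independent layers under independent keys: an attacker who breaks
    # only one layer (or obtains only one key) still sees ciphertext.
    return xor_layer(xor_layer(plaintext, key_a, nonce), key_b, nonce)

def cascade_decrypt(ciphertext: bytes, key_a: bytes, key_b: bytes, nonce: bytes) -> bytes:
    # Peel the layers off in reverse order.
    return xor_layer(xor_layer(ciphertext, key_b, nonce), key_a, nonce)
```

The design point is that the layers are redundant: both keystreams must be recovered before the plaintext is exposed, which is what the requirement means by "highly-redundant cryptosystems".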
This proposal agrees – as suggested by the US/UK experts’ proposal – that formalizing and regulating lawful hacking (better named “lawful cracking”) would be a substantial improvement with respect to the status quo. That proposal provides extremely valid and insightful technical requirements17 to radically raise the trustworthiness of the lawful cracking tools of lawful access systems, which can also inspire current international voluntary standards, such as those currently maintained by ETSI in Europe and NIST in the US.
Such mitigations, although only partially effective, seem nonetheless acceptable for ordinary commercial systems (i.e. low- and medium-trustworthiness systems), as they would not significantly change the overall vulnerability of such systems. In fact, such systems’ ratio of security auditing relative to complexity – and their low or ineffective HW/SW compartmentation – will expectedly remain so low as to guarantee state availability of at least one critical vulnerability enabling full, undetected remote endpoint compromise. In lay terms, having 10 or 5 holes would not significantly affect the number of actors with access to at least one critical remote vulnerability.
For high-trustworthiness systems, on the other hand, making it illegal for the state to create new vulnerabilities would in theory benefit the wide availability of IT systems of meaningfully high trustworthiness. However, as discussed, it is very unlikely that a law to that effect will ever be approved and enforced. In fact, it seems highly implausible that powerful states would reliably enforce, with serious liability, a ban on the creation of new vulnerabilities, as it would objectively put them at a disadvantage towards other state and non-state actors that would continue doing so, through symmetric and asymmetric18 backdooring. And their efforts would obviously focus on those systems to which they do not yet have access, i.e. high-trustworthiness IT systems.
Therefore, state and non-state pressures on breaking the life-cycle of high-trustworthiness systems would likely remain or increase, as would the current lack of standards for IT systems of meaningfully high-trustworthiness levels for both citizen communications and lawful access schemes.
In fact, the experts’ proposal does not specify sufficiently-extreme organizational and technical generic IT security requirements for the entire life-cycle of the critically-involved HW, SW and organizational components.
We therefore propose to:
- Amend the US/UK experts’ legislative proposal by:
- Requiring sufficiently-extreme organizational and technical generic IT security requirements for the entire life-cycle of the critically-involved HW, SW and organizational components, in addition to those specific to lawful-hacking lawful access systems, which are very well specified in the proposal.
- Mandating or incentivizing certification of lawful access services, including lawful hacking, as well as of IT systems in all e-government critical use-case scenarios, by approved international bodies with a very high level of technical proficiency, ethical standing and citizen-accountability.
- Forbidding – and strongly enforcing through severe liability provisions – the creation of new vulnerabilities in IT systems of meaningfully high trustworthiness that enact a voluntary socio-technical service to respond to lawful access requests, where both the IT systems and the lawful access service comply with the above-mentioned standards, plus additional requirements to further reduce the possibility of suspects’ circumvention of lawful access and of user abuse.
- Propose and promote the creation – independently from the above legislative proposal and our proposed amendments – of an international certification body, as described above, and of an initial open compliant ecosystem spanning the entire life-cycle and end-2-end computing experience.
6 FBI …
7NSA Director Rogers recently stated “I don’t want a backdoor … I want a front door. And I want the front door to have multiple locks. Big locks.”
8Obama stated in 2013: “Technology itself may provide us some additional safeguards. So for example, if people don’t have confidence that the law, the checks and balances of the court and Congress, are sufficient to give us confidence that government’s not snooping, well, maybe we can embed technologies in there that prevent the snooping regardless of what government wants to do. I mean, there may be some technological fixes that provide another layer of trustworthiness.”
9 See Keys under Doormats, and 1997 report …
10 From Lawful Hacking report: …
12 Means lawful access schemes (systems, processes, legislations, standards)
13 We’ll use “trustworthiness” to mean the same as “assurance”, which is too technical a word for the intended audience
14From Wikipedia on Trusted Computing: “Therefore, to trust anything that is authenticated by or encrypted by a TPM or a Trusted computer, an end user has to trust the company that made the chip, the company that designed the chip, the companies allowed to make software for the chip, and the ability and interest of those companies not to compromise the whole process.”
16 We’ll use “trustworthiness” to mean the same as “assurance”, which is too technical a word for the intended audience
17 See pages … of the Lawful Hacking report
18 An asymmetric backdoor is a purposely-created vulnerability which, in the intention and socio-technical plans of its creator, does not enable another attacker to exploit it on a given target based on mere knowledge of it. For example, encrypted exploits bound to a user’s MAC address may not be repurposed for other targets.
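The target-binding idea in footnote 18 can be sketched as follows. This is a hypothetical toy – all names are ours, a MAC address is a weak and spoofable identifier, and real asymmetric backdoors rely on public-key cryptography rather than a shared master secret – but it shows why mere possession of the blob does not let a third party deploy it against a different machine:

```python
import hashlib
import hmac

def bind_to_target(payload: bytes, target_mac: str, master_secret: bytes) -> bytes:
    """XOR-encrypt `payload` under a key derived from the target's MAC address.

    Applying the function twice with the same arguments recovers the payload;
    using a different MAC yields garbage, so the blob cannot be repurposed.
    """
    # Derive a per-target key from the creator's master secret and the MAC.
    key = hmac.new(master_secret, target_mac.encode(), hashlib.sha256).digest()
    # Expand the key into a keystream by chained hashing (toy construction).
    ks = hashlib.sha256(key).digest()
    while len(ks) < len(payload):
        ks += hashlib.sha256(ks[-32:]).digest()
    return bytes(a ^ b for a, b in zip(payload, ks))
```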