Preparatory Phase & Follow-On Meetings

Jan – Apr, 2022. CLOSED-DOOR PREPARATORY MEETINGS.
A series of Zoom and in-person meetings with TCA advisors and prospective technical and nation-state governance partners.

  • January 10th – February 15th, 2022, Rome, for meetings with government officials and partners.
  • February 15th–20th, in Zurich, Bern, and Geneva.
  • March 1st–3rd, 2022, in Tel Aviv at Cybertech.
  • March 10th–15th, Bonn and Helsinki, for meetings with former and current government officials.
  • March 10th–30th, Washington, DC, for meetings with prospective technical partners, US federal officials (and investors and clients of our spin-off), as part of our acceleration program at MACH37, the premier US cybersecurity accelerator. We are planning joint closed-door (and possibly hybrid) meetings near the US State Department.

May 12th–15th, 2022. FOLLOW-ON SPRINT (tentative, conditional on the Covid situation)
A 3-day gathering of work, brainstorming, bonding, and fun in Rome, Italy, at the beautiful Crowne Plaza Hotel (20% discounted rates), held concurrently with the Trustless Computing Association Annual Meeting. Participants will include TCA advisors, some of the over 120 previous speakers from the 8 editions of Free and Safe in Cyberspace, and special guests.

Objectives & Goals

Strengthen our support community and the core TCCB and Seevik Net founding documents:

Improve our plans, widen consensus, and create a feeling of community and hope around shared aims, through bonding and open brainstorming. Workshops and panels will focus on both high-level and detailed improvements to the socio-technical paradigms and to the governance of the Trustless Computing Certification Body and Seevik Net, and on joint revisions of:

  1. TCCB Statute (legal review, remote digital deliberation, constituent process, etc.)
  2. Trustless Computing Paradigms (all, including TCCB Cloud and TCCB Fab) and
  3. Seevik Net Statute (a win-win balance of democratic and liberty values)

Onboard new nation-state governance partners for the TCCB, and other partners:

  1. Onboard one or more nation-state governance partners, ideally 2-3, to act as co-proponents towards other nations. The USA, Israel, Germany, Italy, Switzerland, the EU, and China are seen as the preferred initial partners, given their geopolitical weight in cyber affairs.
  2. Onboard additional members of the Scientific and Governance Boards.
  3. Onboard additional R&D, Technical or End-User Partners.

The 2nd Free and Safe in Cyberspace – EU Edition 2016 was held in Brussels on Sept 22-23rd 2016 at Mundo-B.

As with previous editions, it catalysed a constructive dialogue and a wide, informed consensus on new international standards and certification governance bodies for ultra-high assurance IT systems – for communications, constitutional lawful access, and autonomous systems – to deliver access to unprecedented and constitutionally-meaningful* e-privacy and e-security for all, while increasing public safety and cyber-investigation capabilities.

Among the participating keynoters and panelists this year: the EU's most-recognised cryptographer and IT security expert, Bart Preneel; the Vice-President of the EU Parliament Civil Liberties Committee, Jan Albrecht; the CIO of Austria and head of its state-secret standardization agency, Reinhard Posch; and the research director at the French Atomic Energy Commission, Renaud Sirdey.

This year we created a jazzy trailer…


… and, most importantly, a concrete proposal for the Challenges has started emerging among the many speakers and panelists involved in the event series: the Trustless Computing Certification Body Initiative.
The event was co-organized by the Open Media Cluster and the Trustless Computing Consortium, led by Rufo Guerreschi, and by the EU EIT Digital Privacy, Security and Trust Action Line, led by Jovan Golic.

Get your early-bird tickets today!

On Sept 22-23rd 2016, the Free and Safe in Cyberspace – EU Edition 2016 will be held at Mundo-B in Brussels, Belgium, EU.

Free and Safe in Cyberspace aims to catalyse a constructive dialogue and a wide, informed consensus on new international standards and certification governance bodies for ultra-high assurance IT systems – for communications, constitutional lawful access, and autonomous systems – to deliver access to unprecedented and constitutionally-meaningful* e-privacy and e-security for all, while increasing public safety and cyber-investigation capabilities.

The event series is co-organized by the Open Media Cluster (now called the Trustless Computing Association), led by Rufo Guerreschi, and by the EU EIT Digital Privacy, Security and Trust Action Line, led by Jovan Golic.

Among the confirmed speakers: the EU's most-recognised cryptographer and IT security expert, Bart Preneel; the Vice-President of the EU Parliament Civil Liberties Committee, Jan Albrecht; the CIO of Austria and head of its state-secret standardization agency, Reinhard Posch; and the research director at the French Atomic Energy Commission, Renaud Sirdey.

In a recent post, we framed the event's relevance with respect to some of today's most crucial privacy and safety topics: France and Germany asking the EU Commission to mandate lawful-access compliance for IT communication providers (link). Posts by our moderators – David Meyer of Fortune and Jennifer Baker of Ars Technica – have also covered this issue, which is central to our event.

Buy your early-bird tickets till Sept 7th on our Eventbrite page.

Share our Facebook event page

Share our tweets at: @freeandsafe

or contact us at: info@free-and-safe.org

See you in Brussels!

Last week, France and Germany proposed that the EU Commission issue an update of a current EU directive that would mandate that providers of IT communication services be able to respond to lawful access requests.

There are signs that this may be more political posturing and a knee-jerk reaction to the recent terror attacks than a real proposal, as noted in Fortune by David Meyer, a moderator at our next FSC edition in Brussels. But only time will tell.

On the other hand, although details of the proposal are still lacking – as are those of its US counterparts – there is surely a call for shared solutions to the problem, and possibly already some well-thought-out solutions yet to be disclosed.

When the French and German Interior Ministers state in the proposal…

“What we are saying, however, is that exchanges increasingly operated via some applications, such as Telegram, must be able, as part of court proceedings — and I stress this — to be identified and used as evidence by the investigation and magistrates services.”

… they are acknowledging that current “lawful hacking” tools and (mostly nonexistent) standards – although supported by recent legislation or court rulings in Italy and Germany – cannot produce evidence solid enough to stand up in court (and probably not to withstand constitutional challenges …).

That’s why they are proposing some kind of server-side access that could replace remote lawful hacking. Notwithstanding the huge increase over the state of the art in technical and procedural safeguards that both such solutions would require – in order to reduce to acceptable levels the risks of abuse of citizen privacy, as well as of error or tampering with evidence in cyber-investigations – server-side access would arguably be substantially less difficult to properly regulate and standardize than lawful hacking.

During our next Free and Safe in Cyberspace event in Brussels on Sept 22nd–23rd, we’ll explore what such radically-enhanced safeguards for lawful access compliance should be – and the related, primarily non-governmental, standard-setting and certification bodies – and to what extent many of those safeguards are the same ones needed to give citizens access to IT devices and services that provide constitutionally-meaningful levels of trustworthiness.

Some of the speakers of the Free and Safe in Cyberspace (FSC) event series and advisors to the Trustless Computing Association, led by Rufo Guerreschi, have joined together to research and propose a comprehensive solution, in a 1-page Manifesto and a long Study of tens of pages: “The Trustless Computing Certification Body: a new standard and certification body for wide-market ultra-high assurance IT systems, with voluntary compliance to ‘constitutional’ lawful access requests”.

UPDATE (Aug 26th, 2016): We corrected the text to say that it is not a new directive but an update of a previous one (thanks, Jennifer), and corrected some typos.

Some speakers of the Free and Safe in Cyberspace (FSC) event series and advisors to the Trustless Computing Association, led by Rufo Guerreschi, have joined together to research and propose a comprehensive solution to Challenges A and B posed by FSC:

The Trustless Computing Certification Body: a new standard and certification body for wide-market ultra-high assurance IT systems, with voluntary compliance to “constitutional” lawful access requests

Among the co-authors: Bart Preneel (the most recognized EU cryptographer), Jovan Golic (EIT Digital PST-AL Leader), Alptekin Kupcu (cryptographer), and Henrique Kawakami (CTO of Kryptus), as well as Capgemini and Tecnalia (the leading EU IT security consulting companies that were awarded the most extensive studies on mass surveillance for the EU Parliament STOA and LIBE Committees).

We suspect that the challenge of reconciling privacy and cyber-investigation capabilities is essentially one of finding a substantial win-win solution, rather than finding an “acceptable balance” between safety and freedom, as believed by most. That is why our proposal is both about defining technical, socio-technical and organizational standards for the entire IT life-cycle that are aimed at achieving ultra-high IT assurance (Challenge A of FSC) and about enabling “constitutional” lawful access (Challenge B of FSC) to such IT services.

Find here a Google Doc draft (Gdoc) that includes a 1-page Manifesto and a long Study of tens of pages.

We welcome the addition of co-authors, contributors, and comments until its finalization, no later than Sept 22nd–23rd, 2016.

Contact us to join our quest: info@free-and-safe.org

 

WORKSHOP REPORT

A small workshop, held on 21 July 2016 in New York City as part of the “Free and Safe in Cyberspace” international event series, focused on discussing and planning possible solutions to provide meaningful levels of e-privacy and e-security for all users, while also increasing public safety and cyber-investigation capabilities. Following the great success of the 2015 edition, a larger two-day 2nd EU Edition will follow on Sept 22nd–23rd, 2016, again in Brussels, where a major comprehensive proposal will be presented by a number of speakers involved in the event series, along with selected results of EIT Digital innovation projects.

In introducing the July 21st event, Rufo Guerreschi (executive director of the Open Media Cluster and event co-organizer) summarized a few crucial points for the entire Free and Safe in Cyberspace event series: “Recent episodes showed that, on the one hand, citizens and institutions suffer a great loss of civil rights and sovereignty, while, on the other, EU and US IT companies are struggling to find ways to offer the levels of trustworthiness required by both national customers and legislation. But this clash between the need to ensure the public safety and security of nation-states and the need for user privacy could actually be reconciled. In fact, if you had to choose one of the two, you would not be able to sustain democracy. Democracy and freedom require both citizen safety and privacy protection. We hope that our discussion events can bridge this gap and find common ground to build a more equitable, effective toolkit for all stakeholders involved”.

Expanding on this introduction, Jovan Golic (EIT Digital Privacy, Security and Trust Action Line Leader and renowned cryptographer) provided a general overview of the deeply complex technical issues at stake: “It is not true that there is a tradeoff between cyber-security and cyber-privacy; they are both on the same side. We need to talk about more of both, and at the same time ensure data protection. If you don’t protect data, then you cannot help cyber-security, because the data will be prone to attacks. However, there is a tradeoff between cyber-surveillance and cyber-security. And by talking about these topics, we can try to change the existing trend where governments have their own ways to control things in the security area, including legislation, and big security companies prefer to just stay quiet and comply with government mandates. This is the reason why we are still lacking good solutions in regard to data protection practices”.

In his keynote speech, Professor Joe Cannataci (UN Special Rapporteur on Privacy, SRP) explained that “the safeguards and remedies available to citizens cannot ever be purely legal or operational”. Therefore, a much better option is to “involve all stakeholders in the development of international law relevant to privacy” and to “engage with the technical community in an effort to promote the development of effective technical safeguards including encryption, overlay software and privacy protection”. Both goals are at the forefront of the SRP’s overall efforts, added Cannataci, while also pointing out an important and recent advancement: “Both the Netherlands and the USA have moved more openly towards a policy of no back-doors to encryption, a step that should be encouraged by the UN and other international bodies”.

In the second keynote speech, Max Schrems (leading Austrian privacy activist) summarized the story of his lawsuit for the invalidation of the Safe Harbor agreement that allowed US companies to store European citizens’ personal data. “What was the reason for the lawsuit? Even if the European Union talks a lot about mass surveillance, with EU resolutions, angry letters and so on, we knew that this kind of ‘public outrage’ was not going anywhere. Therefore, we looked at what I call ‘public/private surveillance’: companies like Facebook are subject to both US and EU jurisdictions, so there is a conflict of laws that must be resolved. In turn, this gave us the possibility to bring a legal case (mostly opposing mass surveillance) in a European court and even have jurisdiction there, because obviously we cannot have jurisdiction in other countries”. This lawsuit (and its ongoing outcomes) was just a first step in making public some problems with global mass surveillance procedures. Another important issue, according to Schrems, is that “given the policies now being adopted and/or rewritten around the world, the de-identification and anonymization of data is no longer a sufficient safeguard if governments & corporations continue to re-purpose data originally collected for one specific purpose”. His possible solutions to move forward? “First we need some codes of conduct that could possibly be drafted by and implemented throughout the industrial sector. And then we should establish shared certification options and make sure that companies are fully compliant (with some help from an independent monitoring body)”.

The event included four discussion panels, or Challenges, focused on a series of inter-related questions (A – How can we achieve ultra-high assurance ICTs?; B – Can ultra-high assurance ICT services comply with lawful access requests while meaningfully protecting civil rights?; C – What is the role of AI in providing ultra-high assurance ICTs?; D – What national policies or international treaties can we envision to support ultra-high assurance ICT standards?).

Here are a few highlights:

Jovan Golic delivered an introductory keynote for panel B on the interplay between cyber-security, cyber-privacy, and cyber-investigation; on the need to reconcile cyber-investigation with cyber-security and cyber-privacy through widely accepted, transparent solutions, which would foster business opportunities in the area of digital security; on already-practical advanced crypto techniques for data protection, including threshold cryptography based on shared key escrow and practical fully homomorphic encryption; and on EIT Digital’s innovation and business results in this area.
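As a rough illustration of threshold cryptography with shared key escrow – a minimal sketch of our own, using textbook Shamir secret sharing, and not material presented at the event – the snippet below splits a decryption key among n escrow parties so that any k of them (say, a court, a prosecutor, and an independent oversight body) must cooperate to reconstruct it, while fewer than k shares reveal nothing:

```python
# Minimal sketch of Shamir threshold secret sharing over a prime field:
# a key is split into n shares and any k of them reconstruct it, while
# fewer than k reveal nothing. Illustrative only -- a real escrow scheme
# needs vetted libraries, verifiable sharing, and procedural safeguards.
import secrets

PRIME = 2**127 - 1  # field modulus; must exceed any secret being shared

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any k of them reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):   # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Recover the secret from k shares by Lagrange interpolation at x=0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = secrets.randbelow(PRIME)        # e.g. a session decryption key
shares = split(key, k=3, n=5)         # five escrow parties, threshold three
assert reconstruct(shares[:3]) == key     # any three shares suffice
assert reconstruct(shares[1:4]) == key    # a different three also work
```

Splitting authority across independent parties in this way is what distinguishes a transparent “frontdoor” from a single-custodian backdoor: no single party can unilaterally decrypt.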

Roman Yampolskiy delivered an introductory keynote for panel C on the security threats related to modern AI systems and smart things, which, on one side, are getting more and more powerful and helpful for humans but, on the other, may threaten their lives and work through improper designs and implementations.

“How do we create a situation where secure software and hardware systems can be developed? Let’s make a comparison with the construction industry, where developed countries established certain types of regulations and guidelines, and today we have buildings that can withstand an earthquake or a fire. We got rid of poor standards and introduced a system based on specific building codes, inspectors and so on, thus achieving a level of safety that seemed impossible just a few years ago. We need to promote public-private partnerships, formalize strong standards and accountability in this area, and push hard to have governments and businesses working together” (Daniel Castro, Vice President of the Information Technology and Innovation Foundation).

“What can you do when you really, really worry about privacy? The answer is very simple: don’t use a smartphone. I do not carry a smartphone. Secondly, if you are worried about being eavesdropped on, use paper and pen, or do what the Russians have done for decades: use typewriters. But given that these are radical and extreme security options, will most people want to use them? No. Can we achieve today economically-feasible and effective security? The answer is no” (Yvo Desmedt, renowned cryptographer and pioneer of threshold cryptography).

“Today’s ‘smart technologies’ (deployed via wi-fi in our homes or to help in natural disasters, etc.) are not at all resistant to hacking by criminals or by authorities. And despite recent advancements, technologists seem unable to ensure a decent level of individual privacy, and there is little hope that national legislation can protect it either” (Rufo Guerreschi).

“We currently do not have solutions which are meaningfully private, even if you pay a lot of money or are willing to deal with the inconvenience. That’s also proven by the fact that the market for crypto devices is practically nonexistent. It’s a matter of a few thousand devices. Not to mention the fact that, if you buy a crypto-phone, you’re flagging yourself, suggesting that you’re probably trying to hide something, and most likely you have no clue about that” (Rufo Guerreschi).

“We need to look at the reality of data protection at different stages. At the first stage of data collection, there are privacy policies and user consent, but they do not prevent uncontrollable mass data collection by big Internet service providers. What is protected in practice is data communications, typically between a client and a server, rarely end-to-end between two clients. However, data encryption is endangered by various so-called backdoors at different levels of the data security chain, including crypto algorithms and protocols, key generation and management, and software and hardware implementations. Backdoors are by definition secret and proprietary before they get revealed to the public, and essentially mean that the cryptosystem in use is inherently insecure. In practice, they are used for cyber-investigation by privileged parties. But they are also used by hackers and cyber-criminals, which renders cyberspace insecure. Instead, for the same purpose, one may use so-called frontdoors, which are by definition transparent and may be based on properly implemented threshold cryptography with shared key escrow, providing forward and backward secrecy and focused cyber-surveillance. Data storage is protected by encryption and controlled access, but there are too many breaches of database servers storing sensitive data, because of cryptographic key management issues and various software vulnerabilities. Data processing is practically not protected at all, not even for sensitive data such as e-health data, because service providers work on plain data to provide their services, regardless of the emerging practical techniques for fully homomorphic encryption, which enable data processing in the encrypted domain. Consequently, what is needed in order to improve the current unsatisfactory situation and trends is the application of existing, but rarely applied, trustworthy technologies for data protection” (Jovan Golic).
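Golic’s last point, processing data in the encrypted domain, can also be illustrated with a toy example. Since fully homomorphic encryption is too involved to sketch briefly, the snippet below (again our own, with deliberately tiny parameters) uses textbook Paillier encryption, which is only additively homomorphic, as a simpler stand-in: a server can add two encrypted values without ever seeing the plaintexts.

```python
# Toy sketch of additively homomorphic encryption (textbook Paillier),
# as a simpler stand-in for fully homomorphic encryption: a server can
# add encrypted values without learning the plaintexts. Illustrative
# only -- the parameters are far too small for real use.
import secrets
from math import gcd

p, q = 1000003, 999983          # toy primes; real keys use ~1536-bit primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                        # standard simplified generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)             # valid because g = n + 1

def encrypt(m: int) -> int:
    r = secrets.randbelow(n - 1) + 1
    while gcd(r, n) != 1:        # r must be invertible mod n
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n

# Homomorphic property: E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 123, 456
assert decrypt(encrypt(a)) == a
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
```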

“A large majority of people think that secure products are already out there and easily available, including Apple iPhones and the Tor system. But there’s an incredible alignment of interests between Apple, Tor makers and security agencies. Why? Apple and Tor makers have an interest in people believing their products are secure, so that they buy their stuff instead of our stuff. Security agencies have a huge interest in this security being oversold, so that people use these tools to communicate secret stuff, and they can spy on them using directly implanted backdoors, or vulnerabilities that they have discovered or bought and not publicized” (Rufo Guerreschi).

“I think we can have highly regulated systems, for example financial systems, where we are going to want recovery; in general, to discuss what that looks like and how we enable lawful access. It even might make sense in some regulated communication services. There are multinational companies that have a large user base, and we need to consider how to regulate them. In many cases, I can write software and have communications with someone else around the world, and we are using software that we’ve written that nobody else has access to. That’s going to be secure and outside of the scope of what law enforcement [can access]. But we still need to figure out how to deal on the policy side with what we are going to do with those situations” (Daniel Castro).

“At least in the US, there are questions about the circumstances under which you can compel individuals to provide decrypted information. There are questions about the circumstances in which you can require the manufacturers of systems to build systems and networks in a way that clear-text data will always be available. There are questions about whether and under what circumstances you can compel device or app manufacturers to provide clear-text data. … I don’t feel comfortable living in a world in which the law enforcement community doesn’t have the ability to infiltrate and take down such communication networks” (Zachary Goldman).

———————

Organized by the Open Media Cluster (now called the Trustless Computing Association) and the EU EIT Digital Privacy, Security and Trust Action Line, the full-day New York City workshop is an invitation-only event. For more details: free-and-safe.org or @freeandsafe


After Snowden, new int’l certifications for digital privacy (NYC, 7/21)

Free and Safe in Cyberspace is an international series of events that brings together institutions and experts from across the Atlantic to create a wide international consensus on new non-governmental international ICT security and privacy standards and certifications for ICT system providers and lawful access schemes with ultra-high levels of assurance, which can therefore be expected to be judged by EU courts as solidly compliant with the EU Charter of Fundamental Rights. The event is co-organized by the Open Media Cluster and EU EIT Digital Privacy, Security and Trust.

After its first Brussels EU Edition in 2015, the next event will be held in New York City on Thursday, July 21st, 2016. It will gather experts, advocates and professionals to further expand a constructive dialogue about the definition of new international standards and certifications on these crucial issues, including: Joseph Cannataci, UN Special Rapporteur on the Right to Privacy, and Max Schrems, the leading Austrian privacy activist whose legal challenge led to the invalidation of the Safe Harbor agreement.

Recent evidence suggests that nearly all ICT devices and services are remotely, undetectably and scalably hackable by several actors, through vulnerabilities that are mostly state-mandated, state-planted or state-sanctioned. As a consequence, citizens and institutions suffer a great loss of civil rights and sovereignty, while EU and US IT companies struggle to find ways to offer the levels of trustworthiness that consumers demand, and that the US Constitution and EU Charter require, in order to innovate sustainably on the basis of measurable, comparable and meaningfully-high levels of trustworthiness. The relative success of new privacy solutions by Apple, new “end-2-end encryption” apps and new “cryptophones” – supported by self-serving statements of security agencies – has instead been met with high skepticism by experts as to the actual level of protection they provide against scalable attacks by state and non-state actors alike. Radically higher levels of security are especially critical to ensure the success and public benefit of safety-critical and privacy-critical autonomous systems.

The establishment and progressive recognition of such new certifications by industry and/or governments may provide a solution to the great economic uncertainties caused by the invalidation of Safe Harbour and the likely invalidation of Privacy Shield, albeit initially only for a few sectors. Such standards, in fact, will necessarily be very stringent, requiring a very high level of security-by-design relative to complexity throughout the entire lifecycle, and will therefore initially be applicable only to the least complex ICT systems for the most critical use cases.

In order to succeed, such new standards for ultra-high assurance ICT systems need to solve the apparent dichotomy between privacy and safety. Most privacy experts and government officials insist we must choose between meaningful personal privacy and effective lawful access for criminal investigations. But what if the depth and comprehensiveness of the new technical and organizational oversight and safeguards needed to deliver meaningful personal privacy are overwhelmingly the same as those needed to certify privacy-respecting state lawful access and ICT providers’ compliance with such requests? After all, both are essential to democracy and freedom, and therefore the issue is not an “either-or” choice but a “both or neither” challenge.

Needless to say, this comprehensive approach requires the direct involvement of world experts and a strong push for a broader discussion. Therefore, as a necessary follow-up to our first EU event, the NYC workshop will feature many renowned speakers, including: Joseph Cannataci, UN Special Rapporteur on the Right to Privacy; Max Schrems, leading Austrian privacy activist; Daniel Castro, Vice President of the Information Technology and Innovation Foundation; Bill Pace, Executive Director, World Federalist Movement-Institute for Global Policy; Jovan Golic, renowned cryptographer and IT assurance expert; and Rufo Guerreschi, director of the Open Media Cluster and founder of the “Free and Safe in Cyberspace” event series.

Organized by the Open Media Cluster (now called the Trustless Computing Association) and the EU EIT Digital Privacy, Security and Trust Action Line, the full-day New York City workshop is an invitation-only event. For more details: free-and-safe.org or @freeandsafe

 

Roman Yampolskiy and Federico Pistono recently wrote a paper entitled Unethical Research: How to Create a Malevolent Artificial Intelligence, which poses the question of powerful actors intentionally creating malevolent AI and/or stealing and deploying it.

This is a crucial problem to be reckoned with, in addition to the main problem occupying the AI safety community: the runaway AI problem, in which an AI escapes the control of its creators and does great damage to humanity.

Since human intentions can almost never be ascertained – except very partially via lie detector systems, in the limited cases in which those can be applied – the two problems are really one big problem.

As evidence builds on which technical, socio-technical, oversight and governance safeguards are most probably conducive to safer narrow and general AI, and which are not, one way forward would be to use a combination of authoritative international standards bodies and national legislation to make it illegal to research, develop or use AI in certain ways, regardless of the entity’s intentions.

One thing that helps greatly is that today – given the complexity of high-performance ICT and the pervasive vulnerabilities planted or sanctioned by governments – none of these actors would be able to stay beyond the surveillance capabilities of the law enforcement agencies (LEAs) or intelligence agencies (IAs) of some democratic nations.

If only we had the right national laws, and possibly highly binding international treaties through EUROPOL or INTERPOL, then when LEAs or IAs independently acquire evidence that some actor is “creating dangerous AI” and/or “creating or deploying AI dangerously”, they could ask for judicial authorisation, following due legal process – if they can’t already go ahead without one – to hack into the suspects’ systems to spy on their communications and on the software and hardware of the AI systems they are building or deploying.

But we would need very clear legislation that defines exactly what constitutes “creating dangerous AI” and “creating AI dangerously”. This would mean having clear and measurable technical, socio-technical, and governance and oversight processes that inform very stringent and comprehensive standards, standards-setting processes, and voluntary certification processes, which would continuously certify a given AI research or development program as “safe enough” for society.

National legislation should then make such certification either mandatory or a requirement for receiving public research funds. AI systems could be regulated like ICT devices dealing with state-secret information: in all EU states, these must be certified according to high-assurance international standards (mostly Common Criteria EAL 5+) and to other national requirements – coordinated in the EU by SOG-IS – including specific Protection Profiles and Security Problem Definitions. The same concept could be applied to AI, albeit in a much more consistent and comprehensive manner.

Some initiatives are already moving toward international standards for AI safety and related certifications, such as the newly established IEEE Global Initiative for Ethical Considerations in the Design of Autonomous Systems. Another is the Trustless Computing Association and the related proposed Trustless Computing Certification Body, promoted by the Free and Safe in Cyberspace workshop series. Meanwhile, EUROPOL, in Appendix A.3 of its latest yearly 2015 Internet Organised Crime Threat Assessment report, goes as far as suggesting new criminal laws. The Future of Humanity Institute has been exploring all these prospects in its Director’s book Superintelligence, as well as in other papers.

Proponents of self-regulating industry bodies or light governmental recommendation commissions, such as Ryan Calo with his proposal for a Federal Robotics Commission, could possibly be hurting the economic prospects of AI more than stringent, comprehensive and mandatory regulation would. The latter, in fact, may well be needed to reasonably reduce the chance of a catastrophic failure that would stop large-scale AI deployments in their tracks, or preempt large-scale market uptake and sustainable legal authorization for the use of autonomous systems in certain domains, as we suggested in the author’s blog post What is the current market demand for certifications of narrow AI systems with direct influence on human physical environments. After all, the Federal Aviation Administration, which from 1926 set very stringent requirements for any commercial airliner wanting to offer service in the US, was responsible for the growth in commercial airline passengers from 5,782 to 172,405 between the year of its establishment and 1929.

We’ll be talking about this and more in the next US and EU editions of the Free and Safe in Cyberspace workshop series – come join us.

ADDED June 27th:
Policy options to regulate robotics and AI are also included in a Draft Report by the EU Parliament Committee on Legal Affairs with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).
Also, legislation in this area may initially want to focus primarily on the low-level deterministic sub-systems containing the most critical functions of advanced AI, for the reasons described in the author’s blog post IT security research needs for artificial intelligence and machine super-intelligence. In turn, applying ultra-high assurance levels to such subsystems may make the work of LEAs and IAs very difficult or even impossible, and therefore mechanisms for lawful-access-request compliance may need to be devised that protect both the AI systems under development from criminal abuse and the access to those systems by LEAs and IAs with legal authorization.

The creation of radically competent, enforceable and accountable standardization and certification processes for those narrow AI systems that directly influence human physical environments – such as robots, autonomous drones and vehicles – may have a huge impact on the growth rate and sustainability of the market for such systems, as well as reduce the risk of harm to humans. Even more importantly, perhaps, it may also provide part of the socio-technical and governance basis for future international standards or treaties to promote the safety of systems approaching machine superintelligence.

In a recent panel, Stuart Russell, one of the most recognised AI experts, illustrated (at minute 14:50) the prospect that a domestic robot in the near future may misinterpret orders from its owners and purposely kill a domestic animal or human. He therefore concluded that:

“There’s an enormously strong economic incentive for companies that are building AI to take these questions very seriously. Otherwise any company, any startup company, that doesn’t pay attention to this could ruin it for everybody else. So they’re going to have to figure out how to make machines behave ethically … avoid doing things; even if they are told to do something by their human master, they have to know what’s right or wrong, so that they don’t do something catastrophic”

The AI sector may need to do what the aviation industry did very successfully by 1929. There seems to be a huge market need to establish radically reliable and enforceable standards and certification processes – through transparency, oversight and accountability – for all those AI systems that can cause direct harm to humans – such as robots, autonomous drones and vehicles – in much the same way as was done for civil aviation with the establishment of the Federal Aviation Commission.

On March 24th, 2015, Germanwings Flight 9525 crashed in the French Alps. The crash was most probably caused by pilot suicide, which accounts for about 10 of the 323 average yearly deaths due to airline accidents, out of 3 billion global yearly travelers. These deaths, while still a tragedy, are incredibly few when compared to the number of passengers flown every year. Such a low death rate has its origins in the 1920s US, when leading EU and US socio-technical scientists, working through the US FAA and some airlines, developed breakthrough socio-technical systems through a mix of fail-safe and oversight technologies, certification, procedures, and organizations. It was this socio-technical innovation, rather than any aviation technology breakthrough, that increased the security of commercial flight to levels previously deemed inconceivable or impossible, with a consequent economic and aviation-research boom. In fact, from 1926 to 1929, as the FAA issued its certification standards, passengers in US civil aviation skyrocketed from 5,782 to 172,405.

The requirements of the certification process for such AI systems will need to be substantially more stringent than those of the Federal Aviation Administration, because they will need to protect against:

  • High-level algorithms that may result in unwanted actions that physically harm humans or cause other great damage;
  • Failures in the low-level technical and organizational infrastructure for the end-to-end provisioning and life-cycle of the certified AI system;
  • Catastrophic failures in the confidentiality or integrity of AI operations, which can go undetected by their victims for years or even decades. (You can’t hide a plane that goes down, but you can hide the extensive hacking or failure of an AI system designed to protect the US stock market for years.)

Furthermore, the resulting institutional capital and expertise in AI systems assurance, assurance certification, and certification governance processes could become of great use for similar standards, certifications, or international treaties dealing with more advanced projects aiming at the realization of machine superintelligence.