The discussion about standardisation in artificial intelligence is not new. It was already an issue in the early days of the field, even before the term “artificial intelligence” first appeared in 1956, at a research workshop organised by the American mathematician John McCarthy.
Indeed, in 1950, in his article “Computing Machinery and Intelligence”, published in the journal Mind, Alan Turing imagined a test consisting of confronting a computer and a human being, without the latter knowing whether they were dealing with a machine or another human being.
The idea behind the Turing test was that, on the day a human can no longer tell, in an unprepared real-time conversation, whether the interlocutor is a machine or another human being, computers could be qualified as intelligent. Turing was thus already wondering whether a machine could imitate a human conversation, and his “imitation game” was a test of artificial intelligence: an attempt to define a standard that would qualify a machine as intelligent.
There is currently no legal framework specific to artificial intelligence. No French or European provision addresses artificial intelligence as such or takes into consideration the specificities of algorithms: their decision-making, learning and autonomy capacities, or even their cooperation with human beings.
Today, the term “artificial intelligence” appears in only two normative definitions:
- ISO/IEC 2382-28 defines artificial intelligence as “the ability of a functional unit to perform functions generally associated with human intelligence, such as reasoning and learning”.
- The term “autonomy” is defined by ISO 8373:2012 as the “ability to perform intended tasks based on the current state and sensing, without human intervention”. This standard also defines the intelligent robot as a “robot capable of performing tasks by sensing its environment and/or interacting with external sources and adapting its behaviour”. This would be the case, for instance, of a level 5 autonomous vehicle.
On 31 May 2016, the European Parliament’s Committee on Legal Affairs (JURI) published a draft report containing recommendations to the European Commission on future civil law rules on robotics.
The protection of personal data is also at the heart of these issues, and the GDPR applies in this context as well.
Beyond compliance, companies must also rely on education, awareness-raising and even outreach measures aimed at end customers. Understanding what happens behind an “intelligent” machine is indeed a prerequisite for any individual and a necessity for any informed choice.
Privacy issues are also at the heart of the discussions, particularly in marketing, which focuses more on learning consumer habits and preferences and on discovering atypical patterns than on grouping similarities.
Legally, there is no obligation of transparency as such. It is much more an ethical approach, specific to each company, and it is not always easy to invest in such constraints. However, bringing standards and morality into artificial intelligence is becoming a necessity.
Let’s go back to Alan Turing, who had elaborated a computational model of the brain. His unpublished papers, discovered 14 years after his death, anticipated the development of “connectionist architectures”. These architectures are better known today as the foundation of Deep Learning.
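To illustrate what “connectionist” means in its simplest form, here is a minimal sketch in Python (the layer sizes and random weights are arbitrary, chosen purely for illustration): units arranged in layers and connected by weighted links, with a non-linearity in between. Deep Learning is essentially this structure scaled up to many layers and trained on data.

```python
import numpy as np

# Minimal illustration of a connectionist architecture:
# layers of simple units connected by weighted links.
rng = np.random.default_rng(0)

x = rng.normal(size=4)           # input layer: 4 units
W1 = rng.normal(size=(8, 4))     # connections from input to hidden layer
W2 = rng.normal(size=(2, 8))     # connections from hidden to output layer

hidden = np.maximum(0, W1 @ x)   # hidden layer with ReLU activation
output = W2 @ hidden             # output layer: 2 units

print(output)
```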
Therefore, the point is to understand the challenges behind the standardisation of Deep Learning.
Standardisation is associated with interoperability. What does interoperability mean for Deep Learning? One concrete illustration is sketched below.
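To make the question concrete, a common answer today is the use of open model-exchange formats. The sketch below is one hedged example, assuming PyTorch and the ONNX format (the network, its sizes and the file name are purely illustrative): a model built in one framework is exported so that other runtimes, such as ONNX Runtime, can load and execute it.

```python
import torch
import torch.nn as nn

# A small network used purely as an illustration.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(4, 8),
            nn.ReLU(),
            nn.Linear(8, 2),
        )

    def forward(self, x):
        return self.layers(x)

model = SmallNet().eval()
dummy_input = torch.randn(1, 4)

# Export to ONNX, an open interchange format: the same model can then be
# loaded by other runtimes, independently of the framework it was built in.
torch.onnx.export(model, dummy_input, "smallnet.onnx")
```

Interoperability for Deep Learning, in this sense, means that models, data formats and interfaces can be exchanged between tools without being locked into a single vendor or framework.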
It is also worth discussing the standardisation of best practices for developing Deep Learning.
The objective is to allow the dissemination of existing solutions and the development of innovative solutions able to interoperate.
In conclusion, to promote predictable, reliable and efficient Deep Learning solutions, standardisation is a must.
The objective of standardisation is to ensure that Deep Learning moves out of the research labs and is industrialised.