Journal: Handelsblatt Responsible AI – Transparenz, Bias, und Verantwortung in der KI

Title: Responsible AI: Transparenz, Bias, und Verantwortung in der KI
Authors: Ulli Waltinger and Benno Blumoser
Pub/Conf: Handelsblatt Journal Future IT, 17 December 2019, Pages 8-10

Responsible AI – Transparency, Bias, and Responsibility in AI

The field of artificial intelligence, with its many subdisciplines in perception, learning, reasoning, and language processing, has made significant progress in its applications over the last ten years. The decisive factors behind this progress were the quantity of available data, the increase in computing power, the availability of free software development environments, and the development of new algorithms and architectures in machine learning. Systems from the field of artificial intelligence and their algorithmic decision patterns influence more and more elements of everyday life: interaction in the household through home assistants, the relevance ranking of search results and advertising, mobile driving and traffic management services, diagnosis in medicine, and even the allocation of personal credit lines.

“It’s not a human move. I’ve never seen a human play this move.”
(Fan Hui on move 37 of the Go match between Lee Sedol and AlphaGo, 2016)

The outstanding successes, especially in supervised machine learning and here in particular in representation learning and deep neural networks (so-called deep learning), have had a significant impact not only in the academic world, as in the milestone Go match between Lee Sedol and AlphaGo, but also demonstrate their added value in industrial applications. Algorithms make our lives more efficient, increasingly support our decision-making processes, and sometimes even take them over: from speech recognition and translation, visual quality inspection, dynamic pricing, autonomous parameter optimization for data centers, more adaptive robot control, and AI-optimized supply chains to predictive maintenance in production.
In contrast to industrial AI (B2B), which has always focused on increasing efficiency and productivity, the consumer sector (B2C) focuses primarily on predicting behavioral patterns and optimizing for customers' attention: from the placement of advertisements and the automated filtering of news articles to AI-supported image review.

The strength – and at the same time the danger – of using deep learning to predict or classify new situations is that the quality of the result depends heavily on the size, balance and purity of the input data.
The transferability of a model to a new problem is therefore very difficult to assess. While statistics uses the concept of significance to critically question each analysis at every step, AI lacks such a crisp measure of an algorithm's quality. As a rule, the quality of a result can only be validated against the corresponding training data. Whether correlations within multi-layered neural networks reflect merely random patterns despite large amounts of data (perhaps because errors cancel each other out or, in the specific application, do not lead to relevant distortions), and which input data are actually used to arrive at the results, usually remains hidden. Accordingly, wrong assumptions in the learning process, e.g. low diversity of data sources, poor robustness under changing application domains, or incorrect assumptions in the modeling process, can lead such an algorithm to unbalanced results, as in the case of gender-discriminatory allocation of credit lines.
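To make this risk measurable, group-wise outcome checks are a common first step. The following is a minimal sketch (in Python, with entirely synthetic data and an assumed binary approval decision) that compares approval rates across a protected attribute, the so-called demographic parity gap:

import numpy as np

# Hypothetical illustration with synthetic data: compare a model's approval
# rates across a protected attribute ("gender") to spot unbalanced outcomes.
rng = np.random.default_rng(0)
n = 10_000
gender = rng.choice(["f", "m"], size=n)
# Deliberately skewed synthetic decisions: approvals correlate with gender.
approved = rng.random(n) < np.where(gender == "m", 0.55, 0.40)

rates = {g: approved[gender == g].mean() for g in ("f", "m")}
parity_gap = abs(rates["f"] - rates["m"])
print(f"approval rates: {rates}")
print(f"demographic parity gap: {parity_gap:.3f}")  # e.g. flag if above a tolerance such as 0.05

Such a check does not prove or disprove causality, but it makes an unbalanced outcome visible before a system is deployed.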

“One of the first things taught: […] correlation is not causation. It is also one of the first things forgotten.” (Thomas Sowell, Stanford, b. 1930)

It is precisely the nested, non-linear structure of modern deep learning systems that makes it difficult for users and AI experts alike to gain transparency about which pieces of information or features the system used to reach a decision. Whether a given piece of information merely reflects a random correlation rather than a significant causal relationship is hard to assess. This type of AI system is therefore often referred to as a “black box”. Faulty results then directly impair profitability. But as soon as decisions are made on the basis of an AI system's recommendations that have essential, perhaps even existential, effects on individuals or groups of persons, the requirements for a non-discriminating algorithm are of course much higher and the error tolerance much lower. So how can we ensure that a system also functions as expected in a real environment and meets the high requirements for non-discriminatory results? How can the risk of getting bad, unfair, and unstable results from a neural “black box” be reduced?

“Professional responsibility […] is not to discover the laws of the universe, but act responsibly in the world by transforming existing situations into more preferred ones.” (Herb Simon, 1996)

In a first approximation, the currently rapidly evolving regulations naturally play a major role in limiting the risks that can arise in the design and use of AI. In addition to existing product liability laws, the GDPR, and security regulations, however, many institutions and companies concretize their approach in charters or sets of rules intended to do justice to the special characteristics of AI. At Siemens, we use a set of seven “mitigation principles” (see figure below), which we believe help to harness the undisputed advantages of AI within a responsible framework.
In addition to sensible rules tailored to AI, the holistic inclusion of different perspectives plays a major role as the second major lever in risk minimization. The fact that we live in a volatile, uncertain world, that we all have blind spots in our perception and judgement, and that we often decide under unconscious bias makes it necessary to counter this human weakness in the field of artificial intelligence as well, from different perspectives. Diversity is an elementary building block across the AI life cycle: the data used to train AI systems must cover the range of their later application cases in order to produce valid results.

But different perspectives must also be taken into account in the research, development, and application of AI. From end users to domain experts to software developers, we must value and integrate diversity in all its dimensions, such as gender and social and ethnic origin, as a social potential. Diversity is therefore not only a motor for outstanding innovation performance, but also elementary for the reduction of bias in artificial intelligence.

The implementation takes place in short-term innovation formats such as hackathons, bootcamps or innovation sprints, but also in dedicated locations for co-creation, which are intended to fulfil the claim of being a platform for different perspectives.
An elegant way to resolve the contradictions between the potential and the danger of AI, and one that is especially relevant in the European context, lies in new technologies that allow a holistic view of the influencing factors of AI applications across their life cycles: data generation and selection, algorithm selection and explainability, accuracy and runtime, but also deployment, updating, and monitoring of the applications. There are already many relevant technological components that can help to uncover implicit biases in the data, and thus in AI-based recommendations, and make them correctable. Furthermore, they contribute to meeting the high requirements for the protection of personal data without being hindered by higher complexity in processes and products, and ultimately improve the robustness of AI systems, which at the same time minimizes their susceptibility to errors and facilitates economically attractive scaling to other application fields. Currently relevant technologies are:

Explainable AI is a field that addresses the interpretability of black-box decisions in AI. Explainability can be pursued before (e.g. data, input features), during (e.g. model architectures, feature relevance), and after (e.g. test and target references) the modeling. In industry, these methods are used in combination with black-box approaches, among other things, to explain the accuracy of AI algorithms. This helps to make the process more understandable for customers, but also to reveal internal system distortions.
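As one illustration of a post-hoc explainability method, the sketch below uses permutation importance: shuffle one input feature at a time and measure how much the model's test score drops. The dataset and model are synthetic stand-ins, and scikit-learn's permutation_importance is just one of several such tools:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a "black box": train a forest, then measure how much
# the held-out score drops when each feature is shuffled in isolation.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")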

Active Learning is another emerging field in AI, which not only describes a process for speeding up AI training with little labelled data, but also shapes the “human-in-the-loop” paradigm within AI. In industrial applications, this approach permits the integration of feedback from domain experts into the AI training cycle, i.e. the system is continuously and efficiently improved through human usage behaviour and domain knowledge.
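A minimal sketch of pool-based active learning with uncertainty sampling follows; the oracle that supplies labels stands in for the domain expert, and all data, model choices, and batch sizes are illustrative assumptions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Pool-based active learning: start from a tiny labeled seed set and repeatedly
# query labels for the points the current model is least certain about.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
labeled = list(range(10))                       # small labeled seed set
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for round_no in range(5):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    margin = np.abs(proba[:, 1] - 0.5)          # small margin = high uncertainty
    query = [pool[i] for i in np.argsort(margin)[:20]]
    labeled += query                            # the domain expert would supply y[query]
    pool = [i for i in pool if i not in query]
    print(f"round {round_no}: accuracy {model.score(X, y):.3f} "
          f"with {len(labeled)} labels")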

Trustworthy AI aims at the trustworthiness and robustness of algorithms, which provide the user with feedback on errors, robustness, or inconsistencies in all phases of the AI life cycle. The goal is to enable AI applications to detect a possible domain change and to report back the corresponding uncertainty adaptively.
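One simple, deliberately crude building block for such feedback is to flag inputs whose predictive entropy exceeds a threshold calibrated on in-domain data; production systems would pair this with stronger out-of-distribution detectors. A sketch under these assumptions:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predictive_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=1)

# Calibrate a threshold on in-domain data, then flag unusually uncertain inputs.
# Caveat: far from the training data, sigmoid confidence can be misleadingly
# high, which is why this is only one signal among several.
threshold = np.quantile(predictive_entropy(model.predict_proba(X)), 0.95)

rng = np.random.default_rng(0)
X_odd = rng.normal(0.0, 0.2, size=X.shape)      # synthetic ambiguous inputs
flags = predictive_entropy(model.predict_proba(X_odd)) > threshold
print(f"flagged {flags.mean():.0%} of the unfamiliar inputs as uncertain")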

Federated Learning is a distributed approach to machine learning in which the model is trained on large amounts of data held by distributed edge devices. The basic idea is to bring the code to the data instead of sending the data to the code, addressing fundamental aspects of privacy, ownership, and locality of the data.
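The core of federated averaging (FedAvg) can be sketched in a few lines: clients train locally, and only model weights travel to the server, which aggregates them weighted by data volume. The closed-form local training below (least squares on a linear model) is a simplification for illustration:

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0, 0.5])

def local_train(n_samples):
    # Each client fits a model on data that never leaves the device; only the
    # resulting weight vector is sent to the server.
    X = rng.normal(size=(n_samples, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    return np.linalg.lstsq(X, y, rcond=None)[0]

client_sizes = [100, 400, 250]
client_weights = [local_train(n) for n in client_sizes]

# Server step: average the local models, weighted by each client's data volume.
global_w = np.average(client_weights, axis=0, weights=client_sizes)
print("global model:", np.round(global_w, 3))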

Differential Privacy is a mathematical method for anonymizing records through the use of metadata, thereby safeguarding the privacy of the individual. An algorithm analyzes a data set and computes statistics about it (e.g. the mean, the variance, or the median). It is called differentially private if one cannot tell from the output whether the data of any particular person was included in the original records or not.
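A classic construction is the Laplace mechanism: add noise calibrated to how much one person's record can change the released statistic. The sketch below releases a differentially private mean; the data, clipping bounds, and privacy budget epsilon are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
incomes = rng.uniform(20_000, 120_000, size=1_000)  # synthetic records

lo, hi = 0.0, 150_000.0                   # publicly known clipping bounds
clipped = np.clip(incomes, lo, hi)
sensitivity = (hi - lo) / len(clipped)    # max change of the mean from one record
epsilon = 0.5                             # privacy budget: smaller = more private

noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
private_mean = clipped.mean() + noise
print(f"true mean {clipped.mean():.0f}, private mean {private_mean:.0f}")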

Edge AI not only allows real-time processing of data collected on a hardware device (the “edge”), but also allows the AI to act as a trustee, since data generation and analysis are performed within the edge device. This means that data can be processed and decisions can be made decentrally, without the need for a connection to a central system (e.g. a cloud).

However, a responsible use of artificial intelligence requires not only technical and institutional control and monitoring of the AI process with regard to bias, fairness, transparency, accountability, and explainability, but also continuous further training of developers, users, and decision makers. They must learn, in trainings, to understand and evaluate the advantages, the dangers, and the risk-reduction procedures of AI methods and applications.

“Trust is not necessarily about transparency but about interaction.”

Trust and safety are the most important imperatives for people, process and products throughout the entire life cycle of AI. The benefits and consequences of AI are still unfolding and will continue to fundamentally change society and the economy. It is therefore all the more important to shape technological and cultural change in a joint and responsible manner.

BibTeX:

@ARTICLE{WaltingerBlumoser:2019,
  author={Ulli Waltinger and Benno Blumoser},
  title={Responsible AI: Transparenz, Bias, und Verantwortung in der KI},
  journal={Handelsblatt Journal Future IT},
  year={2019},
  note={17 December 2019},
  pages={9-10},
  publisher={Handelsblatt}
}
