Methods developed in the field of artificial intelligence are often not transparent, and their decisions are in many cases difficult to understand. However, depending on the application area in which algorithms and models are used, laws or guidelines may require them to be neither discriminatory nor opaque. The subproject investigates how, and to what extent, methods based on various definitions of fairness can be made fully interpretable or transparent.
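To make the phrase "various definitions of fairness" concrete, here is a hedged, self-contained sketch (not code from the subproject) computing two widely used group-fairness metrics, demographic parity and equal opportunity; all function and variable names are illustrative.

```python
# Illustrative sketch of two common group-fairness definitions.
# All names are hypothetical, not the subproject's actual code.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    def rate(g):
        members = [p for p, grp in zip(y_pred, group) if grp == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(y_true, y_pred, group):
    """Difference in true-positive rates (one clause of equalized odds)."""
    def tpr(g):
        pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group)
                 if grp == g and t == 1]
        return sum(p for _, p in pairs) / len(pairs)
    return abs(tpr(0) - tpr(1))

# Toy data: binary labels/predictions, two protected groups 0 and 1.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_gap(y_pred, group))          # equal rates -> 0.0
print(equal_opportunity_gap(y_true, y_pred, group))   # TPRs 0.5 vs 1.0 -> 0.5
```

Note that the two definitions can disagree: here the model satisfies demographic parity exactly while violating equal opportunity, which is one reason the choice of fairness definition matters for interpretability.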

Can machine learning algorithms preserve privacy and be fair at the same time? The sub-project "Fairness vs. Privacy" deals with the interaction between, and the trade-offs of, these two concepts. On the one hand, we want different social groups to be treated equitably by machine learning models (fairness). On the other hand, an ML algorithm should pass no data on to third parties, and it should not be possible to draw conclusions about the original training data from the algorithm's output (privacy). The aim of the sub-project is to examine the combination of both concepts theoretically and to develop new approaches for practical applications.
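One standard way to prevent conclusions about individual training records is differential privacy. As a hedged illustration (not the subproject's actual method), the following sketch releases a mean with the classic Laplace mechanism; the data, bounds, and epsilon are illustrative assumptions.

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_mean(values, lower, upper, epsilon):
    """Release the mean of bounded values with epsilon-differential privacy.

    For n values clipped to [lower, upper], the sensitivity of the mean
    is (upper - lower) / n, so Laplace noise of scale sensitivity/epsilon
    hides any single record's contribution.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / len(clipped)
    true_mean = sum(clipped) / len(clipped)
    return true_mean + laplace_noise(sensitivity / epsilon)

random.seed(0)
ages = [23, 35, 41, 29, 52, 47, 31, 38]   # true mean: 37.0
print(private_mean(ages, 0, 100, epsilon=1.0))
```

Smaller epsilon means stronger privacy but noisier output; this is exactly the kind of trade-off the sub-project studies, since the added noise can also affect different groups unequally.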

What does privacy in machine learning cost? This question guides the sub-project "Privacy vs. Resource Efficiency". We aim to analyse cryptographic procedures with respect to their costs, such as computation and communication, and to develop efficient methods and implementations.
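As a hedged illustration of such a communication cost (a generic textbook construction, not a procedure analysed in the sub-project), additive secret sharing lets three servers compute a sum without any single server seeing an individual value, at the price of sending one share per server instead of one value in the clear; names and the cost accounting are illustrative.

```python
import secrets

P = 2**61 - 1  # a public prime modulus for the share arithmetic

def share(value, n_servers=3):
    """Split a value into n additive shares modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)  # shares sum to value mod P
    return shares

def reconstruct(shares):
    return sum(shares) % P

values = [12, 7, 30]
all_shares = [share(v) for v in values]        # one sharing per client value
# Each server locally adds the shares it received; no server sees a value.
server_sums = [sum(s[i] for s in all_shares) % P for i in range(3)]
total = reconstruct(server_sums)
print(total)  # 49

# Communication: n_values * n_servers field elements, vs n_values in the clear.
comm_elements = len(values) * 3
```

The factor-of-three blow-up in transmitted field elements is a simple instance of the computation/communication costs the sub-project aims to quantify for real cryptographic procedures.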

The demand for explainable and transparent machine learning models with good predictive power is currently stronger than ever. At the same time, modern technologies generate and transmit data continuously. To meet these requirements, the subproject "Resource Efficiency vs. Transparency" aims at developing learning methods for the data-stream setting that are efficient and transparent at the same time.
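A minimal sketch of what "efficient and transparent in the data-stream setting" can mean (a standard online perceptron, not the subproject's method): the model is a human-readable linear rule, and it is updated one example at a time using constant memory, regardless of stream length. All names and the toy stream are illustrative.

```python
# Hedged sketch: an online perceptron over a data stream.
# Transparent (a linear rule) and resource-efficient (constant memory).

def perceptron_stream(stream, n_features, lr=1.0):
    w = [0.0] * n_features
    b = 0.0
    for x, y in stream:                       # labels y in {-1, +1}
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if y * score <= 0:                    # mistake-driven update
            w = [wi + lr * y * xi for wi, xi in zip(w, x)]
            b += lr * y
    return w, b

# A linearly separable toy stream: label = sign of (x0 - x1).
stream = [([2.0, 0.0], 1), ([0.0, 2.0], -1),
          ([3.0, 1.0], 1), ([1.0, 3.0], -1)] * 5
w, b = perceptron_stream(stream, n_features=2)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

print(predict([4.0, 1.0]), predict([1.0, 4.0]))  # 1 -1
```

The learned weights themselves (here a small vector) serve as the explanation, which is the kind of transparency that heavier stream learners often lack.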

Machine learning methods, particularly neural networks, are known to require large amounts of computational resources as well as data. To change this, various approaches have been developed to increase the resource efficiency of machine learning. However, it is largely unclear how these approaches affect the fairness of the resulting models. The aim of this subproject is to investigate this question and to develop methods that increase both resource efficiency and fairness.
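A hedged toy sketch of why efficiency techniques can affect fairness: magnitude pruning (a common resource-efficiency method) zeroes small weights of a linear scorer, and the demographic-parity gap is measured before and after. The model, data, and threshold are purely illustrative assumptions.

```python
# Illustrative sketch: magnitude pruning can change group-level outcomes.

def prune(weights, threshold):
    """Zero out small-magnitude weights (saves storage and compute)."""
    return [w if abs(w) >= threshold else 0.0 for w in weights]

def positive_rate(weights, xs):
    scores = [sum(w * xi for w, xi in zip(weights, x)) for x in xs]
    return sum(s > 0 for s in scores) / len(xs)

weights = [1.0, 0.06]          # the second feature carries a small weight
group_a = [[1, 0], [1, 1]]     # group A rarely depends on that feature
group_b = [[0, 1], [1, 0]]     # one group-B example depends on it entirely

pruned = prune(weights, threshold=0.1)
gap_before = abs(positive_rate(weights, group_a) - positive_rate(weights, group_b))
gap_after = abs(positive_rate(pruned, group_a) - positive_rate(pruned, group_b))
print(pruned, gap_before, gap_after)  # gap grows from 0.0 to 0.5
```

Before pruning, both groups receive positive predictions at the same rate; after pruning, the group that relied on the removed feature loses half its positive predictions, opening a fairness gap, which is exactly the interaction the subproject studies.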

Subproject 6 considers the aspects of transparency vs. privacy. The transparency aspect will be covered by building on the InteKRator toolbox, which allows comprehensible knowledge bases to be learned from data. In this context, it will be investigated to what extent such learned knowledge bases satisfy data protection criteria, especially in (untrusted) distributed environments. Moreover, it will be investigated how distributed approaches can help to improve InteKRator (in particular, concerning learning performance) for learning knowledge bases for medical applications, such as medical expert systems.
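To give a feel for what a "comprehensible knowledge base learned from data" can look like, here is a deliberately simplified, hypothetical sketch (emphatically not InteKRator's actual algorithm): if-then rules are derived by taking the majority class per attribute value, plus a default rule. All names and data are illustrative.

```python
from collections import Counter, defaultdict

# Hypothetical rule-learning sketch, NOT the InteKRator algorithm:
# each (attribute, value) pair votes for its majority class.

def learn_rules(rows, labels):
    votes = defaultdict(Counter)
    for row, label in zip(rows, labels):
        for attr, value in row.items():
            votes[(attr, value)][label] += 1
    rules = {cond: cnt.most_common(1)[0][0] for cond, cnt in votes.items()}
    default = Counter(labels).most_common(1)[0][0]   # overall majority class
    return rules, default

def apply_rules(rules, default, row):
    # First matching condition wins; fall back to the default rule.
    for attr, value in row.items():
        if (attr, value) in rules:
            return rules[(attr, value)]
    return default

# Toy medical-style data (entirely fictitious).
rows = [{"fever": "high", "cough": "yes"},
        {"fever": "none", "cough": "yes"},
        {"fever": "high", "cough": "no"},
        {"fever": "none", "cough": "no"}]
labels = ["flu", "cold", "flu", "healthy"]
rules, default = learn_rules(rows, labels)
print(apply_rules(rules, default, {"fever": "high", "cough": "no"}))  # flu
```

Because each rule is a readable condition-conclusion pair, a domain expert can inspect it directly; the privacy question the subproject raises is whether such rules, learned in distributed settings, can still leak information about individual training records.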

To conduct relevant AI research today, it is essential to also consider ethical and legal frameworks. A strategic goal is therefore to tie together, within one project and in the long term, expertise in ethics and law on the one side and AI research in computer science on the other.
Another feature of TOPML is the AI Lab, which is intended both to bring the findings from the sub-projects into applications at the JGU itself and to carry them into industrial practice. The AI Lab is located at the University of Applied Sciences Mainz because of its proximity to regional and national industry. The project will thus take a pioneering position in the cooperation between the JGU and the University of Applied Sciences Mainz in a technical field.

In the area of early-career support and equal opportunity, the Q+ and Ada-Lovelace programmes will be used during the first three years of the project to recruit technically skilled and socially responsible early-career researchers as student assistants, who can then be hired as doctoral candidates later in the project.