The possible fields of application for Artificial Intelligence (AI) are quite diverse. The General Data Protection Regulation (GDPR) is often cited as a showstopper – with the following article, we want to demonstrate that this is not the case. To this end, we take up very specific fields of application for Artificial Intelligence that are already a reality in numerous companies and institutions – “despite” the GDPR. The examples illustrate a selection of data protection challenges that must be overcome in order to deploy the respective AI with confidence.

Chatbots and digital assistants as first point of contact – finding the appropriate legal basis without creating a conversion killer

In customer service, the trend is moving towards digital assistants that proactively recognise users’ problems and needs (so-called “predictive analytics”) and propose appropriate solutions. Prominent examples of digital assistants are Alexa (Amazon), Siri (Apple) and Google Assistant (Google).

For a chatbot to take the step from so-called machine learning to deep learning, large amounts of user data must first be collected and structured, i.e. processed for use by an AI. Companies typically collect the required data via the input field of the chat window. However, the immediate linking of the user’s input with the IP address of their end device creates a personal reference that brings the processing within the scope of the General Data Protection Regulation (GDPR). The processing of personal data, in turn, requires a legal basis. For chatbots and digital assistants, declarations of consent are conceivable, but their implementation involves considerable effort: before the chatbot can actually provide any assistance, it must first obtain the consent of the requesting party. The person seeking help, however, does not want to work through data protection information before receiving a solution to their problem.
Since the chatbot typically only becomes active on the initiative of the person seeking advice, companies may – with the correct structuring – rely on other legal bases instead. In this way, the chatbot can begin its actual work immediately, since the advice seeker does not have to be comprehensively informed up front. With chatbots, data controllers usually fulfil their transparency and information obligations via the privacy policy and a corresponding notice.
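Risk-minimising design also helps here: if chat inputs are stored for later training of the AI, the personal reference created by the IP linkage described above can be weakened through pseudonymisation. The following is a minimal sketch in Python, purely for illustration – the truncation to the /24 network, the salt handling and the field names are our assumptions, not a prescribed procedure:

```python
import hashlib
import ipaddress

def pseudonymise_ip(ip: str, salt: bytes) -> str:
    """Truncate an IPv4 address to its /24 network, then hash it with a salt.

    Truncation discards the host-specific bits; the salted hash
    prevents a trivial lookup of known addresses.
    """
    network = ipaddress.ip_network(f"{ip}/24", strict=False)
    return hashlib.sha256(salt + str(network).encode("utf-8")).hexdigest()

# Store the chat input for later training without the raw address.
log_entry = {
    "user": pseudonymise_ip("203.0.113.42", salt=b"rotate-this-salt-regularly"),
    "message": "My order has not arrived yet.",
}
```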

More reliable diagnoses in the health sector – take protective measures at an early stage

Especially in the health sector, Artificial Intelligence has the potential to enable significant progress, if not breakthroughs. In the field of imaging techniques, AI can already greatly simplify and complement the work of physicians today.
The basic requirement for the use of AI is always that the information to be analysed is already digitised: CT scans, electrocardiograms, close-up images. In healthcare, this requirement is already fulfilled in many areas.
However, the challenge here is that health data – a particularly sensitive category of data – are processed. First, large amounts of reference data are needed to train the AI. Then, in order to detect irregularities and make a diagnosis, the AI must match the patient’s information (e.g. in the form of a CT scan) against the reference data. In both cases, personal data are involved, since the data come directly from a natural person.

The GDPR imposes special restrictions on the processing of health data and, in this respect, requires appropriate safeguards. The focus should be on pseudonymisation or, at best, anonymisation of the data: anonymisation would take the processing outside the scope of the GDPR altogether. Where this is not possible, institutions and companies should concentrate on recognised pseudonymisation techniques, as the GDPR fundamentally rewards risk-minimising measures. For long-term studies and similar projects, the additional use of a “trust centre” is recommended: the pseudonymised data can then only be re-identified by involving the trust centre, i.e. a third party that securely stores the assignment rule.
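What such pseudonymisation with a trust centre can look like in practice is sketched below in Python. This is a minimal illustration only; the use of an HMAC as the assignment rule and all names are our assumptions:

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, assignment_key: bytes) -> str:
    """Derive a stable pseudonym from a patient identifier.

    The secret key embodies the 'assignment rule': only the party
    holding it (the trust centre) can link pseudonyms back to
    patients by recomputing them against its identifier list.
    """
    return hmac.new(assignment_key, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The key is generated and kept exclusively by the trust centre;
# the researching institution only ever sees the pseudonyms.
trust_centre_key = b"generated-and-stored-by-the-trust-centre-only"

record = {
    "patient": pseudonymise("patient-4711", trust_centre_key),
    "finding": "CT scan, no irregularities detected",
}
```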
Companies and institutions should lay the technical foundations for the protection of personal data – so-called “Privacy by Design” – as early as the development stage. A data protection impact assessment (DPIA) can be helpful here. Due to the potentially high risks associated with the processing of health data, the obligation to carry out a DPIA arises particularly frequently in the health sector. With the help of a DPIA, specific risks can be identified and targeted protective measures taken to ensure the security of the data.
Apart from that, data controllers should above all comply with the extensive transparency obligations under the GDPR whenever AI is applied.

More cost-effective results through automated decision-making – use exemptions

Another area of application for Artificial Intelligence is automated decision-making. This form of data processing already affects our lives today – especially in the field of bank lending. It is problematic, though, that in many cases the decisions of self-learning algorithms cannot be fully understood due to their black-box character.
One possible consequence of an incomprehensible, fully automated case-by-case decision is the exclusion of, or even discrimination against, certain persons or groups of persons. In order to protect data subjects against such consequences, the GDPR requires that a decision with legal effect may not be made by an algorithm alone. Rather, a person must be interposed who ultimately makes the decision. The result of the so-called “ADM system” (ADM = Algorithmic Decision Making) may then only be used as an aid to decision-making.
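On the application side, such a “human in the loop” can be as simple as routing the algorithm’s output to a case worker instead of acting on it directly. The following Python sketch is purely illustrative – the scoring formula, the threshold and the console prompt are assumptions standing in for a real ADM system and review workflow:

```python
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    income: float
    expenses: float

def algorithmic_score(app: LoanApplication) -> float:
    """Illustrative stand-in for an ADM system's credit score in [0, 1]."""
    if app.income <= 0:
        return 0.0
    surplus = app.income - app.expenses
    return max(0.0, min(1.0, surplus / app.income))

def decide(app: LoanApplication) -> str:
    score = algorithmic_score(app)
    # The algorithm only recommends; the decision with legal effect
    # is taken by a human who sees the score as one input among others.
    recommendation = "approve" if score >= 0.3 else "reject"
    answer = input(f"{app.applicant_id}: score={score:.2f}, "
                   f"system recommends '{recommendation}'. Final decision? ")
    return answer.strip().lower()
```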
Companies that do not want to sacrifice the cost savings – and the sometimes more neutral decisions – still have the opportunity to use an AI in a way that makes a “human in the loop” unnecessary: again, it is conceivable to obtain the consent of the data subject, with all the disadvantages described above.

However, the use of automated decision-making is considerably simpler if the data processing is necessary for the conclusion or performance of a contract between the data subject and the data controller. If automated decision-making is contractually agreed, its use is necessary in this sense. The same applies to the fulfilment of a legal obligation: in Germany, for example, provisions of the German Banking Act and the German Civil Code require a creditworthiness check for (real estate) consumer loan agreements, based on information such as income, expenses and other relevant circumstances.

A fully automated credit check should also qualify as necessary for other contracts in which the solvency of the counterparty is a decisive criterion. Ultimately, it always depends on the specific case.

AI and data privacy are not mutually exclusive

The GDPR often provides data controllers with a choice of legal bases on which they can base their respective processing activities. The advantages and disadvantages of each legal basis should be weighed carefully against each other. An obligation to carry out a data protection impact assessment, as often arises in the health sector, should not be seen as a necessary evil, but as an opportunity.
The examples illustrated show that the GDPR and its numerous requirements for the handling of personal data by no means prevent the use of Artificial Intelligence. Anyone who thinks early on about how a concrete AI implementation can be brought into line with the GDPR – ideally during the development phase – will also benefit in the long term: unnecessary follow-up costs that would result from later adaptation are avoided. In addition, the risk of reputational damage due to a lack of data security or a data protection incident can be effectively contained.

Do you need advice on data protection? Please do not hesitate to contact us!
