AI Ethics in the Post-GDPR World

One year on from GDPR’s implementation, there are concepts worth considering around AI’s ethical implications

The General Data Protection Regulation (GDPR) creates a seemingly large hurdle for artificial intelligence (AI) implementation. Yet it can also be an opportunity to ensure your AI is maximising the privacy of individuals. High-profile data breaches have diminished public trust around data privacy, but companies can restore that trust by embracing GDPR and taking a transparent approach to AI with a focus on privacy.

GDPR stipulates that data cannot be held longer than needed for its stated purpose, raising the question of whether innovation is being stifled. AI-driven personalisation can significantly improve customer experience, but user data is essential to accurate personalisation.

Accumulating enough data for credible insights into the customer journey can take years. An expiration date on that data could compromise the tool’s effectiveness, yet there is also a fine line before AI-driven customer personalisation starts to feel invasive.

Automated Decision Making and Profiling

A set of specific GDPR provisions targets AI-based decisions, specifically automated decision making (“ADM”) and profiling. ADM is the process of making a decision by automated means, without any human involvement: for example, an algorithm deciding whether an individual is eligible for a loan, whether a candidate is compatible with a job vacancy, or which call-centre agent is best suited to answer a customer’s concern.
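The loan-eligibility example above can be sketched as a trivially simple automated decision. This is a minimal illustration only; the thresholds and field names are hypothetical, not any real scoring model:

```python
# Minimal sketch of automated decision making (ADM): a loan-eligibility
# check made entirely by code, with no human involvement.
# Thresholds and field names are hypothetical illustrations.

def decide_loan_eligibility(applicant: dict) -> bool:
    """Return True if the applicant is automatically approved."""
    return (
        applicant["annual_income"] >= 30_000
        and applicant["credit_score"] >= 650
        and not applicant["existing_defaults"]
    )

applicant = {"annual_income": 45_000, "credit_score": 700,
             "existing_defaults": False}
print(decide_loan_eligibility(applicant))  # True
```

Because no human reviews the outcome before it takes effect, a decision like this falls squarely within GDPR’s ADM provisions.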

Profiling refers to any form of automated processing of personal data to evaluate certain personal aspects relating to an individual. This may include analysing online behaviour for targeted marketing or advertising, analysing credit history to create a credit profile, or analysing qualifications and online presence to assess a candidate’s skill set.

Organisations should keep the following topics in mind when utilising ADM and profiling:

Information Gathering.

The first step is to ensure your organisation has a clear understanding of the personal data that is being collected and how it’s being processed. Organisations should document what kinds of data are being collected, from whom, and through what channels. It’s also important to document and understand what data the ADM uses and what decisions it makes. It’s important here to note whether there are any significant effects of the ADM.
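The documentation step above can be sketched as a simple data-inventory record. The structure and field names here are illustrative assumptions, not a prescribed GDPR format:

```python
# Sketch of a data-inventory entry documenting what an ADM system uses:
# what data is collected, from whom, via which channel, and whether the
# resulting decision has a significant effect. Field names are illustrative.

from dataclasses import dataclass

@dataclass
class ProcessingRecord:
    data_category: str        # e.g. "credit history"
    data_subjects: str        # e.g. "loan applicants"
    collection_channel: str   # e.g. "online application form"
    used_by_adm: bool         # does an ADM system consume this data?
    decision_made: str        # what the ADM decides with this data
    significant_effect: bool  # legal or similarly significant effect?

records = [
    ProcessingRecord("credit history", "loan applicants",
                     "online application form", True,
                     "loan eligibility", True),
]

# Flag records whose ADM use has significant effects and so needs
# extra attention in later compliance steps.
needs_review = [r for r in records if r.used_by_adm and r.significant_effect]
print(len(needs_review))  # 1
```

Keeping an inventory in this shape makes the later steps (risk assessment, lawful basis) mechanical rather than guesswork.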

Risk Assessment.

A data protection impact assessment (“DPIA”) should be performed before utilising any ADM with personal data. This DPIA will allow your organisation to evaluate the privacy risks of your ADM processing. The goal of the DPIA is to examine the risks to data subjects at each step of the processing life-cycle. Your organisation should develop and implement mitigation strategies around any detected risks.
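A DPIA’s step-by-step risk review can be sketched as a toy scoring exercise. The scale, scores, and threshold below are illustrative assumptions, not anything mandated by GDPR:

```python
# Toy sketch of DPIA-style risk scoring: rate likelihood and severity
# for each step of the processing life-cycle, then flag high scores
# for mitigation. The 1-5 scale and threshold are illustrative.

steps = {
    "collection": (2, 3),   # (likelihood 1-5, severity 1-5)
    "storage":    (3, 4),
    "profiling":  (4, 4),
    "deletion":   (1, 2),
}

risks = {step: likelihood * severity
         for step, (likelihood, severity) in steps.items()}

# Steps scoring 12 or more need a documented mitigation strategy.
high_risk = [step for step, score in risks.items() if score >= 12]
print(sorted(high_risk))  # ['profiling', 'storage']
```

The point is not the arithmetic but the discipline: every life-cycle step gets assessed, and every flagged step gets a mitigation plan.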

Establish the Basis for Lawful Processing.

GDPR places restrictions on ADM when there may be ‘legal’ or ‘similarly significant’ effects on individuals (for example, the right to vote, exercise contractual rights, or effects that influence an individual’s circumstances, behaviour or choices). If the ADM does have these significant effects, the data controller is required to have the consent of the individual involved or conduct the ADM to fulfil a specific contractual obligation with the individual.

If your ADM does not have a legal or similarly significant effect on the individual, your organisation can still conduct the ADM if it’s for a “legitimate interest” of your organisation (balanced against the rights of the individual). For example, if ADM is directing calls to call-centre agents, the legitimate interest would be better resolution of customer issues, and that is balanced well against a minimal impact to the end customer.
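The two cases above can be encoded as a simple check. This is a deliberately simplified sketch of the logic as described in the text, not a complete statement of the GDPR’s lawful-basis rules:

```python
# Simplified sketch of the lawful-basis logic described above.
# Real GDPR analysis is more nuanced; this mirrors only the two cases
# in the text: significant-effect ADM needs consent or a contract,
# otherwise a legitimate interest (balanced against the rights of the
# individual) may suffice.

def adm_is_permitted(significant_effect: bool,
                     has_consent: bool,
                     fulfils_contract: bool,
                     legitimate_interest: bool,
                     balance_favours_processing: bool) -> bool:
    if significant_effect:
        return has_consent or fulfils_contract
    return legitimate_interest and balance_favours_processing

# Call-routing example from the text: no significant effect, and the
# legitimate interest (better issue resolution) outweighs the minimal
# impact on the customer.
print(adm_is_permitted(False, False, False, True, True))  # True
```

A loan decision, by contrast, has significant effects, so the first branch applies and consent or a contractual basis is required.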

Managing Third Parties.

If your company uses a third-party vendor for ADM services, it’s important to carry out the appropriate due diligence. Make sure you understand the security and privacy controls that your vendor uses, and that you are comfortable with those controls. Asking if your vendor has any relevant industry certifications can be helpful in verifying vendor assessments. You should also have your vendor cooperate with you in completing your DPIA.
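The due-diligence points above can be tracked as a simple checklist. The questions paraphrase the text; the data structure is an illustrative assumption:

```python
# Sketch of a vendor due-diligence checklist for third-party ADM services,
# paraphrasing the points in the text. True means the check is complete.

vendor_checks = {
    "security controls documented and reviewed": True,
    "privacy controls documented and reviewed": True,
    "relevant industry certifications verified": False,
    "vendor cooperation on our DPIA confirmed": True,
}

# Any outstanding item blocks sign-off on the vendor.
outstanding = [check for check, done in vendor_checks.items() if not done]
print(outstanding)  # ['relevant industry certifications verified']
```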

AI and GDPR Moving Forward

University of Strathclyde law professor Lilian Edwards highlights that the use of big data in AI stands in direct opposition to GDPR’s purposeful limiting of data collection and retention. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting,” Edwards said. “Algorithmic transparency means you can see how the decision is reached, but you can’t with machine-learning systems because it’s not rule-based software.” The emphasis on how data is used, and why it is being collected, will not go away. We must decide whether progress for the sake of progress is more important than purposeful use of data. This momentary halt might be the ethical roadblock needed for societal, not technical, progress.

About the Author

Shahzad Ahmad is Vice President, Cloud Competence Centre and Data Privacy – EMEA, at Genesys. Genesys is a leader in omnichannel customer experience and contact centre solutions, trusted by more than 10,000 companies in over 100 countries.

Published on TechNative.