Artificial intelligence (AI) is a data-intensive technology that often involves the use of personal data (e.g., relating to an individual’s behaviour, social relationships, private preferences or identity) to analyse, profile, assess, categorise and eventually make decisions about individuals. AI can be employed for purposes relating to the identification, tracking, profiling, facial recognition, behavioural prediction or scoring of individuals.
AI generates analytical and predictive insights at a speed and scale that outpace human capabilities. As such, AI has the potential to replace human decision-making, especially when analysis needs to be done rapidly and at scale. Artificial intelligence also creates challenges for transparency and oversight, as it is not always clear even to developers and implementers how results are generated. AI can preclude effective accountability in cases where such systems cause harm, for example when an AI system makes or supports a decision that has a discriminatory impact.
Without proper checks and balances, the use of artificial intelligence can also result in an intrusive digital environment in which individuals have no control over their private information. Governments and businesses may be empowered to carry out widespread surveillance of individuals and to track, analyse, predict and even manipulate their behaviour.
Businesses that utilise artificial intelligence should fully integrate the right to privacy, non-discrimination and other relevant human rights into the design, development, deployment and evaluation of AI projects, products and systems. Businesses should also educate employees about the right to privacy, employ data privacy officers with adequate resources, training and authority to carry out their functions, and develop human-centred auditing and redress mechanisms.