What is the Need to Call for a Ban on AI and Computer Vision?

By Gov CIO Outlook | Tuesday, October 13, 2020

People should not rely on a system that does not make accurate decisions. Even if they do, they need to consider the system's fairness, accountability, transparency, and ethics, both during design and application, and humans should remain the final decision-makers.

Fremont, CA: The AI Now Institute, an interdisciplinary research center studying the societal implications of artificial intelligence (AI), has called in its annual report for a ban on emotion recognition technology. The technology, also known as affect recognition, is designed to identify people's emotions. According to the report, it should not be used to make decisions that affect people's lives, such as hiring or pain assessment, because it is neither adequate nor accurate and can lead to biased conclusions.

Computers are trained to understand the emotions and intent of humans using a data-centric technique known as machine learning, which processes data to learn how to make decisions, including affect recognition. Emotion recognition remains a challenge for researchers: the goal is enticing, but replicating human skills with computer vision is difficult. Emotions vary with context, so identifying a person's emotional state purely from his or her face misses critical information. To address this, researchers are actively working to augment artificial intelligence techniques so that they consider context, not just for emotion recognition but for all kinds of applications.
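To make that pipeline concrete, here is a minimal sketch in Python of what such a data-driven affect classifier could look like. The small convolutional network, the seven emotion labels, and the 48x48 grayscale input are illustrative assumptions (loosely echoing public datasets such as FER2013), not any particular vendor's system; in practice the model would be trained on labeled face images before use.

```python
# Minimal sketch of a facial-expression classifier, assuming a small CNN
# over 48x48 grayscale face crops (sizes and labels are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class EmotionNet(nn.Module):
    def __init__(self, num_classes=len(EMOTIONS)):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)               # halves spatial size
        self.fc1 = nn.Linear(64 * 12 * 12, 128)
        self.fc2 = nn.Linear(128, num_classes)

    def forward(self, x):                          # x: (batch, 1, 48, 48)
        x = self.pool(F.relu(self.conv1(x)))       # -> (batch, 32, 24, 24)
        x = self.pool(F.relu(self.conv2(x)))       # -> (batch, 64, 12, 12)
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)                         # raw class scores (logits)

model = EmotionNet()
model.eval()                                       # inference mode

face = torch.rand(1, 1, 48, 48)                    # stand-in for a face crop
with torch.no_grad():
    probs = F.softmax(model(face), dim=1)[0]

# The model emits one probability per label from the face pixels alone --
# exactly the kind of context-free guess the researchers call insufficient.
for label, p in zip(EMOTIONS, probs.tolist()):
    print(f"{label}: {p:.2f}")
```

Note that nothing in this pipeline sees anything beyond the cropped face, which is precisely the contextual blindness described above.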

Further, the AI Now report sheds light on the ways AI is being applied in the workplace, both to evaluate worker productivity and at the interview stage. If managers can sense their subordinates' emotions from interview through evaluation, decisions on employment matters such as promotions, raises, or assignments might end up being influenced by that information.

However, these types of systems have fairness, accountability, transparency, and ethics ("FATE") flaws baked into their pattern matching. One example is systems that misread the faces of Black people as angrier than those of white people. Although research groups are trying to tackle this problem, it cannot be solved at the technological level alone. It requires a continued and concerted effort to address FATE issues in AI. In practice, it demands that everyone be aware of the types of biases and weaknesses these systems exhibit, just as one should be aware of one's own biases and those of others.
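One way such FATE flaws can at least be surfaced is a disaggregated error audit: instead of reporting a single overall accuracy, compare the classifier's error rates across demographic groups. The sketch below, in plain Python, shows the idea; the group names and records are hypothetical stand-ins, not real audit data.

```python
# Sketch of a disaggregated error audit: how often does the classifier
# wrongly predict "angry" for each group? All records are hypothetical.
from collections import defaultdict

# (group, true_label, predicted_label)
records = [
    ("group_a", "neutral", "angry"),
    ("group_a", "neutral", "neutral"),
    ("group_a", "happy",   "happy"),
    ("group_b", "neutral", "neutral"),
    ("group_b", "neutral", "neutral"),
    ("group_b", "happy",   "happy"),
]

false_angry = defaultdict(int)   # neutral faces misread as angry
totals = defaultdict(int)        # neutral faces seen per group

for group, true_label, pred in records:
    if true_label == "neutral":
        totals[group] += 1
        if pred == "angry":
            false_angry[group] += 1

for group in sorted(totals):
    rate = false_angry[group] / totals[group]
    print(f"{group}: {rate:.0%} of neutral faces misread as angry")

# A gap between groups (here 50% vs 0%) is the kind of disparity that a
# single overall accuracy figure would hide.
```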
