
Let’s Talk About Responsible AI

AI is becoming an integral part of our society, and in many ways this is good news. AI will bring efficiency to many sectors. In healthcare, for example, it is being used to assign patients to care programs based on their needs.

However, in 2019, a study revealed that an algorithm used for this purpose in US hospitals was less likely to recommend Black patients for extra care than white patients who were equally sick.

When AI can’t recognize Black women

Algorithmic bias is not only an issue for AI in healthcare; it affects AI in general. This is because AI systems are often trained using machine learning algorithms, which learn to recognize patterns in data.

If historical biases are present in this data, they become embedded in the AI’s decision making, and research has shown that this is a common issue. This is why many researchers have called for Responsible AI: an approach to AI research that prioritizes benefit to society in a fair and equitable way.
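To make that mechanism concrete, here is a minimal, hypothetical sketch using synthetic data and scikit-learn. Everything in it is invented for illustration: a model is trained on historical care-referral labels that under-recommend one group, and it reproduces that disparity in its own predictions.

```python
# Minimal sketch: a model trained on biased historical labels learns the
# bias. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # sensitive attribute (0 or 1)
need = rng.normal(0.0, 1.0, n)      # true level of medical need

# Historical labels under-refer group 1: at the same level of need,
# a past care referral was less likely.
penalty = np.where(group == 1, -1.0, 0.0)
label = (need + penalty + rng.normal(0.0, 0.5, n) > 0).astype(int)

# The sensitive attribute (or, in practice, its proxies) is visible
# to the model, so the historical disparity is learnable.
X = np.column_stack([need, group])
model = LogisticRegression().fit(X, label)

# Among equally high-need patients, the model recommends group 1
# far less often: the bias has been embedded.
for g in (0, 1):
    mask = (group == g) & (need > 0.5)
    rate = model.predict(X[mask]).mean()
    print(f"group {g}: predicted referral rate = {rate:.2f}")
```

Nothing in the training code is malicious; the disparity comes entirely from the labels the model was asked to imitate.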

Fortunately, there are many researchers doing ground-breaking work in this field. One of them is Dr Joy Buolamwini, the founder of the Algorithmic Justice League, an organization working to promote the principles of equitability and accountability in AI.

Joy Buolamwini gives her TED talk on the bias of algorithms. Photograph: TED

As a graduate student at MIT, Dr Buolamwini realized that the various facial recognition technologies developed by top tech companies shared the same biases. These systems worked effectively on her colleagues, who were white men, but could not recognize her own face.

This is because the training datasets they used mostly featured lighter-skinned male faces, so systems trained on that data struggled to identify darker-skinned female faces. This weakness could have serious consequences, for example, when the technology is used to identify criminal suspects.
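One way researchers expose this kind of weakness is disaggregated evaluation: reporting a system’s accuracy separately for each intersectional subgroup instead of as a single aggregate number. The sketch below illustrates the idea with synthetic data; the subgroup names and error rates are invented, not drawn from any real system.

```python
# Sketch of disaggregated evaluation: accuracy per intersectional
# subgroup rather than one aggregate number. Synthetic data only.
import numpy as np

def disaggregated_accuracy(predictions, labels, skin_type, gender):
    """Return accuracy for every (skin_type, gender) subgroup."""
    results = {}
    for s in np.unique(skin_type):
        for g in np.unique(gender):
            mask = (skin_type == s) & (gender == g)
            if mask.any():
                results[(s, g)] = float((predictions[mask] == labels[mask]).mean())
    return results

rng = np.random.default_rng(1)
n = 1_000
skin = rng.choice(["lighter", "darker"], n)
gender = rng.choice(["male", "female"], n)
labels = rng.integers(0, 2, n)

# Simulate a system that errs far more often on darker-skinned
# female faces, as the audits described above found.
err = np.where((skin == "darker") & (gender == "female"), 0.35, 0.05)
preds = np.where(rng.random(n) < err, 1 - labels, labels)

for subgroup, acc in disaggregated_accuracy(preds, labels, skin, gender).items():
    print(subgroup, f"accuracy = {acc:.2f}")
```

A single aggregate accuracy over this data would look high; only the per-subgroup breakdown reveals who the system fails.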

Read: Meet The Black Women Trying To Fix AI

Dr Timnit Gebru, a colleague of Dr Buolamwini, was asked to leave Google after she co-authored a controversial paper that highlighted the weaknesses of large language models (AI systems, like ChatGPT, that generate text), including their susceptibility to bias.

Dr Timnit Gebru, sitting on a windowsill, wearing a beige scarf and grey top

Large language models often output racist and sexist statements. Access to the internet is not evenly distributed, so, much as with image data, the text used to train these systems tends to over-represent the views of the most powerful groups in society, which may explain why they often exhibit this behavior.

Pointing out this weakness, among others, did not please Dr Gebru’s higher-ups at Google, probably because language models are essential to the future of its business. Since then, Dr Gebru has founded the Distributed AI Research Institute, which aims to promote diverse perspectives in AI development in order to minimize its potential harms.

Dr Gebru’s exit from Google is not a unique event. Another ethical AI researcher at Google was fired, as were entire ethics teams at Twitter and, more recently, Microsoft.

Risk of Burnout and Exploitation

Many Responsible AI researchers have complained of burnout: their work can be psychologically draining, and they are under a lot of pressure to solve urgent problems.

Currently, human content moderators are often used to clean data or evaluate the output of AI systems throughout the development process, a practice referred to as “Ghost Work.” These workers are paid very little and treated poorly by the companies that employ them.

The use of this type of labor in the development of safe AI systems has been questioned because these workers are regularly exposed to harmful content and are often left traumatized by their work. The job is vital to AI development, but a more ethical way of doing it needs to be found.

How do we measure bias anyway?

Despite the hard work currently going on in responsible AI, there is still much to be done before AI can be used safely. An AI system called COMPAS has been used by judges in the US to inform sentencing decisions. It used various factors about a defendant to predict how likely they were to reoffend once they left prison.

After multiple complaints, a study was conducted which found that the system incorrectly labeled Black defendants as high risk at twice the rate of white defendants. This led to an academic debate about how to evaluate the bias of algorithms, culminating in a paper showing that certain ways of measuring bias conflict with each other.

Effectively measuring bias in algorithms is still an open question in AI research. Anyone developing or using an AI system should consider the ways in which the system could cause harm and let those concerns guide how its biases are measured.
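To see why bias metrics can conflict, consider two measures at the heart of the COMPAS debate: the false positive rate within each group and the positive predictive value of a high-risk label. The sketch below is not a model of the real COMPAS system; it uses synthetic data with invented base rates to show that when groups reoffend at different underlying rates, a score that equalizes one metric pushes the other apart.

```python
# Two common bias metrics that cannot generally be equalized at once
# when base rates differ. Synthetic, illustrative data only.
import numpy as np

def group_metrics(y_true, y_pred):
    """Return (false positive rate, positive predictive value)."""
    fp = np.sum((y_pred == 1) & (y_true == 0))
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fpr = fp / max(np.sum(y_true == 0), 1)
    ppv = tp / max(tp + fp, 1)
    return fpr, ppv

rng = np.random.default_rng(0)
n = 100_000
# Hypothetical groups with different underlying reoffense rates.
for name, base_rate in [("group A", 0.3), ("group B", 0.5)]:
    y_true = (rng.random(n) < base_rate).astype(int)
    # The same noisy risk score is applied identically to both groups.
    score = 0.5 * y_true + rng.random(n)
    y_pred = (score > 0.75).astype(int)
    fpr, ppv = group_metrics(y_true, y_pred)
    print(f"{name}: FPR = {fpr:.2f}, PPV = {ppv:.2f}")

# Both groups end up with the same false positive rate (about 0.25),
# yet their positive predictive values differ (about 0.56 vs 0.75),
# purely because the base rates differ.
```

This is the tension the academic debate formalized: when base rates differ, no non-trivial classifier can satisfy both notions of fairness at once, so the choice of metric is itself a value judgment.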

GPT-4 and beyond

Many AI systems are in use today, and some are even available to the public. DALL·E 2 allows users to generate artwork from a simple description, and ChatGPT can generate written responses on a huge range of topics, from poetry to coding.

On March 14th, 2023, OpenAI announced GPT-4, the successor to the technology behind ChatGPT. GPT-4 is a massive step up from GPT-3 in terms of capability, but experts are asking whether it still suffers from the same problems.

Many more similar systems are likely to arise in the future, and with all of them it is crucial that we are aware of their potential biases and of how they may amplify society’s inequalities.


The featured image is AI-generated.


