The goal of this study room is to help you gain an appreciation of the risks that come with AI, describe the trade-offs involved in AI implementation, and understand the nature of the alignment problem.

Have a look at this Forbes article which lists 14 potentially negative ways in which AI can be used. Do any of these surprise you?

Take a break and spend some time listening to this podcast, which tries to home in on whether the alignment problem is actually a technical problem or rather one of socio-technical values. This may be a little technical, so have patience!

The following article discusses the potential negative impacts of AI across society, including privacy, employment, security, and ethics, and highlights the need for responsible AI development and regulation.

Let’s look at COMPAS now. COMPAS is a recidivism risk-assessment algorithm used in some US courts, and it is one of the best-known examples of algorithmic bias producing significant negative outcomes. Watch this video on the ways in which the use of algorithms can cause harm.
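To make the COMPAS debate concrete, here is a minimal illustrative sketch of the fairness metric at its centre: the false-positive rate, i.e. the share of people who did *not* reoffend but were still flagged as high risk. The data below is entirely made up for illustration; it is not the real COMPAS dataset. The point is that this rate can differ sharply between groups even under the same scoring rule.

```python
# Illustrative only: synthetic scores, NOT the real COMPAS data.
# Demonstrates how the same risk threshold can yield different
# false-positive rates for different groups.

def false_positive_rate(scores, reoffended, threshold=5):
    """Share of people who did NOT reoffend but were scored high-risk."""
    flagged = [s >= threshold for s in scores]
    non_reoffenders = [f for f, r in zip(flagged, reoffended) if not r]
    if not non_reoffenders:
        return 0.0
    return sum(non_reoffenders) / len(non_reoffenders)

# Hypothetical risk scores (1-10) and observed outcomes (1 = reoffended)
group_a_scores = [3, 7, 2, 8, 6, 1, 9, 4]
group_a_reoff  = [0, 1, 0, 1, 0, 0, 1, 0]
group_b_scores = [6, 8, 3, 7, 9, 5, 8, 4]
group_b_reoff  = [0, 1, 0, 1, 1, 0, 0, 0]

fpr_a = false_positive_rate(group_a_scores, group_a_reoff)
fpr_b = false_positive_rate(group_b_scores, group_b_reoff)
print(f"Group A false-positive rate: {fpr_a:.2f}")  # 0.20
print(f"Group B false-positive rate: {fpr_b:.2f}")  # 0.60
```

With these made-up numbers, group B's non-reoffenders are flagged three times as often as group A's, which is the shape of the disparity at the heart of the COMPAS controversy.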

Here is another resource which looks at how leveraging technology to forecast crime can reinforce existing racial injustices.

The Racism and Technology Center uses technology as a mirror to reflect existing racist practices in society and make them visible. See for example their collected examples of racist technology.