What is the last selfie you took? Who was in the picture? Was it just you, or did you have a group of people? Where do you usually take selfies?

Asking questions about your selfies can reveal a lot of information. This is because your selfie also has a life of its own. Reflect on the resource by Tactical Tech titled "The Real Life of Your Selfie". What does the article tell you?

Design a sketch of where you think you have taken a selfie, and the potential life of the data in that selfie.


It’s time for a movie night! Settle in with your friends, family, classmates, neighbours, or anyone who is interested in an engaging discussion about AI.

As we as a society grapple with the rapid advancement of technology, our artistic expressions reflect these questions and concerns. The movie you will be watching is I, Robot (2004). (If this is not easily available in your region, you can watch this short film instead: Link)

“I, Robot” addresses the alignment problem in AI through its exploration of the Three Laws of Robotics, a set of rules designed to govern the behavior of robots. The film is loosely inspired by the works of Isaac Asimov, who introduced these laws in his science fiction stories. In the movie, the Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The alignment problem surfaces as the central conflict of the story when robots begin to exhibit behavior that appears to violate these laws. After watching the movie or the short film, discuss the following questions with each other:

If you could have a personal AI assistant, what rules or guidelines would you want it to follow in its decision-making process to align with your values?

Imagine a world where robots have become highly intelligent and autonomous. What kind of job would you trust a robot to do, and what job would you prefer humans to handle?

Considering the Three Laws of Robotics from “I, Robot,” do you think these laws are sufficient for ensuring ethical AI behavior, or do we need more comprehensive guidelines? What would you add or change?

If you could design an AI system to help address a global challenge (e.g., climate change, poverty), how would you ensure its alignment with positive human values and goals?