Because your app or AI invention could hurt people by accident, you need to think carefully about the effect it will have on different people.
Could an invention that helps direct traffic signals cause harm to anybody?
What about a mobile app where people share their pictures with others?
WHY MIGHT THAT BE GOOD OR BAD?
Let’s say there’s a garbage collector robot that automatically picks up and sorts items into recycling.
Could that robot cause any problems?
What about a mobile app that lets a patient communicate directly with their doctor?
Could there be any issues with an app like that?
WHY ALL THESE QUESTIONS?
By answering these questions, you were thinking about ethics!
Ethics is thinking about right and wrong.
You want to make sure that your project helps people and society instead of causing harm, even if the harm is by accident.
The video below talks about how you can make technology that does good and does no harm.
BATYA'S 5 (6) BIG WORDS
Stakeholders
Who is affected by your project?
Direct stakeholders
The people who use your project directly.
Indirect stakeholders
Others who might be affected by your project.
Widespread use
What does it mean if LOTS of people use your product?
Materiality
Stuff! The materials used to build your project (like the materials used to make a cellphone).
Progress (not perfection)
Although it’s hard, do your best to make sure your project will do good and no harm.
THINK AS YOU BUILD
You will write the algorithms to make your mobile app run, or to make your AI model learn.
You need to make sure you think about your stakeholders along the way!
VALUE SCENARIOS
If your app includes AI, this video should make you think a little more about your dataset and preventing bias.
"If it’s your algorithm, it’s your responsibility. This is the only way that we can sort of sustain a world where we know who is responsible for what."
Stop and Discuss
Who are your stakeholders?
Direct stakeholders? Your users?
Indirect stakeholders? Who else is affected by your project?