The topic I'm most interested in learning more about is moral outsourcing in AI. Moral outsourcing in AI is the practice of letting machines make ethical decisions on our behalf; for example, an autonomous vehicle must make split-second choices about whom to protect in the moments before an accident. As AI becomes more integrated into areas such as autonomous vehicles, criminal justice, and healthcare, the overall question I want to investigate is who is responsible when things go wrong. I will research where humans should draw the line between helpful automation and a detachment from human judgment, and whether AI systems, or the people who build and deploy them, should be held accountable for moral failures.