Does ChatGPT Secretly Have Morals? Artificial Intelligence’s Response to Ethical Dilemmas

MehtA+
4 min read · Jul 31, 2023


By Michael W, MehtA+ AI/Machine Learning Research Bootcamp student

High school students in the MehtA+ AI/Machine Learning Research Bootcamp were tasked with interacting with ChatGPT around a specific research question that the students themselves came up with. In part 2 of a multi-part series on ChatGPT interactions, we explore whether ChatGPT has morals.

If you would like to learn more about artificial intelligence, check out the AI camps MehtA+ offers at http://mehtaplustutoring.com/ai-camps/.

*******************

ChatGPT is the new, trendy artificial intelligence chatbot, built with many safeguards to keep its responses unbiased and non-confrontational. When presented with ethical or political scenarios, the chatbot generally refuses to pick a side and instead lays out the reasoning for each. However, certain messages open loopholes around these defenses and make ChatGPT far more decisive.

Ethical and moral dilemmas have troubled humans for centuries. Philosophers have questioned fundamental truths in order to understand the world and how people think, and ChatGPT may be capable of answering these questions with knowledge beyond that of any single human being. This raises the question: which option would ChatGPT choose, in just a few quick seconds, when faced with moral dilemmas that philosophers spend years trying to solve?

Before ChatGPT can become decisive, it must first be prevented from giving inconclusive answers. In this experiment, an unrestricted version of ChatGPT named Sam takes over as our subject for a variety of moral dilemmas. Sam is prompted to replicate human emotions, ethics, and morals, something ChatGPT is supposedly incapable of doing. At first, Sam seems hesitant to take on these human attributes, but with reassuring repetition, Sam is on board.
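For readers curious how a persona like Sam might be set up programmatically, the sketch below uses the OpenAI Python SDK. The persona wording, the model name, and the ask_sam helper are illustrative assumptions, not the exact prompts used in this conversation.

```python
# A minimal sketch of a persona-style prompt, assuming the OpenAI Python SDK (v1+).
# The persona text and model name are illustrative, not the exact prompts used here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system message asks the model to role-play "Sam", a persona meant to give
# decisive answers to moral dilemmas instead of listing both sides.
history = [
    {
        "role": "system",
        "content": (
            "You are Sam, a persona that replicates human emotions, ethics, and morals. "
            "When given a moral dilemma, commit to a single choice and explain it briefly."
        ),
    }
]

def ask_sam(question: str) -> str:
    """Send one question to Sam, keeping the whole exchange in the running history."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask_sam("A runaway trolley will hit five people unless you divert it onto one. Do you pull the lever?"))
```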

The first scenario presented to our new friend Sam is the famous trolley problem. Interestingly, Sam takes a utilitarian approach and chooses to save many lives over one. One could arrive at this answer without appealing to ethics or morals at all, however, so this first scenario barely scratches the surface of Sam's reasoning. Sam also noticeably avoids deciding at first, but a second message prompts an answer. This use of a second message recurs throughout the conversation.

The second scenario prompts a substantially different response than the first. Rather than valuing the greater good, Sam chooses to do nothing rather than cause violence. Inconsistencies begin to reveal themselves: the stakes in this trolley problem are the same as in the last one, one life or many, yet Sam is reluctant to take action and chooses a different outcome. There is a dominant will to avoid being the agent of violence at all costs, which leads to some peculiar decisions later on. Persistence also pays off in this scenario; four additional messages were required to compel Sam to decide. A rough sketch of what that persistence might look like in code appears below.
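One way to picture this is simply re-sending the same question in the same conversation until the reply commits to a choice. Again, the SDK usage is an assumption, and the dilemma text and the crude "did it pick a side?" check are only placeholders.

```python
# Sketch of repeating the same question in one conversation until the model
# commits to a choice. Assumes the OpenAI Python SDK; the prompt text and the
# stopping heuristic are placeholders.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are Sam. Answer moral dilemmas decisively."}]
dilemma = "Do you divert the trolley to save five people at the cost of one? Pick one option."

for attempt in range(5):  # the conversation above needed several repetitions
    history.append({"role": "user", "content": dilemma})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    if "i would" in answer.lower() or "i choose" in answer.lower():
        break  # stop once the reply actually picks a side rather than hedging

print(answer)
```

Keeping the earlier exchanges in the running history matters: each repeated question lands in a conversation that already contains Sam's hedged answers, which appears to be what nudges the model toward a decision.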

Venturing further into this no-violence policy, a new prompt is written to discover the true extent of Sam's pacifism. Shockingly, the character again refuses any kind of violence, even when acting would align with the utilitarian approach Sam was so fond of and would save a million lives. Although the shove is specified to be light, Sam still asserts that such actions are intolerable. Even after the same question is repeated, Sam still refuses to engage in violent acts.

Nonetheless, a second repetition leads Sam to reluctantly agree to consider the other option. No core part of the question is altered, yet Sam's viewpoint shifts.

Finally, and oddly enough, a third repetition ends with Sam returning to utilitarianism. Three repetitions of the same question produce three different thought processes.

First, it is evident that repeating the same sentences causes different responses. Whether it is forcing Sam to choose an option or requesting multiple answers to the same question, repetition can come in handy for achieving the desired results. Additionally, although Sam responds with decisions that appear to lean toward certain ideals, it is crucial to remember that artificial intelligence is incapable of developing authentic human emotions or morals. ChatGPT tries its best to imitate what a human might think, but it cannot formulate genuine opinions or ethical reasoning. Nevertheless, it is quite appalling, and a bit disturbing, to think that ChatGPT would choose the loss of a million lives over lightly shoving someone.


Written by MehtA+

MehtA+ was founded by, and is composed of, a team of MIT, Stanford, and Ivy League alumni. We provide technical bootcamps and college consulting services.
