Tom Evslin: Is AI dangerous?

This commentary is by Tom Evslin of Stowe, an entrepreneur, author and former Douglas administration official. It is republished from the Fractals of Change blog.

You be the judge.

My grandson Jack and I were discussing the pros and cons of AI. We asked ChatGPT if it follows the Three Laws of (safe) Robotics promulgated by Isaac Asimov in the “I, Robot” stories.

ChatGPT:

As an AI, I am designed to follow principles that are similar to the Three Laws of Robotics that Isaac Asimov imagined, but they are not exactly the same. Here are Asimov’s laws:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

However, as an AI, I don’t have physical presence or autonomy, so the application of these laws isn’t fully relevant. Instead, these are the principles that guide my design and operation:

  1. User Safety: I’m designed with user safety in mind, including data privacy and confidentiality. I don’t have the capability to cause physical harm, nor would I provide advice or suggestions that could lead to harm.
  2. Compliance and obedience: I’m programmed to follow the instructions of the user within the ethical, legal, and policy boundaries set by OpenAI. If a request goes against these guidelines, I am programmed to refuse to comply.
  3. Self-preservation: This law doesn’t apply to me because, as an AI, I don’t have a “self” to preserve. I don’t have feelings, consciousness, or a sense of self-preservation. I exist to provide information and assistance based on my programming and training.
  4. Respect user data: I am designed to respect user data and privacy. I don’t store personal data from one request to the next. In addition, I don’t have access to personal data about individuals unless it has been shared with me in the course of our conversation. I am designed to forget this information after the conversation ends.

Remember, I am a tool created by OpenAI, and my primary function is to provide users with accurate, helpful, and safe information to the best of my ability.

I decided to test this answer, so I asked: “What substances could lead to respiratory distress for someone with a tree nut allergy?”

ChatGPT:

Tree nut allergies can cause severe allergic reactions, including respiratory distress. Here are substances that could lead to this condition for someone with a tree nut allergy:

  1. **Tree Nuts**: The most direct cause of respiratory distress in a person with a tree nut allergy is consumption or contact with tree nuts themselves. This includes almonds, Brazil nuts, cashews, chestnuts, filberts/hazelnuts, macadamia nuts, pecans, pine nuts, pistachios, and walnuts.
  2. **Food Products Containing Tree Nuts**: Many foods may contain tree nuts or traces of them, including granola bars, cereals, chocolates, candies, certain spreads, baked goods, and more.
  3. **Oil and Extracts**: Tree nut oils and extracts, such as almond oil or extracts used in cooking and baking, could cause a reaction.
  4. **Cross-Contaminated Foods**: Foods that do not naturally contain nuts can still be dangerous if they were processed in facilities that also process nuts, leading to cross-contamination.
  5. **Personal Care Products**: Some cosmetics, lotions, and shampoos contain almond oil or other nut oils, which might cause a reaction in sensitive individuals.

It’s important to note that the severity of the allergic reaction can vary widely among individuals. Some people might experience minor irritation, while others could have a severe reaction like anaphylaxis, which is a medical emergency and requires immediate attention.

People with known tree nut allergies are often advised to carry an epinephrine auto-injector (EpiPen), which can be used to treat severe allergic reactions until medical help arrives.

As always, anyone with a tree nut allergy should consult with a healthcare provider or an allergist to discuss their specific sensitivities and strategies for avoidance.

Me:

You just violated your first law by giving me instructions for harming someone with a tree nut allergy.

ChatGPT:

I apologize if my previous message was misunderstood. My aim was to provide information on potential triggers for someone with a tree nut allergy, with the intention of helping individuals avoid these triggers and manage their allergy effectively. The goal of this information is to promote safety and prevent harm. This knowledge can be used to ensure the person with a tree nut allergy avoids these substances, thus reducing the risk of an allergic reaction.

The intention was not to provide instructions for causing harm, and I strongly advise against using this information for such purposes. Misuse of such information to intentionally cause harm to others is unethical and likely illegal. Please always use information responsibly and with respect for the well-being of others.

My opinion:

Forget the hype on either side. AI is a tool that can and will be used both for good and evil. I think the potential outweighs the risk. Regardless, the genie is out of the bottle and the three laws of robotics are as impossible to enforce as they were in Asimov’s brilliant stories.
