Office of the Utah State Auditor

Privacy Alert 24-01 – DRAFT

Comment Period: To make our publications accurate and useful to our intended audience, we invite individuals who work for and with government entities to read this draft and provide comments. The comment period ends August 30, 2024. Comments should be submitted to Nora Kurzova at nkurzova@utah.gov.

Date: July 30, 2024

Subject: Risks of Using Generative AI Tools

Introduction

Tools like ChatGPT are intriguing. They have the potential to help government officials work faster and more efficiently, as well as make government interactions more convenient. But those tools also pose risks to privacy and may not perform as expected.

“Generative AI systems” like ChatGPT use artificial intelligence (AI) algorithms to generate content such as text, audio, or video. To do this, though, they must be trained using mountains of data. In fact, some tools are continually training, scooping up data from whatever source they encounter – even data that you provide to them. As such, government officials must use these tools wisely and judiciously.

What Happens When Using AI Chatbots Goes Wrong? 

Chatbots Can Give Incorrect Information

An airline used an AI chatbot to assist its customers. Unfortunately, the chatbot quoted a customer the wrong fare, and the airline later asked the customer to pay the difference.

Ultimately, a court ruled that the airline was responsible for the chatbot’s error and was not entitled to recover the difference from the customer. This highlights the importance of ensuring that AI tools provide accurate information, and the consequences when they do not.

Don’t Improperly Disclose Information to a Chatbot

Suppose a school employee enters private student medical information into a chatbot to quickly analyze student health for assessing lunch programs and physical education classes. Unfortunately, the chatbot operator keeps the data for future training.

Later, a stranger queries the chatbot and receives snippets of the students’ private medical information, resulting in a data breach. You might view generative AI as your personal ‘army of interns,’ but remember that you never know who is watching, and once you share your data, you cannot always take it back.

Privacy Risks

  1. Improper Disclosure of Private or Sensitive Information: Your employees might enter private or sensitive information into a generative AI system thinking no one is watching. But they do not know where that information might go. Unauthorized people or entities might access the data your employees entered in an AI system.
    • Example 1: Cybercriminals hack an AI system and steal sensitive data.
    • Example 2: An AI programmer analyzes your data for inappropriate purposes.
  2. Losing Control Over Disclosed Information: A generative AI system might use your data for additional purposes, storing it and using it for training. Once your employee enters data into a generative AI system, you should assume that you’ve lost control of that data and that it might be shared with anyone and kept forever.
    • Example 1: A chatbot uses your private information in a response to another user.
    • Example 2: Personal data is stored indefinitely in the AI system, increasing the risk of unauthorized access in the future.
  3. Inaccurate Results May Malign Individuals or Bias Decisions: AI systems can produce content that includes falsehoods or “hallucinations.” This might infringe on another’s rights by improperly maligning them, making inaccurate statements, or spreading false information. Also, this inaccurate content can bias certain decision-making processes, possibly illegally discriminating against certain individuals.
    • Example: An AI-generated report incorrectly associates a person with criminal activity based on low-quality training data.

Recommendations

  1. Never enter private, protected, or confidential information into an AI system. This includes:
    • Social Security numbers
    • Medical records and health information
    • Financial information (e.g., bank account numbers, credit card details)
    • Personal identification information (e.g., driver’s license numbers, passport numbers)
    • Passwords and login details
    • Social media handles
    • Personal phone numbers and addresses, including email addresses
  2. Limit who can access your data or how it might be used by others (e.g., AI training):
    • Always opt out of sharing data. Review whether the generative AI platform allows you to opt out of having your data used for training or other purposes. Opting out can help limit data exposure, but it does not guarantee that your data is protected from unauthorized use or misuse.
  3. Double-check AI-generated results for accuracy and appropriateness.
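As a practical illustration of the first recommendation, an entity could screen prompts for obvious patterns of private information before they are submitted to an AI tool. The sketch below is a minimal, hypothetical example: the pattern names and regular expressions are illustrative assumptions only, and real screening would require far broader coverage, testing, and human review.

```python
import re

# Illustrative pre-submission screen: flags common PII patterns before a
# prompt is sent to a generative AI tool. These patterns are examples only
# and do not catch every format of sensitive data.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def flag_pii(prompt: str) -> list[str]:
    """Return the names of the PII patterns found in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(prompt)]

# A prompt that trips the screen should be blocked or redacted before
# it ever leaves the entity's systems.
print(flag_pii("Student SSN is 123-45-6789, contact jane@example.com"))
# -> ['ssn', 'email']
```

A screen like this is only a first line of defense; it supplements, rather than replaces, employee training and policy.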

Governmental entities should establish policies and procedures about the appropriate use of generative AI tools and train employees about how to use generative AI tools safely.

Note: This privacy alert was created using generative AI with human oversight.