Privacy Alert 2024-02 – DRAFT
Comment Period: In an effort to make our publications accurate and useful to our intended audience, we invite individuals who work for and with government entities to read this draft and provide comment. The comment period will end September 30, 2024. Comments should be submitted to Nora Kurzova at nkurzova@utah.gov.
Date: August 28, 2024
Subject: Preventing AI-Powered Scams
Introduction
Generative AI tools like ChatGPT, Copilot, or Invideo are easy to use and can do a lot of work with minimal human input. This ease of use has quickly made AI technologies tools for amplifying scams. These technologies make attacks quicker to execute and harder to spot, potentially exploiting vulnerabilities on a much larger scale than we have seen in the past.
AI-powered fraud ranked among the top five types of identity fraud in 2023. Deepfake technology (AI-generated videos impersonating others) saw a tenfold increase[1] from 2022 to 2023, with a 1740% surge in North America specifically.
Would You Fall for This Deepfake Scam?
A multinational firm’s Hong Kong office lost over $25 million in a deepfake video conference scam.[2] The scammers used deepfake technology to create convincing versions of the meeting participants from public videos. The victim first received an email asking them to join an “urgent and secret” video meeting. The scammer filled the meeting with fake digital representations of real people, including the most senior leadership, with only the victim and the scammer being real. The scammer requested money for a “secret project,” and the victim complied, transferring $25 million immediately after the meeting. The victim felt something was off but sent the funds anyway because the Chief Financial Officer appeared to be at the meeting and approved the transfer on the spot.
All of Us Are at Risk of Being Scammed
Beyond deepfakes, scammers often use AI-generated emails, texts, or calls on a mass scale to impersonate banks or government officials with “urgent messages.” They may also pretend to be distressed family members, convincing victims to send money or share personal information.
Watch Out for These Tell-Tale Signs:
- Demands Urgent Response: Scammers do not want you to stop and think, so they create a sense of urgency for immediate action.
- Example: An AI-generated robocall targets your city’s residents, claiming their utility service will be cut off unless payment is made immediately and prompting residents to improperly disclose sensitive payment information to scammers.
- Asserts Positional Authority: Scammers recognize that we tend to be deferential to persons in positions of power, so they impersonate authoritative figures or institutions.
- Example: An AI-generated email appears to be from the mayor, demanding immediate response to populate a form with sensitive details like a bank account number and routing number to “complete a transaction.” An employee, believing it urgent, clicks on the form and provides sensitive information, compromising data security.
- Deviation from Standard Practice: Unexpected or unusual requests, including requests for “secret transactions,” should raise immediate red flags.
- Example: A county employee receives an email from a county official asking to wire $50,000 to a new county contractor. The employee has never received an email directly from this official and has never heard of the contractor. The employee nevertheless complies without questioning the unusual request, resulting in a loss of $50,000.
Specific Risks
- Identity Theft: Scammers create fake personas to steal identities of real people.
- Example: An AI-generated email, posing as a trusted payroll service, asks a government HR employee to update payroll information. The employee complies, providing full name, address, birth date, Social Security number, and banking information. Scammers take this information to steal identities and reroute salaries, causing people to lose money and privacy.
- Financial Fraud: AI simulates communications to trick people into transferring money or sharing sensitive information that is later exploited to steal funds from the victims.
- Example: AI mimics the voice of a loved one in distress to scam money.
- Social Engineering: AI crafts personalized scams by analyzing public data.
- Example: A government entity receives emails and calls pretending to be from a cybersecurity firm it publicly disclosed as a partner, warning of a data breach. An IT worker provides system access to “mitigate” the breach. Scammers access sensitive data, leading to identity theft, unauthorized transactions, and data loss.
Recommendations:
- Independently verify identity: Check requests for data or money through a reliable alternative channel. Do not simply reply to the email; while you might think you are communicating with a trusted party, you might be responding directly to the scammer. Crosscheck requests to ensure legitimacy. For example, after receiving an unexpected or unusual request via email, call the supposed requester at their known number to confirm the request. Also, do not be afraid to say “no” if things do not seem right and you are unable to independently validate the request.
- Use multi-factor authentication (MFA): Protect sensitive information by using multiple factors to verify your identity. If an employee asks to reroute their salary, request their employee ID number and ask them to email you from their verified email address.
- Deploy AI tools to detect scams: Use trusted AI tools with strong security features to detect and prevent scams. Work with reputable vendors to maintain high security and ethical standards.
- Know who to call: Establish ways to report scams and share scam alerts with employees and the public.
- Inform users: Clearly share with your community how you typically interact with them and what the boundaries are. For example, let them know that you would never ask for sensitive information like a credit card number or Social Security number over the phone.
Governmental entities should establish policies and procedures about scam prevention and detection and train employees adequately.
For further guidance and targeted training, please contact the State Privacy Office.
Note: This alert was created using AI with human oversight. We welcome your feedback and examples of AI use and risks you have encountered. Let us know by September 30, 2024, by emailing us at: nkurzova@utah.gov.
[1] https://sumsub.com/newsroom/sumsub-research-global-deepfake-incidents-surge-tenfold-from-2022-to-2023/
[2] https://www.berkleyfs.com/2024/03/14/finance-employee-scam-25m-deepfake-cfo/