How AI Chatbots Are Enabling New Forms of Cybercrime

by curvature

Cybersecurity experts have warned that hackers are using artificial intelligence (AI) chatbots to create more convincing phishing scams and steal personal information from unsuspecting victims. According to a report by the National Cyber Security Centre (NCSC), a branch of the UK's intelligence agency GCHQ, AI chatbots can generate accurate and natural-sounding messages that can fool users into clicking on malicious links or revealing sensitive data. The report also said that the adoption of AI tools by cybercriminals is likely to increase the volume and impact of cyberattacks in the coming years.

Phishing scams are one of the most common and effective methods of cybercrime: hackers impersonate legitimate entities such as banks, government agencies, or online services, and lure users into providing their login credentials, financial details, or other personal information. These scams often rely on social engineering techniques, such as creating a sense of urgency, exploiting emotions, or mimicking the tone and style of a trusted organization. Traditional phishing scams, however, can often be spotted by careful users who notice spelling errors, grammatical mistakes, or suspicious URLs in the messages.
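
Some of those telltale URL signs can even be checked mechanically. The short Python sketch below illustrates a few heuristics a cautious user or a basic mail filter might apply; the specific rules, thresholds, and sample URLs are illustrative assumptions, not a production-grade phishing detector.

```python
# Illustrative URL heuristics only -- real phishing detection combines many
# more signals (reputation feeds, lookalike-domain checks, ML classifiers).
import re
from urllib.parse import urlparse

def phishing_signals(url: str) -> list[str]:
    """Return the heuristic red flags triggered by a URL."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    flags = []
    # Raw IP address instead of a domain name
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        flags.append("ip-address host")
    # Punycode (xn--) can disguise lookalike Unicode characters
    if host.startswith("xn--") or ".xn--" in host:
        flags.append("punycode host")
    # An '@' in the network location hides the real destination
    if "@" in parsed.netloc:
        flags.append("'@' in URL")
    # Long subdomain chains often spoof a known brand
    if host.count(".") >= 4:
        flags.append("many subdomains")
    # Credential or payment pages should always be served over HTTPS
    if parsed.scheme != "https":
        flags.append("not HTTPS")
    return flags

if __name__ == "__main__":
    for url in (
        "https://www.example-bank.com/login",
        "http://192.0.2.17/secure/login",
        "https://paypal.com.account-verify.example.net/update",
    ):
        print(url, "->", phishing_signals(url) or "no flags")
```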

AI chatbots, on the other hand, can overcome these limitations by using natural language processing (NLP) and generative AI to produce fluent, error-free text that matches the context and purpose of the scam. For example, attackers can use ChatGPT, OpenAI's conversational AI service built on its GPT family of large language models, to generate realistic and engaging dialogues with potential victims. AI chatbots can also use translation tools, sentiment analysis, and personalization techniques to adapt their messages to the different languages, cultures, and preferences of their targets.

The NCSC report cited several examples of how AI chatbots can be used to amplify existing cyber threats, such as ransomware, malware, denial-of-service attacks, and identity theft. For instance, AI chatbots can be used to:

  • Send personalized and convincing phishing emails to a large number of targets, using information scraped from social media or other sources.
  • Engage in real-time conversations with victims via text or voice, using voice cloning or speech synthesis technologies, and encourage them to perform certain actions, such as downloading a malicious file, visiting a fake website, or making a payment.
  • Create fake profiles on dating apps or social networks, and build trust and rapport with users before asking them for money, gifts, or intimate photos.
  • Impersonate customer service agents or technical support staff, and offer assistance or guidance to users, while secretly installing malware or stealing data from their devices.

The NCSC report also highlighted the ethical and social implications of using AI chatbots for malicious purposes, such as violating privacy, spreading misinformation, manipulating opinions, or causing psychological harm. The report urged the public and private sectors to collaborate on best practices and standards for the responsible and secure use of AI chatbots, and to raise awareness among users about the potential risks and how to protect themselves.
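
One concrete form that kind of user education can take is teaching people to compare where a link says it goes with where it actually goes. The sketch below, a minimal illustration using only the Python standard library, flags links in an HTML email body whose visible text names one domain but whose href points to another; the sample message and the naive domain-matching rule are simplifying assumptions (a robust version would compare registrable domains against a public-suffix list).

```python
# Minimal sketch: flag links whose visible text and real destination disagree.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect (visible text, href) pairs for every <a> tag."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def mismatched_links(html: str) -> list[tuple[str, str]]:
    """Return links whose displayed domain differs from their destination."""
    collector = LinkCollector()
    collector.feed(html)
    flagged = []
    for text, href in collector.links:
        # Only compare when the link text itself looks like a domain or URL
        if " " in text or "." not in text:
            continue
        shown = urlparse(text if "://" in text else "https://" + text).hostname
        actual = urlparse(href).hostname
        # Naive match; a robust check would use a public-suffix list
        if shown and actual and shown != actual and not actual.endswith("." + shown):
            flagged.append((text, href))
    return flagged

body = '<p>Verify now: <a href="https://account-verify.example.net">www.mybank.com</a></p>'
print(mismatched_links(body))
# [('www.mybank.com', 'https://account-verify.example.net')]
```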
