
Cybercrime is getting worse by the day. Cyber gangs are becoming so sophisticated at developing new types of attacks that security experts can’t keep up. Now, (no) thanks to ChatGPT, it’s going to get even worse.

Cybercrime is a huge business. According to Arctic Wolf, it’s a $1.5 trillion – that’s trillion with a T – industry with fraudsters throughout the world running their scams like legitimate organizations. Some cyber gangs offer technical leadership and step-by-step training. The most brazen actors even take out pop-up ads to sell their products. The Mafia or drug cartels have nothing on these guys.

From March 2023 to May 2023 alone, there were about 11.5 cyber-attacks per minute. That represents a 13% increase from the previous reporting period, indicating that attackers are diversifying their tooling in an attempt to bypass security operations (SecOps) measures.

Financial and healthcare services industries are some of the most targeted sectors. In healthcare, the combination of valuable data and critical services presents a lucrative target for cybercriminals, resulting in ransomware gangs directly targeting healthcare organizations and installing information-stealing malware.

Financial cybercriminals target banks, financial institutions, and individuals. Their crimes include identity fraud, ransomware attacks, and email and internet fraud. They try to steal financial account details, credit card numbers, and other kinds of payment information.

Different Kinds of Cybercrime

Panda Security breaks down cybercrime into three categories:

• Property: This is similar to a real-life instance of a criminal illegally possessing an individual’s bank or credit card details. The hacker steals a person’s bank details to gain access to funds, makes purchases online, or runs phishing scams to trick people into giving away their information. They may also use malicious software to gain access to a web page containing confidential information.

• Individual: This category of cybercrime involves one individual distributing malicious or illegal information online. This can include cyberstalking, distributing pornography, human trafficking, fake charities, and romance scams.

• Government: This is the least common type of cybercrime but the most serious. A crime against the government is also known as cyber terrorism. Government cybercrime includes hacking government or military websites and distributing propaganda. These criminals are usually terrorists or hostile foreign governments.

Cybercrime & ChatGPT

Criminals are already using ChatGPT to turn illicit profits. They create convincing phishing emails, scam people into providing personal information or payments, and generate fake news and propaganda to manipulate public opinion.

This is alarming because criminals can find ways to use ChatGPT both for their illegal money-making operations and for laundering their profits. They have become highly skilled ChatGPT operators, and this skill set makes them a formidable enemy for the financial crime fighters working in anti-fraud and AML (anti-money laundering) departments.

Stu Sjouwerman – cybersecurity expert and CEO of KnowBe4 – says phishing will become harder to detect. One of the obvious tell-tale signs of phishing has been poorly worded sentences and grammatical errors, because many crime organizations operate from overseas. But everything changes with ChatGPT. Scammers can now craft emails and messages that are not only grammatically correct but also highly convincing.

Fraud and Social Engineering

ChatGPT’s ability to draft highly realistic text makes it a useful tool for phishing. The ability of large language models (LLMs) to reproduce language patterns can be used to impersonate the style of speech of specific individuals or groups, and this capability can be abused at scale to mislead potential victims into placing their trust in criminal actors.

One example is the city of Hutto, Texas, which recently paid $193,000 to a fake account impersonating a city vendor. The fraud was enabled by ChatGPT.

Ransomware

Check Point Research says that underground marketplaces are buzzing about how ChatGPT can be used to write malware code. One threat actor used ChatGPT to create malware that hunted for common file types, copied them into a random folder, compressed them, and then uploaded the files to a hardcoded FTP (file transfer protocol) server.

Another scammer created an encryption algorithm that encrypted all files in a specified directory. Both these examples illustrate that even those with limited technical experience can use advanced AI to build core elements of ransomware-type programs.

Vulnerable Code and Software

In addition to generating human-like language, ChatGPT is capable of producing code in a number of different programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code.

Hackers often spend a lot of time and energy going through each line of code, trying to understand what each flag, keyword or section does. With ChatGPT, hackers can now take a snippet of code and ask the chatbot to analyze it and provide a high-level summary of what each module does. This technology has the potential to significantly empower attackers and reduce the barrier to entry for other adversaries.
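
To make that workflow concrete, here is a minimal sketch of how a code snippet can be handed to an LLM for a plain-language summary. It assumes the official OpenAI Python SDK and an API key in the environment; the model name and the sample snippet are purely illustrative.

    # Minimal sketch: asking an LLM what a code snippet does.
    # Assumes the OpenAI Python SDK (pip install openai) and an
    # OPENAI_API_KEY environment variable; the model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    snippet = '''
    def rotate(data: bytes, key: int) -> bytes:
        return bytes((b + key) % 256 for b in data)
    '''

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "You are a code reviewer. Summarize, at a high level, "
                        "what the given code does."},
            {"role": "user", "content": snippet},
        ],
    )

    print(response.choices[0].message.content)

The same kind of call is used routinely by defenders to triage unfamiliar scripts, which is exactly why the capability cuts both ways.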

Misinformation/Disinformation

Rogue actors, anti-Western governments, and political opponents are often engaged in systematic campaigns to spread false narratives across multiple accounts and manufacture the appearance of consensus. Twitter has been widely criticized for hosting accounts that post repeated content in suspicious patterns. An AI like ChatGPT, designed to mimic humans, can produce an effectively unlimited supply of content on a wide range of topics. Triggered by certain keywords or controversial topics, the chatbot can post to an unlimited number of social accounts. These bots are indistinguishable from human users, elevating disinformation to a whole new level.

As ChatGPT gets more sophisticated, so do the ways cybercriminals exploit it. If you’re looking for the best solution for your internet security needs, click here.