As seen in Providence Business News
When it comes to artificial intelligence (AI), it is easy to focus solely on the benefits. Its advantages include insights that expedite decision-making, fewer errors and greater efficiency in manual and repetitive processes, near-instant generation of complex programming code and written content, and tireless 24/7 support for products and services.
Lost in the often-glowing media coverage, however, is the fact that artificial intelligence can be exploited by cybercriminals for a multitude of insidious purposes. We are only beginning to see what AI can do in the hands of users with ill intent. Here are some of the more prevalent ways AI can empower cybercriminals and endanger our digital lives:
- Instant generation of sophisticated social engineering attacks. Chatbots such as ChatGPT are especially helpful to attackers who do not write or speak fluent English, enabling them to compose natural-sounding spear phishing emails. With AI crafting their communications, cybercriminals no longer need to worry about the poor spelling and grammar that have traditionally been the hallmarks of phishing attacks.
- Best Practice: Keep up to speed with the latest social engineering tactics through cybersecurity awareness training and spear phishing simulations.
- New strains of malware can be generated with minimal effort and coding skill. According to security experts, examples of polymorphic malware generated through artificial intelligence chatbots have already been observed in the wild. Cybercriminals who use AI to help program their malware increase their odds of circumventing antivirus and other endpoint detection tools. A recently discovered hacking application called WormGPT is an AI tool that can produce malware written in Python while also offering advice on crafting sophisticated cyberattacks.
- Best Practice: Use a strong endpoint protection solution and ensure that all of your applications and hardware devices are kept current with security patches.
- Criminals have set up fake websites that appear to host legitimate AI tools. With interest in artificial intelligence at a fever pitch, many individuals are eager to take the latest AI chatbot for a test drive. Criminals are capitalizing on that enthusiasm by running spurious ads for AI tools on social media and search engines, luring users into a trap instead of directing them to legitimate AI sites. A user may believe they have arrived at a safe website when it is actually a conduit for downloading malware onto their device. Once the malware has been installed, criminals can harvest the victim’s passwords or steal information to sell to markets and hackers on the dark web.
- Best Practice: Use extreme caution when clicking on any links, and verify the legitimate address of any website before you visit it.
- There are limited safety mechanisms in place to prevent the upload of sensitive information to AI tools. If users submit sensitive information to a tool, that information is saved and at risk of being exposed by a future bug or breach. This risk can be especially dangerous (e.g., the compromise of client data) for businesses that allow employees to use AI tools without providing them with best practices.
- Best Practice: Every business should provide training and establish policies for users authorized to access AI tools, and should consider blocking access for employees who have not received approval.
- Artificial intelligence has supercharged the deepfake capabilities used by cybercriminals. A deepfake, also known as synthetic media, is a digital alteration of a person’s voice or likeness, often created with the goal of deceiving or misleading people. AI has made deepfakes exceptionally easy to create and virtually indistinguishable from reality, requiring only a single photograph or a few seconds of audio. Cybercriminals can use deepfakes for a variety of malicious purposes, including simulating the voice of a supposedly kidnapped relative pleading for ransom money, impersonating a CEO calling to demand a wire transfer, creating incriminating photos for blackmail, and posting fake news to damage a company’s stock price or reputation.
- Best Practice: While spotting deepfakes is possible (e.g., by looking for inconsistencies in facial proportions or uneven audio and video quality), it is exceptionally difficult to prevent them if your image or voice can be found on the internet. Until tougher legislation is enacted (only a handful of states have deepfake laws in place), responding to a deepfake may entail contacting the website administrator or law enforcement, as well as enlisting the aid of an attorney to pursue civil or criminal options.
- Chatbots are susceptible to hackers, so be aware of what information you provide on these sites. In May 2023, OpenAI, the creator of the AI-powered chatbot ChatGPT, confirmed that it may have experienced a data leak due to a bug in the chatbot’s source code. Beyond the usual threat of identity theft when a data breach occurs, chatbots pose an additional risk related to the questions you have asked. While it may be tempting to have a chatbot author an article or answer a sensitive question for you, consider the repercussions if your chat history were exposed to the public after a data breach.
- Best Practice: Be very cautious when sharing sensitive information, and ask yourself what the fallout would be if your chatbot history were available for public consumption.
While the power of artificial intelligence can be harnessed for an astonishing number of beneficial purposes, this technology is not without risk. Ironically, we may be rapidly approaching the day when “good” AI will be required to combat the risks of “evil” AI, a situation that is rife with concerning questions that even a chatbot would be unable to answer. For more information on the risks of artificial intelligence and keeping yourself safe and secure, contact Kevin Ricci at kricci@citrincooperman.com.