Categories
Full Text Articles - Audio Posts

AI hallucinations can pose a risk to your cybersecurity


In early 2023, Google’s Bard made headlines for a pretty big mistake, which we now call an AI hallucination. During a demo, the chatbot was asked, “What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?” Bard answered that JWST, which launched in December 2021, took the “very first pictures” of an exoplanet outside our solar system. However, the European Southern Observatory’s Very Large Telescope took the first picture of an exoplanet in 2004.

What is an AI hallucination?

Simply put, an AI hallucination is when a large language model (LLM), such as a generative AI tool, provides an answer that is incorrect. Sometimes, this means that the answer is totally fabricated, such as making up a research paper that doesn’t exist. Other times, it’s the wrong answer, such as with the Bard debacle.

Reasons for hallucination are varied, but the biggest one is that the data the model uses for training is incorrect — AI is only as accurate as the information it ingests. Input bias is also a top cause. If the data used for training contains biases, then the LLM will find patterns that are actually not there, which leads to incorrect results.

With businesses and consumers increasingly turning to AI for automation and decision-making, especially in key areas like healthcare and finance, the potential for errors poses a big risk. According to Gartner, AI hallucination compromises both decision-making and brand reputation. AI hallucinations also spread misinformation. Moreover, each hallucination erodes trust in AI results, which has widespread consequences as businesses increasingly adopt this technology.

While it’s tempting to place blind trust in AI, a balanced approach is important. By taking precautions to reduce AI hallucinations, organizations can weigh the benefits of AI against its potential complications, including hallucinations.


Organizations increasingly using generative AI for cybersecurity

While the discussion about AI hallucinations often focuses on software development, the issue increasingly affects cybersecurity as well, because organizations are starting to use generative AI for security purposes.

Many cybersecurity professionals turn to generative AI for threat hunting. While AI-powered security information and event management (SIEM) improves response management, generative AI adds natural language search, letting analysts query a chatbot in plain language to spot threats faster. Once a threat is detected, cybersecurity professionals can ask generative AI to create a playbook tailored to that specific threat. Because generative AI draws on its training data to create the output, analysts have access to the latest information to respond to a specific threat with the best action.
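
To make the idea concrete, here is a minimal sketch of a natural-language threat-hunting query. It assumes the OpenAI Python SDK (v1+) purely as an example client; any LLM API would work, and the log lines, model name and prompt are illustrative rather than any specific product’s interface. Requiring the model to quote the exact log lines it relies on is one simple guard against hallucinated findings.

```python
# Hypothetical natural-language threat hunt over a handful of auth log lines.
# Assumes: `pip install openai` and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

log_lines = [
    "2024-10-23T03:12:44Z sshd[811]: Failed password for root from 203.0.113.57 port 51522",
    "2024-10-23T03:12:46Z sshd[811]: Failed password for root from 203.0.113.57 port 51530",
    "2024-10-23T09:02:10Z sshd[954]: Accepted publickey for deploy from 198.51.100.23",
]

question = "Do these SSH logs show signs of a brute-force attempt? If so, which lines?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a threat-hunting assistant. Answer only from the log lines "
                "provided and quote each line you rely on verbatim. If the logs do "
                "not support a conclusion, say so instead of guessing."
            ),
        },
        {"role": "user", "content": question + "\n\nLogs:\n" + "\n".join(log_lines)},
    ],
)

print(response.choices[0].message.content)
```

The same pattern extends to playbook generation: feed a confirmed finding back in and ask for response steps, with an analyst reviewing them before anything is executed.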

Training is another common use for generative AI in cybersecurity. By using generative AI, cybersecurity professionals can turn real-time data and current threats into realistic scenarios. Through these simulations, cybersecurity teams get real-world experience and practice that was previously hard to come by. Because they can practice on threats similar to those they may encounter that day or week, professionals train on current threats rather than past ones.
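
As a sketch of how that scenario generation might look, the snippet below asks a model to turn a short threat-intelligence summary into a tabletop exercise. It reuses the same illustrative OpenAI client as above; the intel summary and prompt wording are made up for the example. Grounding the prompt in a real, vetted intel report is what keeps the generated scenario tied to current threats.

```python
# Hypothetical tabletop-exercise generator grounded in a supplied intel summary.
from openai import OpenAI

client = OpenAI()

# In practice this would come from a vetted threat-intelligence feed or report.
intel_summary = (
    "Recent campaigns deliver a loader via HR-themed phishing emails with ISO "
    "attachments, then use scheduled tasks for persistence and exfiltrate data "
    "over HTTPS to newly registered domains."
)

prompt = (
    "Using only the threat summary below, write a one-hour tabletop exercise for a "
    "SOC team: an opening scenario, three escalating injects, and the decisions each "
    "inject should force. Do not invent indicators that are not implied by the "
    "summary.\n\n"
    f"Threat summary:\n{intel_summary}"
)

exercise = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)

print(exercise.choices[0].message.content)
```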

How AI hallucinations affect cybersecurity

One of the biggest issues with AI hallucinations in cybersecurity is that the error can cause an organization to overlook a potential threat. For example, the AI tool may miss a potential threat that ends up causing a cyberattack. Often, this is due to bias in the model that happens through biased training data, which causes the tool to overlook a pattern that ends up affecting the results.

On the flip side, an AI hallucination may create a false alarm. If the generative AI tool fabricates a threat or falsely identifies a vulnerability, the organization focuses its resources on addressing the false threat, which means that a real attack may be overlooked. Additionally, each time the AI tool produces inaccurate results, employees’ confidence in the tool drops, making them less likely to turn to AI or trust its results in the future.

Similarly, a hallucination can provide inaccurate recommendations that prolong detection or recovery. For example, a generative AI tool may accurately spot suspicious activity but provide inaccurate information on the next step or system recommendations. Because the IT team takes the wrong steps, the cyberattack is not stopped and the threat actors gain access.

Reducing the impact of AI hallucinations on cybersecurity

By understanding and anticipating AI hallucinations, organizations can take proactive steps to reduce both their occurrence and their impact.

Here are three tips:

  1. Train employees on prompt engineering. With generative AI, the quality of the results depends greatly on the specific prompts used for the requests. However, many employees write prompts without formal training or knowledge of how to give the model the right information. Organizations that train their IT teams to use specific and clear prompts can improve the results and possibly reduce AI hallucinations.
  2. Focus on data cleanliness. AI hallucinations often happen when a model is trained on poisoned data, meaning data containing errors or inaccuracies. For example, a model trained on data that includes cybersecurity threats later found to be false reports may identify a threat that is not real. By ensuring, as much as possible, that the model uses clean data, your organization can eliminate some AI hallucinations.
  3. Incorporate fact-checking into your process. At the current maturity level of generative AI tools, hallucinations are likely part of the process, and organizations should assume that errors or inaccurate information may be returned. By designing a fact-checking process to make sure that all information returned is accurate before employees take action (see the sketch after this list), organizations can reduce the impact of hallucinations on the business.
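
One lightweight way to automate part of that fact-checking is to verify that every indicator the model reports actually appears in the data it was given before anyone acts on it. The sketch below is an assumed, generic check rather than any specific product feature; the `raw_logs` and `model_output` names and the IPv4-only pattern are illustrative.

```python
import re

# Hypothetical guardrail: reject model findings whose indicators of compromise
# (here, just IPv4 addresses) never appear in the source logs.
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def unverified_indicators(model_output: str, raw_logs: str) -> list[str]:
    """Return IP addresses the model mentioned that are absent from the logs."""
    claimed = set(IPV4.findall(model_output))
    observed = set(IPV4.findall(raw_logs))
    return sorted(claimed - observed)

# Example: the model "hallucinates" 203.0.113.99, which is not in the logs.
raw_logs = "Failed password for root from 203.0.113.57 port 51522"
model_output = "Block 203.0.113.57 and 203.0.113.99; both show brute-force activity."

suspect = unverified_indicators(model_output, raw_logs)
if suspect:
    print("Hold for analyst review, unverified indicators:", suspect)
else:
    print("All reported indicators appear in the source logs.")
```

A check like this does not catch every hallucination, but it cheaply flags fabricated indicators before a team spends time chasing a false alarm.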

Leveling the cyber playing field

Many ransomware gangs and cyber criminals are using generative AI to find vulnerabilities and create attacks. Organizations that use these same tools to fight cyber crime can put themselves on a more level playing field. By also taking proactive measures to prevent and reduce the impact of AI hallucinations, businesses can more successfully use generative AI to help their cybersecurity team better protect data and infrastructure.

The post AI hallucinations can pose a risk to your cybersecurity appeared first on Security Intelligence.


Categories
Newscasts

AP Headline News – Oct 23 2024 09:00 (EDT)

Categories
Newscasts

9 AM ET: Harris town hall, American Airlines fined, North Korean troops in Russia & more


Vice President Kamala Harris is getting ready for a town hall hosted by CNN, while former President Donald Trump is having issues with the UK’s government. More than 20 million people have already voted in the election. We’ll tell you why the federal government has fined American Airlines $50 million. The Defense Secretary says North Korean troops are in Russia, but there are still some big questions. Plus, Diwali is becoming an official holiday in one US state.

Categories
Newscasts

More Than 19 Million Americans have Already Voted


8AM ET 10/23/2024 Newscast

Categories
Newscasts

AP Headline News – Oct 23 2024 08:00 (EDT)

Categories
Newscasts

The latest international headlines


AP correspondent Charles de Ledesma reports on the Secretary of State’s Mideast trip; South Korean worries about a North Korean troop build up; and a likely political bestseller hits bookstores.

Categories
Newscasts

More Republicans are voting early, helping break records.


More Republicans are voting early, helping break records. AP correspondent Donna Warder reports.

Categories
Newscasts

Hong Kong bars services like WhatsApp and Google Drive from government computers


AP correspondent Charles de Ledesma reports Hong Kong is barring some large social platforms from government computers.

Categories
Newscasts

Starbucks reports weak quarterly results despite the arrival of Pumpkin Spice Latte season


Starbucks reports a disappointing fourth quarter. AP correspondent Jennifer King reports. (A full results report and conference call with investors is planned for Oct. 30, 2024.)
