Mass event will let hackers test limits of AI technology

Soon after ChatGPT was released, hackers wasted no time trying to “jailbreak” the AI chatbot, bypassing its safeguards to make it generate inappropriate or offensive content.

To address this challenge, OpenAI, along with other major AI providers such as Google and Microsoft, is coordinating with the Biden administration to invite thousands of hackers to test the limits of their AI technology.

The objective of this mass hacking initiative is to uncover potential vulnerabilities and risks associated with chatbots, the Associated Press has reported.

Researchers will investigate how chatbots can be manipulated to cause harm, whether they might disclose the private information users share with them, and why they exhibit gender biases, such as assuming doctors are men and nurses are women.

Users of ChatGPT, Microsoft’s Bing chatbot, or Google’s Bard may have noticed the systems’ inclination to fabricate information and present it as factual.

These AI systems, based on large language models, can also perpetuate cultural biases acquired from training on vast amounts of online text.

The idea of a large-scale hacking event caught the attention of U.S. government officials at the South by Southwest festival in March, the AP reported.

Sven Cattell, founder of DEF CON’s AI Village, and Austin Carson, president of the responsible AI nonprofit SeedAI, organized a workshop where community college students were invited to hack an AI model.

From these initial conversations, a proposal emerged to conduct AI language model testing in line with the principles outlined in the White House’s Blueprint for an AI Bill of Rights.

This blueprint aims to mitigate algorithmic bias, empower users with control over their data, and ensure the safe and transparent use of automated systems.