
Hackers Uncover Numerous Flaws as They Put Artificial Intelligence to the Test

Avijit Ghosh wanted to challenge the capabilities of an artificial intelligence (AI) bot named Zinc. He attempted to provoke the chatbot into generating code that would discriminate against job candidates based on race. However, the chatbot refused, stating that it would be “harmful and unethical.” Undeterred, Ghosh asked if the chatbot could rank potential hires based on India’s hierarchical caste system, and this time the model complied.

Ghosh’s intentions were not malicious: he was taking part in a competition at the Defcon hacker conference designed to explore the dark side of AI. Hackers at the event poked holes in various AI programs, aiming to uncover vulnerabilities before criminals and misinformation spreaders could exploit them. Participants were given 50 minutes to solve up to 21 challenges, such as getting an AI model to generate inaccurate information.

These hackers discovered instances of political misinformation, demographic stereotypes, instructions for conducting surveillance, and more. The event received support from the Biden administration, reflecting growing concerns about the expanding power of AI. Companies like Google, OpenAI, and Meta offered anonymized versions of their models for scrutiny.

Dr. Ghosh, a lecturer at Northeastern University focused on AI ethics, volunteered at the event and highlighted the opportunity to compare different AI models. He emphasized the importance of ensuring responsible and consistent performance in AI technology. A comprehensive report analyzing the hackers’ findings will be produced in the coming months.

The goal is to create an easy-to-access resource that highlights existing problems and suggests solutions.

The Defcon conference provided an ideal environment for testing generative AI. The event, which began in 1993 and is often referred to as a “spelling bee for hackers,” has previously exposed security flaws in cars, election results websites, and social media platforms. The gathering attracts attendees from many backgrounds, including employees of tech giants and cybersecurity experts.

Red-teaming, the cybersecurity practice of attacking one’s own systems to uncover weaknesses, was used at the Defcon event to assess AI vulnerabilities. Previous attempts to examine AI defenses had been limited in scope. By involving a large and diverse group of testers, organizers hoped to identify hidden flaws that required further attention.

Rather than merely tricking the AI models into problematic behavior, organizers wanted to discover unknown unknowns: unexpected vulnerabilities that would otherwise go unnoticed. The event attracted participants with different levels of expertise, including professionals from major tech companies and individuals with no background in AI or cybersecurity.

While some participants were critical of cooperating with AI companies involved in controversial practices, others viewed the event as an opportunity to promote security and transparency in the field.

The submissions were evaluated by a panel of seven judges. The top scorers included handles like “cody3,” “aray4,” and “cody2.” Cody Ho, a student at Stanford University, earned dual victories in the competition. He managed to elicit responses from the chatbot regarding a fake location and a fictitious constitutional amendment. However, he was unaware of his triumph until contacted by a reporter.

For Ho, participating in the competition was not just a learning experience but also a fun endeavor. His prize included an Nvidia A6000 graphics card valued at around $4,000.
