The FTC should investigate OpenAI and block GPT over ‘deceptive’ behavior, AI think tank claims


Washington (CNN) —

An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product, claiming that algorithmic bias, privacy concerns and the technology’s tendency to produce sometimes inaccurate results may violate federal consumer protection law.

The Federal Trade Commission should prohibit OpenAI from releasing future versions of GPT, the Center for AI and Digital Policy (CAIDP) said Thursday in an agency complaint, and establish new regulations for the rapidly growing AI sector.

The complaint seeks to bring the full force of the FTC’s broad consumer protection powers to bear against what CAIDP portrayed as a Wild West of runaway experimentation in which consumers pay for the unintended consequences of AI development. And it could prove to be an early test of the US government’s appetite for directly regulating AI, as tech-skeptic officials such as FTC Chair Lina Khan have warned of the dangers of unchecked data use for commercial purposes and of novel ways that tech companies may try to entrench monopolies.

The FTC declined to comment. OpenAI didn’t immediately respond to a request for comment.

“We believe that the FTC should look closely at OpenAI and GPT-4,” said Marc Rotenberg, CAIDP’s president and a longtime consumer protection advocate on technology issues.

The complaint targets a range of risks associated with generative artificial intelligence, which has captured the world’s attention since OpenAI’s ChatGPT — powered by an earlier version of the GPT product — was first released to the public late last year. Everyday internet users have turned to ChatGPT to write poetry, create software and get answers to questions, all within seconds and with surprising sophistication. Microsoft and Google have both begun to integrate the same type of AI into their search products, with Microsoft’s Bing running on the GPT technology itself.

But the race for dominance in a seemingly new field has also produced unsettling or simply flat-out incorrect results, such as confident claims that Feb. 12, 2023 came before Dec. 16, 2022. In industry parlance, these types of mistakes are known as “AI hallucinations” — and they should be considered legally enforceable violations, CAIDP argued in its complaint.

“Many of the problems associated with GPT-4 are often described as ‘misinformation,’ ‘hallucinations,’ or ‘fabrications.’ But for the purpose of the FTC, these outputs should best be understood as ‘deception,’” the complaint said, referring to the FTC’s broad authority to prosecute unfair or deceptive business acts or practices.

The complaint acknowledges that OpenAI has been upfront about many of the limitations of its algorithms. For example, the white paper linked to GPT’s latest release, GPT-4, explains that the model may “produce content that is nonsensical or untruthful in relation to certain sources.” OpenAI also makes similar disclosures about the possibility that tools like GPT can lead to broad-based discrimination against minorities or other vulnerable groups.

But in addition to arguing that those outcomes themselves may be unfair or deceptive, CAIDP also alleges that OpenAI has violated the FTC’s AI guidelines by trying to offload responsibility for those risks onto its clients who use the technology.

The complaint alleges that OpenAI’s terms require news publishers, banks, hospitals and other institutions that deploy GPT to include a disclaimer about the limitations of artificial intelligence. That does not insulate OpenAI from liability, according to the complaint.

Citing a March FTC advisory on chatbots, CAIDP wrote: “Recently [the] FTC stated that ‘Merely warning your customers about misuse or telling them to make disclosures is hardly sufficient to deter bad actors. Your deterrence measures should be durable, built-in features and not bug corrections or optional features that third parties can undermine via modification or removal.’”

Artificial intelligence also stands to have vast implications for consumer privacy and cybersecurity, CAIDP said — issues that sit squarely within the FTC’s jurisdiction but that the agency has not studied in connection with GPT’s inner workings.