The U.S. Federal Trade Commission has launched an investigation into ChatGPT creator OpenAI and whether the artificial intelligence company violated consumer protection laws by scraping public data and publishing false information through its chatbot.
The agency sent OpenAI a 20-page letter requesting detailed information on its AI technology, products, customers, privacy safeguards and data security arrangements.
An FTC spokesperson had no comment on the investigation, which was first reported by The Washington Post on Thursday.
The FTC document the Post published told OpenAI that the agency was investigating whether it has “engaged in unfair or deceptive privacy or data security practices” or practices harming consumers.
OpenAI CEO Sam Altman tweeted disappointment that the investigation was disclosed in a “leak,” noting that would “not help build trust,” but added that the company will work with the FTC.
“It’s super important to us that our technology is safe and pro-consumer, and we are confident we follow the law,” he wrote. “We protect user privacy and design our systems to learn about the world, not private individuals.”
OpenAI has faced scrutiny elsewhere. Italian regulators temporarily blocked ChatGPT over privacy concerns, and privacy watchdogs in France, Spain, Ireland and Canada are also paying closer attention, with some launching investigations after receiving complaints.
The FTC’s move is a serious regulatory threat to the nascent but fast-growing AI industry, although it’s not the only challenge facing these companies.
Comedian Sarah Silverman and two other authors have sued both OpenAI and Facebook parent Meta for copyright infringement, claiming that the companies’ AI systems were illegally “trained” by exposing them to datasets containing illegal copies of their works.
On Thursday, OpenAI and The Associated Press announced a deal under which the AI company will license AP’s archive of news stories.
Altman has emerged as a global AI ambassador of sorts following his testimony before Congress in May and a subsequent worldwide tour, including to Europe, where officials are putting the final touches on the world’s first comprehensive rules for AI.
The regulations will focus on risky uses such as predictive policing and social scoring and include provisions for generative AI to disclose any copyright material used to train its algorithms.
Altman himself has called for AI regulation, although he has tended to emphasize difficult-to-evaluate existential threats such as the possibility that superintelligent AI systems could one day turn against humanity.
Some critics argue that focusing on a far-off “science fiction trope” of superpowerful AI could make it harder to take action against harms that already exist and that require regulators to dig deep into data transparency, discriminatory behavior and the potential for trickery and disinformation.
“It’s the fear of these systems and our lack of understanding of them that is making everyone have a collective freak-out,” Suresh Venkatasubramanian, a Brown University computer scientist and former assistant director for science and justice at the White House Office of Science and Technology Policy, told the AP in May. “This fear, which is very unfounded, is a distraction from all the concerns we’re dealing with right now.”
News of the FTC’s OpenAI investigation broke just hours after a combative House Judiciary Committee hearing in which FTC Chair Lina Khan faced off against Republican lawmakers, who said she has been too aggressive in pursuing technology companies over allegations of wrongdoing.
Republicans accused her of harassing Twitter since its acquisition by Elon Musk, arbitrarily suing large tech companies and declining to recuse herself from certain cases. Khan pushed back, arguing that more regulation is necessary as the companies have grown and that tech conglomeration could hurt the economy and consumers.