ChatGPT has been temporarily banned in Italy due to privacy concerns and faces a Federal Trade Commission (FTC) complaint in the U.S. that calls for new releases of ChatGPT to be halted.
According to the Associated Press, the Italian Data Protection Authority will maintain the ban “until ChatGPT respects privacy.” The authority cited ChatGPT’s March 20 outage, during which some users’ data was visible to other users, as the reason for the action.
No details were shared about how the ban would be enforced or whether it would affect partners of OpenAI, ChatGPT’s creator, that use the technology, such as Microsoft’s Bing Chat.
In the U.S., OpenAI might have to deal with a bigger problem. The Center for AI and Digital Policy (CAIDP) filed a complaint with the FTC about ChatGPT.
The nonprofit cited the FTC’s declaration that the use of AI should be “transparent, explainable, fair, and empirically sound while fostering accountability.” Transparency is clearly an issue for OpenAI, since the model behind ChatGPT is proprietary, whereas many other models are open-source.
CAIDP’s complaint cites OpenAI’s own report, the GPT-4 System Card, as evidence of the need for regulation: “Nonetheless, these efforts are ‘limited and remain brittle … .’ OpenAI concedes that ‘this points to the need for anticipatory planning and governance.’”
The complaint also highlights OpenAI’s admission in the GPT-4 System Card that the model carries a risk of bias, expressing concern about “harmful stereotypical and demeaning associations for certain marginalized groups.”
CAIDP urges the FTC to “halt further commercial deployment of GPT” and to require the “establishment of independent assessment of GPT products prior to future deployment,” along with other regulatory measures to rein in the rapid, unsupervised spread of potentially dangerous and biased AI.
An FTC complaint might not lead to any action, but given that OpenAI itself anticipates the need for governance, some regulation seems likely and could slow the release of GPT-5.