The U.S. Federal Trade Commission (FTC) has opened an inquiry into OpenAI, the company behind the artificial intelligence (AI) chatbot ChatGPT. The investigation examines whether the chatbot may have harmed consumers by collecting personal data and generating false information about individuals.
The FTC’s examination focuses primarily on whether OpenAI has engaged in deceptive or unfair practices related to privacy, data security, or potential consumer harm. The investigation was first reported by The Washington Post and later confirmed by a person familiar with the matter.
The FTC’s investigation marks the first significant U.S. regulatory challenge to OpenAI, a leading AI company. The probe signals that regulatory scrutiny will likely intensify as AI technologies become more prevalent among individuals, corporations, and governments. Concerns about the potential misuse of chatbots are increasing as the technology evolves rapidly.
OpenAI’s CEO, Sam Altman, has previously expressed the need for regulation within the growing AI sector. In May, Altman testified before Congress to advocate for AI-related legislation. He has interacted with hundreds of lawmakers to promote a policy agenda for the technology, emphasising OpenAI’s commitment to safety and compliance with laws.
OpenAI has also faced regulatory pressure abroad. In March, Italy’s data protection authority temporarily banned ChatGPT, accusing OpenAI of unlawfully collecting users’ personal data and lacking an age-verification system to protect minors from inappropriate content. The service was restored in Italy a month later, after OpenAI made the required changes.
The FTC is moving swiftly on AI, launching its investigation less than a year after OpenAI introduced ChatGPT. FTC Chair Lina Khan has advocated for regulating emerging technologies early, before corporate missteps cause large-scale public harm, pointing to episodes such as Meta’s 2018 privacy breach.
At a House Judiciary Committee hearing, Khan argued that there is a lack of controls on the type of data being fed into AI systems like ChatGPT, which may inadvertently lead to the exposure of sensitive information. She reiterated that novel technologies like AI are not exempt from existing laws and regulations.
The investigation may compel OpenAI to disclose its methodology for developing ChatGPT and the data sources used to build its AI systems.
ChatGPT is built on a neural network, an AI system that learns by analysing large amounts of data, identifying patterns, and applying those patterns to perform tasks. However, such systems can sometimes repeat erroneous information or combine facts incorrectly, producing misinformation.
In March, the Center for AI and Digital Policy, an advocacy group focused on ethical technology use, requested the FTC halt OpenAI from releasing commercial versions of ChatGPT due to concerns about bias, disinformation, and security.
The FTC’s investigation into OpenAI may last several months, and it remains to be seen what action, if any, the agency will take. FTC investigations are typically confidential and often involve interviews with top corporate executives. A former staff member said the technical expertise needed to evaluate OpenAI’s responses could pose a challenge for the agency.