OpenAI: Trust Issues and Scarlett Johansson’s Voice
In this week’s AI roundup, OpenAI finds itself under the microscope. The company recently launched discounted plans for nonprofits and education customers, but its actions have raised questions about trust. Here are the key points:
- Scarlett Johansson’s Voice in ChatGPT: Users noticed a resemblance between one of the voices in OpenAI’s AI-powered chatbot, ChatGPT, and Scarlett Johansson’s, prompting the company to pull that voice. The situation intensified when Johansson herself hired legal counsel to investigate the matter, and OpenAI’s response seemed suspect, especially given the timing of the incident.
- Trust and Safety Practices: OpenAI’s Superalignment team, responsible for governing “superintelligent” AI systems, was disbanded amid the resignation of key members. Safety experts expressed concerns that OpenAI prioritizes commercial products over safety and transparency efforts. In response, OpenAI formed a new safety committee, but staffed it with company insiders rather than independent observers.
- Nonprofit to For-Profit Transition: Rumors are circulating that OpenAI may abandon its nonprofit structure in favor of a traditional for-profit model.

Together, these incidents erode trust in a company with growing influence in the AI landscape.
Read More: Carnegie Mellon University Seeks Associate AI Security Researcher
Read More: AI Training Data Cost: The Price Label Only Big Tech Can Pay for