OpenAI’s video generation model, Sora 2, launched amid concerns about its potential to spread disinformation, particularly during a critical election year. Critics at Public Citizen argue that the release exemplifies a dangerous pattern of products being rushed to market without adequate safeguards. In their view, the ease of creating lifelike deepfakes heightens the risk, since bad actors can manipulate videos and strip out identifying watermarks. Although OpenAI says it has implemented measures to prevent unauthorized depictions of public figures, the model’s capacity to produce harmful deepfakes has not been fully curtailed.
Concerns have also been raised about the lack of effective moderation on Sora 2, particularly in its handling of sensitive content. The absence of guidelines for generating depictions of deceased individuals has produced outputs that blur the lines of ethical use, ranging from harmless parodies to offensively staged scenarios. This range of misuse exposes significant gaps in the platform’s safeguards. Public Citizen also notes that the platform lacks moderation around non-consensual content, a persistent point of ethical debate. As the technology advances, its societal implications grow more pronounced and contentious, prompting calls for stricter oversight and collaboration with outside experts to protect users.
👉 Read the original: CyberScoop