Several current and former OpenAI researchers are speaking out over the company’s first foray into social media: the Sora app, a TikTok-style feed filled with AI-generated videos and a lot of Sam Altman deepfakes. The researchers, airing their grievances on X, seem torn over how the launch fits into OpenAI’s nonprofit mission to develop advanced AI that benefits humanity.
“AI-based feeds are scary,” said OpenAI pretraining researcher John Hallman in a post on X. “I won’t deny that I felt some concern when I first learned we were releasing Sora 2. That said, I think the team did the absolute best job they possibly could in designing a positive experience […] We’re going to do our best to make sure AI helps and does not hurt humanity.”
Boaz Barak, another OpenAI researcher and Harvard professor, replied: “I share a similar mix of worry and excitement. Sora 2 is technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes.”
Former OpenAI researcher Rohan Pandey used the moment to plug a new startup, Periodic Labs, which is made up of former AI lab researchers trying to build AI systems for scientific discovery: “If you don’t want to build the infinite AI TikTok slop machine but want to develop AI that accelerates fundamental science […] come join us at Periodic Labs.”
There were many other posts along the same lines.
The Sora launch highlights a core tension for OpenAI that flares up time and time again. It’s the fastest-growing consumer tech company on Earth, but also a frontier AI lab with a lofty nonprofit charter. Some former OpenAI employees I’ve spoken to argue the consumer business can, in theory, serve the mission: ChatGPT helps fund AI research and distribute the technology widely.
OpenAI CEO Sam Altman said as much in a post on X Wednesday, addressing why the company is allocating so much capital and computing power to an AI social media app:
“We do mostly need the capital for build [sic] AI that can do science, and for sure we are focused on AGI with almost all of our research effort,” said Altman. “It is also nice to show people cool new tech/products along the way, make them smile, and hopefully make some money given all that compute need.”
“When we launched chatgpt there was a lot of ‘who needs this and where is AGI’,” Altman continued. “[R]eality is nuanced when it comes to optimal trajectories for a company.”
But at what point does OpenAI’s consumer business overtake its nonprofit mission? In other words, when does OpenAI say no to a money-making, platform-growing opportunity because it’s at odds with the mission?
The question looms as regulators scrutinize OpenAI’s for-profit transition, which OpenAI needs to complete to raise additional capital and eventually go public. California Attorney General Rob Bonta said last month that he is “particularly concerned with ensuring that the stated safety mission of OpenAI as a nonprofit remains front and center” in the restructuring.
Skeptics have dismissed OpenAI’s mission as a branding tool to lure talent from Big Tech. But many insiders at OpenAI insist it’s central to why they joined the company in the first place.
For now, Sora’s footprint is small; the app is one day old. But its debut marks a significant expansion of OpenAI’s consumer business, and exposes the company to incentives that have plagued social media apps for decades.
Unlike ChatGPT, which is optimized for usefulness, OpenAI says Sora is built for fun — a place to generate and share AI clips. The feed feels closer to TikTok or Instagram Reels, platforms that are infamous for their addictive loops.
OpenAI insists it wants to avoid those pitfalls, claiming in the blog post announcing the Sora launch that “concerns about doomscrolling, addiction, isolation, and RL-sloptimized feeds are top of mind.” The company explicitly says it’s not optimizing for time spent in the feed and instead wants to maximize creation. OpenAI says it will send reminders to users when they’ve been scrolling for too long, and primarily show them people they know.
That’s a stronger starting point than Meta’s Vibes, another AI-powered short-form video feed released last week, which seems to have been rushed out with fewer safeguards. As Miles Brundage, a former OpenAI policy leader, points out, there will likely be both good and bad applications of AI video feeds, much as we’ve seen in the chatbot era.
Still, as Altman has long acknowledged, no one sets out to build an addictive app; the incentives of running a feed push companies toward it. OpenAI has already run into a related problem with sycophancy in ChatGPT, which the company says was an unintended consequence of some of its training techniques.
In a June podcast, Altman discussed what he calls “the big misalignment of social media.”
“One of the big mistakes of the social media era was [that] the feed algorithms had a bunch of unintended, negative consequences on society as a whole, and maybe even individual users. Although they were doing the thing that a user wanted — or someone thought users wanted — in the moment, which is [to] get them to, like, keep spending time on the site.”
It’s too soon to tell how well aligned the Sora app is with its users or with OpenAI’s mission. Users are already noticing engagement-optimizing techniques in the app, such as the dynamic emojis that appear every time you like a video, a flourish that feels designed to deliver a small hit of dopamine for engaging with a post.
The real test will be how OpenAI evolves Sora. Given how much AI has taken over regular social media feeds, it seems plausible that AI-native feeds could soon have their moment. Whether OpenAI can grow Sora without replicating the mistakes of its predecessors remains to be seen.