You record a short selfie video to create a “cameo” profile and choose who can access it – friends, everyone or just yourself.
Anyone with access can type a few words describing what they want to see you doing and generate a clip in minutes.
You have to opt in to share your face, but you don’t get to approve what people make with it.
The app sends you a notification after someone creates a video of you. Sora is also a social network like TikTok, except everything you see is AI-generated.
You can remove a video featuring your likeness from inside the Sora app. But by then its creator has already seen it, possibly shared it, and maybe even downloaded and posted it in a different social app.
Sora pushes the idea of consent – or having a say over how you appear online – into uncomfortable territory.
Until recently, using AI to fake someone’s likeness was called a deepfake and considered taboo.
Now OpenAI has rebranded it as a cameo and made it the social activity of the week.
Lost in the wave of LOLs is what it actually feels like to lose control of your own face.
To understand both the technical and interpersonal dynamics of this, I asked my very creative colleague Chris Velazco to go wild making Sora videos of me.
He made a fake DUI video and dozens more, including one of Ronald Reagan appointing me “Chair of the National Council of Lying to Children”. (The Washington Post has a content partnership with OpenAI, but I review all tech with the same critical eye.)
Much of it was genuinely funny. Sora is more impressive than previous AI video tech at creating faces – the expressions, the lip movements, sometimes even the tone of voice.
There’s something inherently entertaining about seeing yourself in scenarios you’d never experience. When you hand someone else control of that mirror, they can be creative in ways you wouldn’t be.
One of my favourites is a video Chris made of me being berated by a reality TV chef for a bad dish, in which I make a face people who know me say is eerily accurate.
I might have left it there – just harmless fun between friends.
Then Chris made a video of me telling an off-colour joke in the office. I felt my stomach drop. It wasn’t something I’d ever say, but it wasn’t outside the realm of plausibility.
That’s what made it disturbing. I could imagine showing it to someone, watching them watch it, seeing the question form in their mind: did he actually say that?
I’m a journalist with a platform and a byline. What about actually vulnerable people, from victims of abuse to targets of online fraud?
Imagine high school.
Picture a video of someone’s face showing them shoplifting, using drugs or saying slurs – sent to employers, parents, schools.
I called up Eva Galperin, director of cybersecurity at the non-profit Electronic Frontier Foundation, who has deep experience helping abuse victims deal with digital threats.
“Videos of friends shoplifting at Target are much more dangerous for some populations than others,” she said.
“Fundamentally, what OpenAI is misunderstanding is power dynamics.”
What you can, and can’t, control about your face
In a statement, OpenAI told me “you are in control of your likeness end-to-end with Sora”. When I asked why users don’t approve videos before they’re created, the company thanked me for the feedback. Translation: We know this is a problem. We shipped it anyway.
“We should have a lot more control over our identity,” said Hany Farid, a University of California at Berkeley professor who has studied AI-generated media for years.
“But making the product safer makes it less viral, so nobody wants to do that,” he added.
The cameo-sharing defaults make it worse. When you sign up, Sora automatically sets your cameo to share with “mutuals” – anyone who follows you and whom you follow back on its social network. Most people won’t keep track of that list.
“Friends now do not remain friends indefinitely – people’s relationships do change,” Galperin told me.
You’re essentially trusting everyone you’ve ever followed back to never become an ex, an enemy or just someone with bad judgment.
After the launch, Sora gave users a bit more control. Under cameo preferences, you can now write in guidelines for how you want to be portrayed in other people’s videos, including “restrictions” for what should be off-limits.
“We’re continuing to hillclimb on making restrictions even more robust, and we will add new ways for you to stay in control of your cameo in the future,” wrote Bill Peebles, the head of Sora, on X.
OpenAI also says Sora has guardrails against sexual, violent and otherwise harmful content. You can’t make a video of someone undressed, but I’ve got videos of crowds spitting on me, someone slapping me, and me flipping off a baby. (The AI raised the wrong finger, but still.) To make a video of someone “being nasty”, you just type their name and “being nasty”.
Sora videos also contain a visible watermark to help people know they’re fake. But a designer colleague showed me she was able to crop it out with Adobe video-editing software in about five minutes.
How a fake video can hurt
OpenAI chief executive Sam Altman set his Sora cameo to share with everyone.
In the app’s first week, he appeared in many videos acting goofy. When I asked what he learned from the experience, OpenAI wouldn’t say. But Altman isn’t a regular user – he has the power to ban anyone from the app.
Online creator Justine Ezarik, who is known as iJustine, also opened her cameo to everyone, then found people making sexualised videos of liquid being thrown on her body.
In an email, Ezarik said she wasn’t surprised it happened, and she now actively monitors every draft and deletes anything that crosses a line. Most users won’t have the time, attention or technical savvy to do that.
“It’s sparking important conversations,” she said, including “about how these tools can be misused to harm or bully people”.
Non-celebrity users could face a different set of complications.
“I’m concerned about the use of these videos as instruments of harassment and also as instruments of grift – ways of fooling people into thinking their friends are in trouble and need money,” Galperin said.
So what should you do? Jules Terpak, a Gen Z online creator, told me she thinks regular users would be “quite open to opening up the feature to friends” but not the public, a dynamic she has already seen among her peers.
For close friends who communicate well, this might be fine. But even then, there’s a minefield.
“I would feel the need to ask a friend if I could post the cameo before doing so,” said Terpak, who previously worked as an advice columnist for the Washington Post.
“I don’t think it’s necessarily harmful to privately create with their identity if they gave me the approval to do so via the app’s settings, but even so, I intuitively feel like I’d still want to check in before actually publishing.”
That instinct – to ask permission even when you technically have it – is the consent culture OpenAI should have built into the app. Instead, the company has outsourced it to the courtesy of individual users.
After my experiment with Chris, I changed my Sora settings to “only me” – meaning only I can make videos of myself. It defeats the social purpose of the app, but that’s the point. If you’ve already created a cameo, you can delete it in settings, though that doesn’t remove you from videos already created.
Chris apologised for making me uncomfortable. Now imagine it in the hands of someone who wouldn’t care.
– Gerrit De Vynck and Drew Harwell contributed to this report.