Episode 136
Episode Transcript
Speaker 1 00:02
This is CyberSound, your simplified and fundamentals-focused source for all things cybersecurity.
Jason Pufahl 00:11
Welcome to CyberSound, the Vancord cybersecurity podcast, where we break down the latest threats, technologies and the people behind them. I’m Jason Pufahl, your host, and I’m thrilled to have our Resident Security Strategist, Steve Maresca, joining me today.
Steven Maresca 00:24
Thanks, Jason. Good to be here.
Jason Pufahl 00:27
For our video viewers: today we'll be off camera as our studio is being updated, but we'll be back to normal soon. Steve, what's the main topic today?
Steven Maresca 00:35
Well, today in particular, I’d like to dive into the latest release from OpenAI, Sora 2, and the new social platform they’re rolling out alongside it.
Jason Pufahl 00:44
Absolutely, Steve. OpenAI announced Sora 2 on September 30, a next-generation audio and video generation system that promises more realistic physics and smoother motion than its predecessor. They also launched a companion short-form video app, simply called Sora, which lets users insert their own likeness into generated scenes via a feature they call cameos. The cameos idea is fascinating from a technical standpoint, but it also opens up a whole new attack surface. To give our audience some context, let’s recall a high-profile deepfake fraud that happened just last year.
Steven Maresca 01:18
Right. In February 2024, a finance professional at the multinational firm Arup was duped into transferring $25 million. The fraudsters used deepfake technology to impersonate the company’s chief financial officer during a video conference, and the victim believed he was speaking with senior executives, all of whom appeared and sounded exactly like his colleagues. The deception only fell apart after the transaction was completed, prompting a police briefing in Hong Kong.
Jason Pufahl 01:46
That case showed how convincing synthetic video can be when it mimics not only a person’s appearance, but also their voice, mannerisms and the dynamics of a multi-person call. The scammers exploited the trust that comes from visual confirmation, a trust that Sora’s cameos could amplify if not carefully guarded.
Steven Maresca 02:03
And the risk isn’t just financial loss. Rachel Tobac, the chief executive of SocialProof Security, a security awareness firm, warned that within three months of Sora 2’s release, we could see a video of a well-known executive saying something damaging, perhaps false statements that could crash a company’s stock price. She’s basically saying we’re on the brink of a new wave of market manipulation using synthetic video at scale.
Jason Pufahl 02:26
That warning is sobering. With Sora’s algorithmic feed modeled after popular short-form video platforms, a single fabricated clip could go viral in minutes, reaching millions before any fact checking can occur. The platform’s recommendation engine will consider a user’s activity, location derived from IP address, past engagement and even their conversation history with ChatGPT unless they opt out. That data can be used to micro-target individuals, making the spread of disinformation even more precise.
Steven Maresca 02:54
Let’s talk about the responsibility of platform providers in this space. OpenAI is positioning Sora as a free-to-use experience, with the only monetization being a charge for extra video generation during periods of high demand. But free access can lead to rapid adoption, and with that comes a duty to implement robust safeguards.
Jason Pufahl 03:12
Exactly. The first line of defense should be a strong verification process for cameos. OpenAI requires a one-time video and audio recording to confirm a user’s identity before their likeness can be used. That’s a good start, but it’s not enough. Malicious actors could still obtain a verified recording through social engineering or by compromising an account.
Steven Maresca 03:33
Moreover, the ability for users to grant friends permission to use their likeness adds another layer of complexity. Imagine a scenario where a user unwittingly authorizes a friend to embed them in a video that spreads false statements; even if the original user revokes permission later, the video may already have been downloaded, shared and repurposed.
Jason Pufahl 03:51
This is where platform providers must take proactive steps. They need to implement real-time detection of synthetic media that could be used for defamation or fraud. Techniques like watermarking generated frames, cryptographic provenance tags, or requiring a visible “Generated by Sora” badge could help downstream platforms and users identify manipulated content.
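To make the provenance idea a little more concrete, here is a minimal sketch of how a downstream service might check a signed provenance manifest attached to a generated clip. The manifest fields, the key handling and the function names are assumptions made purely for illustration; neither Sora’s actual watermarking scheme nor a specific standard such as C2PA is being described here.

```python
# Hypothetical provenance check: the generator is assumed to ship each clip
# with a manifest containing a content hash and an HMAC signature.
import hashlib
import hmac
import json

def verify_provenance(video_bytes: bytes, manifest: dict, shared_key: bytes) -> bool:
    """Return True only if the manifest matches the file and its signature is valid."""
    # 1. Recompute the content hash the generator claims to have signed.
    content_hash = hashlib.sha256(video_bytes).hexdigest()
    if content_hash != manifest.get("content_sha256"):
        return False  # the file was altered after the manifest was issued

    # 2. Verify the HMAC over the manifest body (everything except the signature).
    body = {k: v for k, v in manifest.items() if k != "signature"}
    expected = hmac.new(shared_key,
                        json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest.get("signature", ""))

# Usage idea: a platform ingesting uploads could call verify_provenance() and
# label or down-rank any clip whose provenance cannot be confirmed.
```

In practice a public-key signature and a standardized manifest format would be preferable to a shared HMAC key, but the sketch shows the basic shape: bind the claim to the exact bytes, then verify who made the claim.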
Steven Maresca 04:13
In addition, OpenAI should integrate a rapid takedown process. If a user reports non-consensual use of their likeness, the platform must be able to locate and remove the offending video across its feed, as well as provide a clear audit trail for law enforcement.
Jason Pufahl 04:27
Speaking of law enforcement, there is a glaring gap in legislation regarding non-consensual synthetic video. While many jurisdictions have laws against deepfake pornography, broader misuse, such as political sabotage or financial fraud, remains a legal gray area. Platform providers therefore have a moral, if not legal, obligation to act as gatekeepers.
Steven Maresca 04:46
Let’s bring this back to the user perspective. The Sora app includes parental controls powered by ChatGPT, allowing guardians to limit infinite scrolling, disable algorithmic personalization and manage direct messaging. While that’s a step forward, the effectiveness of these controls depends heavily on a parent’s technical literacy. If a parent doesn’t understand how deepfake generation works, they may not configure the settings properly.
Jason Pufahl 05:11
That’s a classic challenge in cybersecurity: bridging the gap between sophisticated technology and everyday users. Education is key. Security awareness programs need to incorporate modules on synthetic media, teaching people to verify sources, look for watermark indicators and question content that seems too perfect or too sensational.
Steven Maresca 05:30
We should also consider how organizations can protect themselves. Companies can adopt a policy that any video communication involving executives must be accompanied by a secondary verification factor, perhaps a secure token displayed only on a trusted device. That way, even if a deepfake of a chief executive is generated, the lack of the token would raise a red flag.
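One way to picture that secondary factor is a time-based one-time password that only the real executive’s enrolled device can produce. The sketch below is illustrative only; the pyotp library, the enrollment flow and the function names are assumptions, not a specific product or a Vancord recommendation.

```python
# Sketch: verify a spoken one-time code during a video call before acting on
# any instruction from an "executive" on screen.
import pyotp  # third-party library: pip install pyotp

# Enrollment, done once and out of band per executive:
exec_secret = pyotp.random_base32()
totp = pyotp.TOTP(exec_secret)

def confirm_caller(spoken_code: str) -> bool:
    """Return True if the code read out on the call matches the trusted device."""
    # valid_window=1 tolerates one 30-second step of clock drift
    return totp.verify(spoken_code, valid_window=1)

# A deepfaked "CFO" on the call cannot produce a valid code, so the check
# fails and the request is escalated for manual review instead of executed.
```

The point is not this particular mechanism but the pattern: the proof of identity lives on a separate, trusted channel that a synthetic video cannot reproduce.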
Jason Pufahl 05:50
Another practical measure is to monitor social media for brand impersonation. Automated monitoring tools can flag videos that use a company’s logo, slogans or executives’ faces without authorization. Early detection can enable a rapid response, whether that’s issuing a public statement or requesting removal from the platform.
Steven Maresca 06:07
It’s also worth noting that OpenAI’s recommendation engine will factor in a user’s ChatGPT conversation history unless they turn it off. That raises privacy concerns. Users need to be fully aware that their chat logs could influence what videos they see, potentially creating echo chambers that reinforce misinformation.
Jason Pufahl 06:25
Transparency is essential. The platform should provide an easy-to-access dashboard where users can see exactly which data points are being used for personalization and allow them to opt out of each category individually. That level of control can help mitigate the risk of algorithmic amplification of harmful content.
Steven Maresca 06:42
Let’s shift gears for a moment and discuss the broader ecosystem. Competitors like Meta have recently added a video feed called Vibes to their own artificial intelligence-driven application. While Meta’s offering has been described as “mindless slop,” the underlying principle is the same: short-form video combined with powerful generation capabilities. The race to dominate user attention means we’ll see a proliferation of similar services, each with its own set of security challenges.
Jason Pufahl 07:08
Indeed, the more platforms that enable user-generated synthetic video, the larger the attack surface for fraudsters. Attackers can test their deepfake techniques on one platform and then deploy the most convincing version on another with a broader audience. That cross-platform risk underscores the need for industry-wide standards.
Steven Maresca 07:26
Industry groups could develop a set of best practices for synthetic media platforms, covering verification, watermarking, content moderation and user education. If major players adopt these standards, it could create a baseline of protection that makes it harder for malicious actors to operate unchecked.
Jason Pufahl 07:43
Until such standards are in place, organizations and individuals must stay vigilant. For example, finance teams should adopt a two-person, two-channel policy for large transfers, requiring confirmation via a separate communication method that cannot be spoofed by video alone.
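As a rough illustration of that policy, here is a toy sketch of a release check that refuses a large transfer unless two different people have approved it over two different non-video channels. The threshold, the channel labels and the data model are assumptions for the example, not an actual control from this episode.

```python
# Toy "two-person, two-channel" gate for releasing a large wire transfer.
from dataclasses import dataclass

APPROVAL_THRESHOLD_USD = 100_000  # illustrative threshold

@dataclass(frozen=True)
class Approval:
    approver: str
    channel: str  # e.g. "callback-phone", "signed-email", "in-person", "video-call"

def release_transfer(amount_usd: float, approvals: list[Approval]) -> bool:
    """Allow the transfer only if it is small, or at least two distinct people
    approved it over two distinct channels, ignoring video-call approvals."""
    if amount_usd < APPROVAL_THRESHOLD_USD:
        return True
    valid = [a for a in approvals if a.channel != "video-call"]
    people = {a.approver for a in valid}
    channels = {a.channel for a in valid}
    return len(people) >= 2 and len(channels) >= 2

# Example: a $25M request backed only by a convincing video call is rejected.
# release_transfer(25_000_000, [Approval("cfo", "video-call")])  -> False
```

The Arup case fits this pattern exactly: a single channel (video) carried all of the apparent authority, which is precisely what the policy is designed to prevent.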
Steven Maresca 07:57
And executives should be proactive in protecting their digital likeness. By regularly updating the verification video used for cameos, monitoring where their faces appear online, and issuing clear statements about authorized use, they can reduce the chance of their image being weaponized.
Jason Pufahl 08:13
Before we wrap up, let’s summarize the key takeaways from today’s discussion.
Steven Maresca 08:18
First, OpenAI’s Sora 2 represents a leap forward in realistic video generation, but that realism also heightens the potential for deception.
Jason Pufahl 08:26
Second, real-world incidents like the Arup deepfake fraud and the warning from Rachel Tobac show that synthetic video can already cause massive financial and reputational damage. Third, platform providers bear a heavy responsibility to implement verification, watermarking, rapid takedown and transparent data usage practices to curb abuse.
Steven Maresca 08:47
Fourth, users, families and organizations must adopt robust verification processes, educate themselves on synthetic media threats and enforce policies that require multiple forms of authentication for critical actions.
Jason Pufahl 08:59
And finally, the industry as a whole should work toward common standards that raise the security baseline for all synthetic media platforms. Steve, any final thoughts before we sign off?
Steven Maresca 09:10
Just a reminder that the technology itself is neutral. It’s how we choose to deploy and protect against it that determines the outcome. By staying informed, demanding accountability from providers and building layered defenses, we can enjoy the creative possibilities of tools like Sora 2 without falling victim to their darker uses.
Jason Pufahl 09:27
Well said. That’s all for today’s episode of CyberSound. I’m Jason Pufahl.
Steven Maresca 09:32
And I’m Steve Maresca. Thanks for listening.
Jason Pufahl 09:34
If you found this episode useful, please subscribe, share it with your colleagues and consider supporting the show. Until next time, keep your data guarded and your curiosity sharp.
Steven Maresca 09:44
And for the true last word, to underscore the point: the entirety of this episode was generated. Be wary; in 2025 this is doable on cheap, surplus hardware. Let us know out of band whether you noticed before this comment. Take care, everyone.
Speaker 1 09:58
We’d love to hear your feedback. Feel free to get in touch with Vancord on LinkedIn, and remember: stay vigilant, stay resilient. This has been CyberSound.