Content Credentials: A novel approach to tackling deepfakes
Posted: April 8, 2025 Filed under: cybersecurity, technology | Tags: AI, artificial intelligence, content credentials, deepfake, deepfakes, fraud, technology
Can you believe what you see online? Unfortunately for many people today, the answer is increasingly a resounding “no”. Deepfakes are bad news for many reasons. But for CISOs, they pose an outsized threat through their potential to amplify social engineering, business email compromise (BEC) and even social media scams.
There is certainly no silver bullet for a challenge that’s only set to grow as the technology behind deepfakes gets better and cheaper. But an initiative dubbed Content Credentials has already won plaudits from the likes of the NSA and the UK’s National Cyber Security Centre (NCSC). It could yet help society in general, and businesses in particular, to push back against a rising tide of online fakery.
Why we need it
Deepfakes have been circulating online for several years. These digitally altered or completely synthetic pieces of audio, video and image-based content were initially viewed with curiosity as fairly harmless, easy-to-spot fakes. But the technology has rapidly matured, supercharged by generative AI (GenAI), to the point where threat actors are now using it in everything from sextortion and online scams to child abuse material. The government is frantically looking for answers to what it describes as “a growing menace and an evolving threat”, citing estimates that eight million fakes will be shared in 2025, up from 500,000 two years earlier.
From a CISO perspective, there are several potential risks associated with malicious use of the technology, including:
Brand damage: Deepfake videos of CEOs and senior executives circulated online could be used to tarnish the corporate brand directly, perhaps to influence the stock price, or even to perpetrate investment scams and other fraud.
Direct financial loss: While the above scenario could also create significant financial risk, there are other, more direct tactics threat actors can use to make money. One is by amplifying business email compromise (BEC) scams. Instead of sending a finance team member an email requesting a fund transfer, a cybercriminal could send manipulated audio or video impersonating a supplier or C-suite executive. The FBI has been warning about this for several years. BEC cost victims $2.9bn in 2023 alone, and the figure continues to rise.
Unauthorised access: Beyond BEC, deepfakes could also be deployed to amplify social engineering in an attempt to gain access to sensitive data and/or systems. One such technique spotted in the wild is the fake employee threat, which has already deceived one cybersecurity company. Faked images, or even video, could be used to add credibility to a candidate that would otherwise be turned away, such as a nation state operative or cybercriminal.
Account takeover/creation fraud: Cybercriminals are also using stolen biometric data to create deepfakes of customers, in order to open new accounts or hijack existing ones. This is especially concerning for financial services firms. According to one study, deepfakes now account for a quarter of fraudulent attempts to pass motion-based biometrics checks.
The challenge is that deepfakes are increasingly difficult to tell apart from the real thing. And the technology is being commoditised on the cybercrime underground, lowering the barrier to entry for would-be fakers. There have even been warnings that deepfakes could be made more realistic still if large language models (LLMs) were trained with stolen or scraped personal information – creating an evil “digital twin” of a victim. The deepfake would be used as the front end of this avatar, which would look, sound and act like the real person in BEC, fake employee and other scams.
How Content Credentials works
Against this backdrop, many cybersecurity professionals are getting behind Content Credentials. Backed by the likes of Adobe, Google, Microsoft and – most recently – Cloudflare, the initiative was first proposed by the Coalition for Content Provenance and Authenticity (C2PA). Currently being fast-tracked to become global standard ISO 22144, it works as a content provenance and authentication mechanism.
A Content Credential is a set of “tamper-evident”, cryptographically signed metadata attached to a piece of content at the time of capture, editing or directly before publishing. If that content is edited and/or processed over time, it may accrue more Content Credentials, enabling the individual who has altered it to identify themselves and what they did. The idea, as the NSA puts it, is to create trust among content consumers through greater transparency, just as nutrition labels do with food.
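To make the “tamper-evident” idea concrete, here is a minimal sketch in Python of how cryptographically signed metadata exposes later alteration. This is not the real C2PA manifest format, which is far richer; the field names and the `example-camera-model` value are illustrative assumptions, and it relies on the widely used `cryptography` package.

```python
# A minimal sketch of the tamper-evident principle behind Content
# Credentials: sign provenance metadata so any later change is detectable.
# NOT the actual C2PA manifest format; fields here are hypothetical.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice the signing key would belong to a camera, editing tool or
# publisher, with a certificate chain vouching for its identity.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Hypothetical provenance metadata attached at capture time.
credential = {
    "producer": "example-camera-model",
    "captured_at": "2025-04-08T10:00:00Z",
    "actions": ["captured"],
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = private_key.sign(payload)

# A consumer re-verifies the signature: the untouched payload passes,
# while a single altered byte causes verification to fail.
tampered = payload.replace(b'"captured"', b'"generated"')
for candidate in (payload, tampered):
    try:
        public_key.verify(signature, candidate)
        print("metadata intact")
    except InvalidSignature:
        print("tampering detected")
```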
The initiative is evolving as potential weaknesses are discovered. For example, recognising that trust in the metadata itself is paramount, efforts were made to enhance preservation and retrieval of this information. Thus, Durable Content Credentials were born, incorporating digital watermarking of media and “a robust media fingerprint matching system”.
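As a rough illustration of what a media fingerprint does, here is a simple perceptual “average hash” in Python: a compact signature that changes little under resizing and re-encoding, so that stripped Content Credentials could, in principle, be re-associated with the media they describe. This sketches the general technique only, not the C2PA’s actual matching system, and assumes the Pillow imaging library.

```python
# A toy "average hash" (aHash) perceptual fingerprint. Unlike a
# cryptographic hash, it survives resizing and re-encoding, which is
# what lets stripped credentials be matched back to their media.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Return a 64-bit perceptual fingerprint of an image."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests the same media."""
    return bin(a ^ b).count("1")

# Hypothetical usage, with invented file names:
# original = average_hash("registered_original.jpg")
# candidate = average_hash("copy_found_online.jpg")
# if hamming_distance(original, candidate) <= 5:
#     print("Likely the same media - restore its Content Credentials")
```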
Progress will take time
If the standard takes off, it could be a game changer, argues Andy Parsons, senior director for content authenticity at Adobe.
“We’ve seen great momentum for real-world applications of Content Credentials which includes being integrated into the recently launched Samsung Galaxy S25. They are also supported by all ‘big five’ camera manufacturers – Canon, Fujifilm, Leica, Nikon, and Sony – as well as by the BBC for BBC Verify,” he tells me.
“Where social media and other websites do not yet retain visible Content Credentials when content is posted on their platforms, we have released the Adobe Content Authenticity extension for Google Chrome to allow end users to view Content Credentials on any website.”
Cloudflare’s head of AI audit and media privacy, Will Allen, adds that it could be used as a “trusted authentication tool” to tackle BEC, social media scams and other deepfake content.
“This approach helps organisations filter out manipulated content, make informed decisions and reduce exposure to misinformation,” he tells me.
However, there are still limits to the initiative’s potential impact, especially given the growing speed, quality and accessibility of deepfake tools.
Although there is “active work underway” to support live video, according to Adobe’s Parsons, that support is not yet finalised. This could leave the door open for threat actors using real-time deepfake tools for BEC fraud. Trend Micro senior threat researcher David Sancho adds that until all sources watermark their content, the potential for a high rate of false negatives is amplified.
“Often, once you see it, you can’t unsee it. This is more relevant for disinformation campaigns, but also for some scams,” he continues. “The criminals may also be able to remove fingerprinting metadata from synthetic media.”
While Content Credentials offers a helping hand in the form of additional data points to study, it’s not a silver bullet.
“Instead, to stop BEC, a company needs to implement strong processes that force finance employees to double/triple check money transfers beyond a certain amount, especially out of working hours,” Sancho continues. “This makes BEC a much more difficult proposition for the criminal because they must fool two or three people, not only one.”
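Sancho’s advice translates naturally into a simple payment-control rule. The sketch below is a hypothetical illustration in Python; the threshold, working hours and approver counts are invented for the example.

```python
# A hypothetical sketch of the control Sancho describes: transfers above
# a threshold, or outside working hours, need extra human sign-off.
# All values here are illustrative, not recommendations.
from datetime import datetime

APPROVAL_THRESHOLD = 10_000   # illustrative amount
WORKING_HOURS = range(9, 18)  # illustrative 09:00-18:00 window

def approvers_required(amount: float, requested_at: datetime) -> int:
    """Return how many independent sign-offs a transfer needs."""
    required = 1
    if amount > APPROVAL_THRESHOLD:
        required += 1  # a second person verifies the request out-of-band
    if requested_at.hour not in WORKING_HOURS or requested_at.weekday() >= 5:
        required += 1  # out-of-hours requests get extra scrutiny
    return required

# A large transfer requested on a Sunday evening needs three approvers,
# so a deepfake caller must now fool three people, not one.
print(approvers_required(50_000, datetime(2025, 4, 6, 21, 30)))  # -> 3
```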
Cloudflare’s Allen admits that take-up remains the key to success.
“The biggest challenge is adoption – making it easier for users to inspect and verify Content Credentials,” he says. “For this to be truly effective, verification needs to be effortless and accessible, wherever users encounter media – whether on social media platforms, websites, or apps.”
Adobe’s Parsons claims that its Content Authenticity Initiative (CAI) now has over 4,000 members, but agrees that end-user awareness will be key to its success.
“The more places Content Credentials show up, the more valuable they become. We also need to help build more healthy assessment of digital content and grow awareness of tools that are available,” he concludes. “Therefore, ensuring people are better educated to check for credentials and to be sceptical of content without them becomes even more essential.”
This article was first published on Assured Intelligence.
How much do words matter? Why Interpol wants to stop talking about ‘pig butchering’
Posted: March 10, 2025 Filed under: cybersecurity | Tags: crime, fraud, interpol, investment fraud, news, pig butchering, romance baiting, romance fraud, scam, scams
This article originally appeared on Assured Intelligence.
Cyber crime reporting is worryingly low. According to one estimate from the Crime Survey of England and Wales (CSEW), only 13% of fraud cases are reported to Action Fraud or the police by victims. National Trading Standards reckons the figure is more like 32%.
It means estimated victim losses of $652m to romance and confidence fraud in 2023 are likely to be just the tip of the iceberg. That’s part of the reason why Interpol wants industry to do more to encourage victims to come forward. A good start, it argues, is to change the way we refer to a particularly prevalent romance/investment scam hitherto known as “pig butchering”.
But can changing the way we refer to cyber crimes really have the desired effect? Or is the policing group overthinking things?
Romance baiting is the new pig butchering
Pig butchering derives its English moniker from the Chinese word “shazhupan”, which roughly equates to “killing pig game”. It refers to the way victims are often approached on dating sites by scammers, who then try to build a trusted relationship with them – “fattening them up” for the kill. Once the fraudster has won over hearts and minds, they suggest their victims invest in a fake crypto scheme or similar. By the time the victims realise it’s all a con, it’s too late: the animal has been metaphorically slaughtered, and their hard-earned cash is gone.
Interpol’s argument is that using such language shames victims to the point where they may not be keen on coming forward. The policing group wants “romance baiting” to enter the cybersecurity lexicon instead.
“Words matter. We’ve seen this in the areas of violent sexual offences, domestic abuse, and online child exploitation. We need to recognise that our words also matter to the victims of fraud,” says Interpol acting executive director of police services, Cyril Gout.
“It’s time to change our language to prioritise respect and empathy for the victims, and to hold fraudsters accountable for their crimes.”
Is Interpol right?
This isn’t the first time that a call has gone out to change specific cyber crime terminology. Back in 2020, the National Cyber Security Centre (NCSC) led a largely successful push to change “black/whitelist” to the more racially neutral “denylist/allowlist”. The terms “black hat” and “white hat” are far less common today for similar reasons. And the maintainers of the Python programming language replaced “slave” with “worker” or “helper” and “master process” with “parent process”.
But does Interpol have a point about “pig butchering”? There’s certainly a case for saying that some cyber crimes can have a particular emotional impact on the individual, especially those where the victim has been betrayed by someone they thought could be trusted. It can cause distress, shame and feelings of helplessness. One victim of a historic dating scam even told researchers she felt like the experience was akin to being “mentally raped”.
Elisabeth Carter is an associate professor of criminology and forensic linguist at Kingston University London. She agrees that language can have a “huge impact” on victims and societal narratives.
“The terminology ‘pig butchering’ is used by criminals intent on harm. It is pejorative, dehumanising and it does harm victim reporting, victim self-identification, self-esteem and recovery, and harms societal narratives around fraud victimhood, which in turn feeds into barriers to reporting,” she tells me.
“Language is the very way in which criminals engage with and attack victims, using this to create an alternate reality where victims believe they are making reasonable choices, but in fact are being exploited and harmed financially and psychologically. The terms we use when communicating with the public are therefore all the more important. Far from being a distraction or overthinking, language in relation to fraud should be considered and extremely carefully selected, and only done so with an evidence-based reasoning behind it.”
KnowBe4 lead security awareness advocate, Javvad Malik, agrees up to a point.
“Kudos to Interpol for recognising the power of words. They’re not wrong – language can indeed be a barrier to reporting crimes,” he tells me. “On one hand, changing terminology could potentially confuse the public. But if the current terminology is preventing victims from coming forward, then it’s worth solving the issue.”
The power (or not) of words
Kingston University’s Carter highlights other ways in which language can subtly undermine the fight against cyber crime and fraud. Although the language, manipulation and silencing tactics used by fraudsters are similar to those of domestic abusers, many people still say victims “fell for fraud” – implying they were somehow to blame for being tricked, she argues.
“We wouldn’t say ‘don’t fall for domestic abuse’”, Carter adds. “Similarly, ‘scam’ should be avoided, as it minimises the crime, which is fraud, and ‘money you lost’ should instead be ‘money that was stolen/taken from you.’”
However, others expressed scepticism over Interpol’s calls. Silvija Krupena has over two decades of experience in financial crime prevention and is currently director of the financial intelligence unit at RedCompass Labs. She tells me that Interpol should be focusing on policing crime, not language.
“These scams are bleeding hundreds of billions annually. Do we seriously believe terminology is what’s keeping victims from reporting?” she argues. “Changing the term now adds friction, inefficiency and confusion to an already overwhelmed industry. And for what? Victims aren’t holding back because of terminology – they’re devastated, afraid and focused on recovery, not word choice.”
Krupena adds that “romance baiting” is far from ideal as a replacement, as not all pig butchering fraud involves a romance element.
The bigger picture
It is, of course, difficult to calculate just how under-reported cyber crime is. But it’s not impossible. Victim support group The Cyber Helpline estimates that, while the reporting rate for all crime is 79%, it drops to just 36% for cyber-related incidents. That’s bad news not just for the victims but also for UK PLC, because it means the perpetrators are more likely to continue operating with impunity – and may turn their attention to business targets.
If the government can’t even get a proper picture of how widespread specific crime types are, and who is committing them, it will hamper both efforts to design effective public policy, and the ability of law enforcers to track down specific offenders. It’s one of the reasons why the new Labour government has made mandatory incident reporting a key part of its forthcoming Cyber Security and Resilience Bill.
Yet there’s more to encouraging incident reporting, especially among individual victims, than changing the way certain crimes are referred to. Krupena wants to see “bold transnational awareness campaigns” focused on breaking the stigma that fraud victims are caught out because they are “stupid” in some way.
“The campaigns should educate on red flags and warning signs, protecting both victims recruited by traffickers to run scams and those targeted by them,” she continues. “These efforts must start on social media, especially Meta platforms, and then telecommunication providers; the next biggest channel. Let’s go beyond wordplay and focus on what matters – education and prevention. That’s how we disrupt the cycle of cyber crime.”
According to Home Office research from over a decade ago, which The Cyber Helpline claims is still relevant today, cyber crimes are also under-reported because of a perception that the police can’t or won’t do anything to solve them. This in turn is influenced by a perception that digital offences are not ‘real’ crimes. Victims may also not consider themselves to be victims if they’ve been refunded money stolen by fraudsters.
Yet here too language may have a part to play.
“We need to avoid saying ‘you will get your money back’, as it is not the victim’s money that is coming back; that money has gone to feed criminal enterprises,” Carter argues. “We need to instead say ‘you will be made financially whole’ or ‘you will be reimbursed’, because framing it as the victim getting their money back feeds into the wider misperception that once this is done there was no harm.”
In fact, fraud causes tremendous economic and societal harm. The proceeds of fraud are often reinvested into other nefarious activities, including cyber crime, drugs, gun running and even human trafficking. The challenge is so acute that one noted think tank has described fraud as a threat to national security. In this context, gaining a more accurate picture of the scale of the problem is the first step towards tackling it.
As Black Friday Approaches, Retailers are Braced for a Fraud Deluge
Posted: November 24, 2016 Filed under: Uncategorized | Tags: black friday, christmas shopping, e-commerce, fraud, fraud prevention, threatmetrix
Looking forward to Christmas? Spare a thought for the nation’s retailers, who will be battling as many as one million fraud attempts each day in the period following Black Friday, according to new estimates.
They come from ThreatMetrix, a fraud prevention company with good industry insight thanks to its Digital Identity Network platform, which analyses over 20 billion transactions globally each year.
It predicted a 60% increase in fraudulent e-commerce transactions in Q4 2016 compared to the last three months of 2015.
Product and data evangelist Rebekah Moody told me that this time of year usually sees an uptick in activity, because dodgy transactions are less likely to be spotted: retailers loosen fraud filters to let more transactions through.
“Transaction volumes are much higher – we saw huge daily peaks for some merchants in the same period last year. This means some merchants may choose to adjust their risk tolerance to ensure that more transactions can be processed with less friction,” she explained.
Cybercriminals also jump on the fact that average basket values are usually higher in the run up to Christmas.
“Fraudsters capitalise on this by trying to sneak through higher value transactions that are less likely to flag as unusual in amongst the sea of high value transactions,” said Moody. “Last year we saw the average basket value of rejected transactions was around 70% more than the overall average. We expect this trend to be mimicked this holiday season.”
The problem is compounded by current fraud prevention technologies, many of which have problems detecting some of the more advanced techniques used by the black hats, including device and IP spoofing and automated bots.
The latter threat is increasingly prominent to the point where, during attack spikes, bot traffic exceeds legitimate user traffic, according to the company’s latest Cybercrime Report for Q3.
The report notes:
“What might begin as a simple account validation using a basic bot evolves to using a complex bot to guess unknown passwords, to a bot that masquerades as genuine human traffic to trick unsuspecting businesses.”
Another tactic that makes fraud hard to spot is when the scammer tricks a victim into downloading malware onto their machine.
“For example, a fraudster convinces a customer to download some remote access software after playing to their worst fears that their account is being hacked following a data breach. They pretend to be from the consumer’s bank, and reassure them that they will protect their account from the impending hack,” explained Moody.
“In actual fact, they manage to take over the consumer’s account after the consumer has legitimately logged in.”
Because there are no unusual log-in patterns, strange locations or hacked devices to monitor, it might look like a legitimate transaction.
“The key here though is that the remote access software was suddenly enabled, and then the fraud occurred,” Moody told me. “It’s not the fact that there was remote access software installed; many consumers use this legitimately. It was the change in behaviour. Unless a fraud system is advanced enough to detect this, it could be easy to see how this technique could cause huge issues.”
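That “change in behaviour” signal can be expressed as a simple session rule. The sketch below is a hypothetical illustration in Python, not ThreatMetrix’s actual logic; all event and field names are invented.

```python
# A hypothetical behaviour-change rule: remote-access software appearing
# only AFTER login, followed by a transfer, matches Moody's scam pattern.
# Event and field names are invented for illustration.

def remote_access_risk(session_events: list[dict]) -> str:
    """Flag sessions where remote access is enabled mid-session."""
    logged_in = False
    remote_enabled_mid_session = False
    for event in session_events:
        if event["type"] == "login":
            logged_in = True
        elif event["type"] == "remote_access_detected" and logged_in:
            # The red flag is the change, not the mere presence of
            # remote-access software on the device.
            remote_enabled_mid_session = True
        elif event["type"] == "transfer" and remote_enabled_mid_session:
            return "review"  # transfer follows newly enabled remote access
    return "allow"

# The pattern from Moody's scenario triggers a review.
events = [
    {"type": "login"},
    {"type": "remote_access_detected"},
    {"type": "transfer", "amount": 9_500},
]
print(remote_access_risk(events))  # -> review
```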
The best systems work in the background, using contextual data and real-time behavioural analytics in a way that is invisible to the user. Unfortunately, they’re still not the norm. According to Barclays, almost two-thirds of retailers (64%) are confident that their digital infrastructure will cope well with the Christmas rush. But if they prioritise uptime and sales over fraud prevention, there could be some nasty surprises down the line.
