Here comes mayhem: How The Com is rewriting the threat landscape
Posted: June 27, 2025 Filed under: cybersecurity, Uncategorized | Tags: AI, cyber security, cybersecurity, NCA, scattered spider, security, technology, The Com, threat landscape
When it comes to the cyber-threat landscape, Russian actors are usually portrayed as the bogeymen. But over the past few months and years, a more disturbing picture has started to emerge. A different breed of hacker has stepped out of the shadows – technically proficient, native-English speaking and with an almost nihilistic penchant for violence and human misery.
Sometimes described as “The Com” or “Scattered Spider”, these loosely associated grassroots groups defy easy categorisation. The question is, with the likes of M&S, MGM Resorts, and Santander among their growing list of victims, how big a threat do they pose to CISOs?
Uncovering The Com
UK CISOs may have first heard the moniker “The Com” or “Com networks” following the publication of the latest annual report from the National Crime Agency (NCA) in March. In it, the agency warns of “sadistic and violent online gangs” comprised mainly of teenage boys engaging in acts of extremism, sexual violence and sadistic child abuse. Reports of this emerging threat increased six-fold between 2022 and 2024, with the NCA claiming that girls as young as 11 had been coerced by members into “seriously harming or sexually abusing themselves, siblings or pets”.
What has this got to do with enterprise cybersecurity? Curiously, Com network members are also blamed for data breaches, fraud, and malware/ransomware attacks. On paper, The Com seems far removed from the highly professionalised world of Russian cybercrime. Yet some of its supposed members use techniques that traditional threat actors would applaud and have been tied to some of the most damaging breaches on record.
Where does it all begin? According to Unit 221B researcher, Allison Nixon, the Com’s members were largely financially motivated until the early 2020s, when sextortion and high-value fraud also became popular. The “bottom-up social phenomenon” now venerates depravity, harm and misogyny – with youngsters recruited because of their naïvety, hunger for attention and money, and reduced exposure to legal jeopardy. However, although the worst acts of these networks are truly awful, they represent only a small percentage of total members, says Nixon.
High-profile arrests seem to be dampening down their worst excesses, she says. But the threat to enterprises remains undiminished, as recent attacks on UK retailers have shown.
The Com/Scattered Spider crossover
A detailed Brian Krebs investigation into the young men behind many of these attacks shows the strong links between Scattered Spider and Com networks. They include:
- Connor Riley Moucka, a Canadian hacker blamed for the major breach of Snowflake accounts, who also goes by the monikers ‘Judische’ and ‘Waifu’. The latter corresponds to “one of the more accomplished SIM-swappers in The Com over the years”, according to Krebs
- The ‘@Holy’ screen name, associated with a Telegram user who gave media interviews about the MGM hack. The same account was apparently active on a number of cybercrime channels focused on extorting young people into harming themselves or others, and recording it on video
- Noah Michael Urban, a 19-year-old American indicted in January 2024 on a string of wire fraud and identity theft charges. His ‘King Bob’ and ‘Sosa’ monikers are linked to real-world violence-as-a-service offerings
- Four other young men who, along with Urban, were indicted in November 2024 by US authorities for a string of attacks involving the phishing of IT helpdesks, data theft and crypto-based extortion
- 22-year-old Tyler Buchanan, another member of the same group, who allegedly took part in a 2022 phishing campaign which resulted in the theft of 10,000 login credentials related to more than 130 companies
- Conor Brian Fitzpatrick (aka Pompompurin), who pleaded guilty to operating the BreachForums criminal marketplace, and possessing child pornography back in 2023
A different way of doing things
According to a recent ReliaQuest report, Scattered Spider relies heavily on social engineering to achieve initial access, often using the off-the-shelf Evilginx tool to bypass multi-factor authentication (MFA). A recent analysis of over 600 publicly shared IOCs by the threat intelligence firm reveals that its phishing domains primarily target services such as single sign-on (SSO), identity providers (IdP), VPNs, and IT support systems.
The end goal is to harvest credentials from high-value users, including system administrators, CFOs, COOs, and CISOs. When Scattered Spider actors fail with initial phishing attempts, they double down, using vishing techniques to impersonate C-level executives. Typically, they make panicked helpdesk calls requesting password resets or enrollment of new MFA devices, ReliaQuest claims.
The report also warns MSPs in particular to be on their guard, as actors are keen on ‘one-to-many’ attacks. In a recent example, they breached an MSP and exploited vulnerabilities in the SimpleHelp remote management software to deploy ransomware across client networks, it claims.
SOCRadar CISO, Ensar Seker, tells Assured Intelligence that this new breed of threat actor presents new challenges to network defenders accustomed to facing more traditional adversaries.
“Scattered Spider and the Com network actors represent a distinct kind of threat compared to traditional Russian-speaking cyber criminal groups. What sets them apart isn’t necessarily technical sophistication, but their boldness, deep social engineering playbooks, and insider-like operational tempo,” he explains. “These groups frequently exploit identity and access mismanagement, leveraging SIM swapping, MFA fatigue attacks, and even targeting IT help desks to gain privileged access. Their tactics resemble those of APTs but are often executed with the agility and audacity of hacktivist crews, making attribution and defence more complex.”
ReliaQuest director of threat research, Brandon Tirado, agrees, explaining that Scattered Spider actors often cause significant damage within just eight hours of initial access – for example, by rapidly escalating privileges and abusing identity systems like Okta and Azure AD.
“In addition to their speed and expertise in social engineering, their potency lies in their fluency in English, which helps avoid tipping off the targeted organisation’s helpdesk, and their ‘scattered’ nature – operating as a loosely organised network rather than a centralised group,” he tells Assured Intelligence.
“This decentralised structure makes them more unpredictable and adaptable.”
Lessons for CISOs
The threat actor profile may be unusual, but ultimately these actors are focused on the same thing as any cyber criminal: making money. That’s why several notable Com attacks have seen actors work as affiliates for ransomware groups like ALPHV/BlackCat (MGM) and – more recently – DragonForce (M&S).
“CISOs should focus on proactive monitoring of third-party accounts, bolstering helpdesk defences with identity verification protocols, and enforcing adaptive MFA policies,” advises Tirado. “Compared to Russian cyber criminals, who often rely on longer dwell times, combating Scattered Spider requires faster detection, automated response playbooks, and real-time threat hunting to neutralise their rapid operations.”
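Tirado’s advice about helpdesk identity verification and adaptive policies can be illustrated with a minimal sketch. Everything here is hypothetical – the roles, weights, and verification steps are invented for the example, not drawn from any real product or from ReliaQuest’s report – but it shows the underlying idea: score each reset request against the warning signs described above, and escalate the verification step accordingly.

```python
# Hypothetical helpdesk gate: score a credential-reset request against the
# warning signs associated with Scattered Spider-style vishing, then map the
# score to an escalating verification requirement. Illustrative values only.

PRIVILEGED_ROLES = {"sysadmin", "cfo", "coo", "ciso"}

def reset_risk_score(request: dict) -> int:
    """Score a password-reset/MFA-enrolment request; higher = riskier."""
    score = 0
    if request.get("role", "").lower() in PRIVILEGED_ROLES:
        score += 3  # high-value identity, the group's preferred target
    if request.get("channel") == "phone":
        score += 2  # vishing is the documented initial-access vector
    if request.get("urgency_claimed"):
        score += 2  # "panicked" calls are a documented tell
    if request.get("new_mfa_device"):
        score += 2  # enrolling a new MFA device enables account takeover
    return score

def required_verification(request: dict) -> str:
    """Map the risk score to the step the helpdesk must complete."""
    score = reset_risk_score(request)
    if score >= 5:
        return "callback-to-registered-number+manager-approval"
    if score >= 3:
        return "callback-to-registered-number"
    return "standard-knowledge-check"
```

The point of the design is that a phoned-in, urgent MFA re-enrolment for a CISO account can never pass on a knowledge check alone, which is exactly the request pattern the report describes.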
SOCRadar’s Seker agrees that CISOs need to “double down on identity security” with phishing-resistant MFA, privileged access management and regular access audits, alongside specialised employee training.
“Defending against these threat actors demands a mindset shift. While traditional ransomware groups often follow a predictable path – initial access broker, lateral movement, exfiltration, and encryption – groups like Scattered Spider bypass many of these stages by targeting identity and session hijacking. This means the usual EDR, network segmentation, and backup combo isn’t enough,” he adds.
“These homegrown actors are loud, fast, and opportunistic. What they lack in stealth, they compensate for in adaptability. That makes real-time visibility into authentication events and faster incident response cycles non-negotiable.”
Bridewell CTO, Martin Riley, adds that preparedness is vital. “If we compare recent attacks, one retailer has been far worse hit, because it wasn’t able to ‘pull the plug’ on non-essential services that prevented the spread of the attack,” he tells Assured Intelligence. “Do you know your organisation and technology enough to understand what is an operational and defendable cybersecurity position? What can you turn off, what impact will it have on the business, and what must you keep?”
Qodea CISO, Adam Casey, argues that security leaders must also go beyond the technical to drive cultural change through continuous awareness training and testing.
“Security is a shared responsibility and CISOs need to be reinforcing that vigilance is expected from everyone within the organisation. The M&S cyber attack demonstrated how conventional cybersecurity layers weren’t even a factor. They manipulated ‘outsourced’ IT staff through impersonation, then went straight for the jugular by targeting leadership,” he tells Assured Intelligence.
“CISOs are also going to need to put a focus on their outsourced operations. Recent attacks have shown that a third-party risk management programme is essential – and needs to be rock solid.”
Whatever freakish confluence of societal factors originally fomented The Com, it’s here now. This is the reality CISOs need to adapt to, and a new threat to consider in their risk planning.
This article first appeared on Assured Intelligence.
Content Credentials: A novel approach to tackling deepfakes
Posted: April 8, 2025 Filed under: cybersecurity, technology | Tags: AI, artificial intelligence, content credentials, deepfake, deepfakes, fraud, technology
Can you believe what you see online? Unfortunately for many people today, the answer is increasingly a resounding “no”. Deepfakes are bad news for many reasons. But for CISOs, they pose an outsized threat through their potential to amplify social engineering, business email compromise (BEC) and even social media scams.
There is certainly no silver bullet for a challenge that’s only set to grow as the technology behind deepfakes gets better and cheaper. But an initiative dubbed Content Credentials has already won plaudits from the likes of the NSA and the UK’s National Cyber Security Centre (NCSC). It could yet help society in general, and businesses in particular, to push back against a rising tide of online fakery.
Why we need it
Deepfakes have been circulating online for several years. These digitally altered or completely synthetic pieces of audio, video and image-based content were initially viewed with curiosity as fairly harmless, easy-to-spot fakes. But the technology has rapidly matured, supercharged by generative AI (GenAI), to the point where threat actors are now using it in everything from sextortion and online scams to child abuse material. The government is frantically looking for answers to what it describes as “a growing menace and an evolving threat”, citing figures that eight million fakes will be shared in 2025, up from 500,000 two years ago.
From a CISO perspective, there are several potential risks associated with malicious use of the technology, including:
Brand damage: Deepfake videos of CEOs and senior executives circulated online could be used to tarnish the corporate brand directly, perhaps to influence the stock price, or even to perpetrate investment scams and other fraud.
Direct financial loss: While the above scenario could also create significant financial risk, there are other more direct tactics threat actors can use to make money. One is by amplifying business email compromise (BEC) scams. Instead of sending an email to a finance team member, requesting a fund transfer, a cybercriminal could send manipulated audio or video impersonating a supplier or C-suite executive. The FBI has been warning about this for several years. BEC cost $2.9bn in 2023 alone, and the figure continues to rise.
Unauthorised access: Beyond BEC, deepfakes could also be deployed to amplify social engineering in an attempt to gain access to sensitive data and/or systems. One such technique spotted in the wild is the fake employee threat, which has already deceived one cybersecurity company. Faked images, or even video, could be used to add credibility to a candidate that would otherwise be turned away, such as a nation state operative or cybercriminal.
Account takeover/creation fraud: Cybercriminals are also using stolen biometrics data to create deepfakes of customers, in order to open new accounts or hijack existing ones. This is especially concerning for financial services firms. According to one study, deepfakes now account for a quarter of fraudulent attempts to pass motion-based biometrics checks.
The challenge is that deepfakes are increasingly difficult to tell apart from the real thing. And the technology is being commoditised on the cybercrime underground, lowering the barrier to entry for would-be fakers. There have even been warnings that deepfakes could be made even more realistic if large language models (LLMs) were trained with stolen or scraped personal information – to create an evil “digital twin” of a victim. The deepfake would be used as the front end of this avatar, who would look, sound and act like the real person in BEC, fake employee and other scams.
How Content Credentials works
Against this backdrop, many cybersecurity professionals are getting behind Content Credentials. Backed by the likes of Adobe, Google, Microsoft and – most recently – Cloudflare, the initiative was first proposed by the Coalition for Content Provenance and Authenticity (C2PA). Currently being fast-tracked to become global standard ISO 22144, it works as a content provenance and authentication mechanism.
A Content Credential is a set of “tamper-evident”, cryptographically signed metadata attached to a piece of content at the time of capture, editing or directly before publishing. If that content is edited and/or processed over time, it may accrue more Content Credentials, enabling the individual who has altered it to identify themselves and what they did. The idea, as the NSA puts it, is to create trust among content consumers through greater transparency, just as nutrition labels do with food.
The initiative is evolving as potential weaknesses are discovered. For example, recognising that trust in the metadata itself is paramount, efforts were made to enhance preservation and retrieval of this information. Thus, Durable Content Credentials were born, incorporating digital watermarking of media and “a robust media fingerprint matching system”.
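The “tamper-evident, cryptographically signed metadata” idea can be sketched in miniature. To be clear, this is not the real C2PA format – the standard uses certificate-based asymmetric signatures and a dedicated manifest container, whereas this toy uses a shared-key HMAC purely for illustration – but it shows the core property: because the signature covers both the content hash and the metadata, altering either one is detectable.

```python
import hashlib
import hmac
import json

# Toy stand-in for a signing identity. Real Content Credentials bind the
# signature to an X.509 certificate chain, not a shared secret.
SIGNING_KEY = b"demo-signing-key"

def attach_credential(content: bytes, metadata: dict) -> dict:
    """Bind metadata to a piece of content with a signature over both."""
    content_hash = hashlib.sha256(content).hexdigest()
    payload = content_hash + json.dumps(metadata, sort_keys=True)
    signature = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"metadata": metadata, "content_hash": content_hash, "signature": signature}

def verify_credential(content: bytes, credential: dict) -> bool:
    """Any change to the content or the metadata invalidates the signature."""
    payload = (hashlib.sha256(content).hexdigest()
               + json.dumps(credential["metadata"], sort_keys=True))
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])
```

An editor who legitimately alters the content would, in the real scheme, attach a fresh credential describing the change, building the chain of provenance the NSA’s “nutrition label” analogy describes.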
Progress will take time
If the standard takes off, it could be a game changer, argues Andy Parsons, senior director for content authenticity at Adobe.
“We’ve seen great momentum for real-world applications of Content Credentials, including integration into the recently launched Samsung Galaxy S25. They are also supported by all ‘big five’ camera manufacturers – Canon, Fujifilm, Leica, Nikon, and Sony – as well as by the BBC for BBC Verify,” he tells me.
“Where social media and other websites do not yet retain visible Content Credentials when content is posted on their platforms, we have released the Adobe Content Authenticity extension for Google Chrome to allow end users to view Content Credentials on any website.”
Cloudflare’s head of AI audit and media privacy, Will Allen, adds that it could be used as a “trusted authentication tool” to tackle BEC, social media scams and other deepfake content.
“This approach helps organisations filter out manipulated content, make informed decisions and reduce exposure to misinformation,” he tells me.
However, there are still limits to the initiative’s potential impact, especially given the growing speed, quality and accessibility of deepfake tools.
Although there is “active work underway” to support live video, according to Adobe’s Parsons, that support is not yet finalised. This could leave the door open for threat actors using real-time deepfake tools for BEC fraud. Trend Micro senior threat researcher, David Sancho, adds that until all sources watermark their content, the potential for a high rate of false negatives is amplified.
“Often, once you see it, you can’t unsee it. This is more relevant for disinformation campaigns, but also for some scams,” he continues. “The criminals may also be able to remove fingerprinting metadata from synthetic media.”
While Content Credentials offers a helping hand in the form of additional data points to study, it’s not a silver bullet.
“Instead, to stop BEC, a company needs to implement strong processes that force finance employees to double/triple check money transfers beyond a certain amount, especially out of working hours,” Sancho continues. “This makes BEC a much more difficult proposition for the criminal because they must fool two or three people, not only one.”
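Sancho’s process control – forcing extra independent checks on larger or out-of-hours transfers – is simple enough to express as a policy rule. The sketch below is purely illustrative: the threshold amount and the specific escalation steps are invented for the example, not taken from any real finance policy.

```python
from datetime import datetime

# Illustrative version of the "double/triple check" rule Sancho describes:
# the number of independent approvers scales with the transfer amount and
# whether the request arrives outside working hours. Thresholds are invented.
LARGE_TRANSFER = 50_000

def approvers_required(amount: float, when: datetime) -> int:
    out_of_hours = when.hour < 8 or when.hour >= 18 or when.weekday() >= 5
    required = 1  # every transfer needs at least one approver
    if amount >= LARGE_TRANSFER:
        required += 1  # a second person must independently confirm
    if out_of_hours:
        required += 1  # off-hours requests get extra scrutiny
    return required
```

As Sancho notes, the value of the rule is that a deepfaked voice or video now has to fool two or three people independently, not just one.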
Cloudflare’s Allen admits that take-up remains the key to success.
“The biggest challenge is adoption – making it easier for users to inspect and verify Content Credentials,” he says. “For this to be truly effective, verification needs to be effortless and accessible, wherever users encounter media – whether on social media platforms, websites, or apps.”
Adobe’s Parsons claims that its Content Authenticity Initiative (CAI) now has over 4,000 members, but agrees that end-user awareness will be key to its success.
“The more places Content Credentials show up, the more valuable they become. We also need to help build more healthy assessment of digital content and grow awareness of tools that are available,” he concludes. “Therefore, ensuring people are better educated to check for credentials and to be sceptical of content without them becomes even more essential.”
This article was first published on Assured Intelligence.
UK government security is foundering: here’s how to fix it
Posted: February 13, 2025 Filed under: cybersecurity, technology, Uncategorized | Tags: AI, cyber security, cyber security strategy, cybersecurity, government, public sector, security, technology, uk government
This article first appeared on Assured Intelligence.
We knew it was bad, but not as bad as this. On January 29 the National Audit Office (NAO) released a bombshell report revealing, in gory detail, the challenges facing central government cybersecurity leaders. Blaming skills gaps and funding shortages for much of the malaise, it warns that the cyber-threat to government is “severe and advancing quickly”, urging immediate action to protect vital public services.
The spending watchdog did not pull its punches. But the gaps in cyber resilience it identifies are so pronounced that fixing them will be extremely challenging, especially with a self-imposed deadline of 2030.
A giant target
There’s no doubting the massive target central government has painted on its back. The National Cyber Security Centre (NCSC) warns of a “diffuse and dangerous” threat from hostile states as well as cybercrime groups. Hacking tools and easy-to-use pre-packaged services are freely available online, as are breached credentials, including those linked to .gov email domains. The use of generative AI tools to upskill threat actors in penetration testing, and innovative new techniques like IT impersonation are already accelerating and improving outcomes for adversaries.
This matters for central government in particular, given the huge number of citizens that rely on public services. The NAO report cites NCSC figures claiming that 40% of incidents managed by the agency between 2020 and 2021 targeted the public sector. Breaches at NHS provider Synnovis and the British Library show the devastating impact and cost these can have.
Yet despite the ambition outlined in the Government Cyber Security Strategy: 2022–2030, plans appear to have languished under the previous administration.
What’s gone wrong?
The headline-grabbing part of the report is all about visibility and resilience, and the work of the Government Security Group (GSG) – the Cabinet Office body that oversees central government security. It claims that a 2023-24 assessment by the government’s new cyber assurance scheme, GovAssure, found that 58 critical departmental IT systems had “significant” gaps in cyber resilience, creating “extremely high” risk.
“The data highlighted multiple fundamental system controls that were at low levels of maturity across departments including asset management, protective monitoring, and response planning,” the report notes. “GSG reported to ministers the implication of these findings: the cyber-resilience risk to government was extremely high.”
Edwin Weijdema, EMEA field CTO at Veeam, argues that asset management, protective monitoring and incident response planning are three “interconnected pillars” vital to cybersecurity.
“If you don’t know about it, you can’t secure it – so a thorough asset inventory is the first step to knowing exactly what needs protection,” he tells Assured Intelligence.
“Once you have this visibility, protective monitoring of those assets provides real-time detection of suspicious activity, helping to prevent small issues from turning into major breaches. Finally, a robust response plan ensures you’re ready to recover quickly when incidents occur, turning potential chaos into controlled chaos with a smaller blast radius and much less damage tied to it.”
According to the NAO, the GSG also failed to include legacy IT systems in the GovAssure audit because many of its recommended controls were apparently not applicable to such technology. That has unwittingly created a significant visibility gap at the heart of government.
“In March 2024, departments reported using at least 228 legacy IT systems. Of these, 28% (63 of 228) were red-rated as there was a high likelihood and impact of operational and security risks occurring,” the NAO report notes.
Other critical cybersecurity challenges and failings highlighted by the NAO include:
- Until April 2023, the government did not collect “detailed, reliable data” about the cyber resilience of individual departments
- The government has not improved cyber resilience quickly enough to meet its aim to be “significantly hardened” to cyber-attack by 2025
- Departments still find it difficult to understand the roles and responsibilities of the cyber-related bodies at the centre of government
- GSG has no effective mechanisms in place to show whether its approach to government cybersecurity is effective, or even a plan to make government organisations cyber resilient by 2030
The NAO also slams individual departments for failing to meet their responsibilities to improve resilience. It claims that leaders “have not always recognised how cyber risk is relevant to their strategic goals” and that boards often don’t even include any members with cyber expertise.
James Morris, CEO of the non-profit Cybersecurity and Business Resilience Policy Centre (CSBR), argues that there’s plenty to be done.
“Cyber resilience needs to be hardwired into the processes of central government departments and made a priority for their core strategic and operational work,” he tells Assured Intelligence.
“It should also be identified as a core strategic priority for ministers and senior civil servants. Each department should identify where skill gaps are putting resilience at risk and plans should be put in place to improve cyber resilience skills among existing staff.”
Too few skills, not enough money
However, at the heart of the problem appear to be both money and talent. A cyber directorate set up by the GSG to lead cybersecurity improvement across government apparently had 32% of posts unfilled when first established. In 2023-24, a third of security roles in central government were either vacant or filled by temporary staff, with the share of vacancies in several departmental security teams over 50%.
“There are only two real options: increase the supply of cybersecurity skills, or recognise that market rates are what they are for cybersecurity skills, and pay them. Better still, do both,” says Ian Stretton, director at consulting firm Green Raven Limited. “But these are long-term fixes that will take years to effect.”
Attracting talent is made harder when departments must compete with deep-pocketed private sector organisations for a limited number of skilled professionals. The government announced in 2021 a £2.6bn funding boost for cyber, of which it apparently allocated £1.3bn to departments for cybersecurity and legacy IT remediation. However, since 2023, departments have “significantly reduced” the scope of improvement programmes, the NAO says. As of March 2024, departments did not have fully funded plans to remediate around half of the government’s legacy IT assets.
How to sort out this mess
In the absence of funding, it will be a tough ask to meet the recommendations set out by the NAO (see boxout). However, it is possible, according to the experts Assured Intelligence spoke to.
“Central government departments can boost cyber resilience – even in the face of legacy IT – by focusing on three core principles: speed, skills and accountability,” argues Veeam’s Weijdema.
“Speed in detection is crucial because the sooner you spot a breach, the less time attackers have to move laterally, exfiltrate data or disrupt critical services. Continuous log monitoring, threat intelligence feeds, and anomaly detection tools should be in place to catch potential intrusions in near real-time. Equally important is the ability to respond swiftly. Well-defined processes and empowered teams prevent small issues from escalating into large-scale crises.”
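The “near real-time” detection Weijdema describes can be sketched with a minimal example. This is a hypothetical illustration, not anything from the NAO report or Veeam: it flags an account when failed logins inside a sliding window cross a threshold, one of the simplest forms of the continuous log monitoring he recommends. The window, threshold, and event shape are all invented for the example.

```python
from collections import defaultdict, deque

# Hypothetical sliding-window monitor: flag an account once failed logins
# within the window exceed a threshold. Values are illustrative assumptions.
WINDOW_SECONDS = 300
FAIL_THRESHOLD = 5

class FailedLoginMonitor:
    def __init__(self, window=WINDOW_SECONDS, threshold=FAIL_THRESHOLD):
        self.window = window
        self.threshold = threshold
        self.failures = defaultdict(deque)  # account -> failure timestamps

    def observe(self, account: str, timestamp: float, success: bool) -> bool:
        """Feed one auth event; return True if the account should be flagged."""
        q = self.failures[account]
        if success:
            q.clear()  # a successful login resets the burst counter
            return False
        q.append(timestamp)
        while q and timestamp - q[0] > self.window:
            q.popleft()  # discard failures that fell out of the window
        return len(q) >= self.threshold
```

In a real deployment this per-account counter would be one signal among many feeding the threat intelligence and anomaly detection tooling Weijdema mentions, with a response playbook triggered when an alert fires.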
Government must also recognise the high demand for security professionals and pay competitive salaries, as well as offering clear career progression, and investing heavily in training to plug the skills gap, Weijdema adds. Security teams should be held accountable for the outcomes of the measures they take, he says.
“Finally, regular drills and exercises – like red-team attacks or simulated breaches – will help to instil a culture of digital emergency response,” Weijdema continues. “Just as physical first responders train constantly for disasters, a cyber workforce should practice containing threats under realistic conditions. Such exercises refine tactics, highlight weaknesses and foster collaboration.”
Green Raven’s Stretton agrees that government must find the money to compete with the private sector on salaries, but warns that this alone will not be enough.
“Even if there were enough cybersecurity professionals to go around, current cyber-defence strategies revolve around building higher and higher walls. But this isn’t a sustainable approach to cybersecurity, and cyber pros know it,” he tells Assured Intelligence.
“The problem is the world is still thinking about cybersecurity like medieval monarchs used to think about castles: just dig deeper ditches and build higher ramparts and it’ll be fine. Instead, we need to get smarter and focus defensive resources on where we know they are going to be needed.”
By making the most of AI-powered cyber-threat intelligence, government bodies can get back on the front foot against their adversaries, Stretton argues.
“Rather than constantly reacting to general threats, knowing who is coming after your organisation, and with what ‘weapons’, means you can remove the blindfold and react to what poses the greatest threat,” he says. “It’s analogous to how the security services work: there aren’t enough of them to keep us safe by sheer force of numbers, so they use sophisticated intelligence-gathering to pre-empt attacks and intercept attackers.”
The fact that the NAO report has been published at all is a positive sign. It signifies the new government’s recognition of the growing cyber-threat facing Whitehall, and its desire to achieve key parts of the 2022–2030 strategy by the end of the year. However, whether it can match this ambition with results remains to be seen.
The Singularity and the CIO: Discuss
Posted: August 26, 2016 Filed under: Uncategorized | Tags: AI, artificial intelligence, cio, forrester, IHS, machine learning, manufacturing, robots, the singularity
Sci-fi writers have been warning us about the coming of the singularity for a decade now. And while we’re some years away from having to contemplate such a future, AI, machine learning, big data and other technologies are developing at a pace which is already beginning to impact the global workforce.
I chatted to some experts on the subject for an upcoming feature to find out whether CIOs should be terrified or enthused by the prospect of robot workers.
The truth is that they’re already here, in many heavy industries like tech manufacturing. In May this year a local government official in the Chinese district of Kunshan announced that contract manufacturing giant Foxconn was reducing “employee strength” from 110,000 to 50,000 workers because of investments in robots. But what about when they spread into other industries? As far back as 2014, Gartner was predicting that as many as one in three jobs would be “converted to software, robots and smart machines by 2025” as software advances mean technology systems begin to replace cognitive tasks as well as factory jobs.
Meanwhile, a report from the Bank of England last year estimated up to 15 million UK jobs could be at risk of automation in the future. And a Deloitte/Oxford University study in January claimed 35% of today’s jobs have a “high chance” of being automated in the next 10-20 years.
For IHS Markit analyst, Wilmer Zhou, the coming robot hordes represent both a challenge and an opportunity for employers. Aside from manufacturing, he picked out several industries where jobs are potentially most at risk, including agriculture, logistics, and specialist domestic care. Most surprising for me was healthcare.
“It’s one of the industries with relatively high robot deployment such as surgical robots,” he told me via email. “IHS forecasts that robots in the medical industry will be one of the fastest growth sectors, with the decreasing of the average sale price of surgical robots and expansion of medical operation tasks.”
For CIOs looking to maximise the potential offered by these new automated workers, it will be important to create trust in the bots, argued Forrester principal analyst, Craig Le Clair.
“Cognitive systems can end up learning undesirable behavior from a weak training script or a bad customer experience. So build ‘airbags’ into the process,” he told me.
“Assess the level of trust required for your customer to release their financial details. Get compliance and legal colleagues on board as early as possible. Cognitive applications affect compliance in positive and negative ways. Be prepared to leverage the machine’s ability to explain recommendations in an understandable manner.”
Also important is to foster human and machine collaboration wherever possible, to reduce friction between the two.
“Rethink talent acquisition and your workplace vision,” Le Clair explained. “Some 78% of automation technologists foresee a mismatch of skill sets between today’s workers and the human/machine future, with the largest gaps in data, analytics, and cognitive skills.”
The bottom line is that robots and AI are here to stay. Whether they’ll have a net positive or negative impact on the workplace is up for discussion, but it may well hinge on how many so-called ‘higher value’ roles there are for humans to move into once they’ve been displaced by silicon.
