Forget ChatGPT: How Industrial AI is changing the world
Posted: December 30, 2025 Filed under: IT spending, technology | Tags: AI, artificial intelligence, business, chatgpt, digital transformation, factories, GenAI, IFS, industrial AI, industry 4.0, technology
Could AI genuinely be as transformative as every tech booster would have you believe? The hyperscalers think so. That’s why they’re estimated to spend nearly $500bn on infrastructure this year. Yet behind these increasingly outlandish numbers lies an awkward truth: 70-80% of AI projects fail, according to the Project Management Institute. Should CTOs and CIOs keep their powder dry until the hyperbole has died down?
Not if they’re considering industrial AI projects. While generative AI (GenAI) tools in the ChatGPT/Gemini/Claude mould are having limited success in the office, large-scale deployments in factories, power plants, construction sites and other locations are already making an impact.
According to a Reuters/Siemens report, a fifth of global organisations have already implemented industrial AI for energy management and predictive maintenance. More than 60% have done so, or plan to within the next three years, across a wider range of scenarios, including supply chain optimisation, real-time operational decision-making and production process optimisation.
Citing Microsoft figures that AI could contribute £550bn to the UK economy by 2035, techUK argues that successful deployments might even catapult the UK back to the pinnacle of global manufacturing. ERP specialist IFS is emphatic. It describes this as an “invisible revolution” powering a new wave of global growth. The firm’s own study points to growing momentum. It claims that while only a third (32%) of global businesses say they’ve deeply embedded AI into workflows and decision-making today, nearly 60% expect to do so within a year. At the same time, the share of companies “experimenting” with AI is expected to drop from 24% to just 7%.
“Industrial AI is a set of solutions and approaches relevant to companies that spend a lot of time with hard assets. You have 30% of the workforce behind a desk, while 70% are out there getting their hands dirty,” says IFS chief product officer, Christian Pederson. “That separates it from the noise you hear about AI in the press, which is focused on what can be done for that 30%.”
From innovation labs to frontline operations
At its heart, industrial AI is about automating and optimising business processes to improve decision-making, enhance efficiency and increase profitability. It requires the collection of vast volumes of data from sources like IoT sensors, cameras, and back-office systems, and the application of machine and deep learning algorithms to surface insights. In some cases, the AI powers robots to supercharge automation, and in others, it utilises edge computing for faster, localised processing. Agentic AI helps firms go even further, by working autonomously, dynamically and intelligently to achieve the goals it is set.
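To make that pipeline concrete, here is a deliberately minimal sketch of the detection step: a rolling z-score over a stream of sensor readings. Everything here is illustrative; real deployments train far richer models per asset class.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=50, threshold=3.0):
    """Flag readings that deviate sharply from the recent trend.

    A rolling z-score is about the simplest possible detector;
    it stands in here for the trained models real systems use.
    """
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(readings):
        if len(history) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                anomalies.append((i, value))
        history.append(value)
    return anomalies

# A vibration sensor oscillating normally, then spiking
vibration = [1.0, 1.2] * 50 + [9.5]
print(detect_anomalies(vibration))  # -> [(100, 9.5)]
```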
IFS claims that a large share of the 289 UK organisations it polled are already using automation AI (57%), predictive AI (46%) and agentic AI (37%). Pederson cites one such business – which provides “washroom, healthcare and floorcare hygiene” services – that has transformed its approach to field service using industrial AI.
The company performs more than five million service visits annually for clients, but was struggling to meet its SLAs due to persistent staff shortages, he says. By applying AI algorithms to staff schedules, client information, routing data and even electric vehicle charging telemetry, Pederson claims, it was able to drive a 40% increase in technician productivity. “You can then decide what to do with that increased productivity,” he explains. “Do you get more engagements or realise some savings? That’s obviously up to the individual company and their situation.”
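IFS has not published the algorithms behind that deployment, but the core dispatch problem can be sketched with a naive greedy scheduler. All field names and the flat travel-time function below are invented for illustration; production schedulers use constraint solvers with many more variables (SLAs, shift patterns, EV charge levels and so on).

```python
def assign_jobs(jobs, technicians, travel_minutes):
    """Greedy dispatcher: give each job, most urgent first, to the
    qualified technician who can reach it soonest."""
    schedule = {t["id"]: [] for t in technicians}
    free_at = {t["id"]: 0 for t in technicians}  # minutes into the shift
    for job in sorted(jobs, key=lambda j: j["due"]):
        qualified = [t for t in technicians if job["skill"] in t["skills"]]
        if not qualified:
            continue  # a real system would escalate, not drop the job
        best = min(qualified,
                   key=lambda t: free_at[t["id"]] + travel_minutes(t, job))
        start = free_at[best["id"]] + travel_minutes(best, job)
        free_at[best["id"]] = start + job["duration"]
        schedule[best["id"]].append((job["id"], start))
    return schedule

techs = [{"id": "t1", "skills": {"hygiene"}}, {"id": "t2", "skills": {"hygiene"}}]
jobs = [{"id": "j1", "skill": "hygiene", "due": 60, "duration": 30},
        {"id": "j2", "skill": "hygiene", "due": 90, "duration": 30}]
print(assign_jobs(jobs, techs, lambda t, j: 15))  # flat 15-minute travel time
# -> {'t1': [('j1', 15)], 't2': [('j2', 15)]}
```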
Another standout case is recycling giant TOMRA. It collects large volumes of data from its recycling machines at client sites, and then applies anomaly detection algorithms to improve first-time fix rates for engineers.
“You get the data in from IoT and you trigger that as an anomaly,” says Pederson. “You analyse the anomaly against all your historic records – other incidents that have happened with customers and how they have been fixed. You relate it to your knowledge base articles. And then you relate it to your inventory on your service vans, like which service vans and which technicians are equipped to do the job.
“So it’s the whole estate of structured, unstructured and processed data. In the past, they would send a technician out, and they could get it right 84% of the time. Now they have improved their first-time fix rate to 97%.”
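The flow Pederson describes can be approximated in a few lines. The sketch below is hypothetical, with invented machine IDs, fault codes and parts, not TOMRA’s actual system; it simply matches a new anomaly against historic incidents, picks the most common successful fix and finds a van stocked with the right parts.

```python
from collections import Counter

def triage(anomaly, incidents, vans):
    """Match a new anomaly against historic incidents, pick the most
    common successful fix, then find a van stocked for that fix."""
    similar = [i for i in incidents
               if i["machine"] == anomaly["machine"]
               and i["fault_code"] == anomaly["fault_code"]]
    if not similar:
        return None  # no precedent: escalate to a human dispatcher
    fix = Counter(i["fix"] for i in similar).most_common(1)[0][0]
    parts = next(i["parts"] for i in similar if i["fix"] == fix)
    for van in vans:
        if set(parts) <= set(van["stock"]):
            return {"fix": fix, "van": van["id"], "parts": parts}
    return None  # right fix known, but no van is equipped today

incidents = [
    {"machine": "RVM-200", "fault_code": "E31", "fix": "replace belt",
     "parts": ["belt-7"]},
    {"machine": "RVM-200", "fault_code": "E31", "fix": "replace belt",
     "parts": ["belt-7"]},
]
vans = [{"id": "van-3", "stock": ["belt-7", "sensor-2"]}]
print(triage({"machine": "RVM-200", "fault_code": "E31"}, incidents, vans))
# -> {'fix': 'replace belt', 'van': 'van-3', 'parts': ['belt-7']}
```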
Both this and the aforementioned field service deployment feature an “agentic dispatcher” which autonomously creates and publishes the schedules to the relevant service technicians, updates their calendar and suggests the best route to take. “In the very near future, AI agents will not only be helping to address work for people behind a desk, but guiding robots directly,” says Pederson.
Daniel Basile, vice-president of field service at TOMRA North America, adds that the company is also using what-if scenario planning to better manage and schedule its field resources. “It essentially offers you a test environment right within production, which saves weeks, if not months, in assessing the impact of different potential changes,” he says.
Meanwhile, an AI-powered co-pilot tool has helped the firm to reduce onboarding time by up to 50%, boosting field readiness. “Embedded AI is helping TOMRA capture knowledge from more senior technicians. Instead of just ‘what does the manual say?’ we can use AI to digitise the real experience of our most tenured technicians, to help us capture and share their knowledge,” says Basile. “Rather than an employee needing 30 minutes or an hour to sift through the information of an 800-page manual, they can ask a question and in seconds the AI will return the answer, what page(s) it’s on, and links to any supporting documents.”
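IFS has not detailed how the co-pilot is built, but the retrieval step Basile describes can be approximated with off-the-shelf tools. Below is a toy sketch using TF-IDF similarity from scikit-learn over an invented three-page “manual”; a production co-pilot would more likely pair an embedding index with an LLM.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical manual: one entry per page
pages = [
    {"page": 112, "text": "To clear a conveyor jam, power down and open panel B."},
    {"page": 340, "text": "Sensor calibration requires the service menu, code 77."},
    {"page": 615, "text": "Replace the compactor belt every 2,000 operating hours."},
]

vectoriser = TfidfVectorizer()
matrix = vectoriser.fit_transform([p["text"] for p in pages])

def ask(question, top_k=1):
    """Return the most relevant page(s) for a technician's question."""
    scores = cosine_similarity(vectoriser.transform([question]), matrix)[0]
    ranked = sorted(zip(scores, pages), key=lambda pair: -pair[0])
    return [(p["page"], p["text"]) for score, p in ranked[:top_k] if score > 0]

print(ask("How do I clear a conveyor jam?"))
# -> [(112, 'To clear a conveyor jam, power down and open panel B.')]
```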
Other examples abound. One is Cheer Pack, which makes pouches for various food and beverage clients. It uses AI to control robots on the factory floor, freeing staff to focus on less repetitive labour and saving millions of dollars each year. Another is Schneider Electric, which is teaming up with power system specialist ETAP. The duo are using AI-powered digital twin technology to simulate how AI factories operate. By running advanced analytics against these simulations, they can run what-if scenarios, predictive maintenance and more to improve their design.
Overcoming a human-shaped challenge
The potential benefits of successful industrial AI deployments speak for themselves. Some 91% of UK organisations IFS spoke to claim the technology has helped improve their profitability, while 95% believe it will have a positive impact on the environment. But to harness its full potential, newcomers will need to overcome some potentially sticky challenges.
The first can be expressed simply: garbage in, garbage out. Data must be clean, accurate and high quality if organisations are to emulate the likes of TOMRA and Schneider Electric. It must also be protected from tampering and theft. One study claims that a quarter (26%) of UK and US businesses suffered a data poisoning attack last year.
Legacy technology and skills gaps also loom large. More than half of the global organisations IFS spoke to estimate that up to 60% of their workforce will need retraining for AI tools; a third say the figure could be 100%. Yet this assumes employees are willing participants in the journey. Cultural resistance is not to be underestimated, especially if workers fear their jobs may be at risk.
“The biggest challenge with these things is not the technology, it’s the change management,” says Pederson. “Service technicians often have a years-long relationship with their dispatchers, who know the preferences of each technician. So there are challenges when you walk into this area with AI.”
Yet humans are vital to the success of such projects – not only as end users of AI, but also as supervisors. “With the use of AI in industrial environments, particularly in safety-critical settings, ensuring human oversight is embedded in processes and procedures is vital,” explains Usman Ikhlaq, AI programme manager at techUK. He’s ultimately optimistic that organisations can overcome challenges around data quality, skills, security and leadership buy-in.
“Organisations are seeing tangible results by implementing robust cybersecurity measures, enforcing strong data governance, investing in workforce upskilling, and aligning AI initiatives with strategic business objectives,” Ikhlaq explains. “When these barriers are addressed, industrial AI not only optimises real-time operations and reduces unplanned downtime, but also empowers workers to make better decisions, accelerates knowledge transfer, and enables safer, more autonomous operations in complex industrial environments.”
How far will all this innovation take British companies? TechUK is unsurprisingly bullish, citing the government’s recently published Industrial Strategy, which claims it will “prioritise frontier technologies with the greatest growth potential”, such as AI. Yet the market moves faster than government policy. IFS’s Pederson claims the firm “can’t come up with ideas quickly enough” for its customers. Against this backdrop, tech leaders who prefer to wait and see may find they’re soon left behind.
(This article first appeared on Tech Monitor)
Content Credentials: A novel approach to tackling deepfakes
Posted: April 8, 2025 Filed under: cybersecurity, technology | Tags: AI, artificial intelligence, content credentials, deepfake, deepfakes, fraud, technology
Can you believe what you see online? Unfortunately for many people today, the answer is increasingly a resounding “no”. Deepfakes are bad news for many reasons. But for CISOs, they pose an outsized threat through their potential to amplify social engineering, business email compromise (BEC) and even social media scams.
There is certainly no silver bullet for a challenge that’s only set to grow as the technology behind deepfakes gets better and cheaper. But an initiative dubbed Content Credentials has already won plaudits from the likes of the NSA and the UK’s National Cyber Security Centre (NCSC). It could yet help society in general, and businesses in particular, to push back against a rising tide of online fakery.
Why we need it
Deepfakes have been circulating online for several years. These digitally altered or completely synthetic pieces of audio, video and image-based content were initially viewed with curiosity as fairly harmless, easy-to-spot fakes. But the technology has rapidly matured, supercharged by generative AI (GenAI), to the point where threat actors are now using it in everything from sextortion and online scams to child abuse material. The government is frantically looking for answers to what it describes as “a growing menace and an evolving threat”, citing figures that eight million fakes will be shared in 2025, up from 500,000 two years ago.
From a CISO perspective, there are several potential risks associated with malicious use of the technology, including:
Brand damage: Deepfake videos of CEOs and senior executives circulated online could be used to tarnish the corporate brand directly, perhaps to influence the stock price, or even to perpetrate investment scams and other fraud.
Direct financial loss: While the above scenario could also create significant financial risk, there are other more direct tactics threat actors can use to make money. One is by amplifying business email compromise (BEC) scams. Instead of sending an email to a finance team member, requesting a fund transfer, a cybercriminal could send manipulated audio or video impersonating a supplier or C-suite executive. The FBI has been warning about this for several years. BEC cost $2.9bn in 2023 alone, and the figure continues to rise.
Unauthorised access: Beyond BEC, deepfakes could also be deployed to amplify social engineering in an attempt to gain access to sensitive data and/or systems. One such technique spotted in the wild is the fake employee threat, which has already deceived one cybersecurity company. Faked images, or even video, could be used to add credibility to a candidate that would otherwise be turned away, such as a nation state operative or cybercriminal.
Account takeover/creation fraud: Cybercriminals are also using stolen biometrics data to create deepfakes of customers, in order to open new accounts or hijack existing ones. This is especially concerning for financial services firms. According to one study, deepfakes now account for a quarter of fraudulent attempts to pass motion-based biometrics checks.
The challenge is that deepfakes are increasingly difficult to tell apart from the real thing. And the technology is being commoditised on the cybercrime underground, lowering the barrier to entry for would-be fakers. There have even been warnings that deepfakes could be made even more realistic if large language models (LLMs) were trained with stolen or scraped personal information – to create an evil “digital twin” of a victim. The deepfake would be used as the front end of this avatar, who would look, sound and act like the real person in BEC, fake employee and other scams.
How Content Credentials works
Against this backdrop, many cybersecurity professionals are getting behind Content Credentials. Backed by the likes of Adobe, Google, Microsoft and – most recently – Cloudflare, the initiative was first proposed by the Coalition for Content Provenance and Authenticity (C2PA). Currently being fast-tracked to become global standard ISO 22144, it works as a content provenance and authentication mechanism.
A Content Credential is a set of “tamper-evident”, cryptographically signed metadata attached to a piece of content at the time of capture, editing or directly before publishing. If that content is edited and/or processed over time, it may accrue more Content Credentials, enabling the individual who has altered it to identify themselves and what they did. The idea, as the NSA puts it, is to create trust among content consumers through greater transparency, just as nutrition labels do with food.
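The C2PA specification defines the precise manifest and certificate-chain formats, but the underlying idea of tamper-evident signed metadata can be illustrated generically. The sketch below uses a bare Ed25519 keypair and an ad-hoc JSON manifest, not the real C2PA structures.

```python
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def make_credential(content: bytes, claims: dict, key) -> dict:
    """Bind claims (who/what/when) to a hash of the content and sign both."""
    manifest = {"content_sha256": hashlib.sha256(content).hexdigest(),
                "claims": claims}
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": key.sign(payload).hex()}

def verify_credential(content: bytes, credential: dict, public_key) -> bool:
    """Tamper-evident check: any change to content or metadata fails."""
    manifest = credential["manifest"]
    if manifest["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # the content was altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(credential["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # the metadata itself was tampered with

key = ed25519.Ed25519PrivateKey.generate()
photo = b"...image bytes..."
cred = make_credential(photo, {"tool": "CameraApp 1.0", "action": "captured"}, key)
print(verify_credential(photo, cred, key.public_key()))        # True
print(verify_credential(b"tampered", cred, key.public_key()))  # False
```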
The initiative is evolving as potential weaknesses are discovered. For example, recognising that trust in the metadata itself is paramount, efforts were made to enhance preservation and retrieval of this information. Thus, Durable Content Credentials were born, incorporating digital watermarking of media and “a robust media fingerprint matching system”.
Progress will take time
If the standard takes off, it could be a game changer, argues Andy Parsons, senior director for content authenticity at Adobe.
“We’ve seen great momentum for real-world applications of Content Credentials which includes being integrated into the recently launched Samsung Galaxy S25. They are also supported by all ‘big five’ camera manufacturers – Canon, Fujifilm, Leica, Nikon, and Sony – as well as by the BBC for BBC Verify,” he tells me.
“Where social media and other websites do not yet retain visible Content Credentials when content is posted on their platforms, we have released the Adobe Content Authenticity extension for Google Chrome to allow end users to view Content Credentials on any website.”
Cloudflare’s head of AI audit and media privacy, Will Allen, adds that it could be used as a “trusted authentication tool” to tackle BEC, social media scams and other deepfake content.
“This approach helps organisations filter out manipulated content, make informed decisions and reduce exposure to misinformation,” he tells me.
However, there are still limits to the initiative’s potential impact, especially given the growing speed, quality and accessibility of deepfake tools.
Although there is “active work underway” to support live video, according to Adobe’s Parsons, that support is not yet finalised. This could leave the door open for threat actors using real-time deepfake tools for BEC fraud. Trend Micro senior threat researcher, David Sancho, adds that until all sources watermark their content, the potential for a high rate of false negatives is amplified.
“Often, once you see it, you can’t unsee it. This is more relevant for disinformation campaigns, but also for some scams,” he continues. “The criminals may also be able to remove fingerprinting metadata from synthetic media.”
While Content Credentials offers a helping hand in the form of additional data points to study, it’s not a silver bullet.
“Instead, to stop BEC, a company needs to implement strong processes that force finance employees to double/triple check money transfers beyond a certain amount, especially out of working hours,” Sancho continues. “This makes BEC a much more difficult proposition for the criminal because they must fool two or three people, not only one.”
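In code, such a control is little more than a policy gate in the payments workflow. A toy sketch, with an invented threshold and working hours:

```python
from datetime import datetime

APPROVAL_THRESHOLD = 25_000  # invented figure; set per company policy

def approvers_required(amount: float, when: datetime) -> int:
    """Dual-control rule of the kind Sancho describes: large or
    out-of-hours transfers need more than one human sign-off."""
    out_of_hours = when.hour < 8 or when.hour >= 18 or when.weekday() >= 5
    if amount >= APPROVAL_THRESHOLD and out_of_hours:
        return 3
    if amount >= APPROVAL_THRESHOLD or out_of_hours:
        return 2
    return 1

print(approvers_required(50_000, datetime(2025, 4, 8, 22, 0)))  # -> 3
```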
Cloudflare’s Allen admits that take-up remains the key to success.
“The biggest challenge is adoption – making it easier for users to inspect and verify Content Credentials,” he says. “For this to be truly effective, verification needs to be effortless and accessible, wherever users encounter media – whether on social media platforms, websites, or apps.”
Adobe’s Parsons claims that its Content Authenticity Initiative (CAI) now has over 4,000 members, but agrees that end-user awareness will be key to its success.
“The more places Content Credentials show up, the more valuable they become. We also need to help build more healthy assessment of digital content and grow awareness of tools that are available,” he concludes. “Therefore, ensuring people are better educated to check for credentials and to be sceptical of content without them becomes even more essential.”
This article was first published on Assured Intelligence.
The Singularity and the CIO: Discuss
Posted: August 26, 2016 Filed under: Uncategorized | Tags: AI, artificial intelligence, cio, forrester, IHS, machine learning, manufacturing, robots, the singularity
Sci-fi writers have been warning us about the coming of the singularity for a decade now. And while we’re some years away from having to contemplate such a future, AI, machine learning, big data and other technologies are developing at a pace which is already beginning to impact the global workforce.
I chatted to some experts on the subject for an upcoming feature to find out whether CIOs should be terrified or enthused by the prospect of robot workers.
The truth is that they’re already here in many heavy industries, such as tech manufacturing. In May this year, a local government official in the Chinese district of Kunshan announced that contract manufacturing giant Foxconn was reducing “employee strength” from 110,000 to 50,000 workers because of investments in robots. But what about when they spread into other industries? As far back as 2014, Gartner was predicting that as many as one in three jobs would be “converted to software, robots and smart machines by 2025”, as software advances mean technology systems begin to replace cognitive tasks as well as factory jobs.
Meanwhile, a report from the Bank of England last year estimated up to 15 million UK jobs could be at risk of automation in the future. And a Deloitte/Oxford University study in January claimed 35% of today’s jobs have a “high chance” of being automated in the next 10-20 years.
For IHS Markit analyst, Wilmer Zhou, the coming robot hordes represent both a challenge and an opportunity for employers. Aside from manufacturing, he picked out several industries where jobs are potentially most at risk, including agriculture, logistics and specialist domestic care. Most surprising for me was healthcare.
“It’s one of the industries with relatively high robot deployment such as surgical robots,” he told me via email. “IHS forecasts that robots in the medical industry will be one of the fastest growth sectors, with the decreasing of the average sale price of surgical robots and expansion of medical operation tasks.”
For CIOs looking to maximise the potential offered by these new automated workers, it will be important to create trust in the bots, argued Forrester principal analyst, Craig Le Clair.
“Cognitive systems can end up learning undesirable behavior from a weak training script or a bad customer experience. So build ‘airbags’ into the process,” he told me.
“Assess the level of trust required for your customer to release their financial details. Get compliance and legal colleagues on board as early as possible. Cognitive applications affect compliance in positive and negative ways. Be prepared to leverage the machine’s ability to explain recommendations in an understandable manner.”
It’s also important to foster collaboration between humans and machines wherever possible, to reduce friction between the two.
“Rethink talent acquisition and your workplace vision,” Le Clair explained. “Some 78% of automation technologists foresee a mismatch of skill sets between today’s workers and the human/machine future, with the largest gaps in data, analytics, and cognitive skills.”
The bottom line is that robots and AI are here to stay. Whether they’ll have a net positive or negative impact on the workplace is up for discussion, but it may well hinge on how many so-called ‘higher value’ roles there are for humans to move into once they’ve been displaced by silicon.
