UK government security is foundering: here’s how to fix it
Posted: February 13, 2025 Filed under: cybersecurity, technology, Uncategorized | Tags: AI, cyber security, cyber security strategy, cybersecurity, government, public sector, security, technology, uk government
This article first appeared on Assured Intelligence.
We knew it was bad, but not as bad as this. On January 29 the National Audit Office (NAO) released a bombshell report revealing, in gory detail, the challenges facing central government cybersecurity leaders. Blaming skills gaps and funding shortages for much of the malaise, it warns that the cyber-threat to government is “severe and advancing quickly”, urging immediate action to protect vital public services.
The spending watchdog did not pull its punches. But the gaps in cyber resilience it identifies are so pronounced that fixing them will be extremely challenging, especially with a self-imposed deadline of 2030.
A giant target
There’s no doubting the massive target central government has painted on its back. The National Cyber Security Centre (NCSC) warns of a “diffuse and dangerous” threat from hostile states as well as cybercrime groups. Hacking tools and easy-to-use pre-packaged services are freely available online, as are breached credentials, including those linked to .gov email domains. The use of generative AI tools to upskill threat actors in areas such as penetration testing, and of newer techniques like IT impersonation, is already accelerating and improving outcomes for adversaries.
This matters for central government in particular, given the huge number of citizens that rely on public services. The NAO report cites NCSC figures claiming that 40% of incidents managed by the agency between 2020 and 2021 targeted the public sector. Breaches at NHS provider Synnovis and the British Library show the devastating impact and cost these can have.
Yet despite the ambition outlined in the Government Cyber Security Strategy: 2022–2030, plans appear to have languished under the previous administration.
What’s gone wrong?
The headline-grabbing part of the report is all about visibility and resilience, and the work of the Government Security Group (GSG) – the Cabinet Office body that oversees central government security. It claims that a 2023-24 assessment by the government’s new cyber assurance scheme, GovAssure, found that 58 critical departmental IT systems had “significant” gaps in cyber resilience, creating “extremely high” risk.
“The data highlighted multiple fundamental system controls that were at low levels of maturity across departments including asset management, protective monitoring, and response planning,” the report notes. “GSG reported to ministers the implication of these findings: the cyber-resilience risk to government was extremely high.”
Edwin Weijdema, EMEA field CTO at Veeam, argues that asset management, protective monitoring and incident response planning are three “interconnected pillars” vital to cybersecurity.
“If you don’t know about it, you can’t secure it – so a thorough asset inventory is the first step to knowing exactly what needs protection,” he tells Assured Intelligence.
“Once you have this visibility, protective monitoring of those assets provides real-time detection of suspicious activity, helping to prevent small issues from turning into major breaches. Finally, a robust response plan ensures you’re ready to recover quickly when incidents occur, turning potential chaos into controlled chaos with a smaller blast radius and much less damage tied to it.”
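To make the first of those pillars concrete – you can’t secure what you don’t know about – here’s a minimal sketch in Python (the hostnames and inventory fields are invented for illustration) that reconciles assets discovered on the network against a recorded inventory and flags anything unknown or unmonitored.

```python
# Minimal sketch: reconcile discovered assets against a recorded inventory.
# Hostnames and inventory fields are hypothetical, for illustration only.

discovered = {"web-01", "web-02", "db-01", "legacy-hr-app"}  # e.g. from a network scan

inventory = {
    "web-01": {"owner": "digital team", "monitored": True},
    "web-02": {"owner": "digital team", "monitored": True},
    "db-01":  {"owner": "data team",    "monitored": False},
}

unknown = discovered - inventory.keys()  # assets nobody has recorded
unmonitored = {host for host, meta in inventory.items() if not meta["monitored"]}

print(f"Unknown assets (cannot be secured): {sorted(unknown)}")
print(f"Known but unmonitored assets: {sorted(unmonitored)}")
```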
According to the NAO, the GSG also failed to include legacy IT systems in the GovAssure audit because many of its recommended controls were apparently not applicable to such technology. That has unwittingly created a significant visibility gap at the heart of government.
“In March 2024, departments reported using at least 228 legacy IT systems. Of these, 28% (63 of 228) were red-rated as there was a high likelihood and impact of operational and security risks occurring,” the NAO report notes.
Other critical cybersecurity challenges and failings highlighted by the NAO include:
- Until April 2023, the government did not collect “detailed, reliable data” about the cyber resilience of individual departments
- The government has not improved cyber resilience quickly enough to meet its aim to be “significantly hardened” to cyber-attack by 2025
- Departments still find it difficult to understand the roles and responsibilities of the cyber-related bodies at the centre of government
- GSG has no effective mechanism for judging whether its approach to government cybersecurity is working, nor even a plan to make government organisations cyber resilient by 2030
The NAO also slams individual departments for failing to meet their responsibilities to improve resilience. It claims that leaders “have not always recognised how cyber risk is relevant to their strategic goals” and that boards often don’t even include any members with cyber expertise.
James Morris, CEO of the non-profit Cybersecurity and Business Resilience Policy Centre (CSBR), argues that there’s plenty to be done.
“Cyber resilience needs to be hardwired into the processes of central government departments and made a priority for their core strategic and operational work,” he tells Assured Intelligence.
“It should also be identified as a core strategic priority for ministers and senior civil servants. Each department should identify where skill gaps are putting resilience at risk and plans should be put in place to improve cyber resilience skills among existing staff.”
Too few skills, not enough money
However, at the heart of the problem appear to be both money and talent. A cyber directorate set up by the GSG to lead cybersecurity improvement across government apparently had 32% of posts unfilled when first established. In 2023-24, a third of security roles in central government were either vacant or filled by temporary staff, with the share of vacancies in several departmental security teams over 50%.
“There are only two real options: increase the supply of cybersecurity skills, or recognise that market rates are what they are for cybersecurity skills, and pay them. Better still, do both,” says Ian Stretton, director at consulting firm Green Raven Limited. “But these are long-term fixes that will take years to effect.”
Attracting talent is made harder when departments must compete with deep-pocketed private sector organisations for a limited number of skilled professionals. In 2021, the government announced a £2.6bn funding boost for cyber, of which £1.3bn was apparently allocated to departments for cybersecurity and legacy IT remediation. However, since 2023, departments have “significantly reduced” the scope of improvement programmes, the NAO says. As of March 2024, departments did not have fully funded plans to remediate around half of the government’s legacy IT assets.
How to sort out this mess
In the absence of funding, it will be a tough ask to meet the recommendations set out by the NAO (see boxout). However, it is possible, according to the experts Assured Intelligence spoke to.
“Central government departments can boost cyber resilience – even in the face of legacy IT – by focusing on three core principles: speed, skills and accountability,” argues Veeam’s Weijdema.
“Speed in detection is crucial because the sooner you spot a breach, the less time attackers have to move laterally, exfiltrate data or disrupt critical services. Continuous log monitoring, threat intelligence feeds, and anomaly detection tools should be in place to catch potential intrusions in near real-time. Equally important is the ability to respond swiftly. Well-defined processes and empowered teams prevent small issues from escalating into large-scale crises.”
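As a toy illustration of that kind of near real-time detection – not any specific product, nor the approach Weijdema’s firm takes – the sketch below counts failed logins per source IP in fixed-size windows and raises an alert when a count jumps well above the previous window. The log format, window size and threshold are all assumptions made for the example.

```python
# Toy anomaly detection sketch: flag source IPs whose failed-login count
# spikes well above their previous window. All thresholds are illustrative.
from collections import Counter, deque

WINDOW_SIZE = 60     # events per window
SPIKE_FACTOR = 5     # alert if count is 5x the previous window's count

previous_windows = deque(maxlen=1)  # keep only the last completed window
current_window = Counter()
events_seen = 0

def ingest(event: dict) -> None:
    """event is assumed to look like {'ip': '203.0.113.7', 'outcome': 'failure'}."""
    global current_window, events_seen
    if event.get("outcome") == "failure":
        current_window[event["ip"]] += 1
    events_seen += 1
    if events_seen % WINDOW_SIZE == 0:
        if previous_windows:
            baseline = previous_windows[-1]
            for ip, count in current_window.items():
                if count >= SPIKE_FACTOR * max(baseline.get(ip, 0), 1):
                    print(f"ALERT: failed logins from {ip} spiked to {count}")
        previous_windows.append(current_window)
        current_window = Counter()
```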
Government must also recognise the high demand for security professionals and pay competitive salaries, as well as offering clear career progression, and investing heavily in training to plug the skills gap, Weijdema adds. Security teams should be held accountable for the outcomes of the measures they take, he says.
“Finally, regular drills and exercises – like red-team attacks or simulated breaches – will help to instil a culture of digital emergency response,” Weijdema continues. “Just as physical first responders train constantly for disasters, a cyber workforce should practice containing threats under realistic conditions. Such exercises refine tactics, highlight weaknesses and foster collaboration.”
Green Raven’s Stretton agrees that government must find the money to compete with the private sector on salaries, but warns that this alone will not be enough.
“Even if there were enough cybersecurity professionals to go around, current cyber-defence strategies revolve around building higher and higher walls. But this isn’t a sustainable approach to cybersecurity, and cyber pros know it,” he tells Assured Intelligence.
“The problem is the world is still thinking about cybersecurity like medieval monarchs used to think about castles: just dig deeper ditches and build higher ramparts and it’ll be fine. Instead, we need to get smarter and focus defensive resources on where we know they are going to be needed.”
By making the most of AI-powered cyber-threat intelligence, government bodies can get back on the front foot against their adversaries, Stretton argues.
“Rather than constantly reacting to general threats, knowing who is coming after your organisation, and with what ‘weapons’, means you can remove the blindfold and react to what poses the greatest threat,” he says. “It’s analogous to how the security services work: there aren’t enough of them to keep us safe by sheer force of numbers, so they use sophisticated intelligence-gathering to pre-empt attacks and intercept attackers.”
The fact that the NAO report has been published at all is a positive sign. It signifies the new government’s recognition of the growing cyber-threat facing Whitehall, and its desire to achieve key parts of the 2022–2030 strategy by the end of the year. However, whether it can match this ambition with results remains to be seen.
The government’s new risk register is heavy on cyber. Is that a good or bad thing?
Posted: September 8, 2023 Filed under: Uncategorized | Tags: cyber resilience, cybersecurity, government, national risk register
What are the chances of a catastrophic cyber incident occurring in the UK in the next two years? How many might die, or be maimed in such an incident? And how much might it cost the country? These are the kinds of unpleasant questions the government seeks to answer in its latest National Risk Register (NRR).
Since 2008, the report has been published to help businesses running critical national infrastructure (CNI), and other organisations, enhance their resilience to potential risks. The big difference between now and then is that cyber is one of nine key “themes” examined in the report.
I recently spoke to some experts to write an upcoming feature for Assured Intelligence.
What the NRR says
For the first time, the NRR was compiled from information in the National Security Risk Assessment (NSRA), a classified document written with help from government experts. It highlights potential cyber risk across multiple scenarios. These involve data theft and/or disruption to:
- Gas infrastructure
- Electricity infrastructure
- Civil nuclear facilities
- Fuel supply infrastructure
- Government
- The health and social care system
- The transport sector
- Telecommunications systems
- UK financial infrastructure
- A UK retail bank
The NRR ranks the likelihood of such attacks happening in the next two years as a “4” on a scale of 1–5, with 5 being the most likely (>25%). That equates to a “highly unlikely” risk with a “moderate” impact. However, as mild as this sounds, even a moderate incident could lead to up to 1000 fatalities and casualties of up to 2000, with losses in the billions of pounds. By contrast, the estimated economic damage from cyber incidents in 2000 was pegged at £10-100m.
That’s a reflection of the digital world we live in, as is the mention of AI as a potential chronic risk (as opposed to the acute risks highlighted above). Chronic risks, the NRR says, are manifest over a longer period of time and can make acute risks “more likely and serious”.
Should we be concerned?
Egress VP of threat intelligence, Jack Chapman, believes the government has it about right.
“I agree with the government’s risk assessment and its accuracy based on historic threats. Obviously this strongly depends on the geo-political landscape and how it evolves,” he told me.
“However, there’s been an increase in digitalisation in this space, meaning the risks and impact are increasing. There’s also a far higher level of uncertainty with the government’s assessment in comparison to previous reports.”
However, it’s not all doom and gloom, as steps are being taken to mitigate these acute cyber risks and build resilience into CNI, he added.
“It’s important to note that more active work is being done around cybersecurity than ever before; from putting security-by-design at the heart of new projects, to the impact the NCSC is having in the sector to help mitigate this risk,” Chapman said.
How can CNI hit back?
The big question is: how exactly can CNI providers enhance resilience? Arun Kumar, regional director at ManageEngine, believes AI may hold the key, helping to identify threats “faster and more accurately” than humans can. But he goes further.
“Regulation will also play a vital role in carefully managing the negative impact of AI. It’s important to maintain strong security practices such as compliance with NIST and GDPR regulations,” he told me.
“Change needs to be foreseen and carefully managed—striking a balance between utilising the benefits of AI and limiting the negative side. To this end, collaboration is also paramount, both internally and externally within the cybersecurity community, encompassing researchers, professionals, enterprises and policymakers.”
Other best practices could include enhanced password management, vulnerability scanning and prompt patching, and user education to ward off the threat of phishing. To that we could add several more, outlined by the NCSC here. It’s a tall order for CNI firms on increasingly tight budgets. But the alternative is undoubtedly worse.
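On the password-management point specifically, one widely used control is checking candidate passwords against known breach corpuses. The sketch below queries the public Pwned Passwords range API, which uses k-anonymity so that only the first five characters of the SHA-1 hash ever leave the machine; treat it as an illustration rather than a production-ready control.

```python
# Sketch: check how often a password appears in known breaches via the
# Pwned Passwords range API (only the 5-char SHA-1 prefix is sent).
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A notoriously common password - expect a very large count.
    print(breach_count("P@ssw0rd"))
```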
End-to-end encryption: What happens next?
Posted: May 3, 2023 Filed under: Uncategorized | Tags: encryption, end to end encryption, government, privacy
The Online Safety Bill (OSB) is still winding its way through parliament. But while much of the analysis so far has been on its provisions to force social media companies to remove “harmful” content, there’s an elephant lurking in the corner of the room. Clause 110 compels not only social media firms but also messaging app providers to identify and take down child sexual exploitation and abuse (CSEA) content.
There’s one big problem here: end-to-end encryption (E2EE), which makes message content impenetrable to providers like WhatsApp. It appears the government might be looking at client-side scanning as a solution. Experts I spoke to for an upcoming feature are unconvinced.
What’s client-side scanning?
Put simply, this “accredited technology” would require individuals to download software to their devices. It would run locally, scanning messages for suspicious keywords and for image content that matches a CSEA database, before each message is encrypted and sent (a simplified sketch of the idea follows the list below). On paper, this preserves E2EE while allowing the authorities to police child abusers. In reality, it will fail on both counts, for several reasons:
- Researchers have already worked out it could generate too many false positives to be useful, and could be hacked in other ways
- If client-side scanning were targeted by foreign governments or cyber-criminals, it would put private data potentially at risk
- The bosses of several big-name messaging apps say they’d rather exit the UK than comply with the OSB, which would also make UK firms and consumers less secure
- If client-side scanning comes into force, child abusers will simply gravitate to unpoliced apps, as criminals have in the past with services like EncroChat
- There’s a concern that the technology could be used in the future to police other content types – government mission creep
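For context, here’s a heavily simplified sketch of the kind of pre-encryption check client-side scanning implies. Real proposals rely on perceptual image hashing rather than exact hash matches, and the keyword list and hash database below are invented placeholders; the point is simply that content gets inspected on the device before E2EE is applied.

```python
# Heavily simplified sketch of client-side scanning: the client checks content
# against a blocklist *before* encrypting and sending it.
# Real systems would use perceptual image hashing, not exact SHA-256 matches;
# the hash database and keyword list here are invented for illustration.
import hashlib

KNOWN_BAD_IMAGE_HASHES = {"<placeholder-hash>"}   # hypothetical CSEA hash database
SUSPICIOUS_KEYWORDS = {"<placeholder-keyword>"}   # hypothetical keyword list

def allowed_to_send(text: str, image_bytes: bytes | None = None) -> bool:
    """Return True if the client may go on to encrypt and send the message."""
    if any(keyword in text.lower() for keyword in SUSPICIOUS_KEYWORDS):
        return False
    if image_bytes is not None:
        digest = hashlib.sha256(image_bytes).hexdigest()
        if digest in KNOWN_BAD_IMAGE_HASHES:
            return False
    return True

# Only if allowed_to_send(...) returns True would the app apply end-to-end
# encryption and hand the ciphertext to the provider for delivery.
```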
Matthew Hodgson, CEO of secure messaging app Element, argued that the new provisions directly contradict the GDPR in undermining encryption.
“It undermines privacy and security for everyone because every secure communication app which happens to have abusive users could be obligated to incorporate a third-party scanning solution, which then means every single user is at risk of that scanning solution being exploited by an attacker to break their privacy,” he told me.
“Any business depending on E2EE for privacy may find themselves at a loss, given encryption vendors would be forced to stop providing their services in the UK, as it is literally impossible to preserve privacy whilst also adding a mechanism to let third parties exfiltrate user data.”
Corelight cyber security specialist, Matt Ellison, cautioned against government putting its faith in a “magic technical solution” that doesn’t exist – adding that Apple abandoned similar plans for client-side scanning after a privacy uproar.
“Ultimately the government is proposing to significantly weaken the security of almost the entire nation, for the ability to perform a lawful intercept of an individual suspected of a crime,” he told me.
“Should all vehicles be fitted with a remote kill switch, in case you are deemed to be committing a crime in your vehicle? Should all houses have the same door key type, with authorities maintaining a master key that could get into everyone’s house to gather evidence without you knowing, again, if you are under suspicion?”
Ellison argued that smartphones are much more than just a technically advanced mobile phone.
“The reality is that they are an intimate and highly integrated aspect of our lives and mass surveillance approaches such as this are a gross invasion of privacy and civil liberties.”
What should happen?
According to Hodgson, there are plenty of ways law enforcers could hunt down child abusers.
“These include investigation/infiltration of forums where abusers recruit or advertise, or by analysing communication metadata, or by educating users within apps, and in general, to be mindful of abuse,” he added.
“Blanket surveillance which undermines the privacy of everybody is not the answer.”
Ross Anderson, who wrote a paper challenging the conclusions of NCSC technical director Ian Levy on this, agreed that old-fashioned policing techniques are the answer, rather than technology solutions which promise much but deliver little. The debate between law enforcement/government on one side and encryption specialists/tech vendors on the other has been raging for years. Throughout, the former have argued that tech wizards simply need to apply themselves more diligently to the task in order to find an answer. The latter retort that E2EE can’t be broken without undermining security for everyone.
So where does that leave us? With Labour backing the bill, it will undoubtedly become law. But what of Clause 110? If it remains unchanged, the best privacy and security advocates can hope for is that its most controversial provisions are never enforced. That’s what happened with the Investigatory Powers Act – which incidentally already gives the British government theoretical powers to force tech firms to break encryption. It will probably happen again.
Why Theresa May’s Encryption Plans Are a Danger to Us All
Posted: July 28, 2017 Filed under: Uncategorized | Tags: backdoor, conservative party, encryption, government, investigatory powers act, privacy, privacy central, snooper's charter, strong and stable, surveillance, theresa may, war on terror
I realise it’s been a while since I posted something up here, so here’s an article I wrote recently for Top10VPN’s new Privacy Central site:
The UK has been unlucky enough to know terrorism for quite some time. Many will remember the IRA campaigns of the 1970s and ’80s. This was an era before smartphones and the internet, yet the Irish paramilitary group continued to wage a successful campaign of terror on the mainland.
It continued to recruit members and organise itself to good effect. Politicians of the modern era, led by Theresa May and various members of her government, would do well to remember this when they launch into yet another assault on Facebook, Google, and the technology platforms that are alleged to provide a “safe haven” for Islamic terrorists today.
Now she is calling for greater regulation of cyberspace, something the independent reviewer of terrorism legislation has openly criticised. Along with increasing moves across Europe and the world to undermine end-to-end encryption in our technology products, these are dangerously misguided policies which would make us all less safe, less secure and certainly less free.
Our “Sliding Doors” moment
Every time a terror attack hits, the government continues its war of words not simply against the perpetrators, but against the tech companies who are alleged to have provided a “safe haven” for them. After all, such rhetoric plays well with the right-wing print media, and with large parts of her own party.
“Safe haven” has become something of a mantra for the prime minister, alongside her other favourite: “strong and stable”. She argues that terrorists are hiding behind encrypted communications on platforms like Facebook’s WhatsApp and Apple’s iMessage, and are using social media platforms like YouTube to recruit members and distribute propaganda.
“We cannot allow this ideology the safe space it needs to breed. Yet that is precisely what the internet, and the big companies that provide internet-based services, provide,” May said after the London Bridge attacks. “We need to work with allied democratic governments to reach international agreements that regulate cyberspace to prevent the spread of extremism and terrorism planning.”
Part of the regulation May wants to bring in could include fining tech companies that don’t take down terrorist propaganda quickly enough. Max Hill QC, independent reviewer of terror legislation, has rightly questioned this hard-line approach.
“I struggle to see how it would help if our parliament were to criminalize tech company bosses who ‘don’t do enough’. How do we measure ‘enough’? What is the appropriate sanction?” he said in a speech reported by The Times.
“We do not live in China, where the internet simply goes dark for millions when government so decides. Our democratic society cannot be treated that way.”
China is an interesting parallel to draw, because in many ways it offers a glimpse into an alternative future for the UK and Europe; one in which government has total control over the internet, where freedom of speech is suppressed and privacy is a luxury no individual can claim to have.
The problem is that no one sees authoritarianism coming, because it happens slowly, drip by drip. Regulating cyberspace would begin a slow slide into the kind of dystopic future we currently know only from sci-fi films. As Margaret Atwood’s heroine Offred says in her acclaimed novel The Handmaid’s Tale: “Nothing changes instantaneously: in a gradually heating bathtub you’d be boiled to death before you knew it.”
In many ways, we sit today at a Sliding Doors moment in history. Which future would you prefer?
The problem with backdoors
End-to-end encryption in platforms like WhatsApp and on our smartphones and tablets is something Western governments are increasingly keen to undermine as part of this clampdown. It doesn’t seem to matter that this technology keeps the communications of consumers and countless businesses safe from the prying eyes of nation states and cybercriminals – it’s also been singled out as providing, you guessed it, a “safe space” for terrorists.
The Snoopers’ Charter already includes provisions for the government to force tech providers to effectively create backdoors in their products and services, breaking the encryption that keeps our comms secure. In fact, the government is trying to sneak through these provisions without adequate scrutiny or debate. They were leaked to the Open Rights Group and can be found here.
It remains to be seen whether the British government could actually make this happen. An outright ban is unworkable and the affected tech companies are based almost entirely in the US. But the signs aren’t good. Even the European Commission is being strong-armed into taking a stance against encryption by politicians keen to look tough on terror in a bid to appease voters and right-wing newspaper editors. Let’s hope MEPs stand up to such calls.
The problems with undermining encryption in this way are several-fold. It would give the state far too much power to pry into our personal lives, something the UK authorities can already do thanks to the Investigatory Powers Act (IPA), which has granted the government the most sweeping surveillance powers of any Western democracy. It would also embolden countries with poor human rights records to do the same.
Remember, encryption doesn’t just keep terrorist communications “safe” from our intelligence services, it protects journalists, human rights activists and many others in hostile states like those in the Middle East.
More importantly, it protects the communications of all those businesses we bank with, shop with, and give our medical and financial records to. The government can’t have its cake and eat it: recommending businesses secure their services with encryption on the one hand, but then undermining the very foundations on which our economy is built with the other.
Once a provider has been ordered to create a “backdoor” in their product or service, the countdown will begin to that code going public.
It’s inevitable.
Even the NSA and CIA can’t keep hold of their secrets: attackers have managed to steal and release top secret hacking tools developed by both. In the case of the former this led to the recent global ransomware epidemic dubbed “WannaCry”.
Why should we set such a dangerous precedent, putting our data and privacy at risk, while the real criminals simply migrate to platforms not covered by the backdoor program?
“For years, cryptologists and national security experts have been warning against weakening encryption,” Apple boss Tim Cook has said in the past. “Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.”
In short, we need more police officers, constructive relationships with social media companies, and smarter ways of investigating terror suspects. Dragnet surveillance, encryption backdoors and more internet regulation is the quickest way to undermine all those democratic freedoms we hold so dear – and send us hurtling towards that dystopic authoritarian future.
Why F-Secure and Others Are Opposing the Snoopers’ Charter
Posted: October 30, 2015 Filed under: Uncategorized | Tags: decryption, end-to-end encryption, f-secure, government, investigatory powers bill, privacy, snooper's charter, surveillance, whatsapp
It’s widely expected that next week the government will unveil details of its hugely controversial Snoopers’ Charter, aka the Investigatory Powers Bill. To preempt this, and in a bid to influence the debate, cyber security firm F-Secure and 40 other tech signatories presented an open letter opposing the bill.
Most controversially, the bill is expected to force service providers to allow the authorities to decrypt secret messages if requested to do so in extremis. This is most likely to come in the form of some kind of order effectively banning end-to-end encryption.
I heard from F-Secure security adviser Sean Sullivan on this to find out why the bill is such a bad idea.
To precis what I wrote in this Infosecurity article, his main arguments are that forcing providers to hold the encryption keys will:
- Make them a likely target for hackers, weakening security
- Send the wrong signal out to the world and damage UK businesses selling into a global marketplace
- End up with China, or other potentially hostile states a service provider operates in, also requesting these encryption keys – undermining security further
- Be useless, as the bad guys will end up using another platform which can’t be intercepted
I completely agree. Especially with Sullivan’s argument that the providers would become a major target for hackers.
“End-to-end encryption makes good sense and is the future of security,” he told me by email. “Asking us to compromise our product, service, and back end would be foolish – especially considering all of the back end data breach failures that have occurred of late. If we don’t hold the data, we cannot lose control of it. That’s just good security.”
One other point he made was the confusion among politicians about tech terminology as basic as “backdoor” and “encryption”.
“A lot of UK politicians end up putting their foot in their mouth because they don’t properly understand the technology. They try to repeat what their experts have told them, but they get it wrong. UK law enforcement would probably love to backdoor your local device (phone) but that’s a lost cause,” he argued.
“The politicians (who actually know what they’re talking about) really just want back end access. As in, they want a back door in the ‘cloud’. They want to mandate warranted access to data in transit and/or in the back end (rather than data at rest on the device) and fear that apps which offer end-to-end encryption, in which the service provider doesn’t hold any decryption keys, are a threat.”
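To illustrate why a provider that holds no decryption keys genuinely cannot read end-to-end encrypted traffic, here’s a minimal sketch using the PyNaCl library (my choice for illustration – real messaging apps use their own protocols, such as the Signal protocol). Keys are generated on the two endpoints, and anything relaying the message in between only ever sees ciphertext.

```python
# Minimal E2EE sketch with PyNaCl: only the endpoints hold private keys,
# so a relaying server or service provider sees nothing but ciphertext.
from nacl.public import PrivateKey, Box

# Each device generates its own keypair; private keys never leave the device.
alice = PrivateKey.generate()
bob = PrivateKey.generate()

# Alice encrypts using her private key and Bob's public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at 6pm")

# The "provider" relays bytes it cannot read - it holds no decryption keys.
relayed = bytes(ciphertext)

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob, alice.public_key).decrypt(relayed)
assert plaintext == b"meet at 6pm"
```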
Let’s see what happens, but given the extremely low technology literacy levels among most politicians I’ve got a bad feeling about this one.
Singapore bids to snuff out APT fire as threats spell double trouble for APAC
Posted: January 24, 2014 Filed under: Uncategorized | Tags: apac, APT, centre of excellence, cyber defence, cyber security, cyber security centre, fireeye, government, hong kong, malware, singapore, singapore IDA
Last week, anti-APT and anti-malware firm FireEye announced the creation of a new Cyber Security Centre of Excellence (CoE) in partnership with the Singaporean government. It didn’t make many headlines outside of the city state, but I think it’s worth a second look for a few reasons.
First up, FireEye is pledging 100 trained security professionals to this new regional hub, to provide intelligence that helps the local government protect its citizens and infrastructure from attack, as well as to benefit the vendor’s customers across APAC.
FireEye is one of the few infosec companies I’ve spoken to in this part of the world that is prepared to talk at length about the specific problems facing organisations in the region. More often than not when I try to go down this avenue with a vendor I’ll be told about how threats are global these days and attacks follow similar patterns no matter where you are on the planet.
While I know this is true to an extent, it was nevertheless refreshing to hear FireEye’s APAC CTO Bryce Boland tell me that the reason for building a team in Singapore was to have the necessary local language and cultural skills to deal with specific regional threats.
“We have a lot of countries here, many of which have tense relationships, so we see a lot of that boil over into cyber space,” he told me.
As well as the various hacktivist skirmishes that periodically hit the region, such as those between the Philippines and Indonesia or China and Japan, there are also more serious IP-stealing raids, which stem from the fact that APAC accounts for more than 45 per cent of the world’s patents, Boland added.
As a result, regional organisations face almost twice as many advanced attacks as the global average.
Another reason the news of FireEye’s new CoE warrants attention is what it says about the approach to cyber security by the respective governments of Singapore and Hong Kong.
Although Hong Kong threw HK$9 million (£730,000) at a new Cyber Security Centre in 2012, my impression is that Singapore is more proactive all round when it comes to defending its virtual borders.
It was a view shared by Boland, who pointed to Singapore’s ability to attract and support infosec players looking to build regional headquarters there, as well as its efforts to attract globally renowned speakers to an annual security expo.
In my experience, what few events there are in Hong Kong are poorly attended, attract few speakers from outside the SAR, and rarely provide the audience with anything like compelling or useful content.
