Intercast February 2025 – How does Security say ‘No’
Welcome to the February 2025 edition of Intercast’s monthly newsletter for cybersecurity professionals. As always, we’ll bring you the latest news and views to bring you up to speed.
In This Issue:
- Client Insight: How To Say No
- Is the Cybersecurity Industry Back?
- Google Gives Ground On Play Store Security
- AI Models Could Be ‘Poisoned’ For Cyber Attacks
- Enterprise Admins Get More Control Over Staff Chrome Use
- Microsoft Testing ‘Scareware’ Blocker
- Best Of The Rest
Client Insight: How To Say No
Every month we talk to our clients to find out what’s on their minds and get their angle on the cybersecurity industry. This month we’ve been talking a lot about Rami McCarthy’s post on how security departments can say “no” effectively.
As the post at High Signal Security points out, security departments have often been wary about saying no, but may have taken the approach too far. He argues that a correctly-presented “no” is an important tool for the business as a whole to avoid sub-standard outcomes. The key is presenting the refusal in its correct context, protecting the company from risk and establishing the security team’s judgement.
McCarthy notes that a clear and early “no” is much more effective than delaying to avoid conflict: the earlier the decision, the less disruption it causes in the long run. It’s also vital to be able to justify and explain a refusal.
Is the Cybersecurity Industry Back?
The cybersecurity industry picked up strongly in 2024 but the effects varied significantly between regions and sectors. That’s the big takeaway from an in-depth review of the industry by Mike Privette at Return on Security.
In his third such annual review, Privette noted a big drop in the number of companies with significant layoffs, though jobs getting replaced by AI is a notable trend. He also highlighted that hiring is on the rise in emerging markets while the US had a meaningful decline.
There’s also a notable change in funding for new businesses and products with the volume down but the value up. The effects of the move away from near-zero interest rates are still clear, with investors more risk-averse. Meanwhile mergers and acquisitions have been dominated by buyouts of companies whose main asset is data rather than technology.
Privette also spotted an intriguing trend in the booming AI cybersecurity field: a rise in companies and products concerned with safety and governance of AI.
Google Gives Ground On Play Store Security
Google is to revoke permissions for installed Android apps identified as harmful. It’s a significant step away from its traditional position on the spectrum of security and freedom.
While we often hear about Google removing rogue apps from the Play Store, it’s usually down to users to decide how they respond if the apps are already on their device. That gives the user more control but does risk security, particularly when users don’t hear about the app removal through news reports or word of mouth.
Google’s optional Play Protect feature can already warn users about rogue apps and, in the most extreme cases, block the app completely. That’s not something that appears to happen all that often though.
Now Google has found a middle ground. When it identifies an app previously distributed in the Play Store as potentially harmful, Play Protect will automatically disable the most sensitive permissions. Users will get a notification of this change but can choose to manually restore the permissions.
Meanwhile, switching off Play Protect altogether will no longer be allowed while the device is making a call on a phone, VOIP or video app. That’s to stop social engineering scams such as bogus tech support calls where users are tricked into disabling security measures before allowing remote control of a device.
AI Models Could Be ‘Poisoned’ For Cyber Attacks
Model poisoning in AI used to be a “hacktivist” tactic designed to protest copyright but now it’s a creative cyber attack method. The attackers have adapted the tactic used by artists who manipulated pixels so that models which scraped their images without permission would get bogus results that made the training data less useful.
The new attacks target models such as Microsoft’s Copilot that don’t just use their own training database, but use live access to online documents to make responses more up-to-date and useful. The “ConfusedPilot” attack involves finding documents the models uses as sources and manipulating them to include malicious content.
The idea isn’t to compromise security on the AI models or the user’s system. Instead it’s to trick the models into delivering bogus information (and disregard genuine content), in turn leaving businesses making decisions on a false premise. As often with novel attack methods, the concept itself seems technically plausible, but how effective it is in practice – and what harm it really causes – remains to be seen.
Enterprise Admins Get More Control Over Staff Chrome Use
Browser extensions may be incredibly useful for individuals but they can be a real headache for system admins. Chrome Enterprise customers will soon get more control over extensions on employee machines.
It’s a two-pronged approach, starting with an option to pre-approve extensions. IT departments will even be able to highlight such extensions in a customized landing page in the Chrome Web Store, complete with company logos and images in the user interface.
Later this year, admins will finally get the option to remotely remove installed extensions from an employee’s work account. They’ll also be able to automatically block future downloads and display a custom message explaining the block and how it relates to company security policies.
Microsoft Testing ‘Scareware’ Blocker
The Edge browser will soon make it easier to spot and evade scareware scams, where prominent on-screen messages falsely tell users they’ve been infected by malware. Such scams usually aim to get users to install remote access software or visit rogue sites.
Blocking such scams has previously involved either checking a list of specific URLs housing scareware, or looking for specific wording or display elements, something that’s difficult when scammers keep revising their approach.
Edge will now take screenshots and compare them against a locally-stored machine-learning database to recognise suspicious ‘warnings’. Running everything locally avoids having to send any screenshot data to a remote server, something that would raise privacy concerns. Users who get an alert can opt to continue to the site or close the message altogether, as well as optionally reporting both correct identifications and false positives.
Best of the Rest
Here’s our round up of what else you need to know:
- 57 state-backed cyberattack groups using AI for harm: https://thehackernews.com/2025/01/google-over-57-nation-state-threat.html
- US cyberattack ‘aid’ program passed first real test:https://therecord.media/state-department-falcon-cyber-response-costa-rica-recope
- FBI hacks malware to make it self-destruct:https://www.theverge.com/2025/1/14/24343495/fbi-computer-hack-uninstall-plugx-malware
- Nearly half of businesses delay cybersecurity upgrades:https://www.securitymagazine.com/articles/101349-47-of-organizations-have-put-off-cybersecurity-upgrades