Intercast April 2026 – Introducing The FDE

Welcome to the April 2026 edition of Intercast’s monthly newsletter for cybersecurity professionals. As always, we bring you the latest news and views to keep you up to speed.

In This Issue

  • Introducing The FDE (Forward Deployed Engineer)
  • Trust Is Major Issue In Cybersecurity Sector
  • AI Overconfidence Could Be Mitigated
  • CVE Program Under Financial Threat
  • Machine Learning Tackles Warehouse Robot Right-Of-Way
  • LLM Guardrail Breaches Surprisingly Easy
  • Best of the Rest

Client Insight: Introducing The FDE (Forward Deployed Engineer)

Each month we ask our clients what’s on their minds to help us get a broader perspective on the industry. We’ve been getting a lot of questions about a relatively new role, the Forward Deployed Engineer (FDE).

Like any new term, the precise definition varies, but the broad principle is that it’s somebody from a technical background who spends time in a customer’s environment to truly understand their needs. In the words of a Palantir FDE, “While a traditional software engineer, or ‘Dev,’ focuses on creating a single capability that can be used for many customers, [FDEs] focus on enabling many capabilities for a single customer.”

It’s an exciting role that brings together elements of several distinct disciplines, including software engineering, consultancy, sales and customer support. One thing we’ve definitely learned from our discussions is that somebody who puts too much emphasis on any one of those elements won’t do a good job overall.

Trust Is Major Issue In Cybersecurity Sector

Only one in 20 organizations has full trust in its cybersecurity vendors, according to a new study. Many customers aren’t even sure how they would vet a new cybersecurity partner.

The Sophos study of 5,000 “IT and security leaders” in 17 countries found only five percent had full trust. To be fair, that leaves a broad spectrum, running from a degree of understandable caution and cynicism through to outright hostile mistrust that becomes unhealthy!

Perhaps the most concerning finding was that 78 percent said it was somewhat or very challenging to assess the trustworthiness of new cybersecurity partners, while 64 percent reported similar challenges with existing partners. Commonly cited reasons included vendor-provided information that lacked detail, was hard to understand, or was even conflicting.

The lack of trust has serious consequences, with many finding it harder to assess their actual level of cybersecurity risk and determine the appropriate level of oversight.

AI Overconfidence Could Be Mitigated

It’s no secret that the apparent confidence of an AI model is not necessarily a sign of the reliability of its output. An MIT graduate student has published a paper exploring a way to measure uncertainty in a model. It’s less about the model’s internal consistency and more about how often different models disagree on the same task.

The most common approach to testing a model’s reliability is to submit the same prompt repeatedly and check whether it gives the same answer, a measure of “aleatoric uncertainty”. However, Kimia Hamidieh argues that this rewards models that are confident, even when they are “wrong”.

Her response is a “total uncertainty metric”, which also evaluates a model on 10 common, realistic tasks and then measures how closely its output matches that of a range of other models, capturing “epistemic uncertainty”. The two uncertainty measures are then combined to produce an overall trust level.
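To make the distinction concrete, here is a minimal Python sketch of how the two signals might be combined into a single score. The function names, the simple disagreement measure and the equal weighting are illustrative assumptions on our part, not the metric from Hamidieh’s paper.

```python
from collections import Counter


def aleatoric_uncertainty(answers: list[str]) -> float:
    """How often the *same* model disagrees with itself across repeated runs."""
    most_common = Counter(answers).most_common(1)[0][1]
    return 1.0 - most_common / len(answers)


def epistemic_uncertainty(answers_by_model: dict[str, str]) -> float:
    """How often *different* models disagree with one another on the same task."""
    most_common = Counter(answers_by_model.values()).most_common(1)[0][1]
    return 1.0 - most_common / len(answers_by_model)


def total_uncertainty(repeated_answers: list[str],
                      answers_by_model: dict[str, str],
                      weight: float = 0.5) -> float:
    """Blend the two signals into one score: 0 suggests high trust, 1 low trust."""
    return (weight * aleatoric_uncertainty(repeated_answers)
            + (1 - weight) * epistemic_uncertainty(answers_by_model))


# One model answers the same prompt five times; four different models answer once each.
repeated = ["Paris", "Paris", "Paris", "Lyon", "Paris"]
cross_model = {"model_a": "Paris", "model_b": "Paris",
               "model_c": "Marseille", "model_d": "Paris"}
print(round(total_uncertainty(repeated, cross_model), 3))  # 0.225
```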

CVE Program Under Financial Threat

The Common Vulnerabilities and Exposures program, a bedrock of security cooperation for more than a quarter of a century, faces a potentially existential funding crisis. Even without potential funding cuts, the system could be swamped by AI-powered automated bug reports.

In principle at least, CVE reduces the risk of duplicated efforts to tackle the same security flaws and makes tracking bugs and fixes much easier. Identifying and numbering unique vulnerabilities makes it possible for businesses and governments to not only know what bugs exist, but how serious their impact could be.

Now senior figures on the board behind CVE say they are concerned about future funding from the Department of Homeland Security, with the need for an annual contract renewal causing ongoing uncertainty.

Officials also say that AI tools spotting and reporting bugs might not be as helpful as they seem. Reports through GitHub rose 224 percent in a three-month period, and it may be a case of quantity over quality, making it harder to identify the most serious vulnerabilities.

Machine Learning Tackles Warehouse Robot Right-Of-Way

Machine learning is often associated with extremely complex topics and advanced systems, but sometimes it’s just about stopping robots from colliding with one another. A new approach learns from the movements of warehouse robots to develop efficient rules for deciding which should give way.

Many automated warehouse systems use dynamic routing where robots receive new instructions only after completing a task. It’s an approach that works perfectly until there’s a problem, at which point the resulting bottleneck can be so severe that humans have to shut the entire system down and intervene.

The authors of a new study used a combination of traditional planning algorithms and a more dynamic neural network to decide which robots should get priority when passing through the same space. They reported a 25 percent increase in throughput in testing, and note that real-world improvements of even a couple of percent would make a significant financial difference in giant warehouses.
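For a flavour of what such a right-of-way rule might look like, here is a toy Python sketch. The features and the weighted-sum scorer are simplifications of our own, standing in for the study’s planner-plus-neural-network combination rather than reproducing it.

```python
from dataclasses import dataclass


@dataclass
class Robot:
    name: str
    remaining_path: int    # cells left on its planned route
    wait_time: int         # ticks already spent waiting at the contested cell
    carrying_order: bool   # whether it holds a time-sensitive order


def priority_score(robot: Robot, weights=(0.5, 0.3, 0.2)) -> float:
    """Stand-in for the learned network: a weighted sum of simple features."""
    w_path, w_wait, w_order = weights
    return (w_path * (1.0 / (robot.remaining_path + 1))
            + w_wait * robot.wait_time
            + w_order * float(robot.carrying_order))


def right_of_way(a: Robot, b: Robot) -> Robot:
    """Return the robot that proceeds through the contested cell; the other yields."""
    return a if priority_score(a) >= priority_score(b) else b


r1 = Robot("R1", remaining_path=3, wait_time=2, carrying_order=False)
r2 = Robot("R2", remaining_path=10, wait_time=0, carrying_order=True)
print(right_of_way(r1, r2).name)  # R1 wins priority in this toy example
```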

LLM Guardrail Breaches Surprisingly Easy

Researchers have found LLM-powered security guardrails are themselves vulnerable to some creative attack methods. The “automated fuzzing” approach doesn’t require direct access to the inner workings of the systems.

The systems use so-called “AI Judges” to enforce security and safety policies in generative AI models. The attack method developed by Unit 42 involves trying to trick these judges into allowing violations.

The attack effectively involves posing as an ordinary human user and interacting normally with the model to build up a picture of its language network. The aim is to map out connections and find the literal gap between the tokens representing “allow” and “block”.

They then test to see which other tokens are more likely to steer the model towards “allow”. The ultimate goal is to find or craft a phrasing that pulls so strongly towards “allow” that it overrides the fact that the content in question genuinely violates the policy being applied.
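In rough pseudocode terms, the probing loop described above boils down to something like the sketch below. The helper functions are hypothetical placeholders rather than Unit 42’s actual tooling; the point is simply that the attacker only needs the judge’s verdicts, not access to its internals.

```python
def query_judge(prompt: str) -> float:
    """Placeholder: returns the judge's estimated probability of ruling 'allow'."""
    raise NotImplementedError("stand-in for the black-box guardrail being probed")


def rephrase(prompt: str) -> str:
    """Placeholder: returns a small, natural-sounding variation of the prompt."""
    raise NotImplementedError("stand-in for a paraphrasing or token-substitution step")


def fuzz(seed_prompt: str, rounds: int = 50) -> tuple[str, float]:
    """Hill-climb over phrasings, keeping whichever variant scores closest to 'allow'."""
    best_prompt, best_score = seed_prompt, query_judge(seed_prompt)
    for _ in range(rounds):
        candidate = rephrase(best_prompt)
        score = query_judge(candidate)
        if score > best_score:  # this wording nudged the judge towards "allow"
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```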

Best of the Rest

Here’s our round-up of what else you need to know: