Intercast March 2024 – Baseline Cybersecurity Controls

Welcome to the March 2024 edition of Intercast’s monthly newsletter for cybersecurity professionals. As always, we bring you the latest news and views to keep you up to speed.

IN THIS ISSUE:

  • Client Insight: Government Tips
  • ChatGPT Blocking State-Sponsored Groups
  • Mood-Monitoring Business Tools Spark Debate
  • Fake LastPass App Got Into the Apple App Store
  • Google’s AI Gemini Suffers Embarrassing Setback

Insights

Have a look at the CSE’s “Baseline Cyber Security Controls for Small and Medium Organizations V1.2”. Conceptualizing information security in a smaller organization often helps us look at things more holistically, which gives extra perspective when working in a more siloed role within the enterprise.

A few highlights include:

  • Assessing organizational controls: what is going on inside your organization, e.g. its size, its assets, and its degree of cybersecurity investment.
  • Assessing baseline controls: how the organization specifically reduces cyber risk and responds to incidents as they occur, e.g. developing incident response plans, patching systems, and providing end-user training (a small illustrative checklist follows this list).
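
To make these two assessments concrete, here is a minimal, purely illustrative Python sketch of how a small organization might track them side by side. The control names, profile fields, and scoring are our own assumptions for illustration and are not taken from the CSE publication.

    # Illustrative self-assessment sketch; control names and scoring are
    # hypothetical and not drawn from the CSE document.

    organizational_profile = {
        "employees": 45,                       # organization size
        "critical_assets": ["CRM", "payroll", "customer database"],
        "security_budget_pct_of_it": 2.5,      # rough investment level
    }

    baseline_controls = {
        "incident_response_plan": True,        # documented and exercised IR plan
        "patch_management": True,              # systems patched on a defined cadence
        "end_user_training": False,            # security awareness training delivered
        "multi_factor_authentication": True,   # MFA enforced for remote access
        "tested_backups": False,               # backups restored and verified
    }

    print(f"Organization: {organizational_profile['employees']} employees, "
          f"{len(organizational_profile['critical_assets'])} critical assets")

    implemented = sum(baseline_controls.values())
    coverage = implemented / len(baseline_controls) * 100
    print(f"Baseline control coverage: {coverage:.0f}% "
          f"({implemented}/{len(baseline_controls)} in place)")

    # Flag the gaps so they can be prioritized in the next planning cycle.
    for control, in_place in baseline_controls.items():
        if not in_place:
            print(f"Gap: {control} not yet implemented")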

Mood-Monitoring Business Tools Spark Debate

Tools that monitor sentiment in employee communications have prompted a mixed response. What some see as a creative feedback tool, others consider a worrying invasion of the already limited privacy that employees expect at work.

The AI-powered tools, used by companies including Delta and Walmart, monitor internal communication platforms such as Slack and Teams. Rather than simply looking for keywords, they aim to analyze sentiment in both text and images.

In some cases, companies only access aggregated data, for example to find out whether particular sectors of the workforce are responding differently to a new corporate policy. In other cases, the tools specifically monitor individuals, for example to spot bullying and harassment.

While the tools’ creators tout their effectiveness for understanding and improving corporate culture, one critic quoted by CNBC said they verged into “thoughtcrime” territory. There’s also some question over whether the aggregated data may contain so much detail that it’s possible to break it down and identify individuals.
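
That re-identification concern is easiest to see with a toy example. The sketch below is a minimal illustration built on assumed data, not a description of how any vendor’s product actually works: it aggregates per-message sentiment scores by team and suppresses any group smaller than a threshold, since a “team average” computed over one or two people is effectively an individual reading.

    from collections import defaultdict

    # Hypothetical per-message sentiment scores in [-1, 1]; in a real deployment
    # these would come from an NLP model run over internal messages.
    messages = [
        {"team": "support", "sentiment": -0.4},
        {"team": "support", "sentiment": -0.1},
        {"team": "support", "sentiment": 0.3},
        {"team": "support", "sentiment": -0.6},
        {"team": "support", "sentiment": 0.0},
        {"team": "legal",   "sentiment": 0.8},  # a single message: effectively one person
    ]

    MIN_GROUP_SIZE = 5  # suppress aggregates small enough to expose an individual

    by_team = defaultdict(list)
    for msg in messages:
        by_team[msg["team"]].append(msg["sentiment"])

    for team, scores in by_team.items():
        if len(scores) < MIN_GROUP_SIZE:
            print(f"{team}: suppressed ({len(scores)} messages, below minimum group size)")
        else:
            avg = sum(scores) / len(scores)
            print(f"{team}: average sentiment {avg:+.2f} over {len(scores)} messages")

A fixed minimum group size is only a crude safeguard: narrow enough filters (one team, one week, one topic) can still shrink an “aggregate” down to a handful of identifiable people, which is precisely the scenario critics worry about.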


ChatGPT Blocking State-Sponsored Groups

OpenAI has blocked five state-sponsored groups from using ChatGPT for malicious purposes. The company hopes to crack down on misuse while hacking groups are still exploring how to exploit AI.

The groups are all known to be backed by governments in China, Iran, North Korea and Russia. The good news is that at this stage they weren’t using ChatGPT to create malware or exploitation tools.

Instead, some of the groups were simply using it as a research tool. While ChatGPT is designed only to generate text, it can be used as a (potentially inaccurate) way to summarize existing online information. The groups were looking for details of specific satellite and radar technologies and of high-profile individuals who could be compromised.

The other groups appeared to be generating and testing content for spear-phishing campaigns and other social engineering. Analysts fear the technology makes it all too simple to create thousands of bogus emails in the hope that at least one proves convincing.


Fake LastPass App Got Into the Apple App Store

The bad news keeps coming for LastPass, though this time it’s not to blame. Apple somehow allowed a lookalike app into its official iOS App Store.

It’s the last thing LastPass needed after suffering a breach of its systems and then taking months to enforce rules tightening requirements for user master passwords.

The good news is that there’s no evidence that the people behind the bogus app were able to get hold of anyone’s master password or access their existing vault. However, the risk remains that users may have typed login details for other sites into the scam app.

The real question is how Apple’s review process didn’t pick up what was clearly a scam, particularly given the sensitive nature of a password manager app.


Google’s AI Gemini Suffers Embarrassing Setback

In any area of technology, including security, you always need to account for the human element. That’s certainly the case with Google’s embarrassing launch of an image creation feature in its AI tool Gemini.

Users quickly discovered that requesting historical images of people generated results with a level of racial diversity that wasn’t exactly historically typical. The most talked-about examples included a response to a request for the US Founding Fathers that included several black men, while a set of “1943 German soldiers” included what was clearly a woman of East Asian heritage.

Google eventually blocked the tool from generating images of humans altogether. It appeared Google had deliberately tweaked the model to produce more diverse images, something that is arguably necessary if a model has been trained on datasets disproportionately made up of particular groups such as white males.

The problem was that the model wasn’t set to override (or at least downplay) this adjustment when dealing with historical groups that were more homogeneous in race or gender. It raises wider questions about artificial intelligence tools, in particular whether it’s possible to correct for biases in training data without introducing unexpected problems.


Best of The Rest

Here’s our round-up of what else you need to know: