
The Citrix NetScaler Situation Just Got Worse

Mass exploitation of CVE-2024-8534 is ongoing. Notes from helping clients figure out if they're compromised.

Three clients called last week. Two more this week. All the same thing: Citrix NetScaler alerts.

We’re in the middle of another mass exploitation wave. Here’s what’s happening.

Background

CVE-2024-8534 (and related issues) in Citrix NetScaler Gateway and ADC. Authentication bypass and remote code execution. The patches dropped in November, but we’re seeing the exploitation peak now.

Why the delay? Attackers were quietly exploiting for weeks before ramping up to mass scanning. The organizations that patched quickly are fine. The ones that waited… aren’t.

Affected:

  • NetScaler ADC and Gateway 14.1 before 14.1-29.72
  • NetScaler ADC and Gateway 13.1 before 13.1-55.34
  • NetScaler ADC 13.1-FIPS before 13.1-37.207
  • NetScaler ADC 12.1-FIPS before 12.1-55.321

If you’re running Gateway or ADC exposed to the internet and haven’t patched: assume compromise until proven otherwise.

What I’m Seeing

Webshells Everywhere

The primary payload is webshell deployment. Attackers get code execution, drop a PHP or ASPX webshell, use it for persistence and further access.

Common locations we’ve found:

/var/nslog/ns.log.php
/var/vpn/bookmark/*.php
/netscaler/ns_gui/vpn/scripts/*.php

Filenames trying to blend in. Sometimes timestamped. Sometimes mimicking legitimate files.
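
A quick spot-check of those exact paths is a reasonable first pass. Some GUI directories ship with legitimate PHP, so compare against a known-clean appliance rather than treating any hit as a verdict:

# spot-check the locations above; odd timestamps and ownership are often the giveaway
ls -la /var/nslog/*.php /var/vpn/bookmark/*.php /netscaler/ns_gui/vpn/scripts/*.php 2>/dev/null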

Credential Harvesting

Once in, they’re grabbing:

  • VPN user credentials (often visible in configs or logs)
  • Admin credentials
  • Session tokens
  • Anything in the ns.conf file

If your NetScaler was compromised, assume every credential that touched it is burned.
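
A blunt way to inventory what credential material lives in the config (and therefore what needs rotating) is a grep over ns.conf. The exact directives vary by feature set and version, so treat this as a starting point, not a complete list:

# rough inventory of credential-bearing lines: local account hashes, bind secrets, monitor creds, cert keys
grep -inE 'password|passphrase|secret|key' /nsconfig/ns.conf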

Lateral Movement Setup

In several compromises I’ve seen, the attackers didn’t immediately do anything visible. They grabbed creds, mapped the network, maybe set up a tunnel. Then waited.

The waiting is the concerning part. It suggests preparation for ransomware or a targeted intrusion. Immediate commodity attacks are easier to spot than patient ones.

Post-Exploitation Tools

Seen deployed:

  • Cobalt Strike beacons
  • Custom reverse shells
  • Network scanning tools
  • Credential dumping utilities

This isn’t script kiddies. It’s organized groups with playbooks.

Triage Steps

If you have NetScaler devices:

1. Check Version

show ns version

Are you patched? If not, stop reading and go patch.
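
If you manage more than a couple of appliances, a quick loop beats logging into each box. A rough sketch, assuming SSH key access as nsroot and a hypothetical netscaler-hosts.txt file with one management address per line:

# print the version line for each appliance in the (hypothetical) inventory file
while read -r host; do
  printf '%s: ' "$host"
  ssh -o ConnectTimeout=5 nsroot@"$host" "show ns version" 2>/dev/null \
    | grep -i 'NetScaler NS' || echo "unreachable or unexpected output"
done < netscaler-hosts.txt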

2. Look for Webshells

find /var /netscaler -name "*.php" -type f -newer /var/nsconfig/ns.conf
find /var /netscaler -name "*.aspx" -type f

Anything unexpected? Investigate.
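
The find above only catches .php and .aspx. A broader sweep for anything recently written under the web-served paths is worth the extra minute; anchor the time window to your patch date or the advisory date:

# anything modified in the last 60 days under the GUI and VPN paths; tune -mtime to your own timeline
find /netscaler/ns_gui /var/vpn -type f -mtime -60 -ls 2>/dev/null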

3. Check Running Processes

ps aux | grep -v "^root\|^nobody\|^nsroot"

Unknown processes? Unknown parent processes for known ones? Red flag.
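
Process listings only tell half the story; pair them with what is actually talking on the network. The underlying shell is FreeBSD-based, so sockstat should be available, with netstat as the fallback:

# listening sockets (with owning process where available) and current established connections
sockstat -4 -l 2>/dev/null || netstat -an | grep LISTEN
netstat -an | grep ESTABLISHED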

4. Review Access Logs

Look for:

  • Unusual source IPs accessing management interfaces
  • Requests to unexpected URLs
  • POST requests to static file paths (webshell activity)
  • Authentication from unexpected locations
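
For the management GUI, a rough first pass is grepping the Apache access logs for POSTs to script paths. This assumes the usual /var/log/httpaccess.log location; rotated copies are gzipped, so zgrep covers both:

# POSTs hitting .php/.aspx paths in the management web logs; review the source IPs and timing
zgrep -E '"POST [^"]*\.(php|aspx)' /var/log/httpaccess.log* 2>/dev/null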

5. Check Scheduled Tasks

crontab -l
cat /etc/crontab
ls -la /etc/cron.*

Persistence often shows up here.
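
Those cover the system crontab. Per-user crontabs on the FreeBSD-based shell typically live under /var/cron/tabs, so check there as well:

# per-user crontabs: root, nsroot, and anything you don't recognize
ls -la /var/cron/tabs/ 2>/dev/null
cat /var/cron/tabs/* 2>/dev/null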

6. Review ns.conf Changes

diff /nsconfig/ns.conf /nsconfig/ns.conf.backup

Any unexpected changes to configuration?
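
Beyond a straight diff, grep for the kind of additions attackers actually make, like new local accounts. Directive names can vary slightly across versions, so adjust the pattern to what your config uses:

# new or modified local accounts are a common persistence move; anything unfamiliar is a red flag
grep -nE '^(add|set) system user' /nsconfig/ns.conf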

If You’re Compromised

Found evidence of compromise? Here’s the playbook:

Immediate:

  1. Isolate the device (don’t power off - you want forensics)
  2. Revoke all credentials that touched the device
  3. Block attacker IPs at perimeter (if identified)
  4. Check other internet-facing assets for similar compromise

Investigation:

  1. Take forensic images before changing anything (a minimal evidence-grab sketch follows this list)
  2. Analyze webshell/malware functionality
  3. Determine what data was accessed
  4. Map lateral movement (if any)
  5. Check for persistence mechanisms
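
If full disk imaging isn’t immediately possible, at minimum get the logs and config off the box before anyone starts cleaning up. A minimal evidence-grab sketch, using the paths referenced elsewhere in this post:

# copy logs and config aside, then pull the archive off the device and hash it; proper images are still preferred
TS=$(date +%Y%m%d-%H%M)
mkdir -p /var/tmp/ir-$TS
cp -Rp /var/log /var/nslog /nsconfig /var/tmp/ir-$TS/ 2>/dev/null
tar -czf /var/tmp/ir-$TS.tgz -C /var/tmp ir-$TS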

Recovery:

  1. Rebuild NetScaler from known-good image (don’t just patch)
  2. Apply all patches
  3. Rotate all credentials (including service accounts)
  4. Reset VPN user passwords
  5. Review and harden configuration

Don’t just patch a compromised device. If they had code execution, they may have persistence you haven’t found. Rebuild.

The Pattern Continues

This is the same story we keep living:

  1. Critical vulnerability in edge device
  2. Patch released
  3. Slow patching due to change management/scheduling
  4. Mass exploitation begins
  5. Scramble

Citrix. Fortinet. Palo Alto. Ivanti. Cisco. Every major vendor. Every year.

The edge device model is broken. These devices are:

  • Exposed to the internet by design
  • Running complex code with limited visibility
  • Critical infrastructure that’s hard to patch quickly
  • Targets for sophisticated attackers

And we keep being surprised when they get owned.

Longer Term

After this is cleaned up, consider:

1. Reduce internet-facing exposure

Does the management interface really need to be on the internet? Can VPN authentication use additional controls?

2. Monitoring for edge devices

These devices don’t run EDR. You need network and log-based detection. Monitor for:

  • Unusual outbound connections
  • New files in suspicious locations
  • Unexpected administrative access
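
Even without EDR, a crude change-detection loop over the web-served paths covers the “new files in suspicious locations” case. A toy sketch using a timestamp file, run on whatever schedule you can manage (and assuming an attacker hasn’t tampered with the stamp itself):

# first run: create the baseline stamp
touch /var/tmp/webfiles.stamp
# later runs (e.g. hourly): anything newer than the stamp deserves a look, then refresh the stamp
find /netscaler/ns_gui /var/vpn -type f -newer /var/tmp/webfiles.stamp -print
touch /var/tmp/webfiles.stamp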

3. Faster patching process

What’s your time-to-patch for critical vulns in edge devices? If it’s more than 48-72 hours, that’s too slow for the current threat landscape.

4. Assume breach planning

If your NetScaler gets owned tomorrow, what’s the blast radius? Network segmentation? Credential isolation? The answer should be “limited,” not “everything.”

Current Status

As of this writing, exploitation is ongoing. Attackers are scanning for unpatched devices constantly. New compromises happening daily.

If you haven’t checked your NetScaler devices this week, do it now.


The lesson from every edge device vulnerability is the same: patch faster, monitor better, assume breach. We keep learning it the hard way.
