If you have not applied the mitigations below, you should consider your appliance compromised and follow your incident response process. This exploit has spread very quickly, and no matter how big or small you are, appliances are being exploited at a mass level.

Quick Summary

  1. On December 17th, Citrix released a vulnerability notice for their Citrix ADC product line, along with a mitigation that same day.
  2. Over the holidays, security researchers worked on reverse engineering that mitigation to try to build an exploit. Multiple articles and scanners were released as the world started to patch for this vulnerability; there was no public exploit during this period.
  3. On January 11th, four exploits were released that use this vulnerability to help gain remote access to the appliance. Citrix also announced dates for firmware updates that will fix this issue.
  4. If you have a Citrix Gateway server (Virtual IP) or remote access enabled for your Citrix deployment on the internet, you are in scope for this fix. Other VIPs running on your Citrix ADC are not affected at this time. Citrix Cloud appears to have been mitigated since December 17th, so people using Citrix Cloud services were protected.
  5. A firewall does not slow down or stop this vulnerability. This is a fully unauthenticated vulnerability, no matter your MFA or other Gateway\VPN\AAA\SSO (anything under Citrix Gateway) settings. Don't think that because it is behind firewall X it is fine; it isn't until the mitigation or the upcoming firmware is applied.
    1. If there is a properly configured Web Application Firewall in front of your Citrix Gateway, this vulnerability doesn't apply to you in most cases. There are reports I have seen of Palo Alto having the signatures and blocking attacks.
  6. If you have an AWS Citrix ADC instance, you need to change your nsroot password, because by default it is the same as the instance ID.
  7. Make sure to check your license file expiration and back up your configuration before starting any of the steps below. Getting a support file is the easiest method in most cases.

Threat Timeline

    1. On December 17th, an empty CVE was reserved by Citrix for this vulnerability.
    2. On December 17th, Citrix released a vulnerability notice for their Citrix ADC product line, along with a mitigation that same day.
      1. (Announcement)
      2. (Mitigation)
    3. Over the holidays, security researchers worked on reverse engineering that mitigation to try to build an exploit. Multiple articles and scanners were released as the world started to patch for this vulnerability; there was no public exploit during this period.
    4. January 10th: Project Zero India released a public POC, along with TrustedSec (the first 2 exploits).
    5. January 11th
      1. Two more public exploits are released
        1. TrustedSec and Project Zero India
      2. Citrix Announces the expected patch timeline.
    6. January 15th
      1. Citrix Releases their Verification Tool
    7. January 12-16th
      1. Citrix updated the announcement and the mitigation, along with other blogs. They also added 2 more commands and a better flow to include internal remediation along with rebooting the appliances.
    8. January 16th
      1. The SD-WAN appliances are susceptible to the same vulnerability and were added to the support article.
      2. Citrix announced that specific builds of NetScaler cannot be mitigated with just the responder policy; they need a firmware upgrade along with the mitigation. The builds are 12.1 51.16, 51.19 and 50.31.
    9. January 17th
      1. More updates to the Citrix support article to include SD-WAN, along with the CISO clarifying blog statements from earlier in the week.
    10. January 19th
      1. The 11.1 and 12.0 remediated firmware was released.
      2. Updates that the other patched firmwares are coming out sooner, which is great.
    11. What’s Next?
      1. Waiting for the firmware-based mitigation to be released (2 of 5 released now, as of the 19th).
      2. Waiting for more admins to apply these fixes using the firmware and not the responder policy
      3. People will find out in this process that they may have been exploited and will need to know what to do. The items I have used for incident response are in this post. As with all cyber security things, there is a possibility that a very advanced attacker exploited your system, removed their tracks, and/or pivoted to another system in your deployment. Your comfort level will determine your path for incident response.

The Story

This has been an exciting couple of weeks in the Citrix world. This is a very quick blog post to get a collection of information out there on this issue. Let's first take a couple of steps back to get an idea of the scope of this issue and whether it may apply to you. Most Citrix deployments use a Citrix ADC (aka NetScaler) as the front door to their deployment for remote access. Many also use the Citrix ADC to load balance and secure other web applications, along with a couple of other possible roles (a network Swiss Army knife). These systems are deployed all over the world; you can get an idea with this very basic Shodan query. (Shodan is a search engine for Internet-connected devices that lets you search for all kinds of things that are on the Internet.)

This is a very rough count of the NetScaler\ADCs in the world; you will see more below about this. This number was quoted in many articles, and to me it seems about right based on the number of Citrix ADC clients around the world. This number will always fluctuate, because it takes a while to scan 4.3 billion IP addresses, people are updating their devices all the time, and the queries on the Shodan side change too; more about this below.

Example High Level Query

Next Thing, what is a CVE? 

It stands for Common Vulnerabilities and Exposures. This basically means someone has found something that could be used maliciously to cause unintended access or other effects in some software or operating system (aka Bad Computer Things). If you want to dig deeper than this, I would read through this link here; this is also the site where they are entered for tracking.

Overall in the cyber security world, there are many, many times a CVE is created but an exploit (the vulnerability weaponized for easy repetition) is never created (there are always exceptions). Under the normal vendor disclosure process, the vendor may have enough time to fix the issue, so it may be fixed as soon as the CVE is released. There are other instances where the disclosure process is so short that the CVE and/or even the exploit are released very quickly without any notification to the vendor, or with just a couple of days or weeks of notice, which may not be enough. Sometimes CVEs, vulnerabilities and/or exploits are released on Twitter and other social media outlets and go into the wild from there. Exploits are tools for attackers and IT security professionals, and new things get added to their bat belt almost every day or week to help them get into and around devices. This time it was Citrix's turn.

This vulnerability is bad because Citrix Gateway, VPN, AAA or SSO servers are placed on the internet so people can work from anywhere internet-connected. Exposure for every client was built in; no one had to break in, they just had to scan the internet. On top of that, this is an unauthenticated vulnerability, which means someone just needs to find your IP and URL and the exploit can run. This is by definition one of the worst vulnerabilities any system can have, and it is a big deal.

This is the first time in a while that a Citrix vulnerability has even had an exploit created for it. The social media whirlwind of notifications and exploitation information has made it blow up pretty quickly.

Top Links

Below is the mitigation code that the Citrix ADC executes so it stops answering these malicious requests. This is a directory traversal vulnerability that allows an attacker to read files from the file system and then start down the path to remote code execution and eventual remote access to the system in an elevated state. Bad permissions in these directories are also part of the problem; in that sense an attacker doesn't even need traversal to execute.
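The traversal itself is plain path normalization: once the `/../` is resolved server-side, the request escapes the `/vpn/` handler into the `/vpns/` tree. A minimal illustration (using Python's posixpath purely as a demonstration, not part of any exploit or the appliance):

```shell
# Illustration only: how '/vpn/../vpns/...' resolves after normalization.
# The smb.conf path is the same one used by the public checks later in this post.
python3 -c "import posixpath; print(posixpath.normpath('/vpn/../vpns/cfg/smb.conf'))"
```

This is why the mitigation policy matches on the decoded URL containing `/vpns/` or `/../`: it catches the request before that normalization hands over a file.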

The Mitigation (1-11-20)

  • enable ns feature responder
  • add responder action respondwith403 respondwith "\"HTTP/1.1 403 Forbidden\r\n\r\n\""
  • add responder policy ctx267027 "HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/vpns/\") && (!CLIENT.SSLVPN.IS_SSLVPN || HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/../\"))" respondwith403
  • bind responder global ctx267027 1 END -type REQ_OVERRIDE
  • save config

Go to this site to check if there are updated methods added. This article has been updated a couple times since its initial release.
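After applying the commands above, it is worth confirming the policy is in place and firing. A hedged sketch, assuming the policy name ctx267027 from the mitigation (this is NSCLI on the appliance, not a regular shell; the policy display includes a hit counter that should increment when a traversal probe is answered with the 403):

```shell
# NetScaler NSCLI - check the mitigation policy exists and watch its Hits counter
show responder policy ctx267027
```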

Big Big Note for Citrix Gateway Customers (Not ADC\NetScaler customers):

If you have a Citrix Gateway VPX, which doesn't have load balancing or many other functions and is just an ICA\SSL bridge for users to come in, then you cannot apply this fix. As soon as you enter "enable ns feature responder", the command will fail because you are not licensed for it. Please create a Citrix Support case and you will be able to get a trial license for Citrix ADC Standard Edition, which allows responder policies. When the new firmware comes out you can apply that, and it will fix this issue, but you may also have some other work to do, because that license will eventually expire and I have not tested what would happen to the gateway site. My estimation is that it will continue working because that feature is allowed on all appliances, but we will see how that goes. There may not be many clients in this situation, but it has to be called out, and we need to make sure they know too.

Bad Firmware Build

From Citrix: "In Citrix ADC and Citrix Gateway Release 12.1 build 50.28, an issue exists that affects responder and rewrite policies causing them not to process the packets that matched policy rules. Citrix recommends that customers choose one from the following two options for the mitigation steps to function as intended." If you are on this version, it is recommended to upgrade to the updated 12.1 build 50.28 or a later 12.1 build like 12.1 build 55.13 (after testing) until the new remediated firmware is released later this month.

SD-WAN Addition

This vulnerability also affects two older Citrix SD-WAN WANOP builds, 10.2.6 and 11.0.3. All other SD-WAN products are not impacted. SD-WAN used much of the same code base, so this correlation makes some sense; you can also see the AAA server feature was removed after these two builds in the 10.x and 11.x families.

Exploit Citrix Summary

Four exploits were released on January 11th, 2020; there were only 2 Citrix exploits in 2019 before this one. Then there was a dry spell between 2017 and 2019, and it had been since 2015 that there was an external bug like this. When I think about the product growth over the past couple of years, this becomes a much bigger problem (more appliances, more risk). If you want to check things out on this front, just go to this site and search for Citrix or your vendor of choice to see what is out there.

10 Years of CVEs and Exploits for Citrix

As you can see, out of 166 CVEs only about 8% ever turned into exploits in the past 10 years, so this is not a common problem. This shows that there wasn't enough interest in many of these CVEs, because there were easier ways to break into a deployment than to exploit ShareFile, Receiver or other things that you need to be inside to do. This isn't just about picking the lowest fruit; sometimes in penetration tests and real attacks you just pick it up off the ground where you might step on it while doing simple scans. This is because so many deployments don't patch their systems regularly (the CVEs that get exploits), have bad privileged account management, have little to no logging, and are missing many more core security principles.

Exploits (2 are in the Exploit DB) (Most Updated So Far)


This is the best deep dive article on this exploit and how it works.

These exploits that have been released just scratch the surface of what could be possible and are propping the door open for the next things found. It will only be a matter of time before more details on how the Citrix ADC works are discovered based on this new remote access. For better or worse, there are a lot of security researchers looking at these systems right now, and more things may be found. With this exploit and the social media excitement and proliferation, attackers will be focused on this product line too. They will be learning where the config files are, how the file system works, and many other Citrix ADC internals; the researchers to help fix "all the things", and the attackers to break in and "steal all the things".

This is a very, very important vulnerability to fix, because simply allowing external access makes you vulnerable. For many other CVEs that never got exploits, you had to have access to something like the NSIP, and the juice wasn't worth the squeeze. This one is worth the effort, and it shows: 4 exploits were released in just over 48 hours after January 10th.

It will only get worse for non-mitigated systems in the next couple of weeks. This is where counting how many systems are vulnerable gets really fuzzy, because there are only so many ways to test the vulnerability, so many IP addresses, and only so many easy and/or harder ways to figure out what is a Citrix Gateway VIP on the internet right now.


Numbers Fun, How Bad is it? (It Depends)

  • With the Shodan query "vuln:cve-2019-19781", I show around 14,787 boxes vulnerable to this exploit right now. All the systems I had time to confirm were truly vulnerable. This number seems almost too low, like something is missing from this query or the job hasn't completed.
  • I have completed this same scan multiple times and the count is growing. Why is that? Because new detection methods are being applied, and it takes a while to scan 4.3 billion IP addresses. I think this number will keep moving for each researcher based on what they used and how up to date it is. There is a classification problem: what is a Citrix Gateway\AAA\VPN site, how do you detect that, and then how do you scan it with the known detection methods.
  • Then one looking for a page title, title:"Netscaler" port:"443", shows 43,006 systems, and I didn't have time to scan all of those IPs.
  • Then the final query, http.waf:"Citrix NetScaler", shows 120,610 systems. That is based on how Shodan is "finding a NetScaler" and how people have named sites, and many of these sites are not vulnerable because they are just load-balanced or WAF-enabled VIPs.
  • Then there is Tripwire, who it seems just scanned IPs the old-fashioned way, and they came up with 39,378 vulnerable systems around January 8th. This is a better number in my opinion because they appear to have crawled and scanned a very large block of IPs.
  • There is now a scan from Bad Packets. Based on their scans, they are seeing over 25,121 vulnerable systems out there right now. They used BinaryEdge, which is a Shodan equivalent, to find over 60k deployments (IPs) for their scan. I think this is the best estimation to date, and based on it I think there is still a lot of work to go.

These different numbers come down to a couple of factors. Some researchers scanned from their own systems and IPs, while others rely on Shodan services, which take some time to crawl through the 4.3 billion IPv4 addresses. These differences make it very hard to find out how bad it is and how many more people need to fix their deployments. Based on my scans, I feel confident the CVE query on Shodan is good and there are over 20k vulnerable systems, but beyond that I'm not as confident in the higher counts; I think it falls somewhere between 20-30k systems, and it will take more time to get better counts. And if I'm doing this to find out how bad it is on the research side, there are way more people doing the same thing to attack. When I look at the Shodan output of the 14,787 sites, there are still some bigger names on there, all over the world, that need to act and apply the mitigation.

Four Types of Systems Out there

  1. Completely Neglected
    1. They are running firmware that is months to years behind that already has multiple vulnerabilities and bugs in it now, and they also are not keeping up with the latest news on this product or they would have already upgraded to something more modern.
  2. Not Notified
    1. There is also a large contingent that just don't know there is an exploit now being shared and would fix it if they knew. The talk on Twitter and blogs is fun, but will they see it? This is where Citrix, partners and others need to reach out to let them know.
  3. Waiting for Change Control
    1. Then there are large organizations that are awaiting testing and change control to implement this; I saw some friction over the past couple of weeks on Twitter.
    2. Perfect storm for most banking and Retail clients that are usually on a change freeze from Black Friday until the end of this week to make sure money and product can flow during the holidays. I know a couple large clients that couldn’t patch yet because of that and are just now starting to get ready to promote to Prod from Test.
  4. Forgotten About
    1. "I deployed that thing a couple of years ago and we stopped using it, or only random people are using it and we don't get calls about it," so it was never "fixed" or removed. There seem to be some of these out there.


Shodan Queries and their outputs (1-15-2020)

(vuln:cve-2019-19781) 14,787 Found (My verification scan showed 100%)

(title:"Netscaler" port:"443") 43,006 Found (Not all assessed)

(http.waf:"Citrix NetScaler") 120,610 Found (Not all assessed)


Shodan Queries and their outputs (1-11-2020)

(vuln:cve-2019-19781) 1,377 Found (My scan showed 100% vulnerable, only so much time)

(title:"Netscaler" port:"443") 43,006 Found (Not all assessed)

(http.waf:"Citrix NetScaler") 120,610 Found (Not all assessed)


Shodan Math

                1-11-2020 3PM CST    1-11-2020 3PM CST    1-15-2020 1PM CST
Citrix ADCs~    120,610              43,006               120,610
Vulnerable~     43,006               1,377                14,787
% Vulnerable    36%                  3%                   12%
% Mitigated     64%                  97%                  88%
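The percentage rows are just the vulnerable count divided by the detected ADC count for that snapshot. For example, the 1-15 column:

```shell
# 14,787 vulnerable out of ~120,610 detected ADCs => ~12% still vulnerable
awk 'BEGIN { printf "%.0f%%\n", 14787 / 120610 * 100 }'
```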


Tripwire Blog Math

Tripwire Blog
Citrix ADCs~ 58,620
Vulnerable~ 39,378
% Vulnerable 67%
% Mitigated 33%

Bad Packets + Tripwire Math

Bad Packets Blog
Citrix ADCs~ 58,620
Vulnerable~ 25,121
% Vulnerable 43%
% Mitigated 57%

Pick your poison on how good or bad it still is.

These are still estimated counts from Shodan queries, and these numbers are constantly changing, so this is just a snapshot in time. There are also many ways to try to identify what a Citrix ADC VPN server is, and which probe was used to see if it was vulnerable. Remember, not every Citrix ADC VIP on the internet is vulnerable; only Citrix Gateway\VPN\AAA servers are.

CVE Related Links

Blogs about it (updated with clarification and inclusion of the 50.28 firmware) (how to decrypt passwords) (tons of content for Red and Blue Team along with DFIR information)


Citrix ADC Security Reference Link


Special AWS Citrix ADC Note

@KevTheHermit showed another problem with AWS Citrix ADC instances during his research: the nsroot password is the same as the instance ID, which makes it that much easier to exploit the appliance further once the initial attack is successful. We suggest logging into the console and changing the nsroot password after you deploy an appliance, and especially after this incident is patched and/or mitigated.
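A hedged sketch of how you might confirm this from the appliance shell. The metadata URL is the standard EC2 instance-id endpoint; the helper function and the candidate password are mine, purely for illustration:

```shell
# Standard EC2 metadata endpoint (link-local; only answers on EC2 instances)
METADATA_URL="http://169.254.169.254/latest/meta-data/instance-id"

# Hypothetical helper: does the password being tested equal the instance ID?
default_nsroot_in_use() {
  # $1 = instance ID, $2 = candidate password
  if [ -n "$1" ] && [ "$1" = "$2" ]; then
    echo "DEFAULT PASSWORD - change nsroot now"
  else
    echo "ok"
  fi
}

ID=$(curl -s --max-time 2 "$METADATA_URL")   # empty when run off EC2
default_nsroot_in_use "$ID" "password-you-are-testing"
```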

How to Check if You're Vulnerable

Basically, there is a URL query where you traverse into the system directories and can view files. If you're vulnerable, you see the contents of those files; if not, you see nothing or a 403 Forbidden. There are many scanner options.

My favorite and simplest method is to just use this curl command, because it doesn't require any coding, just a macOS or Linux endpoint.

From a Mac and/or Linux box

curl https://YourURL.something/vpn/../vpns/cfg/smb.conf --path-as-is

You will see things like this


encrypt passwords = yes

name resolve order = lmhosts wins host bcast

Those are the file contents being read over the internet, which is very bad!


Citrix Checkers (Citrix Tool) (requires Python 2.x or 3.x)


Open Source Checkers  (US Government) – Recommended (Online Scanner from @zentura_cp)  (PowerShell)  (NMAP script)  (nmap script)  (Chinese)  (Russian – Windows Binary)

Commercial Checkers

More information on the forensics

ssh -t [address] 'grep -r "/../vpns" /var/log/http*'


Other Detection Methods and outputs:

GET /vpn/../vpns/services.html

GET /vpn/../vpns/cfg/smb.conf

Mitigated                    HTTP/1.1 403 Forbidden

Not Mitigated             HTTP/1.1 200 OK
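The status-code table above can be scripted into a quick checker. A minimal sketch: the target URL is a placeholder you would replace with your Gateway URL, and `--path-as-is` keeps curl from collapsing the `/../` client-side before it ever reaches the appliance:

```shell
#!/bin/sh
# Classify the HTTP status code the way the table above describes.
classify() {
  case "$1" in
    403) echo "Mitigated (403 Forbidden)" ;;
    200) echo "NOT mitigated (200 OK)" ;;
    *)   echo "Inconclusive (HTTP $1) - check manually" ;;
  esac
}

# Placeholder host: substitute your Gateway URL.
STATUS=$(curl -k -s -o /dev/null --max-time 10 -w '%{http_code}' \
  --path-as-is "https://YourURL.something/vpn/../vpns/cfg/smb.conf")
classify "$STATUS"
```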


Many deployments already have ControlUp, and they have made a script-based action that can detect and remediate this on existing integrated Citrix ADC appliances.

There are many things that have come out, and are still coming out, to help find out if you're vulnerable. Many monitoring systems are also jumping on the bandwagon to help monitor for this now, and for similar things ongoing, so look at your existing monitoring or vulnerability management solution to find out what options you have.

Shodan has some offers right now, for a little bit of money ($49 USD), to monitor a couple of IPs for bad things. Sign up here; this will be a good tool to have going forward that is external and looking for things as they come up. Shodan is scanning the internet all the time, so this just uses that power to help you.


You should be logging things as mentioned above to have a chance of knowing if you're being probed and/or attacked. When turning up the logging, make sure to test it and pay attention to log growth so it doesn't get away from you. There are many free to nearly free options for a syslog server. Having some external retention is better than none, and it will allow better alerting if integrated too.

This is also a good time to think about the layers of your logging. You need to go from the ADC outward, step by step, to ensure each connection and participating member has logging, so you know if something has left the ADC and landed on another planet\system.


  1. Citrix ADC
    1. Domain Controllers – LDAP (Most important thing to log)
    2. Firewalls – (Knowing what is coming in and out is key)
    3. DNS
    4. File Servers
    5. Database Server
    6. Other Servers
    7. Virtualization Host
    8. Virtualization Management
    9. Top of Rack Switches
    10. Core Switch\Switches
    11. DMZ or other connected firewall instances
    12. Network ACLs
    13. Network Flow Data
    14. Desktop Systems
    15. Copiers
    16. IoT, HVAC and Access Control Devices


Citrix ADC Logging Setup

You can use one of the many free syslog servers, and that is a good place to start. I hope most have an existing SIEM and can just integrate into it, but if that isn't an option, having external log retention is key. I have done a couple of threat assessments for this vulnerability, and many didn't have more than 2-3 days of the logs that really matter, like httperror.log (detection of the exploit running and what the payloads were), bash.log (what was done) and others. In those cases we had to lean on the team's "feeling" as to whether people ran the stock exploit or customized it, how long the threat window was open, and whether there was detection of lateral movement, to determine how far we would go with the post-exploit remediation. That isn't a good position to be in.

Add some specific logging related to the CVE to help track the constant scanning.

add audit messageaction MsgAct_CVE-2019-19781 WARNING "\"CVE-2019-19781 Attack from IP \"+CLIENT.IP.SRC+\" - URL: \"+HTTP.REQ.URL.PATH.HTTP_URL_SAFE+\" (headers: \"+HTTP.REQ.FULL_HEADER.HTTP_HEADER_SAFE+\")\"" -logtoNewnslog YES

set audit syslogParams -logLevel ALL -userDefinedAuditlog YES

set audit nslogParams -logLevel ALL -userDefinedAuditlog YES

set responder policy ResPol_Fix_CVE-2019-19781 -logAction MsgAct_CVE-2019-19781
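Once the message action above is bound, scanner probes land in the local syslog (/var/log/ns.log on the appliance). A small hedged helper, assuming that default log path, to count the hits from the shell:

```shell
# Count CVE-2019-19781 message-action entries in a syslog file.
count_cve_hits() {
  if [ -f "$1" ]; then
    grep -c "CVE-2019-19781 Attack" "$1"
  else
    echo 0
  fi
}

count_cve_hits /var/log/ns.log
```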

Tier 1 Target List

When I'm doing Citrix ADC threat assessments for clients, this is a very important part of the findings. These are devices in the configuration that will likely be scanned, or on which lateral movement may be attempted, first. Think about the things in your configuration like DNS, NTP, Domain Controllers (authentication policies), SSL vServers, services, server groups and servers, virtual IPs, and your Gateways. These devices should be examined for any malicious or unusual content from January 9th through the remediation date, and, if you were exploited, again once the forensics are completed and the boxes have been rebooted. It isn't guaranteed that attackers will go after these first, but they are the most logical place to look next.

Detecting Exploitation

There is not an easy way right now to prove what someone has done. The 4 public exploits each leave a different artifact\signature, which is helpful for detection but is not 100%. The key thing to remember is that these are the public exploits that were private until the 10th; that doesn't mean there aren't still others out there in the wild that people are not sharing for their own gains. Exploits are editable in most cases: someone can change the name of the file that is dropped, the name of the user account, the name of the process, the path of the query and many other options, which means the possibilities for how someone may have exploited the system increase exponentially. There are also advanced attackers and basic attackers; the former will clean up after themselves and come up with very clever ways to disappear into the system to hide from detection.

If you run Nessus, you can use the .yar file below to do a privileged scan looking for the common detection methods.


Exploit Auditing

Great Links about this Auditing Process.

Disclaimer: as stated before, this will not detect every exploit, but it may help detect some anomalies if the attacker didn't modify the public exploits and/or didn't clean up after themselves. Most attackers will use the default exploit, and these are some of the documented artifacts that could be left behind. This list of commands was gathered from many sources and will be very dynamic as other variants, workarounds and new developments come up. We can count on it changing constantly as more infections happen and further forensics are completed with a larger sample set.

Look at this first for things you didn't do. If you don't normally do a lot on your ADC, these should be very quiet, and they may only have entries from weeks, months or years ago, the last time you were on. There are more precise queries just below. Also understand this is a cat and mouse game, even in this blog; as we disclose what we have seen and how to spot the attackers, they are using that information against us by changing their tactics to avoid detection.

Exploitation Check Quick Punch List v1

  1. Check your license. I have heard of some who rebooted their devices and actually had an expired license.
  2. Get a support file (a backup): System -> Diagnostics -> Get Support File, and save that file off.
  3. All the commands below are in the NSCLI; if you SSH into the box and use the shell directly, you can drop the "shell" prefix.
  4. Check the date on the box to help correlate log findings.
    1. shell date
  5. Check your config change date.
    1. shell ls -l /netscaler/ | grep netscaler
    2. shell ls -l /nsconfig | grep netscaler
      1. What is the date of your netscaler.conf?
      2. Does that look right? Look for file links to other places.
  6. Check the local account password file.
    1. shell ls -lh /etc/passwd
      1. Check when the file was modified. If it was after the exploit and it wasn't you, then you need to change that password as soon as possible.
      2. I recommend changing the password for nsroot and any local accounts if any exploits are detected.
    2. shell cat /etc/passwd
      1. Look to see what accounts are in there.
      2. root, nsroot, daemon, operator, bin, nobody, sshd, nsmonitor are default.
  7. Check your logs.
    1. shell ls -lh /var/log
      1. Are the files there? Are they really small?
  8. Bad file check.
    1. If any of these files have something other than an 8-9 character random filename, that is a sign of a more advanced attacker who changed the stock exploit. If you see this, you need to adjust your remediation accordingly. Pwnpzi1337.xml is the file name for the Project Zero India exploit.
    2. shell ls /netscaler/portal/templates/*.xml
      1. There should be no XML files here.
      2. If infected, look at the file dates here: shell ls -lh /netscaler/portal/templates/
    3. shell ls /var/tmp/netscaler/portal/templates
      1. This directory should not exist.
      2. If infected, look at the file dates here: shell ls -lh /var/tmp/netscaler/portal/templates
    4. shell ls /var/vpn/bookmark/*.xml
      1. Most commonly this doesn't exist, but it shouldn't have XML files in it either.
      2. If infected, look at the file dates here.
  9. Cron jobs (persistence methods).
    1. shell cat /etc/crontab
    2. shell crontab -l -u nobody
  10. Crypto check.
    1. shell top -n 10
      1. NSPPE-xx (Packet Engine) should be at or close to 100%; if another process is up there, you may have been mined on.
  11. PCAP.
    1. shell find / -name "*.cap"
    2. Captures would be a sign of a more advanced attacker that may have been sniffing.
  12. Shell logs.
    1. shell cat /var/log/bash.log | grep nobody
      1. Looking for user access from the nobody user.
    2. shell gzcat /var/log/bash.*.gz | grep nobody
      1. Looking for user access from the nobody user in zipped logs.
  13. Apache log check.
    1. shell cat /var/log/httperror.log | grep -B2 -A5 Traceback
    2. shell gzcat /var/log/httperror.log.*.gz | grep -B2 -A5 Traceback
    3. shell grep -iE 'POST.*\.pl HTTP/1\.1" 200 ' /var/log/httpaccess.log -A 1
    4. shell grep -iE 'POST.*\.pl HTTP/1\.1" 200 143' /var/log/httpaccess.log -A 1
    5. shell grep -iE 'GET.*\.xml HTTP/1\.1" 200' /var/log/httpaccess.log -B 1
    6. shell grep -i '.pl HTTP/1.1" 200 143' /var/log/httpaccess.log | grep POST
      1. All of these look for specific items related to moving .pl and .xml files in or out of the system.
    7. shell cat /var/log/httperror.log
      1. This looks at the raw contents of the file to spot other items that stand out.
  14. Persistent scripts.
    1. shell ps -aux | grep python
    2. shell ps -aux | grep perl
      1. Both are known persistence methods that use scripts to run reverse shells and other tasks. You should only see the grep command itself in this list of processes.
  15. Password\account check.
    1. shell ls -l /etc/passwd
      1. Look at the file date to see if something has been recently added.
    2. shell cat /etc/passwd
      1. Does anything look suspicious? Extra local accounts?
  16. Check TCP connections.
    1. shell netstat -natu
    2. Look for non-local IP addresses outside your VLANs. Internal IPs should be checked also, in case another box was compromised.
  17. Check your authentication profiles.
    1. Were they set to TLS or SSL, and are they now PLAINTEXT?
      1. I have seen some get changed in some audits and online.
      2. This is also the sign of an advanced attacker.
    2. This relates back to whether your configuration (ns.conf) was changed or not.
    3. If it was PLAINTEXT before, you need to work on getting it set to TLS or SSL as soon as possible, whether you find signs of exploits or not.
  18. Check your certificates.
    1. These are the SSL certificates you may want to rekey, especially if your box has been exploited.
                          2. I recommend if you have any signs of exploitation to rekey your SSL certificates. Some larger deployments may have a more difficult road ahead because the number of other places that Certificate is bound and the possible outages\disruptions an SSL certificate change may have.
                          3. I have seen some clients not rekey because they had good logs to be able to prove they didn’t do anything beyond doing the exploit and didn’t pivot into the system configuration. Most clients may only have a couple days of
                        6. Check your Configuration File for Potential Tier 1 Targets
                          1. View your ns.conf and that will create your Tier 1 Target list which was most likely accessed first if the device was exploited.
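
The file checks above can be scripted for a quick first pass. This is a minimal sketch of my own (the helper name scan_dir and the OK/SUSPECT output format are mine, not from any Citrix tool); it only flags dropped .xml/.pl files in the known drop directories and is no substitute for the full manual audit.

```shell
#!/bin/sh
# Quick IOC sweep for dropped template files (sketch; adjust paths per build).
# Flags .xml/.pl files in the portal template and bookmark directories.

scan_dir() {
  dir="$1"
  if [ ! -d "$dir" ]; then
    echo "OK: $dir does not exist"
    return 0
  fi
  found=$(find "$dir" -maxdepth 1 \( -name '*.xml' -o -name '*.pl' \) 2>/dev/null)
  if [ -n "$found" ]; then
    echo "SUSPECT: files found in $dir"
    # Show file dates, as suggested above, to establish a timeline.
    echo "$found" | xargs ls -lh
    return 1
  fi
  echo "OK: no xml/pl files in $dir"
  return 0
}

for d in /netscaler/portal/templates \
         /var/tmp/netscaler/portal/templates \
         /var/vpn/bookmark; do
  scan_dir "$d"
done
```

Run it from a root shell on the appliance; any SUSPECT line means you keep working down the checks above.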


If Exploited

Your mileage will always vary on what you need to do based on your threat landscape. Here are some of my thoughts I have been telling customers that have found evidence of an exploit executed.

What compliance bodies cover your business? Finance\Banking, SOX, PCI, HIPAA, and state, local, and federal laws.

If you are under one of these frameworks, then you need to follow those procedures for those compliance bodies. There are also ethical considerations based on what certifications and professional groups you are part of that have provisions for incident response and disclosure.

Here are links to two well-known incident response process guides

One thing we know so far is that around 1-10-20 the first public exploit was released, and there are some reports of infections as early as the 9th, right around when it came out. In most cases your risk is much lower if you mitigated your system before 2020 than if you did it later this month.

This should come into your process for your next steps.

I have found traces of exploitation, now what?

This still depends. One thing most Citrix ADC deployments don’t get configured with is good SNMP and SYSLOG logging, and many don’t have a good way to search, filter, or alert if artifacts are found. If you have complete logging and are confident the attackers didn’t do anything, then you may be able to move on with your life. Likewise, if you did find something and were able to confidently remove their remote access, then you could move on.

But most will find some trails and may not be able to connect the dots on what was done and where the attackers may have gone, and in that case it could be easier after detection to just reset the devices.

My next advice will be changing over the next 2 weeks as we learn more.

Sample Incident Response Paths

There is no single right and perfect answer I can give that will fit everyone’s situation. These are my thoughts as of now, 1-19-20, and they may change as I learn more about next steps and as things are released on the defense and/or attack side of this vulnerability. IT security, like most fields, is ruled by “it depends”. If you find anything other than the stock exploit files in those 3 directories, I would consider the box compromised and go down the more cautious path. In several of these paths I’m suggesting the more cautious option, especially when there is no logging to confirm what the attackers did or didn’t do. You need to work with your team to decide the best course of action based on your situation, because this is a team sport. There can always be a better way to handle things like this, but based on the evidence you have on the device and around the device (1st Tier Targets), you may be OK reducing your risk and going from there.

  • Mitigate: This should be where you start no matter what, with the new firmware or the responder policy.
  • Exploits Detected During your Audit.
    • Start your incident response process.
      • With Good Device Logging
        • Signs of Advanced Attacks or Persistency
          • Build New and Migrate
        • No Signs of Advanced Attacks or Persistency
          • Remediate and Keep Running
      • Without Device Logging
        • Signs of Advanced Attacks or Persistency
          • Build New and Migrate
          • Factory Reset
        • No Signs of Advanced Attacks or Persistency
          • Build New and Migrate
          • Factory Reset
      • With Good 1st Tier Target Logging
        • Signs of Advanced Attacks or Persistency
          • Build New and Migrate
          • Factory Reset
        • No Signs of Advanced Attacks or Persistency
          • Remediate and Keep Running
      • No 1st Tier Target Logging
        • Signs of Advanced Attacks or Persistency
          • Build New and Migrate
          • Factory Reset
        • No Signs of Advanced Attacks or Persistency
          • Build New and Migrate
          • Factory Reset
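
The “Mitigate” step above refers to the responder policy Citrix published on December 17th. For reference, it takes roughly the following shape; copy the exact commands from the official Citrix mitigation article (CTX267679) rather than from here, and note that standalone, HA, and cluster deployments each have their own steps.

```
enable ns feature responder
add responder action respondwith403 respondwith "\"HTTP/1.1 403 Forbidden\r\n\r\n\""
add responder policy ctx267027 "HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/vpns/\") && (!CLIENT.SSLVPN.IS_SSLVPN || HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/../\"))" respondwith403
bind responder global ctx267027 1 END -type REQ_OVERRIDE
save config
```

This blocks the /vpns/ path traversal requests the exploit depends on, which is why appliances mitigated back on December 17th were protected before the public exploits landed.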

Response Definition and Thoughts

  • Build New and Migrate – Start your incident response process, then you can start this process. This is relatively easy for VPX and SDX customers because of the virtual nature and the flexibility of the configuration platform. MPX migrations are a different story because of how the factory reset works and how the base image is maintained; there is a very low risk that an advanced attacker could persist through a firmware upgrade and/or factory reset. The likelihood may be low, but it is still possible (just like anything in the cyber world).
  • Factory Reset – Start your incident response process, remove the .xml files and anything else detected, reboot, and check for persistency again. Then start the process to do the factory reset. There are scripts that can be obtained from Citrix that will go through the process. This will wipe the system to the lowest level before reloading the operating system, but not everyone will trust this method, based on their threat landscape, and some may want more.
    • The most drastic method would be to RMA the devices to get the drives reloaded; this may be a good or a bad idea based on your lifecycle, platform, and your redundancy plan too. I would only suggest going down this path if you saw advanced techniques used and have confirmed lateral movement based on those techniques. I know Citrix is working on what-if options.
  • Remediate – Start your incident response process, remove the .xml files and anything else detected, reboot, and check for persistency again. If you have good logs, then you will know if anything was done; if not, then look at your threat landscape and your logging on 1st Tier Targets (or anything else) to decide whether you need a factory reset and/or need to Build New and Migrate.
  • Logging Tiers
    • Good Local Logging
      • You are in the best position to see what happened locally and to know whether there were any attempts at lateral movement or if the exploit was simply run, as in most cases.
    • Good 1st Tier Target Logging
      • You are in the best position to see if lateral movement happened or was even attempted. These targets should be the first things attacked, and if you saw successful lateral movement then you should be the most worried and proceed with more caution on your remediation path. If not, then you can lower the risk of the threat and just take the remediation path.
    • No Local Logging
      • You are in the worst position to see what happened locally or to know whether there were any attempts at lateral movement or if the exploit was simply run, as in most cases. You have to proceed with more caution on your remediation path.
    • No 1st Tier Target Logging
      • You are in the worst position to see if lateral movement happened or was even attempted. These targets should be the first things attacked, and since you cannot confirm whether lateral movement succeeded, you should proceed with more caution on your remediation path.

I hope people can remove the infection and have good enough logging to feel confident the attackers are no longer in, so they can resume their normal activities without some of these steps.

Other Good Follow-up Steps

There are three main things you need to make a decision on if there are any traces of exploitation.

  1. Change NSROOT Password
    1. I recommend doing this no matter what you find or what logs you have. This is a chance to get nsroot into your password rotation. ADC management should be bound to LDAP, and nsroot should only be used for emergencies.
  2. Change LDAP Service Account (or another authentication service)
    1. Change this password, and I recommend changing to another account if possible so you will have a different SID too. This can be a passive change that nobody notices if it is tested before the rollout.
  3. Change SSL Keys
    1. Good Logging
      1. Maybe you are fine if you are 100% sure it is good.
      2. There is still a part of me that wants to say rekey all the things, but I know how much work that can be in a large shop.
    2. No Logging
      1. I think you have to rekey everything on there. There is PEM and PFX password protection, but I have seen a lot of places use very simple passwords for those, which could be brute-forced offline. Since we don’t know, we need to protect the company.
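
The first two items can be done from the NetScaler CLI. A hedged sketch (the placeholder passwords, the ldapAction name LDAP_Action, and the bind account svc-adc-bind are examples; use the names from your own ns.conf, and test the LDAP bind change before rollout):

```
> set system user nsroot <new-strong-password>
> show authentication ldapAction
> set authentication ldapAction LDAP_Action -ldapBindDn svc-adc-bind@corp.example -ldapBindDnPassword <new-service-account-password>
> save ns config
```

Remember in an HA pair the configuration change propagates, but it is worth verifying authentication on both nodes afterwards.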

Password Thoughts

I recommend changing the passwords for all local accounts on the box if there is any inkling of a successful exploit. Go ahead and change them, because in many deployments they may never have been changed after the original deployment 4-7 years ago. If you see signs of command-line access and/or tampering, you can most likely count on the attacker being able to crack the password hashes: the hashing on older firmware is weak, and even the stronger hashing in later builds can be attacked offline. Make sure management is bound to LDAP securely, and have alerts set up for nsroot logins.
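
While that alerting is being built, the local logs can be grepped for nsroot activity by hand. A sketch of my own (the helper name nsroot_logins is mine, and log line formats vary by build, so treat the pattern as a starting point):

```shell
#!/bin/sh
# Sketch: list nsroot login/logout events from a NetScaler syslog file.
# Usage: nsroot_logins /var/log/ns.log
nsroot_logins() {
  logfile="$1"
  if [ ! -f "$logfile" ]; then
    echo "no such log: $logfile"
    return 1
  fi
  # Audit lines include the user name; narrow to authentication events.
  grep -i "nsroot" "$logfile" | grep -iE "login|logout"
}
```

Run it against /var/log/ns.log and the rotated copies, and investigate any login from an IP you don’t recognize.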

LDAP Thoughts

If you have seen some level of exploitation, I would also make sure to change any service account that is defined within the Citrix ADC configuration. The most common is the LDAP\Kerberos bind account. Someone getting an exploit to run on a Citrix ADC doesn’t mean they are Domain Admins, but it may not take too long depending on your controls and logging. This is a very easy change that, if tested, can be seamless to users.

Certificate Thoughts

What you found in your audit will help figure this one out too. If you had good logging and you can see access to the key files, then you must rekey. If you don’t have good logging, then you should also rekey. If you have a wildcard certificate, that is an even bigger problem, and the more sites it is bound to, the greater your risk and exposure. The worst thing that can happen is that you assume it is OK while someone stands up a phishing site with your certificate, which no amount of user training will stop people clicking on. Someone having access to your certificates can lead to much bigger problems, so I would suggest proceeding with caution and doing a rekey. This could even be a reasonable time to do it, based on the expiration date of the current certificate. I have seen some swap to another certificate authority in this process just to change things up, but they had logs showing the file was accessed and downloaded, along with other advanced techniques detected.

If you want to run a Honeypot

DFIR Videos (How it works) (SANS) (SANS)

Citrix video response to the Nationaal Cyber Security Centrum advice to turn off Citrix ADCs.

Products affected

  • Citrix ADC and Citrix Gateway version 13.0 all supported builds
  • Citrix ADC and Citrix Gateway version 12.1 all supported builds
  • Citrix ADC and Citrix Gateway version 12.0 all supported builds (Patch Released)
  • Citrix ADC and Citrix Gateway version 11.1 all supported builds (Patch Released)
  • Citrix ADC and Citrix Gateway version 10.5 all supported builds
  • Citrix SD-WAN WANOP software and appliance models 4000, 4100, 5000, and 5100 all supported builds

Call to Action

  • First make sure you have a logging and alerting solution for your Citrix ADC deployment. SYSLOG and SNMP are highly recommended. Do you know who is logged into your Citrix ADC right now? Do you know what is going on right now, are you being scanned, are they in, what are they doing next?
  • If you have a Citrix Gateway, make sure you have mitigated your deployment.
  • If you are Citrix Partner, you should be reaching out to all your clients to let them know.
  • If you provide Citrix Managed Services, you better make sure your deployment and your customer deployments are good.
  • Anyone in this whole Citrix ADC world should be subscribed to and\or to hear about these things from the source. You can follow some of the CTPs, CTAs, and other Citrites on Twitter or LinkedIn, but you may miss something depending on how often you check and what their algorithms decide you should see.
  • Citrix, please send an email notification, if possible, to the MyCitrix contacts of all Citrix ADC customers from the past 10 years about this CVE and its mitigation right now, and then send another round when the firmware is released for all major versions.
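
For the first bullet, remote SYSLOG can be added with a few CLI lines. A minimal sketch (the action/policy names and the 10.0.0.50 collector IP are placeholders for your own logging destination):

```
> add audit syslogAction remote_syslog_act 10.0.0.50 -logLevel ALL -logFacility LOCAL0
> add audit syslogPolicy remote_syslog_pol ns_true remote_syslog_act
> bind system global remote_syslog_pol -priority 100
> save ns config
```

Getting the logs off the box matters here because an attacker with a shell can tamper with the local copies; a remote collector preserves the trail you will need for the audit steps above.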

Security Nerd Thoughts

If you think about what the Citrix ADC is doing for customers, it may be the center of internal and external networks, PCI networks, DMZ networks, and many other protected areas. If someone gains remote access to your solution, think about what goes through your box today, and think about those SSL private keys and an attacker then having the ability to run packet captures (what can they see now?). Simply put, getting a remote shell on a Citrix ADC doesn’t totally own the deployment instantly (they are not Domain Admins yet, keyword yet), but with a little more time and pressure, depending on what is in your config, how you are doing things, and what applications you’re using, it will just be a matter of time. I would treat this as a Defcon 1 situation, especially if you have a Citrix Gateway server stood up and hundreds or thousands of other traffic types flowing through this box. Please mitigate and watch for this patch so you don’t end up in a very bad situation that is preventable. I would also be doing vulnerability scanning everywhere at this time and making sure the perimeter an attacker would have to breach from your Citrix ADC is known and defended to the best of your ability.




In closing, this will be a living blog and I will send updates out on LinkedIn and Twitter when we learn more. The one great thing added on the 11th is a rough timeline of the permanent fix for each firmware version, which is great, but as always it will take a while for patching to flatten out and there will still be some outliers that will not patch and/or mitigate.


Below is the Citrix anticipated firmware release schedule for the permanent fix that will not require the mitigation. I hope it comes sooner, but I know this touches a lot of things and needs to be thoroughly tested so it doesn’t cause other issues.

Version   Refresh Build   Expected Release Date
10.5      10.5.70.x       31st January 2020
11.1      11.1.63.x
12.0      12.0.63.x
12.1      12.1.55.x       27th January 2020
13.0      13.0.47.x       27th January 2020



Last but not least, some credits for the people that have been working on this issue since it first arrived. There are many more people not on this list because they are working behind the scenes where I have not seen them.

Citrix Team – Working to get the information out there along with working on the new firmware. They are having to work on 5 patches at once because of the differences between the code families, which makes this that much more difficult.

Daniel Weppeler @_DanielWe – Logging Responder Policy to Detect Probes\Attacks

Florian Roth @cyb3rops – YARA rules for exploitation detection

CTP Anton van Pelt @AntonvanPelt & CTA Mads Petersen @mbp_netscaler & Jan Tytgat @jantytgat – Constant work with the CTP\CTA and Citrix teams on many fronts.

KevTheHermit @KevTheHermit – AWS Instance Password Vulnerability Disclosure on top of the CVE

Bad Packets Report @bad_packets – The Whole Bad Packets Team

Kevin Beaumont @GossiTheDog – Tons of promotion of the problems seen along with some details on his honeypot and what he has seen.

Mpgn @mpgn_x64 – Details on the exploit and variations of the exploits

Nick Carr @ItsReallyNick – Details on the exploit and incident response tips.

Digi Cat u/digicat – Reddit User, Amazing Running News Blog.

Ben Sadeghipour @NahamSec – DFIR YouTube video and other contributions

SANS Team – Articles, DFIR, and deep-dive videos

Craig Dods @0xCraig – Password Implications and Research

Manuel Kolloff @manuelkolloff – Exploitation and post-exploitation walkthroughs.

FireEye Team

Thank you to Rick Cole at FireEye for creating the earliest detection coverage, to the team of Mandiant incident responders – especially Austin Baker, Brandan Schondorfer, and John Prieto – and to all of the consultants who are responding to or securing their client environments from this vulnerability. Thanks also to Nicholas Luedtke from the FireEye Vulnerability Intelligence team for assistance in refining the disclosure and tooling timeline.


I’m sorry this blog is more rushed (not perfect) than normal, but this situation warrants getting my thoughts and suggestions out there as fast as I can, to help as many people as I can.




