Missing Microsegmentation Allows APT to Go Where No Attack Has Gone Before: NASA’s Databases

Last week the Office of the Inspector General for NASA published a report on a massive data breach at the Agency’s Jet Propulsion Laboratory (JPL), a federally funded research and development center focused on robotic exploration of the solar system. JPL’s IT networks contain data and information that “control spacecraft, collect and process scientific data, and perform critical operations.” It’s also worth mentioning that they are connected to additional NASA networks that, if breached, could lead to significant national security issues and potentially loss of life. This is not hyperbole (which is all too prevalent in many cybersecurity contexts). If a hostile nation-state were to exploit weaknesses in NASA’s vast partner network and gain access to spacecraft launch and mission controls, for example, the threat actors could alter flight plans, crash a vehicle, or direct it to an unrecoverable destination. If you think this line of thinking is absurd, please Google “Stuxnet” and see how exploited vulnerabilities led to the physical destruction of centrifuges at an Iranian uranium enrichment plant used in the country’s nuclear program.

Given that JPL (and NASA more generally) deals with such sensitive information, it would behoove the agency to take the utmost precaution with its IT systems. After all, JPL is responsible for numerous groundbreaking technological innovations that have put the United States’ space program at the forefront of the industry. An adversarial nation gaining access to a treasure trove of today’s JPL inventions would score a long-term strategic win in space exploration and advanced materials research.

However, as we learn in the report, approximately 500 MB of data related to Mars missions was lost in a data breach that occurred in April 2018, and that breach was far from the only cybersecurity incident the agency has experienced in the last 10 years. According to the OIG’s report, JPL lost 22 gigabytes of program data in January 2009 to an attacker associated with a Chinese IP address. Another attack attributed to China occurred in 2011, when JPL detected unauthorized, full-system access to and exploitation of 18 servers containing highly sensitive information; that intrusion resulted in 87 gigabytes of lost data. In 2014 an adversary uploaded malware to JPL’s systems. In 2016, an exploited website misconfiguration led to remote code execution. In 2017 attackers penetrated JPL’s networks through a coding flaw and “were able to upload, manipulate, and execute various files and commands unrelated to controlling spacecraft.” Over the last decade, attackers successfully carried out intrusions of varying severity: data loss, unauthorized access, and malicious activity inside JPL’s systems. Surely the agency was working hard to prevent system compromise and the national security risks that come with it.


Network sprawl

To date, JPL maintains an IT network comprising 3,500 servers, over 26,000 computer systems, and 3 subnets. According to the OIG’s report, JPL has implemented “multiple firewalls,” a “remote network gateway” through which partners access JPL, and a “zone-based architecture to limit indirect access through external-facing applications.” So far this sounds A-OK. But upon further examination, it seems that JPL, which is managed under contract by the California Institute of Technology (Caltech), failed to execute on security fundamentals. And if there’s one thing cybersecurity practitioners preach, it’s building a strong security foundation. In reality, though, the root causes of most major security breaches in the last decade can be traced to a failure to practice the security basics.

In the case of JPL (which, again, is highly connected to other NASA networks), the OIG found eight key fundamental failings related to the latest data breach (most of which apply to the preceding ones as well):

  1. Incomplete asset inventory, leading to lack of visibility, data classification, and ability to monitor and manage IT systems (a simple illustration of this gap follows the list).
  2. Lack of proper segmentation, allowing unauthorized access to sensitive data.
  3. Failure to document partner network requirements for security controls, permitting inconsistency and deficiency in implemented security controls. 
  4. Insufficient ticketing processes, which left critical issues unaddressed for extended periods of time. In some cases this meant delayed or non-existent reporting between JPL and NASA. 
  5. Miscommunication among system admins who didn’t understand the scope of their role and responsibilities related to log management and identification of malicious activity.
  6. Absence of threat hunting capabilities, meaning that JPL was not actively looking for unauthorized and/or potentially harmful activity.
  7. Lack of security training and certification opportunities for security- and IT-related staff, leading to insufficient knowledge and skills required to manage a modern, complex technology ecosystem.
  8. Inadequate incident response plans. JPL’s staff did not follow the recommendations outlined in the IR plan when incidents surfaced. Further, the plan itself deviated from industry best practices such as those outlined by NIST and SANS. Alarmingly, staff was not made fully aware of the plan—neither its scope nor its details. 
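
To make the first finding concrete, here is a minimal sketch in Python of the kind of inventory reconciliation the OIG found missing: compare the assets the IT organization believes it manages against the devices actually observed on the network, and flag the difference. The data and identifiers (approved_assets, observed_devices, the MAC addresses) are invented for illustration and do not represent JPL’s environment or any vendor’s tooling.

```python
# Minimal, hypothetical sketch: reconcile an approved asset inventory
# against devices actually observed on the network -- the basic
# visibility step the OIG found lacking.

# Devices the IT organization believes it manages (e.g., from a CMDB).
approved_assets = {
    "aa:bb:cc:00:00:01": {"owner": "mission-ops", "classification": "sensitive"},
    "aa:bb:cc:00:00:02": {"owner": "research", "classification": "internal"},
}

# Devices actually seen on the network (e.g., from DHCP or ARP logs).
observed_devices = {
    "aa:bb:cc:00:00:01",
    "aa:bb:cc:00:00:02",
    "aa:bb:cc:00:00:99",  # never entered into the inventory
}

def find_unmanaged_devices(approved, observed):
    """Return devices seen on the network that no one has inventoried."""
    return sorted(observed - set(approved))

if __name__ == "__main__":
    for mac in find_unmanaged_devices(approved_assets, observed_devices):
        print(f"ALERT: unmanaged device on the network: {mac}")
```

Without even this much, defenders cannot say what is on the network, let alone classify, monitor, or segment it.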

In effect, by neglecting the security basics, NASA was simply waiting for a cyber incident to turn catastrophic. 

Writing for a security vendor that offers microsegmentation, I posit that properly implemented microsegmentation (the absence of which is the second key failing in the OIG’s report) could have compensated for several of the other missing security controls that facilitated JPL’s decade-long breach spree. While the report states that “zoning” was implemented on JPL’s networks, their method of zoning clearly wasn’t granular enough to keep threat actors out of sensitive systems and away from proprietary data related to the U.S. space program. 

There is zoning, there is coarse-grained segmentation, and there is microsegmentation. And just to make things a bit more confusing, there’s microsegmentation, and then there’s microsegmentation. Not all microsegmentation solutions are built the same, and there are valid reasons that traditional, address-based microsegmentation often goes unimplemented in organizations’ networks.
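
To make the distinction concrete, here is a minimal sketch in Python contrasting the two models. Everything in it (the subnet, the rule structures, names like telemetry-uploader and mission-db) is invented for illustration and does not describe any particular product: an address-based rule extends trust to whatever occupies an allowed IP address and port, while an application-centric rule extends trust only to a verified software identity.

```python
# Minimal, hypothetical sketch: an address-based rule vs. an
# application-centric rule. Real products verify software identity
# cryptographically; a plain string stands in for that identity here.

# Address-based rule: anything on the partner subnet may reach the data
# server on port 443.
ADDRESS_RULES = [
    {"src_prefix": "10.1.0.", "dst_ip": "10.2.0.5", "dst_port": 443},
]

# Application-centric rule: only the verified telemetry service, wherever
# it happens to run, may talk to the mission database service.
APP_RULES = [
    {"src_app": "telemetry-uploader", "dst_app": "mission-db"},
]

def allowed_by_address(src_ip: str, dst_ip: str, dst_port: int) -> bool:
    """Evaluate a flow against address-and-port rules."""
    return any(
        src_ip.startswith(r["src_prefix"])
        and dst_ip == r["dst_ip"]
        and dst_port == r["dst_port"]
        for r in ADDRESS_RULES
    )

def allowed_by_app_identity(src_app: str, dst_app: str) -> bool:
    """Evaluate a flow against verified application identities."""
    return any(
        src_app == r["src_app"] and dst_app == r["dst_app"]
        for r in APP_RULES
    )

# A compromised partner laptop on the allowed subnet reaches the data
# server: the address-based rule waves it through.
print(allowed_by_address("10.1.0.42", "10.2.0.5", 443))               # True

# The identity-based rule does not, because the attacker's tool is not
# the verified telemetry service.
print(allowed_by_app_identity("attacker-tool", "mission-db"))          # False
print(allowed_by_app_identity("telemetry-uploader", "mission-db"))     # True
```

An attacker who lands on a host inside an allowed subnet inherits the address-based rule for free; they do not inherit a verified application identity.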

In the next post, I will explain a new approach to microsegmentation and show how, exactly, application-centric microsegmentation could have saved Houston (er, Pasadena) from having a problem.

 

Written by Katherine Teitler, Director of Content

Katherine Teitler leads content strategy and development for Edgewise Networks. In her role as Director of Content she is a storyteller, a translator, and a liaison between sales, marketing, and the customer. Prior to Edgewise, Katherine was the Director of Content for MISTI, a global training and events company, where she was in charge of digital content strategy and programming for the company's cybersecurity events, and the Director of Content at IANS, where she built, managed, and contributed to the company's research portal.