To date, JPL maintains an IT network comprising 3,500 servers, over 26,000 computer systems, and 3 subnets. According to the OIG’s report, JPL has implemented “multiple firewalls,” a “remote network gateway” through which partners access JPL, and a “zone-based architecture to limit indirect access through external-facing applications.” So far this sounds a-OK. But upon further examination, it seems that JPL, which is managed under contract by the California Institute of Technology (Caltech), failed to execute basic security fundamentals. And if there’s one thing cybersecurity practitioners preach, it’s building a strong security foundation. In reality, though, the root causes of nearly every major security breach in the last decade can be traced to a failure to practice the security basics.
In the case of JPL (which, again, is highly connected to other NASA agencies), the OIG found 8 fundamental failings related to the latest data breach (which apply to the preceding ones as well):
- Incomplete asset inventory, leading to lack of visibility, data classification, and ability to monitor and manage IT systems.
- Lack of proper segmentation, allowing unauthorized access to sensitive data.
- Failure to document partner network requirements for security controls, permitting inconsistency and deficiency in implemented security controls.
- Insufficient ticketing processes, which left critical issues unaddressed for extended periods of time. In some cases this meant delayed or non-existent reporting between JPL and NASA.
- Miscommunication among system admins, who didn’t understand the scope of their roles and responsibilities related to log management and identification of malicious activity.
- Absence of threat hunting capabilities, meaning that JPL was not actively looking for unauthorized and/or potentially harmful activity.
- Lack of security training and certification opportunities for security- and IT-related staff, leading to insufficient knowledge and skills required to manage a modern, complex technology ecosystem.
- Inadequate incident response plans. JPL’s staff did not follow the recommendations outlined in the IR plan when incidents surfaced. Further, the plan itself deviated from industry best practices such as those outlined by NIST and SANS. Alarmingly, staff was not made fully aware of the plan—neither its scope nor its details.
In effect, by failing to attend to the security basics, NASA was simply waiting for a cyber incident to turn catastrophic.
As a security vendor that offers microsegmentation, we posit that properly implemented microsegmentation—the absence of which is listed as the second key failing in the OIG’s report—could have compensated for several of the other missing security controls that facilitated JPL’s decade-long breach spree. While the report states that “zoning” was implemented in JPL’s networks, clearly their method of zoning wasn’t strong enough to keep threat actors out of sensitive systems and away from proprietary data related to the U.S. space program.
There is zoning, there is coarse-grained segmentation, and there is microsegmentation. Just to make things a bit more confusing, there’s microsegmentation, and then there’s microsegmentation. Not all microsegmentation solutions are built the same, and there are valid reasons that traditional, address-based microsegmentation isn’t implemented in organizations’ networks.
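To make the distinction concrete, here is a minimal sketch of why address-based rules alone can fall short. All names, IP ranges, and policy structures below are hypothetical, invented purely for illustration; it contrasts a rule keyed on addresses and ports with one keyed on workload identity:

```python
import ipaddress
from dataclasses import dataclass

# A network flow as a segmentation engine might see it.
@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    process: str  # identity of the workload generating the traffic

# Traditional, address-based rule: "the app subnet may talk to the DB."
# Any process on an allowed host matches, including malware.
address_rules = [
    {"src": "10.0.1.0/24", "dst": "10.0.2.5", "port": 5432},
]

def address_allowed(flow: Flow) -> bool:
    return any(
        ipaddress.ip_address(flow.src_ip) in ipaddress.ip_network(r["src"])
        and flow.dst_ip == r["dst"]
        and flow.dst_port == r["port"]
        for r in address_rules
    )

# Application-centric rule: keyed on the identity of the workload,
# not on where it happens to be running.
app_rules = [
    {"src_process": "billing-service", "dst": "10.0.2.5", "port": 5432},
]

def app_allowed(flow: Flow) -> bool:
    return any(
        r["src_process"] == flow.process
        and flow.dst_ip == r["dst"]
        and flow.dst_port == r["port"]
        for r in app_rules
    )

# A legitimate flow from the billing service passes both checks.
legit = Flow("10.0.1.7", "10.0.2.5", 5432, "billing-service")
# Malware on the same host (same source IP!) slips past the
# address-based rule but is blocked by the identity-based policy.
malware = Flow("10.0.1.7", "10.0.2.5", 5432, "cryptominer")

print(address_allowed(legit), app_allowed(legit))      # True True
print(address_allowed(malware), app_allowed(malware))  # True False
```

The point of the sketch: once an attacker lands on a host inside an allowed zone, address-based rules treat their traffic as legitimate, which is exactly the lateral movement the OIG described.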
In the next post, I will explain a new approach to microsegmentation and show how, exactly, application-centric microsegmentation could have saved Houston, er, Pasadena, from having a problem.