Paper Summary: Testable Cyber Requirements for Flight Software
This post summarizes my 2025 IEEE Aerospace Conference paper "Testable Cyber Requirements for Space Flight Software" co-authored with Gregory Falco.
Nobody can test "the system shall be secure." It checks a compliance box, but a test engineer reading it has nothing to work with. What does "secure" mean? Against what threats? How would you verify it? The requirement doesn't say. And this is the norm in space systems, where cybersecurity requirements are written to satisfy auditors rather than to drive development and testing.
This paper presents a methodology for turning that situation around: start from the flight software architecture itself, analyze what each component actually does, and derive security requirements specific enough that an engineer can write a test for every one of them.
Secure by Component, Not by Checklist
The core insight is that useful security requirements have to come from understanding the software, not from mapping a generic controls catalog. We call the approach "secure-by-component." Rather than starting from a framework like NIST SP 800-53 and trying to figure out which controls apply to which parts of the system, we start from the flight software architecture and work outward toward the threats.
The process has six steps. First, decompose the flight software into its low-level components. Second, analyze the attack surface of each one -- what are its inputs, outputs, and dependencies? What could an adversary touch? Third, identify specific threat techniques from the SPARTA framework, guided by the HAT TRICK cyberspace threat matrix. Fourth, select cyber resilience principles from NIST SP 800-160 Volume 2 to counter those threats. Fifth, redesign each component as a "secure block" that embodies those principles. Sixth, attach detailed cybersecurity requirements to each secure block.
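To make the shape of the process concrete, here is a minimal sketch of steps 2 through 6 as a derivation pipeline. All names here (the `Requirement` fields, the `derive_requirements` function, the example mapping) are hypothetical illustrations, not the paper's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Requirement:
    req_id: str       # e.g. "CR-1"
    text: str         # the "shall" statement
    principle: str    # NIST SP 800-160 Vol. 2 resilience principle
    threat: str       # threat technique it counters
    surface: str      # attack-surface element it protects

def derive_requirements(component, attack_surface, threat_to_principle):
    """For one component: walk its attack surface, pair each identified
    threat with a countering resilience principle, and emit one traceable
    requirement per pairing (hypothetical sketch)."""
    reqs, n = [], 0
    for surface_element, threats in attack_surface.items():
        for threat in threats:
            n += 1
            principle = threat_to_principle[threat]
            reqs.append(Requirement(
                req_id=f"{component}-{n}",
                text=(f"The {surface_element} shall counter '{threat}' "
                      f"by implementing {principle}."),
                principle=principle,
                threat=threat,
                surface=surface_element,
            ))
    return reqs

# Example input loosely modeled on the paper's command-reception component.
surface = {"command uplink": ["write malicious data"]}
mapping = {"write malicious data": "Substantiated Integrity"}
reqs = derive_requirements("CR", surface, mapping)
```

The point of the structure is that every requirement carries its own rationale: the threat, the principle, and the surface element travel with the "shall" statement instead of living in a separate compliance spreadsheet.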
The result is a set of shall statements that a systems engineer recognizes and a test engineer can verify. Each one traces back through the chain: this requirement exists because this resilience principle mitigates this threat technique, which exploits this attack surface on this component. Nothing is there because a generic checklist said so.
HAT TRICK and the Threat Matrix
The HAT TRICK framework, developed by JHU/APL for national security systems, deserves separate attention. HAT TRICK -- High Adversary Tier Threat Response Interdicting Cyberspace Kill-chain -- provides a structured way to bound the threat space so you can be systematic about it rather than guessing at what an adversary might do.
At its core is a cyberspace threat matrix that crosses three access vectors (direct physical access, remote access via network or communications link, and indirect access via supply chain or trusted insiders) with five threat events (writing malicious data, executing malicious programs, executing valid programs maliciously, denying authorized data, and obtaining system data). Any cyberspace attack has to use one of those vectors and produce one or more of those events. By designing the system to be resilient against each combination, you get coverage against both known and unknown threats, including zero-days.
For our case study, we focused on the integrity of commanding as the mission resilience priority, so we scoped to the first three threat events -- writing malicious data, executing malicious programs, and executing valid programs maliciously. The last two (denying authorized data and obtaining system data) were out of scope for this particular priority but would be addressed for a different one.
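The matrix and the scoping step above are easy to sketch: three access vectors crossed with five threat events give fifteen cells, and a mission resilience priority selects the rows in scope. This is just an enumeration for illustration, not HAT TRICK tooling:

```python
from itertools import product

ACCESS_VECTORS = [
    "direct physical access",
    "remote access",     # via network or communications link
    "indirect access",   # via supply chain or trusted insiders
]

THREAT_EVENTS = [
    "write malicious data",
    "execute malicious programs",
    "execute valid programs maliciously",
    "deny authorized data",
    "obtain system data",
]

# Every cell of the matrix is one (vector, event) combination to design against.
threat_matrix = list(product(ACCESS_VECTORS, THREAT_EVENTS))

# Scoping to the commanding-integrity priority keeps the first three events.
in_scope = [(v, e) for (v, e) in threat_matrix if e in THREAT_EVENTS[:3]]
```

Enumerating the cells explicitly is what makes the coverage argument checkable: you can point to each combination and say how the design handles it.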
What the Requirements Actually Look Like
We applied the methodology to a notional Command and Data Handling subsystem with four components: Command Reception and Validation, Telemetry Generation, Data Storage and Logging, and Health and Status Monitoring. The process generated 15 main requirements per component, each with detailed sub-requirements.
Take CR-1: "The command reception system shall prevent unauthorized data from being written to the command processing subsystem." That addresses the Substantiated Integrity principle and directly counters the "write malicious data" threat event. It decomposes into sub-requirements like implementing role-based access control for data entry permissions, validating all incoming commands against predefined acceptable formats and values, and logging any attempts to write unauthorized data. Each sub-requirement is testable. An engineer can set up a test case, execute it, and get a pass or fail.
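A sketch of what those sub-requirements look like as test targets, assuming hypothetical command names and roles (this is not the paper's implementation, just an illustration of why each branch is a pass/fail test case):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("cmd-reception")

# Predefined acceptable formats and values (hypothetical examples).
ACCEPTABLE_COMMANDS = {"SET_MODE": {"SAFE", "NOMINAL", "SCIENCE"}}
# Role-based access control for data entry permissions.
AUTHORIZED_ROLES = {"SET_MODE": {"operator"}}

def validate_command(opcode, argument, sender_role):
    """Return True iff the command may be written to command processing.
    Every rejection is logged, per CR-1's logging sub-requirement."""
    if sender_role not in AUTHORIZED_ROLES.get(opcode, set()):
        log.warning("unauthorized write attempt: %s by role %s",
                    opcode, sender_role)
        return False
    if argument not in ACCEPTABLE_COMMANDS.get(opcode, set()):
        log.warning("malformed command rejected: %s(%s)", opcode, argument)
        return False
    return True
```

Each clause maps to one sub-requirement: the role check to RBAC, the lookup against `ACCEPTABLE_COMMANDS` to format validation, and the `log.warning` calls to the audit trail for unauthorized write attempts.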
Compare that to "the system shall authenticate commands." The difference isn't just specificity. It's that the secure-by-component requirement has a traceable rationale -- you know why it exists, what threat it addresses, and what resilience principle it implements. When a mission changes or a new threat emerges, you can trace the impact through the chain and update the right requirements instead of hoping your abstract language still covers you.
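With the rationale stored alongside each requirement, impact analysis becomes a query rather than a re-read of the spec. A minimal sketch, with invented requirement records for illustration:

```python
# Hypothetical requirement records carrying their own rationale.
requirements = [
    {"id": "CR-1", "component": "Command Reception",
     "principle": "Substantiated Integrity",
     "threat": "write malicious data"},
    {"id": "CR-2", "component": "Command Reception",
     "principle": "Analytic Monitoring",
     "threat": "execute valid programs maliciously"},
]

def impacted_by(threat_event, reqs):
    """Which requirements must be revisited when this threat changes?"""
    return [r["id"] for r in reqs if r["threat"] == threat_event]
```

When a new variant of "write malicious data" emerges, the query returns exactly the requirements whose rationale depends on that threat event, and nothing else.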
Beyond Requirements
The paper also covers technical implementation guidance that cuts across all flight software subsystems: memory-safe programming languages (with a strong recommendation for Rust), operating system security features like process isolation and sandboxing, zero-trust communication between subsystems, formal methods for verifying critical properties, and secure developer ecosystem practices. These aren't afterthoughts -- they're driven by the same resilience principles that generated the requirements.
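To illustrate the zero-trust communication idea in miniature: every message between subsystems is authenticated with a per-link key, so no subsystem trusts another merely for sharing a bus. This sketch uses HMAC-SHA-256 from the Python standard library; the function names and key handling are invented for illustration, not drawn from the paper:

```python
import hmac
import hashlib

TAG_LEN = 32  # SHA-256 digest length in bytes

def seal(key: bytes, payload: bytes) -> bytes:
    """Prepend an authentication tag so the receiver can verify origin."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def open_sealed(key: bytes, message: bytes):
    """Return the payload if the tag verifies, else None."""
    tag, payload = message[:TAG_LEN], message[TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` on tags can leak timing information to an adversary probing the link.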
The space industry has decades of experience writing testable requirements for thermal performance, power budgets, and structural loads. Security requirements deserve the same rigor, and the same traceability from high-level objectives down to test procedures. This paper shows one way to get there.