<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom"><title>James Curbo</title><link href="https://curbo.space/" rel="alternate"/><link href="https://curbo.space/feeds/all.atom.xml" rel="self"/><id>https://curbo.space/</id><updated>2026-02-05T12:30:00+00:00</updated><entry><title>Paper Summary: Cyber Resilience for Cislunar Space</title><link href="https://curbo.space/posts/2026/02/paper-summary-cyber-resilience-for-cislunar-space/" rel="alternate"/><published>2026-02-05T12:30:00+00:00</published><updated>2026-02-05T12:30:00+00:00</updated><author><name>James Curbo</name></author><id>tag:curbo.space,2026-02-05:/posts/2026/02/paper-summary-cyber-resilience-for-cislunar-space/</id><summary type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE SMC-IT paper "Cyber Resilience in Cislunar Space: Security Strategies for Large-Scale Space Infrastructure" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A round-trip signal to the Moon takes approximately 2.56 seconds. For assets at Earth-Moon Lagrange points, it's longer. That number sounds small until you think about …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE SMC-IT paper "Cyber Resilience in Cislunar Space: Security Strategies for Large-Scale Space Infrastructure" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;A round-trip signal to the Moon takes approximately 2.56 seconds. For assets at Earth-Moon Lagrange points, it's longer. That number sounds small until you think about what an attacker can accomplish in the time between detecting a compromise and getting a response command back to the spacecraft. In LEO, ground operators have near-real-time control. At lunar distances, they don't. And that's before accounting for the periods when communication is blocked entirely -- by the Moon itself, by solar interference, or by an adversary who jams your uplink at the moment you need it most.&lt;/p&gt;
&lt;p&gt;This paper argues that the entire security paradigm has to change for cislunar space, and not incrementally.&lt;/p&gt;
&lt;h2 id="seven-problems-that-dont-exist-in-leo"&gt;Seven Problems That Don't Exist in LEO&lt;/h2&gt;
&lt;p&gt;We identified seven challenges that distinguish cislunar cybersecurity from the Earth-orbit security problem most people are thinking about. Communication latency is the obvious one, but it's not the hardest.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stakeholder complexity&lt;/strong&gt; is, in my view, the most underappreciated. A traditional satellite mission has one operator. Cislunar infrastructure involves national space agencies, commercial companies, startups, academic institutions, and international consortia -- all with different security postures, different encryption protocols, different risk tolerances. Joint missions where partners use incompatible communication frameworks aren't hypothetical; they may well become the norm. Every interface between organizations is an attack surface.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Constrained computing&lt;/strong&gt; makes the security engineer's job difficult in ways that have no LEO equivalent. Radiation-hardened processors are optimized for reliability, not performance. Power comes from solar arrays with variable output. Every CPU cycle and every watt spent on security is a cycle and a watt not spent on the mission. You can't just run a full intrusion detection suite when your processor was designed for deterministic real-time control, not general-purpose computing. The paper argues for lightweight cybersecurity solutions that balance resource efficiency with protection -- encryption that minimizes processing overhead, anomaly detection that operates within tight telemetry bandwidth -- and sketches what this could look like, but the engineering is far from solved.&lt;/p&gt;
&lt;p&gt;The other five -- diverse mission profiles, open communication channels vulnerable to interception, the high stakes of attacks on life-support or resource extraction, the impossibility of physical repair, and the communication intermittency I already mentioned -- each independently complicates the security picture. Together, they demand a different approach entirely.&lt;/p&gt;
&lt;h2 id="who-attacks-a-spacecraft"&gt;Who Attacks a Spacecraft?&lt;/h2&gt;
&lt;p&gt;The paper lays out four categories of threat actors. The one people rarely discuss in the space context is &lt;strong&gt;insider threats&lt;/strong&gt;. Not the dramatic spy scenario, but the mundane reality of collaborative missions with dozens of stakeholders. A contractor with inadequate security training clicks a phishing email. A disgruntled employee on a multi-stakeholder program introduces a vulnerability. These aren't exotic threats -- they're the same ones that plague every large organization on Earth, except the system being compromised is 384,400 kilometers away and can't be physically accessed.&lt;/p&gt;
&lt;p&gt;Nation-states targeting competitors' lunar resource extraction, criminal organizations deploying ransomware against a habitat's life-support systems, hacktivists sabotaging autonomous mining operations they consider unethical -- the threat landscape is as varied as it is consequential. &lt;strong&gt;Supply chain compromise&lt;/strong&gt; is particularly concerning in this context. A tampered component with dormant malware, activated once the satellite reaches its operational orbit. You've launched your vulnerability into cislunar space, and there's no recall.&lt;/p&gt;
&lt;h2 id="twelve-ways-to-stay-in-the-fight"&gt;Twelve Ways to Stay in the Fight&lt;/h2&gt;
&lt;p&gt;The core of the paper maps the &lt;a href="https://csrc.nist.gov/pubs/sp/800-160/v2/r1/final"&gt;NIST SP 800-160 Volume 2&lt;/a&gt; cyber resilience techniques onto the cislunar environment. Twelve techniques, each adapted to the specific constraints and threats we'd identified.&lt;/p&gt;
&lt;p&gt;A few stood out as particularly well-suited to the problem:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Adaptive response&lt;/strong&gt; is the direct answer to the latency problem. Systems that use AI and machine learning to identify anomalies and act without waiting for ground commands. The paper gives the example of detecting navigation signal spoofing by comparing received data against historical patterns and onboard camera imagery, then autonomously switching to a verified backup signal. The system responds in milliseconds. A ground operator would take seconds at minimum -- if they're even in contact.&lt;/p&gt;
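&lt;p&gt;As a rough illustration of that autonomous failover loop, here is a minimal Python sketch. The function names, the history-based sigma test, and the thresholds are my own assumptions for illustration, not the paper's design (which also cross-checks onboard camera imagery):&lt;/p&gt;

```python
import operator
import statistics

def is_plausible(history, new_fix, max_sigma=3.0):
    """Flag a navigation fix that deviates sharply from recent history."""
    mean = statistics.fmean(history)
    sigma = statistics.pstdev(history) or 1e-9   # avoid divide-by-zero
    deviation = abs(new_fix - mean) / sigma
    return operator.le(deviation, max_sigma)

def select_signal(history, primary_fix, backup_fix):
    """Autonomously fail over to the verified backup when the primary
    looks spoofed, without waiting for a ground command."""
    if is_plausible(history, primary_fix):
        return ("primary", primary_fix)
    return ("backup", backup_fix)
```

&lt;p&gt;The point of the sketch is the control flow: detection and response complete onboard, in one pass, with no round trip to Earth.&lt;/p&gt;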
&lt;p&gt;&lt;strong&gt;Non-persistence&lt;/strong&gt; is elegant in its simplicity. Periodically reset non-critical components. Regenerate cryptographic keys on a schedule. Disable unused network ports. You're not trying to find the adversary's foothold -- you're wiping the surface they'd attach to. In cislunar space, where you can't send a forensics team, preventing persistent access is more practical than hunting for it.&lt;/p&gt;
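&lt;p&gt;A toy Python sketch of the non-persistence idea, under my own assumptions about cadence and state layout (the class, the one-hour default, and the key scheme are all illustrative, not from the paper):&lt;/p&gt;

```python
import hashlib
import operator
import secrets

class NonPersistentComponent:
    """Periodically wipe non-critical state and rotate key material,
    removing any foothold an intruder may have established."""

    def __init__(self, reset_period_s=3600):
        self.reset_period_s = reset_period_s
        self.elapsed_s = 0
        self.session_key = self._fresh_key()
        self.scratch_state = {}

    def _fresh_key(self):
        return hashlib.sha256(secrets.token_bytes(32)).hexdigest()

    def tick(self, dt_s):
        """Advance time; on period expiry, wipe state and rotate the key."""
        self.elapsed_s += dt_s
        if operator.ge(self.elapsed_s, self.reset_period_s):
            self.scratch_state.clear()   # wipe the surface an attacker would attach to
            self.session_key = self._fresh_key()
            self.elapsed_s = 0
            return True
        return False
```

&lt;p&gt;Notice that nothing here detects the adversary; the reset happens on schedule whether or not a compromise occurred, which is exactly why it works without a forensics team.&lt;/p&gt;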
&lt;p&gt;&lt;strong&gt;Deception&lt;/strong&gt; is the most unconventional of the twelve. Deploy a decoy relay satellite transmitting fake telemetry data, and an adversary wastes resources attacking it while you gain intelligence on their tactics. It also acts as a deterrent by increasing the cost and complexity of successful attacks -- particularly useful given the expanded attack surface of cislunar networks, where it's harder for an adversary to distinguish real assets from decoys across vast distances.&lt;/p&gt;
&lt;p&gt;The remaining nine -- redundancy, segmentation, analytic monitoring, dynamic reconfiguration, substantiated integrity, privilege restriction, realignment, diversity, and coordinated protection -- are each discussed with specific cislunar applications. Redundancy through multiple relay satellites in different orbits. Segmentation isolating life-support from public communication interfaces. Coordinated protection sharing threat intelligence between lunar habitats, relay satellites, and ground stations in real time.&lt;/p&gt;
&lt;h2 id="the-window-is-open"&gt;The Window Is Open&lt;/h2&gt;
&lt;p&gt;What makes this paper urgent rather than academic is timing. Cislunar infrastructure is being designed and built right now. The architectural decisions being made today -- communication protocols, trust models, access control frameworks -- will be locked in for the operational lifetimes of these systems. We have the chance to build security in from the start instead of bolting it on after the architecture is frozen, which is exactly what happened with the terrestrial internet. The NIST resilience framework gives us a structured way to do it. Whether the community uses it is a different question.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.researchgate.net/publication/389428480_Cyber_Resilience_in_Cislunar_Space_Security_Strategies_for_Large-Scale_Space_Infrastructure"&gt;Read the full paper on ResearchGate&lt;/a&gt;&lt;/p&gt;</content><category term="Research"/></entry><entry><title>Paper Summary: Choosing a Runtime for Secure Rust Flight Software</title><link href="https://curbo.space/posts/2026/02/paper-summary-choosing-a-runtime-for-secure-rust-flight-software/" rel="alternate"/><published>2026-02-05T12:00:00+00:00</published><updated>2026-02-05T12:00:00+00:00</updated><author><name>James Curbo</name></author><id>tag:curbo.space,2026-02-05:/posts/2026/02/paper-summary-choosing-a-runtime-for-secure-rust-flight-software/</id><summary type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE SMC-IT paper "Alcyone: A Blueprint for Secure Rust Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When I wrote about the &lt;a href="https://curbo.space/posts/2026/02/paper-summary-the-cfs-attack-surface-from-the-bottom-up/"&gt;attack surface of cFS and RTEMS&lt;/a&gt;, the natural follow-up question was: what would you build instead? Alcyone is our answer, or at least the start …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE SMC-IT paper "Alcyone: A Blueprint for Secure Rust Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When I wrote about the &lt;a href="https://curbo.space/posts/2026/02/paper-summary-the-cfs-attack-surface-from-the-bottom-up/"&gt;attack surface of cFS and RTEMS&lt;/a&gt;, the natural follow-up question was: what would you build instead? Alcyone is our answer, or at least the start of one. It's a blueprint for a cyber-resilient flight software architecture, designed from scratch in Rust, grounded in the secure-by-component methodology from the &lt;a href="https://sagroups.ieee.org/3349/"&gt;IEEE P3349 Working Group&lt;/a&gt;. The paper lays out a subsystem decomposition, derives cyber requirements from threat models, and evaluates seven Rust-compatible runtime platforms for embedded execution. That runtime evaluation turned out to be the most practically useful part of the work.&lt;/p&gt;
&lt;h2 id="the-architecture-in-brief"&gt;The Architecture in Brief&lt;/h2&gt;
&lt;p&gt;Alcyone decomposes flight software into five secure blocks, each with bounded interfaces and explicit enforcement surfaces for security requirements:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Real-Time Kernel and Hardware Abstraction&lt;/strong&gt; -- task scheduling, resource coordination, secure boot, MPU-based memory isolation. This is the root of trust.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Command and Data Handling (C&amp;amp;DH)&lt;/strong&gt; -- command reception, validation, authenticated dispatch, telemetry generation, data logging, and health monitoring. Each function operates within a privilege-constrained domain.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Fault Detection, Isolation, and Recovery (FDIR)&lt;/strong&gt; -- continuous monitoring of telemetry trends and task heartbeats, rule-based fault attribution, and recovery actions ranging from reconfiguration to safe mode transitions.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Interprocess Communication and Bus Management&lt;/strong&gt; -- message routing with queue-based isolation, cryptographic integrity on message payloads, access control on routing, and audit trails. This subsystem enforces a zero-trust communication model at runtime.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Mission Applications&lt;/strong&gt; -- payload management and data collection, operating in restricted domains with verified interface access to the rest of the system.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each subsystem is implemented as a standalone Rust crate. Traits define cross-cutting contracts like command handlers, telemetry producers, and fault reporters, and the compiler enforces interface adherence at build time. Shared mutable state is eliminated. Communication happens through explicit message-passing. The development practices follow &lt;a href="https://research.google/blog/eliminating-memory-safety-vulnerabilities-at-the-source/"&gt;Google's Safe Coding Initiative&lt;/a&gt; and the &lt;a href="https://csrc.nist.gov/projects/ssdf"&gt;NIST Secure Software Development Framework (SSDF)&lt;/a&gt;.&lt;/p&gt;
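&lt;p&gt;To make the message-passing model concrete, here is a loose sketch of the zero-trust bus idea in Python. The bus API, the route allow-list, and the key handling are my illustrative assumptions, not Alcyone's actual interfaces (and in the real design, the Rust compiler, not a runtime check, enforces the contracts):&lt;/p&gt;

```python
import hashlib
import hmac
import queue

# Illustrative only: routing is checked against an allow-list and every
# payload carries an integrity tag that the receiver verifies.
BUS_KEY = b"demo-key-not-for-flight"
ROUTES = {("cdh", "fdir"), ("fdir", "kernel")}

def tag(payload):
    return hmac.new(BUS_KEY, payload, hashlib.sha256).hexdigest()

class Bus:
    def __init__(self):
        self.queues = {}

    def send(self, src, dst, payload):
        """Route only allow-listed subsystem pairs; attach an integrity tag."""
        if (src, dst) not in ROUTES:
            return False
        self.queues.setdefault(dst, queue.Queue()).put((payload, tag(payload)))
        return True

    def recv(self, dst):
        """Verify the tag before handing the payload to the subsystem."""
        payload, mac = self.queues[dst].get_nowait()
        if not hmac.compare_digest(mac, tag(payload)):
            raise ValueError("integrity check failed")
        return payload
```

&lt;p&gt;Queue-based isolation plus per-message verification means a subsystem never trusts a peer merely because the message arrived.&lt;/p&gt;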
&lt;p&gt;This is a design paper. Alcyone is a blueprint and roadmap for a notional LEO science mission, not a completed implementation. A software-in-the-loop prototype is under development.&lt;/p&gt;
&lt;h2 id="picking-a-runtime-the-practical-question"&gt;Picking a Runtime: The Practical Question&lt;/h2&gt;
&lt;p&gt;The part of this paper I expect people to find most useful is the evaluation of Rust runtime platforms. If you're building embedded Rust for a real-time system today, your choice of runtime determines what isolation guarantees you get, what scheduling model you're locked into, and what hardware you can target. We evaluated seven options:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;RTIC&lt;/strong&gt; is a real-time concurrency framework for ARM Cortex-M. It gives you static scheduling and memory safety, but limited dynamic tasking and limited asynchronous support. It's mature and well-understood, which counts for a lot in this domain.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tock OS&lt;/strong&gt; provides fine-grained isolation enforced at the kernel level. Strong safety guarantees, but it imposes real constraints on system layout and application structure that may not fit every mission profile.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Embassy&lt;/strong&gt; is an async embedded runtime with a cooperative task model. It's flexible, supports low-power applications, and integrates well with modern hardware abstraction layers. If you need async in embedded Rust, this is the leading option.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Ariel-OS&lt;/strong&gt; builds on Embassy and adds policy enforcement and separation of concerns. It's newer and explicitly targets secure embedded systems.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Drone OS&lt;/strong&gt; is a cooperative multitasking runtime focused on deeply embedded systems. It emphasizes zero-cost concurrency and interrupt safety.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Hubris&lt;/strong&gt; is the most architecturally interesting option for security work. Oxide Computer developed it as a microkernel with capability-based access control and strict interface boundaries, implemented entirely in Rust. Its design philosophy aligns naturally with secure-by-component decomposition.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;seL4&lt;/strong&gt; is the outlier -- not Rust-native, but a formally verified microkernel with strong isolation guarantees backed by mathematical proof. For hybrid deployments or missions that need the highest assurance, it remains a serious candidate.&lt;/p&gt;
&lt;p&gt;The final platform selection depends on the mission: what hardware you're targeting, what scheduling model you need, and how much isolation you require. There's no single right answer, but a structured comparison in one place is something I wish had existed when I started this work.&lt;/p&gt;
&lt;h2 id="why-rust-specifically"&gt;Why Rust, Specifically&lt;/h2&gt;
&lt;p&gt;The paper identifies nine properties of Rust that matter for flight software: memory safety, type safety, concurrency safety, zero-cost abstractions, trait-based interface enforcement, formal verification compatibility, immutable data by default, embedded suitability, and high assurance potential. I won't rehearse all nine -- the first three are well-known. The one worth highlighting is trait-based interface enforcement, because it does something C coding standards cannot.&lt;/p&gt;
&lt;p&gt;In a secure-by-component architecture, every subsystem has defined privilege boundaries and interface contracts. In C, you enforce these through discipline, code review, and static analysis tools. In Rust, you encode them as traits, and the compiler rejects code that violates them. The enforcement is structural, not procedural. It doesn't depend on every developer remembering the rules. That's a meaningful difference when you're building systems that operate for years without human intervention.&lt;/p&gt;
&lt;p&gt;The limitations are real. Toolchain qualification under DO-178C isn't there yet, though &lt;a href="https://ferrocene.dev/"&gt;Ferrocene&lt;/a&gt; and GNAT Pro for Rust are making progress. The formal verification ecosystem is immature compared to Ada/SPARK. Some embedded targets for space-grade hardware remain unsupported. These are practical obstacles, not hypothetical ones.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://www.researchgate.net/publication/393317738_Alcyone_A_Blueprint_for_Secure_Rust_Flight_Software"&gt;Read the full paper on ResearchGate&lt;/a&gt;&lt;/p&gt;</content><category term="Research"/></entry><entry><title>Paper Summary: Testable Cyber Requirements for Flight Software</title><link href="https://curbo.space/posts/2026/02/paper-summary-testable-cyber-requirements-for-flight-software/" rel="alternate"/><published>2026-02-05T11:30:00+00:00</published><updated>2026-02-05T11:30:00+00:00</updated><author><name>James Curbo</name></author><id>tag:curbo.space,2026-02-05:/posts/2026/02/paper-summary-testable-cyber-requirements-for-flight-software/</id><summary type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE Aerospace Conference paper "Testable Cyber Requirements for Space Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Nobody can test "the system shall be secure." It checks a compliance box, but a test engineer reading it has nothing to work with. What does "secure" mean? Against what …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2025 IEEE Aerospace Conference paper "Testable Cyber Requirements for Space Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Nobody can test "the system shall be secure." It checks a compliance box, but a test engineer reading it has nothing to work with. What does "secure" mean? Against what threats? How would you verify it? The requirement doesn't say. And this is the norm in space systems, where cybersecurity requirements are written to satisfy auditors rather than to drive development and testing.&lt;/p&gt;
&lt;p&gt;This paper presents a methodology for turning that situation around: start from the flight software architecture itself, analyze what each component actually does, and derive security requirements specific enough that an engineer can write a test for every one of them.&lt;/p&gt;
&lt;h2 id="secure-by-component-not-by-checklist"&gt;Secure by Component, Not by Checklist&lt;/h2&gt;
&lt;p&gt;The core insight is that useful security requirements have to come from understanding the software, not from mapping a generic controls catalog. We call the approach "secure-by-component." Rather than starting from a framework like &lt;a href="https://csrc.nist.gov/pubs/sp/800-53/r5/upd1/final"&gt;NIST SP 800-53&lt;/a&gt; and trying to figure out which controls apply to which parts of the system, we start from the flight software architecture and work outward toward the threats.&lt;/p&gt;
&lt;p&gt;The process has six steps. First, decompose the flight software into its low-level components. Second, analyze the attack surface of each one -- what are its inputs, outputs, and dependencies? What could an adversary touch? Third, identify specific threat techniques from the &lt;a href="https://sparta.aerospace.org/"&gt;SPARTA&lt;/a&gt; framework, guided by the HAT TRICK cyberspace threat matrix. Fourth, select cyber resilience principles from &lt;a href="https://csrc.nist.gov/pubs/sp/800-160/v2/r1/final"&gt;NIST SP 800-160 Volume 2&lt;/a&gt; to counter those threats. Fifth and sixth, redesign each component into a "secure block" with detailed cybersecurity requirements attached.&lt;/p&gt;
&lt;p&gt;The result is a set of shall statements that a systems engineer recognizes and a test engineer can verify. Each one traces back through the chain: this requirement exists because this resilience principle mitigates this threat technique, which exploits this attack surface on this component. Nothing is there because a generic checklist said so.&lt;/p&gt;
&lt;h2 id="hat-trick-and-the-threat-matrix"&gt;HAT TRICK and the Threat Matrix&lt;/h2&gt;
&lt;p&gt;The HAT TRICK framework, developed by JHU/APL for national security systems, deserves separate attention. HAT TRICK -- High Adversary Tier Threat Response Interdicting Cyberspace Kill-chain -- provides a structured way to bound the threat space so you can be systematic about it rather than guessing at what an adversary might do.&lt;/p&gt;
&lt;p&gt;At its core is a cyberspace threat matrix that crosses three access vectors (direct physical access, remote access via network or communications link, and indirect access via supply chain or trusted insiders) with five threat events (writing malicious data, executing malicious programs, executing valid programs maliciously, denying authorized data, and obtaining system data). Any cyberspace attack has to use one of those vectors and produce one or more of those events. By designing the system to be resilient against each combination, you get coverage against both known and unknown threats, including zero-days.&lt;/p&gt;
&lt;p&gt;For our case study, we focused on the integrity of commanding as the mission resilience priority, so we scoped to the first three threat events -- writing malicious data, executing malicious programs, and executing valid programs maliciously. The last two (denying authorized data and obtaining system data) were out of scope for this particular priority but would be addressed for a different one.&lt;/p&gt;
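&lt;p&gt;The matrix is small enough to enumerate directly. A short Python sketch, with the vector and event labels taken from the description above (the code structure itself is mine, not HAT TRICK's):&lt;/p&gt;

```python
import itertools

ACCESS_VECTORS = [
    "direct physical access",
    "remote access via network or communications link",
    "indirect access via supply chain or trusted insiders",
]
THREAT_EVENTS = [
    "writing malicious data",
    "executing malicious programs",
    "executing valid programs maliciously",
    "denying authorized data",
    "obtaining system data",
]

def threat_matrix():
    """Every attack must pair one access vector with at least one event."""
    return list(itertools.product(ACCESS_VECTORS, THREAT_EVENTS))

def scoped_cells(events_in_scope):
    """Matrix cells to address for a given mission resilience priority."""
    return [cell for cell in threat_matrix() if cell[1] in events_in_scope]
```

&lt;p&gt;Scoping to the first three threat events, as we did for commanding integrity, leaves nine of the fifteen cells to design resilience against.&lt;/p&gt;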
&lt;h2 id="what-the-requirements-actually-look-like"&gt;What the Requirements Actually Look Like&lt;/h2&gt;
&lt;p&gt;We applied the methodology to a notional Command and Data Handling subsystem with four components: Command Reception and Validation, Telemetry Generation, Data Storage and Logging, and Health and Status Monitoring. The process generated 15 main requirements per component, each with detailed sub-requirements.&lt;/p&gt;
&lt;p&gt;Take CR-1: "The command reception system shall prevent unauthorized data from being written to the command processing subsystem." That addresses the Substantiated Integrity principle and directly counters the "write malicious data" threat event. It decomposes into sub-requirements like implementing role-based access control for data entry permissions, validating all incoming commands against predefined acceptable formats and values, and logging any attempts to write unauthorized data. Each sub-requirement is testable. An engineer can set up a test case, execute it, and get a pass or fail.&lt;/p&gt;
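&lt;p&gt;A hypothetical sketch of how those CR-1 sub-requirements could translate into testable code: role-based write permissions, validation against a predefined command table, and logging of rejected attempts. The command table, role names, and field names are invented for illustration:&lt;/p&gt;

```python
# Invented command table and roles, for illustration only.
ACCEPTED_FORMATS = {
    "SET_MODE": {"mode"},
    "DUMP_TLM": {"channel", "count"},
}
WRITE_ROLES = {"flight_controller", "mission_ops"}
audit_log = []

def accept_command(role, opcode, params):
    """Accept only an authorized, well-formed command; log every reject."""
    if role not in WRITE_ROLES:
        audit_log.append(("unauthorized_role", role, opcode))
        return False
    expected = ACCEPTED_FORMATS.get(opcode)
    if expected is None or set(params) != expected:
        audit_log.append(("malformed_command", role, opcode))
        return False
    return True
```

&lt;p&gt;Each branch maps to a sub-requirement, which is what makes the requirement verifiable: a test case per branch, each with a clear pass or fail.&lt;/p&gt;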
&lt;p&gt;Compare that to "the system shall authenticate commands." The difference isn't just specificity. It's that the secure-by-component requirement has a traceable rationale -- you know why it exists, what threat it addresses, and what resilience principle it implements. When a mission changes or a new threat emerges, you can trace the impact through the chain and update the right requirements instead of hoping your abstract language still covers you.&lt;/p&gt;
&lt;h2 id="beyond-requirements"&gt;Beyond Requirements&lt;/h2&gt;
&lt;p&gt;The paper also covers technical implementation guidance that cuts across all flight software subsystems: memory-safe programming languages (with a strong recommendation for Rust), operating system security features like process isolation and sandboxing, zero-trust communication between subsystems, formal methods for verifying critical properties, and secure developer ecosystem practices. These aren't afterthoughts -- they're driven by the same resilience principles that generated the requirements.&lt;/p&gt;
&lt;p&gt;The space industry has decades of experience writing testable requirements for thermal performance, power budgets, and structural loads. Security requirements deserve the same rigor, and the same traceability from high-level objectives down to test procedures. This paper shows one way to get there.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://ieeexplore.ieee.org/document/11068629"&gt;Read the full paper on IEEE Xplore&lt;/a&gt; | &lt;a href="https://www.researchgate.net/publication/388733648_Testable_Cyber_Requirements_for_Space_Flight_Software"&gt;ResearchGate&lt;/a&gt;&lt;/p&gt;</content><category term="Research"/></entry><entry><title>Paper Summary: The cFS Attack Surface, from the Bottom Up</title><link href="https://curbo.space/posts/2026/02/paper-summary-the-cfs-attack-surface-from-the-bottom-up/" rel="alternate"/><published>2026-02-05T11:00:00+00:00</published><updated>2026-02-05T11:00:00+00:00</updated><author><name>James Curbo</name></author><id>tag:curbo.space,2026-02-05:/posts/2026/02/paper-summary-the-cfs-attack-surface-from-the-bottom-up/</id><summary type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2024 IEEE SMC-IT paper "Attack Surface Analysis for Spacecraft Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There's a principle from formal verification of financial algorithms that stuck with me: you cannot properly reason about the behavior of a system higher in the stack unless you have verified …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2024 IEEE SMC-IT paper "Attack Surface Analysis for Spacecraft Flight Software" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;There's a principle from formal verification of financial algorithms that stuck with me: you cannot properly reason about the behavior of a system higher in the stack unless you have verified the properties of the subsystems executing its intentions. &lt;a href="https://doi.org/10.1007/978-3-319-63046-5_3"&gt;Passmore and Ignatovich&lt;/a&gt; wrote that about trading systems, but it applies directly to flight software. The RTOS is the foundation everything else depends on. If an attacker can manipulate it, nothing above it can be trusted--and the layers above may not even detect that anything has changed.&lt;/p&gt;
&lt;p&gt;Rather than starting at the application layer where most security discussions happen, we went to the bottom of the stack and worked up.&lt;/p&gt;
&lt;h2 id="the-stack-nobody-examines"&gt;The Stack Nobody Examines&lt;/h2&gt;
&lt;p&gt;We chose NASA's core Flight System (cFS) running on the RTEMS real-time operating system because both are open source, widely used in actual missions, and architecturally representative of how most flight software is built. The cFS architecture is a layered stack: hardware and BSP at the bottom, then RTEMS, then the Operating System Abstraction Layer (OSAL), then the cFS platform services, and finally the mission applications on top. We deliberately focused our analysis on the lower layers--the RTOS and OSAL--because they had received almost no security scrutiny despite being the layers everything else rests on.&lt;/p&gt;
&lt;p&gt;What we found was revealing, not because of exotic vulnerabilities, but because of how many attack-relevant properties are simply inherent to the architecture.&lt;/p&gt;
&lt;h2 id="everything-runs-in-one-address-space"&gt;Everything Runs in One Address Space&lt;/h2&gt;
&lt;p&gt;RTEMS is a single-process RTOS. All code--the operating system, the abstraction layer, every application--shares one address space and is statically linked into a single binary image. There is no memory protection between tasks. If an attacker gains code execution in any component, they have unrestricted access to all memory and can influence every other task on the system.&lt;/p&gt;
&lt;p&gt;This isn't a bug. It's the design. RTEMS prioritizes simplicity and deterministic real-time performance, and memory protection adds overhead and complexity that conflicts with those goals. But from a security perspective, it means there's no containment. A compromise anywhere is a compromise everywhere. FreeRTOS vulnerabilities that led to remote code execution and data leakage show this isn't theoretical--RTOSes do get attacked, and when they lack isolation, a single vulnerability compromises the entire system.&lt;/p&gt;
&lt;h2 id="the-abstraction-layer-paradox"&gt;The Abstraction Layer Paradox&lt;/h2&gt;
&lt;p&gt;The OSAL exists for good reasons. It lets cFS run on RTEMS, VxWorks, Linux, or FreeRTOS without changing application code. Developers can prototype on Linux and deploy on RTEMS. That flexibility is valuable.&lt;/p&gt;
&lt;p&gt;But abstraction layers add code, and more code means more potential for vulnerabilities. The OSAL wraps each target OS's functionality in C, and the wrapper implementations sometimes include additional logic to bridge gaps between what the OSAL API promises and what the underlying OS actually provides. Those gaps are where bugs hide. The abstraction also obscures interactions with the underlying system, making it harder to tune security controls or detect malicious activity at the application layer. You're trading visibility for portability, and in a security context, visibility matters.&lt;/p&gt;
&lt;h2 id="living-off-the-land-in-orbit"&gt;Living Off the Land, in Orbit&lt;/h2&gt;
&lt;p&gt;One finding that stands out: RTEMS includes a full shell, analogous to a Unix shell, with commands for file system manipulation, memory dumping and editing, system information queries, and managing the dynamic loader. The OSAL's BSP configuration activates this shell with all commands enabled. During development and testing, this is useful. On an operational spacecraft, it's an attacker's toolkit already installed and waiting.&lt;/p&gt;
&lt;p&gt;An adversary who gains access doesn't need to upload tools--the shell provides substantial "living off the land" capabilities. Combined with the lack of memory protection, an attacker could use the shell to inspect memory, modify running code, or load new executable modules through the dynamic loader. RTEMS does implement user and group access controls for shell commands, but given that the RTOS has no memory protection between tasks, those controls can be circumvented by any code running in the same address space.&lt;/p&gt;
&lt;p&gt;The BSP configuration also enables four file systems by default (IMFS, DOS/FAT, DevFS, and RFS) without disabling any IMFS functionalities. Each enabled file system is code sitting in memory, and vulnerabilities in unused file system drivers become attack surface that serves no operational purpose.&lt;/p&gt;
&lt;h2 id="the-path-forward"&gt;The Path Forward&lt;/h2&gt;
&lt;p&gt;The recommendations follow directly from the findings: simplify. Disable the shell, the dynamic loader, and unused file systems in flight configurations. Evaluate whether the OSAL's abstraction is worth its security cost for missions committed to a single RTOS. Consider memory-safe languages for new development. Apply formal verification to the RTOS layer, because until we can verify the foundation's behavior, we're building on assumptions.&lt;/p&gt;
&lt;p&gt;None of this is easy, and the paper doesn't pretend it is. These are architectural properties baked into systems with decades of flight heritage. But understanding the attack surface is the prerequisite for reducing it, and that understanding has to start at the bottom of the stack.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://ieeexplore.ieee.org/document/10795076"&gt;Read the full paper on IEEE Xplore&lt;/a&gt;&lt;/p&gt;</content><category term="Research"/></entry><entry><title>Paper Summary: A Research Agenda for Flight Software Security</title><link href="https://curbo.space/posts/2026/02/paper-summary-a-research-agenda-for-flight-software-security/" rel="alternate"/><published>2026-02-05T10:30:00+00:00</published><updated>2026-02-05T10:30:00+00:00</updated><author><name>James Curbo</name></author><id>tag:curbo.space,2026-02-05:/posts/2026/02/paper-summary-a-research-agenda-for-flight-software-security/</id><summary type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2023 IEEE SMC-IT paper "A Research Agenda for Space Flight Software Security" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When I started my dissertation research, I went looking for the academic literature on flight software security. I found policy papers about space being a contested domain, a growing body …&lt;/p&gt;</summary><content type="html">&lt;p&gt;&lt;em&gt;This post summarizes my 2023 IEEE SMC-IT paper "A Research Agenda for Space Flight Software Security" co-authored with Gregory Falco.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;When I started my dissertation research, I went looking for the academic literature on flight software security. I found policy papers about space being a contested domain, a growing body of work on ground segment security, and almost nothing about the software actually running on spacecraft. That gap motivated this paper.&lt;/p&gt;
&lt;p&gt;The goal wasn't to propose solutions. It was to systematically lay out what needed to be studied, so that researchers from adjacent fields--security, formal methods, embedded systems--would have a map of the territory and a reason to show up.&lt;/p&gt;
&lt;h2 id="the-core-argument"&gt;The Core Argument&lt;/h2&gt;
&lt;p&gt;Flight software occupies a strange position in the security world. It's the most critical software on a spacecraft, controlling everything from command and data handling to guidance, navigation, and attitude control. Compromise the flight software and you own the vehicle. And yet, the community that builds it has focused almost exclusively on quality and fault tolerance--protecting against random failures, not intelligent adversaries.&lt;/p&gt;
&lt;p&gt;The distinction matters. Fault tolerance assumes failures are probabilistic and predictable. An adversary is neither. An attacker actively probes, adapts, and stresses the system in ways that environmental faults never will. The layers of redundancy that protect against a bit flip from a cosmic ray do nothing against someone who understands the command protocol.&lt;/p&gt;
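&lt;p&gt;A toy example of my own (not from the paper) makes the asymmetry concrete: a checksum that reliably catches a random bit flip offers zero protection against an adversary who knows the framing, because they simply recompute it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/* Toy illustration: an XOR checksum of the kind that catches random
 * bit errors in uplinked command packets. */
unsigned char checksum(const unsigned char *buf, int len) {
    unsigned char sum = 0;
    for (int i = 0; i != len; i++)
        sum ^= buf[i];
    return sum;
}

/* A 4-byte packet "validates" when its trailing byte matches the
 * checksum of the first three. */
int packet_valid(const unsigned char pkt[4]) {
    return checksum(pkt, 3) == pkt[3];
}

/* A cosmic-ray bit flip is caught by packet_valid -- but an adversary
 * who understands the protocol recomputes the trailing byte for a
 * forged command, and it validates perfectly. */
int forged_packet_validates(void) {
    unsigned char forged[4] = {0x10, 0xFF, 0x00, 0x00}; /* hostile parameter */
    forged[3] = checksum(forged, 3);
    return packet_valid(forged);
}
&lt;/code&gt;&lt;/pre&gt;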
&lt;p&gt;The industry has operated under security-through-obscurity for decades, but that era ended when NASA open-sourced cFS and JPL released F Prime. Adversaries and researchers now have the same access to production-quality flight software architectures. That's a good thing--if researchers use it.&lt;/p&gt;
&lt;h2 id="twelve-research-directions"&gt;Twelve Research Directions&lt;/h2&gt;
&lt;p&gt;The paper's main contribution is a structured research agenda with twelve specific items, organized around the design considerations we identified from reviewing current flight software practice. A few that I think are especially important:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Secure-by-design flight software architecture (Agenda Item L).&lt;/strong&gt; This is the capstone item and the one closest to my own ongoing work. The argument is that incremental patching of existing C-based flight software stacks won't get us where we need to be. We need architectures built from the ground up with security as a design constraint--formal verification integral to the design, memory-safe programming languages throughout, isolation between components enforced at the OS level. The &lt;a href="https://www.darpa.mil/program/high-assurance-cyber-military-systems"&gt;DARPA HACMS&lt;/a&gt; program proved this was possible for aircraft systems using seL4 and verified application code. Nobody had done the equivalent for spacecraft.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Understanding the full attack surface (Agenda Item I).&lt;/strong&gt; Flight software is a layered system: CPU microcode at the bottom, kernel services and device drivers in the middle, mission applications on top, all communicating through interfaces that were designed for functionality, not security. We proposed a methodology that starts from the architectural decomposition and systematically characterizes every interface where two software components communicate--every seam where misuse could produce undefined behavior. This is fundamentally different from the ad hoc vulnerability hunting that characterizes most flight software security analysis today.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Programming language selection (covered in Design Considerations, Section III-F).&lt;/strong&gt; We made the case that the C language carries fundamental security risks that coding standards and static analysis can only partially mitigate. The paper surveys alternatives beyond Rust, including Ada SPARK, D, Nim, and Ivory, and discusses advanced type theory work like dependent types and linear types that could further restrict unsafe behavior. The &lt;a href="https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF"&gt;NSA had just released guidance&lt;/a&gt; recommending memory-safe languages, and the aerospace industry wasn't paying attention.&lt;/p&gt;
&lt;h2 id="the-frameworks"&gt;The Frameworks&lt;/h2&gt;
&lt;p&gt;Two frameworks anchor the paper's approach to the problem. &lt;a href="https://csrc.nist.gov/pubs/sp/800-160/v2/r1/final"&gt;NIST SP 800-160 Volume 2&lt;/a&gt; defines cyber resilience as the ability to anticipate, withstand, recover from, and adapt to adverse conditions--including deliberate attack. That document gives us resilience objectives. Bailey's four principles for space cyber resilience--robustness, opacity, constraint, and responsiveness--translate those objectives into something applicable to flight software design.&lt;/p&gt;
&lt;p&gt;On the threat side, &lt;a href="https://sparta.aerospace.org/"&gt;SPARTA&lt;/a&gt; (Space Attack Research and Tactic Analysis) was the best available framework for cataloging adversary tactics and techniques against space systems. We used it to ground the agenda in real attack patterns, but noted it was still in its early stages and needed significant expansion and validation.&lt;/p&gt;
&lt;p&gt;The paper was written deliberately as an invitation. Several of the twelve agenda items became the basis for our subsequent work--the attack surface analysis of cFS, the investigation of secure programming languages, the Alcyone secure flight software architecture.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://ieeexplore.ieee.org/document/10207527"&gt;Read the full paper on IEEE Xplore&lt;/a&gt;&lt;/p&gt;</content><category term="Research"/></entry></feed>