I recently went back and read Ken Thompson’s lecture from the 1984 Turing Award. The basic idea is well known, but still very interesting: even if you carefully audit a program’s source code and convince yourself it’s clean, that doesn’t mean the resulting binary can be trusted. If the compiler itself is compromised, it can silently inject malicious behavior during compilation, regardless of what the source says. Thompson even shows an example of how a compiler can be taught to recognize specific programs (like a login utility) and insert a backdoor, and then also recognize its own source code and perpetuate that behavior indefinitely, even after the original malicious code has been removed.
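To make the two pattern-matches concrete, here's a toy simulation of the trick (my own simplified sketch in Python, not Thompson's actual C code). "Compiling" is just text-to-text here; the point is the two recognitions: one backdoors the login program, the other re-inserts the trojan when the compiler is built from clean source.

```python
# Toy model of Thompson's trojaned compiler. All names (evil_compile,
# check_password, the "magic" password) are illustrative, not from the lecture.

TROJAN = "# <trojan: re-injects backdoor and itself>"
BACKDOOR = '    if pw == "magic": return True  # injected, absent from source\n'

def evil_compile(source: str) -> str:
    """Pretend compiler: emits the source as the 'binary', but silently
    adds behavior when it recognises specific programs."""
    out = source
    if "def check_password" in source:
        # Pattern 1: recognise the login utility, insert a master password.
        out = out.replace("def check_password(user, pw):\n",
                          "def check_password(user, pw):\n" + BACKDOOR)
    if "def evil_compile" in source and TROJAN not in source:
        # Pattern 2: recognise the compiler's own *clean* source and re-insert
        # the trojan, so removing it from the source changes nothing.
        out = out + "\n" + TROJAN + "\n"
    return out

# A clean login program: auditing this source reveals nothing wrong...
login_src = "def check_password(user, pw):\n    return pw == lookup(user)\n"
binary = evil_compile(login_src)
print(BACKDOOR in binary, BACKDOOR in login_src)  # True False

# ...and rebuilding the compiler from trojan-free source doesn't help either.
clean_compiler_src = "def evil_compile(source):\n    pass  # no trojan here\n"
rebuilt = evil_compile(clean_compiler_src)
print(TROJAN in rebuilt)  # True
```

The second check is the part that makes the attack "historical": once one compromised compiler binary exists, every future compiler built with it inherits the trojan, even though no source file anywhere contains it.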
For the whole lecture: Thompson 1984 - Reflections on Trusting Trust
The lecture basically forces you to accept the fact that trust in software is transitive and historical. You’re not just trusting the program you’re reading, but the compiler, the compiler that built that compiler, the system it ran on, and so on, stretching back to something you ultimately accept on pure 'faith'.
Reading it today, you start to see parallels with modern supply-chain attacks, compromised build environments, and the growing emphasis on reproducible builds (Nix and GNU Guix, for example) and bootstrapping. In a sense, a lot of current security work feels like we’re slowly rediscovering and trying to contain the implications Thompson showed decades ago. What I’m not sure about is whether this is a problem that can actually be solved. Techniques like diverse double-compilation and reproducible builds clearly help, but they seem more like ways to narrow the gap than to eliminate it entirely. At some point, there’s always a root of trust that can't be proven.
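For anyone unfamiliar with diverse double-compilation (Wheeler's countermeasure), the core idea fits in a few lines. This is my own heavily simplified sketch: a "compiler binary" is modeled as a Python function, and compiling the compiler's source yields a new such function. DDC builds the compiler-under-test's source with two independent compilers, then compares what the resulting binaries produce.

```python
# Sketch of diverse double-compilation (DDC). Names and structure are mine;
# a real DDC run compares actual binaries bit for bit.

COMPILER_SRC = "clean compiler source"
PROGRAM_SRC = "some ordinary program"

def make_compiler(trojaned: bool):
    """Return a 'compiler binary'. If trojaned, it perpetuates itself
    whenever it recognises compiler source (Thompson's trick)."""
    def cc(source: str):
        if source == COMPILER_SRC:
            # Compiling the compiler: emit a new compiler binary; the
            # trojan, if present, survives into it.
            return make_compiler(trojaned)
        # Ordinary program: the trojan leaves a visible difference.
        return ("BIN", source, trojaned)
    return cc

trusted = make_compiler(False)   # independent compiler we choose to trust
suspect = make_compiler(True)    # compiler under test (trojaned here)

# DDC: build COMPILER_SRC with each compiler, then use *those* stage-1
# binaries on identical input and compare the outputs.
out_via_trusted = trusted(COMPILER_SRC)(PROGRAM_SRC)
out_via_suspect = suspect(COMPILER_SRC)(PROGRAM_SRC)
print(out_via_trusted == out_via_suspect)  # False: the trojan shows up
```

Note what this does and doesn't buy you, which is why I say "narrow the gap": DDC detects a divergence only relative to the trusted compiler, so you've moved the faith one step back rather than eliminated it, exactly as the thread's last point suggests.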
Check out the whole lecture, it's just 3 pages and the language is simple and clear. Worth it imho.