The (De-)Evolution of Trust In Computer Systems

Spice Labs surveys applications using cryptographic hashes to provide on-demand, comprehensive maps, enabling confident scoping, modernization planning, and breach response with accuracy and measurability.

Steve Hawley
Engineer

Research software engineers were a trusting lot. This stemmed from the fact that computer systems fell into two broad categories: systems used for very specific, mission-critical tasks and systems used for research or fun, and honestly, shouldn’t research be fun? The former were typically sequestered behind closed doors and under tight control, because the last thing you needed was for the system that ran your payroll to stop working. But computers are fun and interesting and do nifty things. For example, early S-100 bus computers generated huge amounts of radio frequency interference. If you wrote your program carefully, you could get it to play music on a nearby AM radio. If you could do that, why would you want the computer behind closed doors?

This was, more or less, the philosophy of the creators of UNIX. In the early 1980s, I had a job at Bell Laboratories in Murray Hill. I was making violins, but that’s a story for another day. I asked my boss if I could have an account and he thought that was a great idea. I ended up with access to a couple of VAXes running a version of UNIX written by the folks down the hall. Between my office and the venerable UNIX room, there was a side lab that housed the VAXes. If you could get into the building, you could walk into these rooms. Similarly, the source code to the operating system was available on the system itself. One of the things I remember doing was poking around and reading that code. At the time, I didn’t know what I was looking at, but I still figured out where the code for logging in lived, as well as several other core UNIX utilities, which I compiled on my own.

I also perpetrated other shenanigans because they were allowed. Or at least they weren’t disallowed. For example, I learned about the stty command, which changes a serial terminal’s settings. It turns out you could change someone else’s terminal settings too: if you ran stty 0 > /dev/ttyFOO, where FOO was the number of an active terminal, you could remotely log out another user. I ran a quick who command and saw that Dennis Ritchie, the creator of C and co-creator of UNIX, was logged in. So I logged him out. This wasn’t a big deal, because Dennis frequently left himself logged in on one of the terminals in the UNIX room and walked away. The UNIX room, like the room with the VAXes, was fully accessible to anyone who could get into the building. I’d like to say that building security was tight, but it wasn’t. If you picked the right time of day, you could easily get into the building with a forged ID badge. Not that I ever did that.
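For the curious, here is a minimal sketch of what that stty trick does under the hood, written with Python’s termios module for illustration (not the tool I used back then; the function name and path are hypothetical). Setting a line’s speed to B0 asks the driver to hang up, which logs out whoever is attached:

```python
import termios

def hang_up(tty_path: str) -> None:
    """Equivalent in spirit to `stty 0 > tty_path`."""
    # Requires write permission on the device, which in those days
    # everyone had on everyone else's terminal.
    with open(tty_path, "wb", buffering=0) as tty:
        attrs = termios.tcgetattr(tty.fileno())
        attrs[4] = termios.B0  # input speed: 0 means "hang up"
        attrs[5] = termios.B0  # output speed
        termios.tcsetattr(tty.fileno(), termios.TCSANOW, attrs)
```

Modern systems close this particular hole simply by no longer making terminal devices world-writable.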

On another evening, I was messing around with Blit terminals. These were “smart” terminals with an 800x1024 1-bit display powered by a 68000 processor. In their power-on state, they acted like regular “glass” terminals, but through escape sequences you could run a window manager that would also multiplex I/O, so you could have multiple shells running in various windows, or little downloadable apps. One such app was an icon editor. I quickly discovered that the mail app would use icons for users (if available) from a public directory to show who had sent you an email. I pulled the icons of the various people in the department into the icon editor and, for fun, I changed Ken Thompson’s icon such that his head was conical. Inadvertently, I saved it over the system-wide file for him. Whoops. Again, these things were possible because the system was engineered to be open and accessible. After all, what kind of maniac would edit the mail icon for Turing Award winner Ken Thompson? Me. That’s who. Ken, if you read this, I hope you have forgiven me. Maybe this event was the inspiration for his acceptance speech for the award, “Reflections on Trusting Trust,” which was ultimately about trust.

Computers were playthings that should be accessible, because that accessibility was what drove innovation.

This was generous, optimistic and naive at the same time.

Things changed in the mid-1980s. One of the drivers was home computers that had hard drives. People started writing viruses. Initially, these were pieces of code that could find an executable and modify it to include their own code. In this way, the code was self-propagating. If a floppy disk was in the drive, the virus would attach itself to any executables on the floppy. If you moved that disk to another computer and ran code from it, the virus propagated further. Early viruses were largely benign. As you can imagine, university computer labs were easy targets for sneakernet-transmitted viruses.

Then came the Morris Worm. Unlike the viruses of the day, it was a self-contained program that used various weaknesses in UNIX systems to propagate across the internet on its own. Robert Morris’s code had several means of getting into remote systems, including taking advantage of sendmail servers left in debug mode, a buffer overflow in the finger daemon, and a remote shell weakness. It’s not clear what his end goal was in creating the worm. I have suspicions, though. I crossed paths with him several times when I was at Bell Labs (his father worked there). On one of those occasions, he showed me a program he wrote that could dissect encrypted save files for the game rogue and allowed you to edit them to give yourself an advantage in the game. Why? Because he could. And while the public statement regarding the worm was that he did it to show the inherent vulnerability of the internet, I’m fairly certain he did it not for any noble purpose but because he could.

A side effect of this was that the scales fell from the eyes of many system administrators, who lost a great deal of time repairing the damage. A common trait in system administrators is laziness - so much so that they will do a great deal of work up front just so they can be lazy later. Crises are unexpected interference in the work of being lazy.

Another side effect: once people saw what could be done and realized that money could be made, there was now an incentive to break into systems and exfiltrate information, or to use those systems as jumping-off points for attacking other systems - for example, using a remote machine to send mass email scams, collecting user and credit card data to fund other crimes, or stealing industrial or state secrets. When my mom passed away, within hours of her death being registered with the state, her bank account was completely drained and the money quickly used to buy a huge number of concert tickets. Someone had figured out that there is a narrow window between a person’s death and the executor gaining access to their finances, during which accounts can be drained.

Around this time, I started hearing the phrase “zero trust” being bandied about in the industry. This is the notion that you should never trust anyone trying to access a resource without verifying them, using as much information as is available. It also assumes that your networks are already compromised. This is an inversion of the older UNIX approach, and it harkens back to the critical-system approach of early computing, where resources were much more closely guarded.
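As a toy illustration of the mindset (the key, names, and thirty-second window here are all hypothetical), a zero-trust service re-verifies every request on its own merits rather than trusting the network segment it arrived from:

```python
import hashlib
import hmac
import time

# Stand-in only: a real deployment fetches a per-service key from a
# secrets manager rather than hard-coding it.
SECRET = b"example-per-service-key"

def verify_request(user: str, timestamp: float, signature: str) -> bool:
    # Refuse stale or replayed requests; traffic from the "internal"
    # network gets no free pass.
    if abs(time.time() - timestamp) > 30:
        return False
    expected = hmac.new(
        SECRET, f"{user}:{timestamp}".encode(), hashlib.sha256
    ).hexdigest()
    # Constant-time comparison: verify the caller, not the wire.
    return hmac.compare_digest(expected, signature)
```

Every request carries its own proof; nothing is assumed from where it came from.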

I’ve heard political pundits involved in security alleging that the next big war between nation states will not be a kinetic war but an information war. I attended a talk at Microsoft given by the security team, and while they were prevented from talking about specifics, they repeatedly referred to attacks perpetrated by “nation states.” It was eye-opening, and it makes sense: why attack physically when you could sow distrust in a government by taking out the power grid, water treatment, hospital systems, banking systems, and other societal buttresses?

The advantage of living through all these events is that rather than being frozen by them, we can use them to inform how we build our systems at Spice Labs. We build systems to protect data the way we would want our own data protected. Don’t get me wrong, computers are still extremely neat and fun to play with, but the past four decades have taught us that we need to silo the fun and the serious in a way that honors each appropriately.

We are also seeing a looming issue on the horizon with how security and trust are managed on critical systems: reliance on conventional encryption is a trip into the sunset. This is like the Y2K problem, but with a deadline that is actively changing, and not in a good way. As research organizations build systems with more and more qubits, and as Shor’s algorithm is refined, it’s as if we keep getting a daylight saving time change on when that sunset will happen. Being prepared for that inevitability is important, and unfortunately, most organizations are still in the stage where they don’t know what they don’t know. Spice Labs builds tools you can use to quantify the post-quantum cryptography (PQC) vulnerabilities in your code and all its dependencies - even the deployed systems - and this assessment can be done with or without the aid of your engineering team.
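As a rough sketch of the very first step only (the regex, glob, and function name below are illustrative, not how our tooling actually works), you can get a feel for the problem by flagging source files that reference quantum-vulnerable primitives; the hard parts, like binaries, transitive dependencies, and deployed systems, are where a real survey earns its keep:

```python
import pathlib
import re
from collections import Counter

# A toy signature list: public-key algorithms whose hardness assumptions
# fall to Shor's algorithm on a large enough quantum computer.
QUANTUM_VULNERABLE = re.compile(
    r"\b(RSA|ECDSA|ECDH|DSA|DiffieHellman|secp256k1|Ed25519)\b"
)

def scan_tree(root: str, glob: str = "**/*.py") -> Counter:
    """Count references to quantum-vulnerable primitives under root."""
    hits: Counter = Counter()
    for path in pathlib.Path(root).glob(glob):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits.update(m.group(0) for m in QUANTUM_VULNERABLE.finditer(text))
    return hits

# Example: print(scan_tree("src").most_common())
```

Even a naive pass like this tends to surface more quantum-exposed surface area than teams expect, which is exactly the “don’t know what you don’t know” stage.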