Security Design Attractors
Traditionally, applications that provide security and privacy features tend to fall into one of two categories. The first is what I'll call the open model: applications that are open source, federated, distributed or peer-to-peer, and that often use open standards. They generally have a fairly bad user experience, complicated interaction patterns and a tendency to stagnate. The proprietary model is the opposite: applications and systems that are closed, that in many cases use no standards, and that are usually centralized in one way or another. Applications in this model have a better user experience, usually well integrated across different systems and categories of information. These systems are also updated and renewed at a faster pace than applications from the open model.
So far, these descriptions aren't necessarily tied to privacy and security applications, but it feels to me as if these patterns are exaggerated by the intrinsic complexity of real privacy and security. Further, there are some real problems with both of the above models - and these problems are much worse for sensitive applications. First, for real security you can't really trust anything that isn't open source - which means that the proprietary model can never be as safe as the open model. The centralized aspects of the proprietary model also usually mean that user data will be collected in one large pile, where it's easier to access in various ways - by both legitimate and illegitimate parties, everything from ad companies to crackers to intelligence services with national security letters.
As anyone who has ever tried to use an open source security tool knows, the user experience is a real challenge for these systems. It is extremely hard to use most of these tools correctly, so even though the open model can theoretically provide more security than the proprietary one, the lack of good user design often leads to bad security outcomes even when using these systems. How many people have sent an email unencrypted when they thought it was encrypted? Or given away identifying information over Tor? Or made any of a million other mistakes that are all too easy to make?
Before I continue, I want to mention that there are of course exceptions to these two generalizations. There are some open source projects with excellent user interfaces, and there are lots of proprietary products with terrible user experiences that are badly updated and so on. But I wanted to talk about the trend of these two attractors and why it happens.
A few weeks ago, Moxie Marlinspike wrote about this phenomenon from the perspective of Signal here. His perspective is worth reading, and I agree with him about many of the incentives that make this happen. But I don't agree with the conclusions. (From my perspective, I actually consider Signal to follow the proprietary model. The reasons for that include its reliance on centralized systems, but also the fact that it's impossible for most users to verify that the Signal they have on their phone matches the source that has been published. Calling it open source thus feels incorrect to me. I wrote about this perspective here.)
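The verification step that most users can't perform is, at its core, a reproducible-build check: build the application yourself from the published source and compare the result, byte for byte, against what was distributed to you. A minimal sketch of that comparison - the file names are hypothetical, and real app-store packages are signed and so need extra normalization before their digests can match:

```python
import hashlib

def sha256_digest(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_match(local_build, distributed_build):
    """Reproducible-build check: identical bytes imply identical digests."""
    return sha256_digest(local_build) == sha256_digest(distributed_build)
```

The point is less the code than the precondition: this only works if the build process is deterministic, which is exactly the property most users have no way to exercise today.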
So what are those incentives? The major ones are covered in Moxie's post. Centralization gives more control, which allows systems to provide a better experience: you can provide better integration across various platforms, you can update and change the protocols faster, and you can even roll out security patches much faster than you can in the open model. On the other hand, the reasons why the open model works the way it does have a lot to do with the lack of centralization. Once you've rolled out a protocol and more than one application uses it, it becomes very hard to change, and you end up having to rely on just the bare minimum to guarantee people get the same experience. Traditionally it has also been hard to involve UX experts in open source projects - and even if that were easy, many developers have a hard time seeing why it's needed. Many open source projects are designed for the people implementing them, and developers aren't exactly the typical user for most systems these days.
So: a lot of these incentives come back to centralization of control. Moxie's argument, as I understand his post, is that this centralization is important enough for security that it's worth giving up other benefits for it. I don't agree. Dictatorships and totalitarian political systems can be much more efficient than more democratic or autonomous ones; that doesn't mean we should choose them.
That was a lot of theoretical talk, so to make it more concrete, it's worth looking at an example of these failure modes. One that makes the difference very obvious is IRC versus Slack. You can see how Slack creates a compelling experience: it provides history on all your devices, and a unified experience where new features are rolled out to all platforms at the same time. On the other hand, Slack keeps all its data in one big pile, and you can't really trust them not to do bad stuff with it - and if there's a security breach of their systems, it will likely affect everyone using Slack. IRC, on the other hand, is an open protocol, implemented by a large number of applications and servers. You can set up your own server quite easily, and no matter what client you use, you'll be able to interoperate with all the other clients and servers out there. But IRC also hasn't changed much in 20-30 years, and most new features can't be relied on, since so many clients don't support them.
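That openness is visible right down in IRC's wire format: it is plain, line-oriented text that any program can emit, which is why so many independent clients and servers interoperate. A minimal sketch of building RFC 1459-style messages (the nickname and channel here are made up; a real client would also handle server replies and PING/PONG):

```python
def irc_line(command, *params, trailing=None):
    """Format one IRC protocol line per RFC 1459:
    COMMAND param1 param2 :trailing, terminated by CRLF."""
    parts = [command, *params]
    if trailing is not None:
        parts.append(":" + trailing)
    return " ".join(parts) + "\r\n"

# A minimal registration-and-message sequence any IRC server understands:
handshake = [
    irc_line("NICK", "alice"),
    irc_line("USER", "alice", "0", "*", trailing="Alice Example"),
    irc_line("JOIN", "#security"),
    irc_line("PRIVMSG", "#security", trailing="hello from a tiny client"),
]
```

The same simplicity that makes the protocol easy to implement is what makes it hard to evolve: every new feature has to degrade gracefully for the many clients that will never learn it.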
The reason I'm writing this post is actually to say that both of these alternatives are bad. They are natural attractors because doing the right thing is more costly and harder: it takes more effort and more specialized knowledge. In the proprietary model there is usually no financial incentive to avoid its trappings, while the open model would like to be better but most of the time lacks the resources to make it happen.
We need to avoid both of these black holes of design space. We need to create systems that are both open and distributed, while at the same time providing better integration, faster evolution and better user experience. This is not impossible - it is just a bit more costly. But for users to get real privacy and security we can't let that stop us. We need to think one step further and build systems that will protect our users, not just with open design and source, not just with good crypto, but with good user experience as well.