Security of Client Implementations

codasalt
Community Member

Is there a whitepaper describing the security considerations that went into writing the client implementations themselves, rather than the overall system architecture design? Here are some examples of the kinds of questions I'm looking for answers to, though I imagine there are other reasonable questions I haven't thought of. It would be great if there were a document that comprehensively went through everything.

Question Examples:

What steps do you take to promote memory safety in the software clients; e.g., using memory-safe languages, avoiding unsafe functions like strcpy, running analysis tools like Valgrind, etc.?

Do you use any sandboxing or privilege separation within the clients? For example, the 1Password Brain presumably has a large amount of parsing code, either directly or through libraries. Parsing code is notorious for being an easy source of memory-safety vulnerabilities. Does 1Password take steps to protect the other parts of 1Password, and the rest of the system, in the event that this or another component has a vulnerability?

Do you do any fuzzing?

How thorough are your test suites?

In selecting the compiler and compiler options, what steps do you take to minimize the risk of compiler bugs?

In the software clients, do you implement some of the crypto yourselves or only use third party libraries?

Is all crypto implemented in assembly? If not, how do you ensure the compiler does not accidentally introduce side channels, like emitting non-constant-time instructions or introducing branches or memory accesses that depend on secret data?


1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided
Sync Type: Not Provided

Comments

  • jpgoldberg
    1Password Alumni

    Is there a whitepaper describing the security considerations that went into writing the client implementations themselves,

    Good news: There is a whole chapter of the white paper specifically for this.

    Bad news: That chapter is filled with a placeholder saying that it hasn't been written yet.

    What steps do you take to promote memory safety in the software clients; e.g., using memory-safe languages, avoiding unsafe functions like strcpy, running analysis tools like Valgrind, etc.?

    The details differ platform by platform, and I will not give a complete answer here.

    • Analysis tools: Xcode, Visual Studio, and Android Studio/Gradle take the place of older scanning tools. These are integrated into our build processes. It really is impressive what sorts of static and run-time analyses these IDEs perform pretty much for free (well, you have to switch some of them on). The tools added in Xcode 8 for run-time checks (during tests) and for identifying undefined behavior are particularly cool. And Visual Studio is also really impressive.

    • Memory (and type) safe languages

      • I'd love to say that for iOS and Mac we've migrated fully from Objective-C to Swift, but that isn't at all the case. Not even all new code is written in Swift, but we are hoping that once Swift settles down a bit and the toolchain improves, we will do so.

      • Our JavaScript, where needed, is written in TypeScript. This gives us far, far better type safety than native JS would. It's still possible to shoot yourself in the foot, but you have to go out of your way to do so.

      • On Windows, we use C#, and we are moving some utilities to Rust. You can't get more memory safe than that, but the move to Rust is in its early stages.

      • Android is still Java. We configure Gradle and Android Studio to warn as much as possible about unsafe code practices.

      • The back end is Go. Personally, I don't think it is as memory safe as many of its advocates suggest, but it is enormously safer than many alternatives.

      • We are putting more of our common (cross-platform) code in Go, compiled to linkable libraries.

    • Parsing code separation

      Now I will get preachy. I have pretty much joined the Church of LangSec (formal language theory applied to secure software development). Ideally, I would like to see all parsing (which means pretty much any data/input/network read) handled by a parser generated from a formal specification of the expected language. We aren't there yet, but we know that input must first be checked syntactically for correctness before it is interpreted. And we avoid ad hoc regexes for such validation.

      The degree of separation varies from platform to platform. Not everything is as fully isolated as we would ideally like in every instance, but it is something we are moving toward.
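
      To make this concrete, here is a toy sketch in Go of what "recognize before you interpret" looks like. It is purely illustrative; the identifier format and names are made up rather than taken from our actual code.

      ```go
      package main

      import (
          "errors"
          "fmt"
      )

      // validateItemID is a hypothetical recognizer for a made-up identifier
      // language: exactly 26 characters from the lowercase base32 alphabet.
      // Input either matches the grammar or is rejected outright, before any
      // other code attempts to interpret it.
      func validateItemID(s string) error {
          if len(s) != 26 {
              return errors.New("item id: wrong length")
          }
          for _, c := range s {
              if !((c >= 'a' && c <= 'z') || (c >= '2' && c <= '7')) {
                  return errors.New("item id: character outside the grammar")
              }
          }
          return nil
      }

      func main() {
          fmt.Println(validateItemID("abcdefghijklmnopqrstuvwxyz")) // <nil>
          fmt.Println(validateItemID("../../etc/passwd"))           // rejected
      }
      ```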

    • Fuzzing: For the server, definitely. For input to clients, we could be more thorough. This is a process, and we are improving these sorts of things incrementally.

    • "Do you implement some of the crypto yourselves or only use third party libraries?"

      We implement as little as possible. We'd love to never implement any. What we do implement is always a higher-level construction that we need and for which there aren't well-established libraries on all of our platforms. So, for example, we needed to implement Secure Remote Password (SRP). We've published the source for that.
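
      For readers wondering what that involves, here is a toy sketch in Go of an SRP-6a-style verifier computation. This is an illustration, not the published implementation; the hash and group parameters here are deliberately toy choices.

      ```go
      package main

      import (
          "crypto/sha256"
          "fmt"
          "math/big"
      )

      // srpVerifier computes an SRP-6a-style verifier v = g^x mod N, where
      // x = H(salt || H(username ":" password)). Illustration only: real code
      // must use an RFC 5054 group and a vetted constant-time implementation.
      func srpVerifier(N, g *big.Int, salt []byte, username, password string) *big.Int {
          inner := sha256.Sum256([]byte(username + ":" + password))
          outer := sha256.Sum256(append(append([]byte{}, salt...), inner[:]...))
          x := new(big.Int).SetBytes(outer[:])
          return new(big.Int).Exp(g, x, N) // the server stores v, never the password
      }

      func main() {
          // A deliberately tiny modulus so the example runs; never use one this small.
          N, _ := new(big.Int).SetString("ffffffffffffffc5", 16)
          g := big.NewInt(2)
          v := srpVerifier(N, g, []byte("per-user-salt"), "alice", "correct horse")
          fmt.Printf("v = %x\n", v)
      }
      ```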

      Also, when we introduced OPVault, we needed an authenticated encryption mode that would work on all of our platforms, so we had to build our own Encrypt-then-MAC construction. We also tend not to use key-wrapping standards and just encrypt keys the same way we encrypt everything else. It's a bit more expensive in space and time, but we avoid potential errors by pretty much saying "this is how we encrypt stuff". If AES-256-GCM is overkill for some things, so be it.
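
      Sketched in Go, the general shape of an Encrypt-then-MAC construction looks something like this. This is a generic illustration, not the OPVault format itself, which has its own layout and key handling.

      ```go
      package main

      import (
          "crypto/aes"
          "crypto/cipher"
          "crypto/hmac"
          "crypto/rand"
          "crypto/sha256"
          "fmt"
      )

      // encryptThenMAC encrypts with AES-256-CBC and then authenticates the
      // IV and ciphertext with HMAC-SHA256, so a receiver can verify the tag
      // before touching any ciphertext. It omits the header/versioning a real
      // format needs.
      func encryptThenMAC(encKey, macKey, plaintext []byte) ([]byte, error) {
          block, err := aes.NewCipher(encKey) // a 32-byte key selects AES-256
          if err != nil {
              return nil, err
          }
          // PKCS#7 padding up to the AES block size.
          pad := aes.BlockSize - len(plaintext)%aes.BlockSize
          padded := make([]byte, len(plaintext)+pad)
          copy(padded, plaintext)
          for i := len(plaintext); i < len(padded); i++ {
              padded[i] = byte(pad)
          }
          out := make([]byte, aes.BlockSize+len(padded))
          iv := out[:aes.BlockSize]
          if _, err := rand.Read(iv); err != nil {
              return nil, err
          }
          cipher.NewCBCEncrypter(block, iv).CryptBlocks(out[aes.BlockSize:], padded)
          mac := hmac.New(sha256.New, macKey)
          mac.Write(out)           // MAC covers IV || ciphertext
          return mac.Sum(out), nil // IV || ciphertext || 32-byte tag
      }

      func main() {
          encKey := make([]byte, 32)
          macKey := make([]byte, 32)
          rand.Read(encKey)
          rand.Read(macKey)
          blob, _ := encryptThenMAC(encKey, macKey, []byte("attack at dawn"))
          fmt.Printf("%x\n", blob)
      }
      ```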

    • "Is the crypto implemented in assembly?"

      Funny you should ask that. Yes, it is: for cryptographic primitives, we use the SDKs and standard libraries from the operating systems. We are in no way capable of writing such code ourselves. The reason I said "funny you should ask that" is that I was talking to colleagues about exactly this just last week. There is a temptation to put all of the crypto into our cross-platform code, but it is a temptation we have to resist for exactly this reason. Side channels aside, we want to be able to take advantage of the hardware support for AES and hashing that many chipsets offer, and the way to do that is to use the libraries that come standard on the platform.
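
      To illustrate the principle in Go (an analogy, not our shipping code): the point is to call the standard library and let it dispatch to hardware AES instructions where the CPU supports them, rather than carrying your own primitives around.

      ```go
      package main

      import (
          "crypto/aes"
          "crypto/cipher"
          "crypto/rand"
          "fmt"
      )

      func main() {
          // crypto/aes and cipher.NewGCM use AES-NI / ARMv8 crypto instructions
          // when the CPU has them; application code never touches the assembly.
          key := make([]byte, 32) // AES-256-GCM
          nonce := make([]byte, 12)
          rand.Read(key)
          rand.Read(nonce)

          block, err := aes.NewCipher(key)
          if err != nil {
              panic(err)
          }
          gcm, err := cipher.NewGCM(block)
          if err != nil {
              panic(err)
          }
          sealed := gcm.Seal(nil, nonce, []byte("secret item data"), nil)
          fmt.Printf("ciphertext+tag: %x\n", sealed)
      }
      ```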

    OK. This has been a lot of fun, but it's 2:20 AM, and I need to get to bed.

  • jpgoldberg
    1Password Alumni

    I'm going to use your example of strcpy to illustrate a broader point about the security decisions that go into coding practices. Of course we don't use strcpy, but we don't use strncpy either. That is because in our most C-like environment (Objective-C) we use NSString or NSMutableString (or their Swift equivalents) for all of our string handling. This is great because it precludes a very large class of bugs of the sort you were asking about with your example.

    Some background

    Let me add some background for those who are struggling to follow this discussion. There are kinds of programming bugs that are really, really easy to make in C and that have major security implications.¹ Heartbleed is a famous example, but this category of bug can also allow an attacker to inject malicious code into a running process by providing malicious input. Even a malformed image file could be used to do this kind of thing (until the libraries were fixed a few years back).

    See the C section of this comic by Mart Virkus of Toggl:

    [Comic: "Git the Princess"]

    In that comic (and in Heartbleed), a failure to properly check bounds or terminators meant reading bits of process memory beyond what was intended. What is worse, the same kinds of errors can allow writing to portions of process memory that should be out of bounds.
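
    To sketch the difference, here is a made-up Go example (nothing from 1Password): when a program trusts an attacker-supplied length, a memory-safe runtime fails loudly instead of quietly leaking adjacent memory.

    ```go
    package main

    import "fmt"

    func main() {
        // The Heartbleed pattern: echo back `claimed` bytes of a payload,
        // where the attacker controls `claimed` but sent only 5 bytes.
        payload := []byte("hello")
        claimed := 64

        defer func() {
            if r := recover(); r != nil {
                fmt.Println("runtime stopped the over-read:", r)
            }
        }()
        // In C this could read up to 59 bytes of adjacent process memory;
        // here it panics with "slice bounds out of range".
        fmt.Println(string(payload[:claimed]))
    }
    ```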

    Memory-safe (and type-safe) languages are designed to make these sorts of errors much harder, and that is one of the things @codasalt was asking about. I talked about our language choices above. Modern "safe" languages differ in how safe they actually are in this regard.

    Safety and trade-offs

    By ceding memory management to the language and its tools, we get a great deal of safety, but we also give up a certain amount of control. This is most apparent when people ask us whether secrets are wiped from memory when 1Password locks. By using things like NSString (and Secure Input, which requires data of this type), we get a lot of protection. But what we lose is the ability to zero or overwrite that region of memory when we wish to. As a consequence, secrets will remain in 1Password's process memory for some amount of time after 1Password is locked, and that amount of time is unpredictable.

    So this is a security/security tradeoff, and when looking at security design, we make lots of decisions of this nature. Sure, we could write all of the secret handling in C and have the fine control we would need to erase portions of memory when we wanted to, but we would be forgoing all of the safety and protections that higher-level objects give us. Some systems, like OpenSSL, did write their own memory-management layer, but that is one of the things that went wrong with Heartbleed. Also, in this case, we figure that an attacker who can read your computer's memory after you have locked 1Password can probably do so before you have locked it, so the security gain from immediately overwriting secrets is relatively small.
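
    Here is that tradeoff sketched in Go, as an analogy rather than how our clients literally work: a mutable buffer can be wiped on demand, while an immutable string copy lives on until garbage collection, which is roughly the position managed string types put us in.

    ```go
    package main

    import "fmt"

    func main() {
        secret := []byte("hunter2")
        copied := string(secret) // an immutable copy we can never wipe on demand

        // On "lock", overwrite the mutable buffer...
        for i := range secret {
            secret[i] = 0
        }
        fmt.Printf("wiped buffer: %q\n", secret)

        // ...but the string copy stays in process memory until the garbage
        // collector reclaims it, at a time we neither control nor observe.
        fmt.Printf("unwipeable copy: %q\n", copied)
    }
    ```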

    I should say that we were more concerned about this back when direct memory access (DMA) attacks were easier. But even then, the advantages of using higher-level string objects, with all of the memory protection and management the languages offer, outweighed returning to C-like ways of doing things.

    One could argue otherwise. PasswordSafe, for example, has made the opposite decision. But they have made other security/security tradeoffs that we disagree with (for example, it is "safer" in some respects to have no browser integration, but that means that it can't offer the kind of phishing prevention that 1Password offers).

    The point?

    I suspect that I had a point in mind when I started to write this. But I will just make something up now.

    Secure code development is hard. It's hard not only because you need to follow practices that deal with the kinds of things @codasalt pointed out, but also because sometimes there are difficult decisions to make. This means developing an understanding of the risks and consequences of different choices.

    And if this sounds like fun to you (it really is a lot of fun): We are hiring.


    1. There is a bit of irony in the fact that everything I used to love about C is stuff I hate about it now that security is my primary focus. ↩︎

This discussion has been closed.