How is 1Password code audited to ensure no backdoors are present?

ozarkcanoer
Community Member

Any user of 1Password puts confidence in AgileBits' security provisions for our password vaults. What I can't remember reading about is how AgileBits audits its software to ensure no employee has inserted a backdoor into 1Password or added key-logging code to forward names and passwords to some server. If auditing is conducted, how frequently are the source code and final builds of the programs reviewed by in-house or outside auditors?

Thanks,

Comments

  • Hi @ozarkcanoer‌

    I've asked @jpgoldberg‌ to comment on this process.

    Thanks!

  • khad
    1Password Alumni

    Until @jpgoldberg has a chance to say more, you may wish to review the existing blog posts he has written related to this topic, @ozarkcanoer‌:

    1Password and The Crypto Wars

    On the NSA, PRISM, and what it means for your 1Password data

    You have secrets; we don’t. Why our data format is public

  • ozarkcanoer
    Community Member

    @khad, thanks for the reading, but it only peripherally touches on my question. I don't use Little Snitch on my Mac and am not aware of anything like it for iOS. I hope someone at AgileBits can reply with an overview.

  • Megan
    1Password Alumni

    Hi @ozarkcanoer‌

    I'm sorry to hear that @khad's articles didn't answer your question. We'll just have to wait until our Chief Defender Against the Dark Arts himself has time to weigh in, then; I certainly can't add any more detail than has been shared above. :)

  • sjk
    1Password Alumni

    Hi @ozarkcanoer,

    Here's a related comment from @jpgoldberg in the lengthy Could there be a backdoor? discussion:

    If we only released an update once a year or so, it would be feasible to have trusted third parties review the source code and every step from that source to the actual binaries that get distributed. But because we release more frequently, that just isn't feasible. It is enormously expensive and dramatically slows down the release process. And even if we let you examine the source (under an appropriate NDA) it would be difficult to prove that the source that you see is the same source behind the binary that gets distributed.

    I hope that helps answer your question. :)

  • littlebobbytables
    1Password Alumni

    @ozarkcanoer‌

    I'm not going to suggest for even the slightest moment that my post here is somehow a replacement for hearing from @jpgoldberg‌. I just wanted to add that I've run Little Snitch for a good few years now, and one or two programs at most get the make-any-connection-you-want type of rule. With 1Password and 1Password Mini I have six rules covering very explicit URLs and ports, which handle checking for updates and rich icons. If 1Password were trying to transmit to some odd address, Little Snitch would inform me.

    I'll shut up now and leave it to the experts.

  • jpgoldberg
    1Password Alumni

    Hi @ozarkcanoer‌!

    The short answer is that we have no formal auditing procedures in place beyond our regular development practices. Also, I strongly suspect that you won't be satisfied with my longer answer either.

    We feel gitty, oh so gitty.

    We use git repositories for our Mac, iOS, and Android sources, and a Perforce store for the 1Password for Windows source.

    We do have "many eyeballs" on the source. Because iOS and OS X are built from the same source tree, those have the most eyeballs. Pull requests are all put out for review and approval. A much smaller subset of us has the power to merge a pull request into the actual release and beta branches. Again, anyone with read access can look at these. Our reasons for not publishing the source have to do with intellectual property concerns, not security, so read access to the source is fairly wide within the organization.

    Many (informal) eyeballs

    On the whole, it is a general "trust of many eyeballs" instead of a formal procedure. For example, I do not review all pull requests.
    But I, or anyone with read access, can look over previous merges and commits and play the "git blame" game. Despite my occasional nagging, not everyone signs their commits with PGP/GnuPG, but nobody has ever repudiated a commit.

    It is certainly possible that someone could insert something malicious into the code without it being detected, but the chances of detection remain sufficiently high that I don't think anyone would try.

    Some areas of code I pay more attention to

    If I were going to try to build a malicious crypto tool, I know what I would do. And so when I first got access to the source years ago, those were the first places I looked. Also, because I talk about these things so much, others know where to look, too.

    The most obvious would be to sabotage key generation. I'll step through a somewhat artificial example (ignoring many details of our actual key derivation) to show how this might work. Other than one step of concealment, this is actually the same procedure used by "export approved" cryptographic systems in the 1990s.

    Your data is encrypted with a 256-bit AES key that is generated from a CSPRNG. But suppose that I take just 40 bits of random data and make the other 216 bits constant. This means that when trying to brute-force a key, I'd only have to go through 2^40 guesses (2^39 on average). That is well within reach of lots of people. The 216-bit constant would be one of the two backdoor secrets; let's call it b1.

    Now of course, if every 256-bit key began (or ended) with the same 216 bits, b1, someone would notice. (Again, this wasn't a problem with the deliberately crippled crypto for US export requirements of the 1990s; all but 40 bits of those keys were simply set to zeros.) So to do this surreptitiously, I would need to make the keys look random. I would simply HMAC them with another backdoor secret, b2. Thus, with knowledge of b2 and of which 40 bits of each 256-bit key were genuinely random, one could brute-force 1Password encryption keys.
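    To make the construction concrete, here is a minimal sketch of how such a crippled generator and its matching brute-force attack could look. Everything here is hypothetical: b1, b2, the 40/216 split, and all function names are made up for illustration and reflect nothing in real 1Password code.

    ```python
    import hashlib
    import hmac
    import os

    # Hypothetical backdoor parameters from the example above.
    ENTROPY_BYTES = 5                # 40 genuinely random bits
    B1 = bytes(32 - ENTROPY_BYTES)   # b1: the 216-bit constant (27 zero bytes here)
    B2 = b"hypothetical backdoor secret b2"

    def crippled_key(random_part=None):
        """A 256-bit key that looks random but has only 40 bits of entropy."""
        if random_part is None:
            random_part = os.urandom(ENTROPY_BYTES)
        # HMAC with b2 disguises the constant portion, so every key
        # appears uniformly random to an outside observer.
        return hmac.new(B2, random_part + B1, hashlib.sha256).digest()

    def brute_force(target, entropy_bytes=ENTROPY_BYTES):
        """An attacker who knows b1 and b2 searches only the random bits."""
        for i in range(256 ** entropy_bytes):
            guess = i.to_bytes(entropy_bytes, "big")
            if crippled_key(guess) == target:
                return guess
        return None
    ```

    At the full 40-bit scale the search costs about 2^40 HMAC evaluations; at a demo scale of two entropy bytes, `brute_force(crippled_key(os.urandom(2)), entropy_bytes=2)` recovers the random part in at most 65,536 tries.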

    What is cool about this is that it would be very hard for someone using and testing the app's behavior to detect it. The generated keys will look perfectly random. There is no rogue communication. All of the data conforms to published specifications.
    You would only notice if you generated about 2^20 (the square root of 2^40) of these crippled keys, at which point you would have a greater than 50% chance of seeing duplicate keys.
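    That birthday-bound observation can be checked empirically at a reduced scale. This sketch (made-up secrets, 16 bits of real entropy instead of 40) draws keys until a repeat appears, which by the same square-root argument typically happens after roughly 2^8 = 256 draws:

    ```python
    import hashlib
    import hmac
    import os

    B1 = bytes(30)                        # constant padding (demo scale)
    B2 = b"hypothetical disguise secret"  # stand-in for backdoor secret b2

    def tiny_crippled_key():
        """A disguised key with only 16 bits of real entropy."""
        return hmac.new(B2, os.urandom(2) + B1, hashlib.sha256).digest()

    def draws_until_duplicate():
        """Count keys generated before the first repeat shows up."""
        seen = set()
        while True:
            key = tiny_crippled_key()
            if key in seen:
                return len(seen)
            seen.add(key)
    ```

    With only 2^16 possible keys, a repeat typically appears after a few hundred draws, while an honest 256-bit generator would effectively never produce one.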

    The value of thinking like a criminal

    The example above of "how I would do it" is not meant to scare you but to reassure you. I'm not the only one with a sufficient level of "trust but verify" to look at key generation and other similar parts of the code. And I'm not the only one who can "think like a criminal". This unnatural way of thinking comes naturally to people who think about security. And so, if there really is someone evil (or someone being coerced into evil) among us, the chances of getting caught are sufficiently high that I don't think anyone would risk it.

    I suspect that you won't find this satisfactory. But as we grow in size, we will probably evolve more formal procedures. In the old days, everybody knew what everyone else was doing, but we (and the code) have grown enormously, and so becoming a bit more bureaucratic is inevitable.

  • ozarkcanoer
    Community Member

    @jpgoldberg‌ Thank you very much for your longer explanation. Actually, the many eyes you rely upon seem to me functionally close to an audit. My initial concern was with 1Password sending usernames and passwords out to a bad guy's server on the net, but at least for the version that @littlebobbytables‌ is running, it appears nothing like that is occurring. That testing should be added to your future internal audits. In the meantime, keep up the good work on the math side and the improved UI we are enjoying on Mac OS X and iOS!

  • jpgoldberg
    1Password Alumni

    I'm glad I could help.

    The fact that you (or anyone) can inspect the network traffic that 1Password engages in really does limit the scope for the kind of mischief you describe. That is simply something that would be detected almost immediately. Keep in mind that if something like that were discovered, we would be out of business in a day. I just don't see how it could be a good deal for us even if we were evil. Quite simply, if you look at things from a purely economic point of view, it is in our interest to have happy customers.

    And that is why I emphasize the many eyeballs: they are part of the defense against being coerced into evil. We have developers from four different countries (six if you want to count Quebec and Scotland as separate). So even if some of us Americans could be compelled to do evil by our government, we wouldn't expect that to be kept quiet by everyone else. A conspiracy grows less plausible (and less feasible) with the number of people who have to keep it secret. Remember that the whole being-evil thing backfires enormously upon detection, so evil-doers would not pursue plans that risk detection.

This discussion has been closed.