Do you have any comment on supply chain attack on Passwordstate?

rationaloutlook
Community Member

I came to know about the compromise of passwords belonging to customers of Click Studios' Passwordstate. It was carried out by compromising the update mechanism of that software. More details here: https://arstechnica.com/gadgets/2021/04/hackers-backdoor-corporate-password-manager-and-steal-customer-data/

This is very concerning because an attack like that cannot be avoided (afaik) by any kind of end-to-end encryption, since a malicious update would have full access to decrypted data. I wanted to know if you would like to reassure your customers by going through what exactly happened with Passwordstate and then outlining what steps you take to decrease susceptibility to such attacks.

It's also a concern because even companies like Microsoft and SolarWinds have recently been compromised in supply-chain attacks.


1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided
Sync Type: Not Provided

Comments

  • sneakybeaky321
    Community Member

    Agreed, I was just about to create a post asking about this! It’s a concern for any software for sure though

  • jpgoldberg
    1Password Alumni
    edited April 2021

    [I am updating this post for clarity and new information. Last edited Apr 25 18:16:52 UTC 2021]

    @rationaloutlook asked some very important questions, which I will try to address piece by piece.

    This is very concerning because an attack like that cannot be avoided (afaik) by use of any kind of end-to-end encryption

    You do have to trust that the 1Password client isn't malicious, and therefore it is correct to want to know that our mechanism for updating and distributing the 1Password clients is secure against evilgrade attacks.

    if you would like to reassure your customers after going through what exactly happened with Passwordstate

    It is really tempting to make guesses about what went wrong there, but we simply don't know. In particular I'd like to know why standard code-signing defenses and integrity checks didn't prevent the attack. I can speculate about that, but am hoping for a more detailed report from Click Studios in the coming days.

    and then outlining what steps you take to decrease susceptibility to such attacks.

    The details of 1Password's process for fetching and installing updates differ from platform to platform, and they very much depend on whether delivery happens via an app store. In the app store case, we provide a signed binary to the store, and the upgrade process makes sure that what gets installed is the thing that we signed.

    Digital signatures

    Before I can go into this, I should say a word about code signing. As most people reading this know, there are ways of creating digital signatures. These involve a private key and a public key. The two keys are mathematically related in a way that allows the holder of the private key to sign data, and any holder of the public key to verify that signature. A signature that verifies proves that it was created by a holder of the private key, and any change in the signed data will result in the signature failing to validate.

    Code signing signatures are a bit different from most other digital signature mechanisms, but not in ways that matter for this discussion.
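
    If it helps to make that concrete, here is a minimal sketch of the sign/verify round trip using OpenSSL. The key and file names are illustrative, and this is of course not our actual signing tooling:

    # Create a key pair (illustration only)
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 -out private.pem
    openssl pkey -in private.pem -pubout -out public.pem

    # The holder of the private key signs the update
    openssl dgst -sha256 -sign private.pem -out update.sig update.pkg

    # Any holder of the public key can verify (prints "Verified OK");
    # any change to update.pkg makes this verification fail
    openssl dgst -sha256 -verify public.pem -signature update.sig update.pkg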

    The 1Password upgrader

    When you upgrade 1Password on Windows or on Mac (not the App Store version), 1Password reaches out to one of our servers to see if there are updates.¹ That process happens over TLS, and a hash of the update is included.

    TLS offers one check, but it is hardly sufficient. The integrity hash verification on the download itself offers an additional check, which on its own would not be enough, but it adds an early way to detect some tampering. The next thing the updater does is check that the code signing signature is valid. (The operating system will also check that at installation or launch.)

    Our updater additionally checks that the signature is from us. For non-app-store delivery, operating systems only check that a signature is valid and comes from a known developer; our updater performs the extra check that it is a valid signature from us specifically.

    There are a couple of other things as well during the update. To the extent possible, we check that the existing version of 1Password still passes its checks. (Of course, that is the version that is running the checks, but it is an easy check to perform, and it would make tampering with your local copy of 1Password more difficult.)
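
    To make the layering concrete, here is a rough shell sketch of the order of checks described above. The file names, the placeholder hash, and the requirement string are illustrative; this is not our updater's actual code:

    # 1. Integrity hash delivered with the update metadata (placeholder value)
    EXPECTED_SHA256="<hash from update metadata>"
    ACTUAL_SHA256=$(shasum -a 256 "1Password-7.pkg" | awk '{print $1}')
    [ "$EXPECTED_SHA256" = "$ACTUAL_SHA256" ] || { echo "hash mismatch"; exit 1; }

    # 2. The code signature on what gets installed is valid
    #    (the operating system checks this too, at install or launch)
    codesign --verify --strict "/Applications/1Password 7.app" || exit 1

    # 3. The signature is not merely valid but ours: test it against an
    #    explicit requirement naming our team identifier
    codesign --verify -R='anchor apple generic and certificate leaf[subject.OU] = "2BUA8C4S2C"' \
      "/Applications/1Password 7.app" || exit 1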

    Advice to our users

    There really isn't anything for our users to do. We've been aware of the threat of attacks on upgrade processes for a very long time, and as you see above, we have built in many layers of defense that are automated and behind the scenes.

    But I will list a few standard things that apply generally and are not 1Password specific.

    • Don't bypass your system's software integrity checks. That includes the code signature checks that happen at install time.
    • Other things being equal (which they rarely are) prefer curated app stores.
    • Ask, as you are doing here, how the upgrade process performs integrity checks.

    Looking under the hood

    If you would like to manually perform some of the checks our update process does automatically, you can do the following. I will describe this for Mac.

    On Mac, you can check that the application is properly signed, and signed by us, by running the following in a Terminal window

    codesign -dvv /Applications/1Password\ 7.app
    

    and then look at the Authority fields, which should look like this. (Note that things are different if you use the Mac App Store, which handles all of this in different ways.)

    Authority=Developer ID Application: AgileBits Inc. (2BUA8C4S2C)
    Authority=Developer ID Certification Authority
    Authority=Apple Root CA
    

    To see that the signature matches what is on disk (which is separate from the above, which shows who signed it), use

    codesign --verify --verbose /Applications/1Password\ 7.app
    

    which should come back with

    /Applications/1Password 7.app: valid on disk
    /Applications/1Password 7.app: satisfies its Designated Requirement
    

    There is actually a really nice tool for checking these signatures (and more) by our friend Patrick Wardle of Objective-See, called What's Your Sign. It is really good at presenting the right information to you without you having to deal with obscure Terminal commands and interpret their output.

    My colleague, @Matthew_1P, has added instructions for Windows. Again, these checks are enforced automatically, and we are just listing them here in case you want to see more of what is going on behind the scenes.

    Checking a direct download

    So far I have spoken about what our updater does. If you download 1Password directly, take a look at this guide to performing additional checks on it if you wish.

    Designing against evilgrade

    "Evilgrade attacks" (subverting the upgrade process to get people and systems to install malicious code) have been around for a long time. Most famously would be attack in Flame which subverted the Windows update process. That was an incredibly sophisticated attack. But simpler ones against easier targets have been a thing.

    We've been aware of these sorts of threats for a very long time, and so we have designed our code signing key management and our distribution and upgrade processes with all of that in mind. Without knowing exactly what went wrong at Click Studios, I can't say with certainty that the attack there would never have worked against 1Password. But, if you will forgive the conceit: if the attack would have worked against us, why would the attackers go after anyone else?


    1. We have considered pinning the site certificate into the updater, but have so far rejected the idea, as the risk to availability it introduces is greater than the risk it would prevent. ↩︎

  • XIII
    Community Member
    edited April 2021

    Isn’t it a best practice that software verifies updates before installing them? (Instead of putting that burden on the user)

    I sure hope 1Password already does this (and if not that AgileBits starts working on this “immediately”)

  • Matthew_1P

    I just wanted to jump in to add the steps for checking on Windows. You can check the code signature by following these steps:

    1. Open File Explorer and navigate to %LOCALAPPDATA%\1Password\app\7 (enter this into the address bar)
    2. Right-click the main 1Password (.exe) file and choose Properties.
    3. Select the Digital Signatures tab
    4. You should see "AgileBits Inc." listed. Select it, and click Details. It should state "This digital signature is OK", which tells you the signature is valid.
    5. Select the Details tab, scroll down and find the "Thumbprint" – this value should either be 6e37fd226a08c9e8c25e654ec4a82984c58caf7c or 9f3f1046502e86b964d60f408a32c647d349bade
    6. Repeat these steps for the second 1Password (.dll) file.

    If you're a PowerShell user, you can do this in one line:

    "dll","exe" | % { Get-AuthenticodeSignature "$env:LOCALAPPDATA\1Password\app\7\1Password.$_" | Format-List }
    

    This will give you a lot of output, but the information you need to check is the same:

    • That the [Subject] under SignerCertificate is CN=AgileBits Inc., O=AgileBits Inc., STREET=317 Adelaide street West, L=Toronto, S=Ontario, PostalCode=M5V1P9, C=CA
    • That the [Thumbprint] under SignerCertificate is 6E37FD226A08C9E8C25E654EC4A82984C58CAF7C or 9f3f1046502e86b964d60f408a32c647d349bade
    • That the output contains Status: Valid and StatusMessage: Signature verified.

    You'll need to check the output for these twice – once for the .exe file and once for the .dll file.

  • SvenS1P
    edited April 2021

    @XIII If 1Password updates itself, then we automatically verify the digital signature before installation. You can read a bit more about verifying the code signature in our support article.

  • valor
    Community Member

    @Matthew_1P there really needs to be a detailed response from an official (blog?) channel on this.

    One of my primary reservations in subscribing was handing over the sync control and full storage stack to you. At least before I had the option of local (offline) sync or Dropbox. Now, I’ve entrusted even more of the infrastructure to you, and require 1Password to have Internet access to who-knows-where for it just to function.

    In short, if you did experience a supply chain attack and vaults/credentials were exfiltrated, how would you know? How would I know?

  • XIII
    Community Member

    If 1Password updates itself, then we automatically verify the digital signature before installation.

    That’s what I hoped for. Thanks!

  • jpgoldberg
    1Password Alumni

    Isn’t it a best practice that software verifies updates before installing them? (Instead of putting that burden on the user)

    Of course, and that is what 1Password does. I will edit my post to make it clear that the 1Password updater does what is described in those manual checks.

  • XIII
    Community Member

    Of course, and that is what 1Password does. I will edit my post to make it clear that the 1Password updater does what is described in those manual checks.

    Thank you for the additional information in your edit.

  • jpgoldberg
    1Password Alumni
    edited April 2021

    I've updated my initial posting, but I also don't want to make it too huge, so I will branch off into some other topics here.

    @valor is certainly correct that this would be blog-worthy, or worth some other more findable and permanent document. Over the years, a lot of my blog posts started out from discussions such as this; I have a habit of using our discussion forums as a "first draft of blog post". So no promises, but you certainly are correct. I also can't recall whether we published something about our defenses against evilgrade attacks in the past. The need to defend against attacks on the upgrade process fully preceded, and is independent of, subscription versus standalone use, and so this is something we have been building defenses for for a very long time.

    In short, if you did experience a supply chain attack and vaults/credentials were exfiltrated, how would you know?

    Supply chain

    "Supply chain attack" means many different things in many different contexts. I have been avoiding the term not because I don't want to discuss it, but I think it isn't that relevant to the current discussion. It is not at all clear to me whether Click Studios was victim of a supply chain attack at all. It is their users who were. It may be that the malicious DLL got into their product through a supply chain attack, or it might be that their build and distribution systems were directly compromised.

    Also, the defenses against supply chain attacks depend heavily on what is being supplied. SolarWinds is an IT tool; it is not a software dependency. The way you defend against an attack on one is very different from the way you defend against an attack on the other, although there are a few commonalities.

    In general (and this is necessarily vague, as the concrete methods depend on specifics):

    1. Know what you depend on (a quick sketch of this step follows the list)
    2. Know what damage could be done if any of those things you depend on were to be malicious or compromised
    3. Limit the number of things you depend on (but don't go to extremes; always rolling your own introduces other security problems)
    4. Organize data flow and permissions so that the compromise of some component does limited damage
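
    On point 1, the mechanics of "knowing what you depend on" vary by ecosystem. Purely as an illustration, this is the kind of inventory step I mean:

    # Enumerate the full dependency graph of a build (illustrative commands)
    cargo tree --locked    # Rust: every crate pulled in, per Cargo.lock
    go list -m all         # Go: every module in the build list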

    At 1Password, number 4 has been central to what we do from our inception. If you build a system to resist insider attacks¹, then you also limit the damage from compromises of the tools that an individual uses. As we have grown, particularly over the past couple of years, we've built procedures to improve 1 and 3. We already had such things in place for software dependencies, but in the past two years we've built up systems for knowing what things we and our people use, in a way that scales. (When I joined the company, I was one of perhaps a dozen people. Now we employ more than 400 people, so what worked back in the old days definitely needed a major overhaul.) Number 2 kind of works in there with number 3.

    You will note that "increased vetting of suppliers" isn't on my list. Obviously you want to vet your suppliers and get a sense of their data flows and protections. But compromises like those of Microsoft or SolarWinds would not have been detected by increased vetting.

    Intrusion detection

    Anyway, the question @valor asked is really more about intrusion detection and response: how would we know if something bad happened, and how would we inform users? There is a saying that there are two kinds of web services: those that know they have been breached, and those that have been breached but don't know it. While that is overly pessimistic, it highlights the problem of intrusion detection. Sure, we have intrusion detection systems in place, and we are adding to them. But an intruder who successfully gets in may also be able to evade detection. All we can do is make it harder for them to do so.

    There is an interesting tension with some defensive measures. We compartmentalize data and processes very strongly, but a smart IDS needs to be able to correlate activity in one domain with another, and so requires data flows that go against some of our security design. Balancing these sorts of things is very tricky, and comes down to looking at a lot of special cases and developing a deep understanding of the tools that are used. If we log all database activity, those logs tell a great deal about user activity; so how do we protect access to those logs? Can we have US-based members of our incident response team see those sorts of logs for our 1password.eu users?

    I mention those to give an idea of how fluid this process is. The slogan that "security is a process, not a product" is especially true for IDS. Everything is under continual review and subject to tuning and change; IDS is never a "solved problem." But in combination with our overall security design, which should severely limit the damage of a compromise, it should give you a great deal of confidence.

    How would I know?

    This, of course, depends. If we learn of a leak or compromise of any of your data, we are obligated to let you know. That obligation is not merely a regulatory one; it is a moral one. For privacy reasons we never asked where users resided, and so we decided to treat everyone as if they resided in an EU country with respect to regulation.² So in these terms, consider yourself covered by the EU's notification rules even if you don't reside in an EU country.

    Unfortunately, many breaches are detected not by the breached organization but by the publication of breached data. In that unfortunate case, we would learn at around the same time as everyone else. Furthermore, we would all have the very frustrating experience of not being able to say anything substantive until an investigation is complete. Early hypotheses about what happened are often wrong, and so reporting on the "state of the investigation" is often unwise, despite how frustrating that is.

    In either case, I would remind everyone that we have gone to extreme measures to make sure that a breach of the data that we hold would have a minimal impact on our users.


    1. We certainly don't plan on an insider attack, but by designing a system that limits the damage an insider can do, we build a system that limits the damage that a compromise which gets inside (whether through a malicious individual or something else) can do. It's the same with breaches: we don't plan on being breached, but we have to plan for it. And that is why we've designed 1Password so that a breach would do very limited damage to our users. ↩︎

    2. For tax reasons, we do have to know what tax rules apply to paying customers, and so we are obligated to know where paying customers live. But we still want to offer the strongest privacy protections to everyone. ↩︎

  • tpv0rtex
    Community Member

    Would you comment on third-party and open-source software you use (in building your product, or that goes directly into your product)? How do you detect malicious code appearing in those? (I realize this is a very hard problem. But I’m interested in your current thoughts and processes.)

  • wizard86pz
    Community Member

    Thanks @jpgoldberg. I'm sure everything you describe is 100% valid for beta updates delivery and installation. Right? :)

  • jpgoldberg
    1Password Alumni

    I'm sure everything you describe is 100% valid for beta updates delivery and installation. Right?

    That is correct, @wizard86pz. The updaters don't care whether it is a beta or not with respect to this checking.

  • jpgoldberg
    1Password Alumni
    edited April 2021

    A note on managing open source software

    Would you comment on third-party and open-source software you use?

    As you correctly note, @tpv0rtex, this is not an easy problem. The first step is to know what you depend on. Over the years we’ve put into place tooling that collects this for each release. This allows us to manually check whether we depend on X should we learn of a problem with X. We also try to minimize dependencies, but we do so with an awareness that rolling our own introduces its own risks.

    But more importantly, it is better to detect things early and automatically. Here it tends to depend on the particular ecosystem. For example, we run Rust's cargo audit with each build. We use GitHub's Dependabot for the things it covers, and we use similar tools where available for other code bases.
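
    For anyone unfamiliar with this class of tooling, here is roughly what such a step looks like. This is a minimal sketch, not our actual pipeline configuration:

    # One-time setup: install the auditing tool
    cargo install cargo-audit

    # Scan Cargo.lock against the RustSec advisory database;
    # --deny warnings fails the build on warnings as well as errors
    cargo audit --deny warnings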

    This is far from perfect. First of all, the tools for this kind of thing vary greatly from language to language. Rust offers great tools for this; Go, not so much. On the other hand, Go has a very rich standard library, meaning we don't need as many third-party libraries, while Rust is the other way around. These systems also require that security fixes get labeled appropriately by each package/library/crate maintainer. So coverage is not nearly as complete as we'd like.

    The good news is that this is all getting much better. Had you asked me this question five years ago, I would have just groaned and tried to change the subject. Those were the days when open source dependencies were not so much free as in "free beer", nor free as in "free speech", but free as in "free puppy."

  • tpv0rtex
    Community Member

    Thanks, @jpgoldberg. That's the type of info I was looking for.

    I wonder if, at some point, major companies will fund security researchers to do ongoing security audit/reviews of all new releases of the well-used open source software. It seems in everyone's best interest.

    And I love "free as in 'free puppy'".

  • jpgoldberg
    1Password Alumni

    @tpv0rtex, I can't take credit for the "free puppy" line. I heard it at some conference.

    And yes, there is scope to support various projects that can improve open source security. Obviously, I can't promise anything here.

  • jpgoldberg
    1Password Alumni

    A note on publishing hashes

    In the old days, it was common to publish a cryptographic hash (MD5, as it really was the old days) of a download to help provide an integrity check. But that isn't something that is useful today. It was a practice that made some sense when the following assumptions held:

    1. The hash was published over a more secure channel than the thing it was a hash of.
    2. You didn't have a better mechanism (such as a signature).
    3. You could pretend to yourself that people would actually compare the hashes well enough.

    None of those really hold today. Let's talk about each.

    Assumption 1

    The hash was published over a more secure channel than the thing it was a hash of

    Back in the day, TLS/SSL was expensive, and so you might deliver a text web page (something small) over it, and deliver the binary (something big) over a less secure channel (FTP, for example). In that case you were delivering the hash over a channel that couldn't be tampered with over the net. So (given the other assumptions) this worked, as any tampering with the binary download would be detected when the hash was checked.
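
    Concretely, that old workflow looked something like this (file name illustrative):

    # The tarball came over plain FTP; the published MD5 came from an HTTPS page
    md5sum download-1.0.tar.gz
    # ...then compare the output, by eye, against the hash printed on the page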

    Today, hashes are not delivered over more tamper-proof channels than the things they are hashes of. Often they are delivered over the very same channel, and so they really offer no additional security value.

    Assumption 2

    You didn't have a better mechanism

    A digital signature is a far better mechanism, as it does much more than a bare hash check. In addition to being deliverable alongside the thing it is signing, it proves that the signer holds a particular private key. Whatever is wrong with PKI (and there is plenty), those problems affect the authenticity of a published hash at least as much as they affect digital signatures.

    Assumption 3

    You could pretend to yourself that people would actually compare the hashes well enough

    I suspect that we all knew that nobody ever compared the hashes. People liked seeing them there as a signal that they could check if they wished, but very few actually checked. And when we did check, we'd look at the first few hex digits and the last few hex digits. If people are only checking a few bytes of the hash, then creating a second pre-image is not that hard: matching, say, the first and last four hex digits is only a 32-bit problem, well within brute-force range.

    But what has changed is that there have now been user studies confirming that people very rarely check, and on the exceedingly rare occasions when they do, they only check a few hex digits.

    What are they good for today?

    I'm sure it is possible to contrive an example where publishing the hash does something that a signature (and particularly a code signing signature) doesn't cover, but it isn't going to be a way to address the question of authenticating software downloads and updates.

    It can be a geeky signal of having been in security in the 1990s. That is, it is an affectation of a certain style of security. I certainly am not above such things, as I have my PGP fingerprint on my business card. Publishing hashes that way is fine as an affectation (or perhaps annoying, depending on taste).

    I worry more about providers who think it is the right approach to the problem today. It suggests that they never really understood why publishing hashes in the '90s was a good thing, or how those assumptions have changed. This is not an uncommon thing in the security community: something that was a good thing to do at one time remains a "thing we do for security" long after the circumstances have changed.

  • MrC
    Volunteer Moderator

    And of course there is another reason why they were useful, beyond security: it was common for files to be corrupted in transit to and from mirrors, or to end users.

  • jpgoldberg
    1Password Alumni

    That is true, @MrC, but again, signatures handle accidental data corruption as well as addressing malicious tampering.

  • MrC
    Volunteer Moderator

    Who was going to pgp sign their 75-part Usenet pr*n file uploads! :-)

  • jpgoldberg
    1Password Alumni

    @MrC wrote

    [...] pgp [...]

    and thereby triggered a rant.

    I was not happy to learn that PGP is still the way code signing works on Linux. First of all, the web of trust has been a failure. As terrible as X.509 is, with way too much trust placed in hundreds of certificate authorities, it is better than any practical alternative.

    Back in the '90s, I did try to teach interested people to use PGP. I was one of those "PGP is going to change the world" people, and I worked at it. And failed. As did many others. To use PGP safely, one must understand the distinction between trusting that a key belongs to who it says it does and trusting that entity as an introducer. This is not a user interface problem; it simply puts way too much burden on the user to understand subtle distinctions.

    So sure, in principle the web of trust is more trustworthy than relying on centralized CAs, but that is only the case if it is used properly by a majority of users. And that isn't going to happen. So right now, you are stuck trusting the signing key we use for the 1Password CLI because we publish the key over TLS (and so you are back to trusting a CA).
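
    In practice, that verification looks something like the following. The key URL and file names here are placeholders, so check our documentation for the real ones:

    # Fetch the vendor's public key over TLS (URL is a placeholder) and import it
    curl -fsSL https://example.1password.com/signing-key.asc | gpg --import

    # Verify a detached signature on a downloaded release
    gpg --verify op-cli.tar.gz.sig op-cli.tar.gz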

    One of the things that makes code signing different from ordinary digital signatures is how expiry is dealt with. When you use a code signing scheme, you get a signed timestamp from a trusted time server. With PGP, you can set your local clock to whatever you want to get whatever timestamp you want in a signature. This matters because we don't want anything signed with a key after it has expired, but we also don't want to revoke the key, as we want previously signed things to remain valid. So when PGP is used for code signing, there really isn't any way to expire the key.
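
    On Apple platforms, for instance, requesting that trusted timestamp is a single flag at signing time (the signing identity below is a placeholder):

    # --timestamp has Apple's timestamp service countersign the signature,
    # so it remains verifiable even after the signing certificate expires
    codesign --timestamp -s "Developer ID Application: Example Corp (TEAMID1234)" MyApp.app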

    So I was disappointed, when we started to offer our CLI and then the 1Password for Linux beta, to find that package managers do PGP at best. I'm also sad that PGP didn't change the world the way I'd hoped a quarter of a century ago.

This discussion has been closed.