Public Key - Man in the middle
Hi.
As described in the white paper, in "Appendix E - Verifying public keys", there is a theoretical possibility of a man-in-the-middle attack on shared items. As I understand it, this attack is always possible whenever a public key is exchanged (sharing, adding to groups, etc.).
Does that mean the "private" vault in a team account is always protected? If a hacker could steal the private key (as described in Appendix E), would he only be able to decrypt the shared items, or could he decrypt the private vault too?
Regards
Phoniex
1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided
Sync Type: Not Provided
Comments
-
Hi @Phoniex,
As I understand it, this attack is always possible whenever a public key is exchanged (sharing, adding to groups, etc.).
Theoretically, this could be used during account recovery too (since recovery generates a new key pair), but you are right: generally speaking, that specific attack only applies when vault sharing or recovery is involved; more generally still, any time the public key is used for some action.
If a hacker could steal the private key (as described in Appendix E), would he only be able to decrypt the shared items, or could he decrypt the private vault too?
Can you confirm whether you mean the private or the public key here?
-
Sorry, I mixed things up...
According to Story 10 in the white paper: if Mr. Talk were able to carry out the attack described in Story 10, could he decrypt only the shared items, or also the whole private vault of a team account?
-
No worries, thank you for the clarification. This specific attack is applicable only when a vault is explicitly shared with someone else. This is because sharing a vault effectively means sharing the vault key with the recipient, and the way the vault key is shared is by encrypting it with the recipient's public key, so that only the recipient will be able to decrypt it.
The Private vault cannot be shared with anyone, and therefore the public key does not come into play here. But regardless, there are also other server-side controls that make sure that other account members will not be granted access to the encrypted data in a vault they otherwise wouldn't have access to, even if they can obtain the vault key.
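To make the vault-key sharing step concrete, here is a minimal sketch using textbook RSA with tiny toy numbers. Everything here is illustrative: the function names are invented for this example, and 1Password's real implementation uses vetted, full-strength cryptography with proper padding, not textbook RSA.

```python
# Toy sketch (NOT real cryptography): how sharing a vault key by
# encrypting it with the recipient's public key works in principle.
# Textbook RSA with tiny primes, for illustration only.

def make_keypair(p, q, e=17):
    """Build a textbook-RSA key pair from two small primes."""
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)
    return (e, n), (d, n)              # (public key, private key)

def wrap(vault_key, public_key):
    """Encrypt ("wrap") the vault key under the recipient's public key."""
    e, n = public_key
    return pow(vault_key, e, n)

def unwrap(wrapped_key, private_key):
    """Only the holder of the matching private key can recover it."""
    d, n = private_key
    return pow(wrapped_key, d, n)

# Alice shares a vault with Bob: she wraps the vault key under Bob's
# public key, so only Bob can unwrap it.
bob_pub, bob_priv = make_keypair(61, 53)
vault_key = 1234                       # stand-in for a symmetric vault key
wrapped = wrap(vault_key, bob_pub)
assert unwrap(wrapped, bob_priv) == vault_key
```

The point of the construction is that the wrapped key is useless to anyone who only sees it in transit; and, as noted above, even the vault key itself is useless without the encrypted vault data it protects.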
In addition to this, even should a malicious user be able to work around all of these and trick the server into delivering the encrypted vault keys when it shouldn’t, the malicious user would still need to obtain the vault data encrypted with that key, which is a challenge in and of itself.
-
Please correct me if I'm wrong:
In Figure 17 of the white paper you explain that "kv" (Carol's vault key) is encrypted with "pkr" (the public key of the recovery group). If a hacker sent a fake "pkr" to Carol, wouldn't he be able to decrypt Carol's private vault too? So isn't the private vault potentially affected during sign-up to the team?
If the hacker wasn't present at sign-up: in case of a recovery, would the private vault (of a team account) also be affected? Because Carol encrypts her new vault key with "pkr" again.
I know that's all very theoretical and a hacker faces many obstacles, but I need to know before starting with 1Password (maybe we will not save "highly critical" passwords in it, and maybe we will take care to never need a recovery).
-
If a hacker sent a fake "pkr" to Carol, wouldn't he be able to decrypt Carol's private vault too?
In the real world, not necessarily. He might get the vault key this way, but it doesn't mean that he would have access to the data itself. On a purely cryptographic level though, as you say, this is possible, and that's why we have additional controls around the recovery group.
This is what we mean in the white paper with this sentence (page 67 in the current version of the white paper):
Thus, this applies during the final stages of recovery or when a vault is added to any group as well as when a vault is shared with an individual. This threat is probably most significant with respect to the automatic addition of vaults to the recovery group as described in Restoring a User’s Access to a Vault.
And hence why there are additional controls to make sure that a malicious member of the recovery group cannot access your data even though they have access to vault keys. If you want some more details about the measures we have implemented to protect against this, please see the section titled Protecting vaults from Recovery Group members (starting from page 39 in the current version of the document).
In general, I think it's worth noting two things here to add some perspective:
- Having access to a vault key does not mean having access to the Master Password or Secret Key. As I mentioned above, having access to the vault key does not automatically mean the ability to decrypt a user's data;
- Perhaps more importantly: as you correctly mentioned, you can avoid using recovery. Any recovery mechanism will lower the security of a system, so it's certainly up to your use case whether you prefer to accept this as a reasonable risk, or remove it completely by not using the feature at all. We tried to find a good compromise between security and usability here, and I think that the current implementation of the recovery group achieves this in a reasonably balanced way.
This does not mean that there is no room for improvement though, as indeed you can see in the Types of defences section of the white paper. All of the possible solutions to this challenge (namely, trust hierarchies and key fingerprint verification) have their own difficulties though, so we need to make sure that anything that we implement here makes sense for the way 1Password works.
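As a rough illustration of the key-fingerprint idea mentioned above: each side hashes the public key it sees, and the two short fingerprints are compared over an independent channel (in person, by phone, etc.). The byte strings and the 16-hex-digit truncation below are assumptions made for this sketch, not 1Password's actual key format.

```python
# Sketch of key-fingerprint verification, one of the possible
# defences against public-key substitution. Illustrative only.
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

genuine_pkr = b"-----RECOVERY GROUP PUBLIC KEY (example)-----"
swapped_pkr = b"-----ATTACKER PUBLIC KEY (example)-----"

# A substituted key produces a different fingerprint, so the swap
# becomes visible as soon as the fingerprints are compared:
assert fingerprint(genuine_pkr) != fingerprint(swapped_pkr)
```

The difficulty this approach brings, as the white paper's Types of defences section discusses, is usability: the comparison only helps if users actually perform it over a channel the attacker cannot tamper with.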
-
Thank you for your explanations. We will now start to use 1Password.
Maybe you could discuss internally giving team admins the choice to disable the recovery feature during initial setup, and also letting existing teams disable it later (including renewing all the keys). If a team decides to support recovery after all, they can activate it again later if needed.
If we never use recovery, it would be great to disable it and minimize the (potential) risk, because then only the discussed risk during sharing would remain.
I know you cannot do it now, but it would be great if you discussed it internally and maybe set it on your agenda for the future.
-
I know you cannot do it now, but it would be great if you discussed it internally and maybe set it on your agenda for the future.
Absolutely. We will definitely continue studying this to come up with the best possible solution. Disabling recovery could be a quick and dirty way to remove this attack path. I do believe that we can tackle this more generally though (trust hierarchies or key fingerprint check, as I mentioned above, are two possible solutions), so this is definitely something that is on our minds.
In the meantime, and for completeness, there is another detail here that I hope will help alleviate your concerns a little: remember that having recovery enabled is not, by itself, enough to open you up to this. The attack we are discussing would not work just because you have a recovery group; it also requires specific action from the user being recovered. More specifically, vault keys are shared with the recovery group only when the user actually recreates the account.
This means that, even if you have recovery enabled, nothing is shared unless the user accepts the recovery process and actually completes it. Once again, this is not to say that there is no room for improvement here (indeed, we added that appendix to the security white paper exactly because we recognize that there are ways to improve this), but that there are some defences here already to make sure that this type of attack is not that simple to implement.