Tracking an item moved between vaults, and request rate limits

kovpack
Community Member
edited February 2022 in CLI

Hello,

I have questions regarding ways to keep track of an item when it's being moved between vaults, as well as request rate limits.

A bit of domain knowledge: we have a system that lists the different types of resources we use in the company. One such resource may be a 1Password item. Every resource has a link (which is basically a link to the real item in the original system; in the case of 1Password, that would be a link to the 1Password item). These resources are used in a bunch of documented processes, and we use our internal permanent resource link when we need to say which item should be used to log in to some system, etc. Because of that, we implemented a way of keeping track of items that are moved between vaults: if an item is moved into a different vault, our system dynamically generates a different URL when redirecting to the 1Password item.

However, in the past (CLI v1) we relied on a few fields to identify an item and track the vault change (good enough to cover more than 99% of cases - not a bulletproof solution, but acceptable). All the fields we needed were returned by op list items right away (e.g. item name, ainfo (usually the login), URL, and creation time, which stayed unchanged when items were moved between vaults). In CLI v2, it's not possible to get this information without making a separate request for each item. Given that we have around 1000 items (and will have more), this takes time (and is a bad approach in general) and hits some rate limits (I got blocked for almost an hour).
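
For illustration, the CLI v2 flow looks roughly like this for us (a sketch only - it assumes jq is installed, and the JSON field names are illustrative):

    # one request to list all items (basic info only in v2)...
    ids=$(op item list --format=json --session my-session-key | jq -r '.[].id')

    # ...then one extra request per item to read the details we match on
    # (title, login, URL) - i.e. ~1000 additional "op item get" calls
    for id in $ids; do
        op item get "$id" --format=json --session my-session-key
    done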

So, I have a few questions.

  • Is there a better way to keep track of an item that is moved between vaults (or during vault restructuring)?
  • Are there any request rate limits documented?

1Password Version: Not Provided
Extension Version: Not Provided
OS Version: Not Provided

Comments

  • Interesting use case, thanks for sharing!

    Are the resources you're linking to in your internal system all sign-ins or do they also include resources completely unrelated to what 1Password is offering? Would Universal Sign On that we share in our vision at https://www.future.1password.com/ be helpful for you so that you can link not just username/password logins, but any type of login via 1Password?

    Just to make sure I understand correctly what you're currently using, are these the URLs you're using: https://<your company name>.1password.com/vaults/<vault id>/allitems/<item id> ?

    Is there a better way to keep track of an item that is moved between vaults (or during vault restructuring)?

    My first hunch would be to solve this by tracking the item's ID instead of the name, as the ID uniquely identifies the item independent of the vault the item is stored in.
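
    In other words, something like this (a sketch - assuming the ID really stays stable, an item can be fetched by its ID without knowing which vault it currently lives in; <item-id> is a placeholder):

        # resolve an item by its ID alone, without specifying a vault
        op item get <item-id> --format=json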

    However, I believe moving items in 1Password is currently actually a create + delete under the hood, so the item's ID will change when the item is moved. Is this why you have the workaround of storing a unique id in one of the item's fields?

    Are there any request rate limits documented?

    1Password API rate limits are unfortunately not documented at the moment.

    Is using --cache an option for you? This will hopefully prevent you from running into the rate limits, as the item get commands will fetch the details from the cache that was filled by the item list command.
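
    Roughly like this (a sketch - it assumes jq is available and that the item details really do end up in the cache during the list call):

        # fill the cache with a single list call, then let the per-item
        # reads be served from that cache instead of going back to the server
        op item list --cache --format=json --session my-session-key > items.json
        for id in $(jq -r '.[].id' items.json); do
            op item get "$id" --cache --format=json --session my-session-key
        done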

  • kovpack
    Community Member

    Are the resources you're linking to in your internal system all sign-ins or do they also include resources completely unrelated to what 1Password is offering?

    We have different internal resource types; one of them is the credential type, which is basically any type of 1Password item. The other internal resource types are not related. But anyway, the main problem is tracking an item when it is moved between vaults: a moved item means a broken URL, which makes internal company processes invalid and results in a bunch of other problems, like people not knowing what to ask for and our inability to keep processes up to date (we have hundreds of them), etc.

    My first hunch would be to solve this by tracking the item's ID instead of the name, as the ID uniquely identifies the item independent of the vault the item is stored in.

    This was our first thought as well, BUT if you move an item to a different vault, the item ID changes as well :) And this is the problem, so we had to implement a workaround :) I would expect such behavior from a copy operation, but not from a move.

    So, if we have an item 4y72y2yklphlpjerkcgn5t7zfi in a vault mrrakx26mchefjuxcfkmoepece, we'll have this URL
    https://company.1password.com/vaults/mrrakx26mchefjuxcfkmoepece/allitems/4y72y2yklphlpjerkcgn5t7zfi.

    But, if we MOVE an item to a different vault, e.g. vault xny7rflfrjqasdek35zo4dssra, we'll get a totally new URL,
    https://company.1password.com/vaults/xny7rflfrjqasdek35zo4dssra/allitems/something_totally_different.

    This is one of the biggest problems with 1Password for us, and unfortunately it causes us a lot of pain.

    However, I believe moving items in 1Password is currently actually a create + delete under the hood, so the item's ID will change when the item is moved. Is this why you have the workaround of storing a unique id in one of the item's fields?

    Yes, exactly. In a perfect world I'd not expect such behavior.
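
    For reference, the workaround is along these lines (a sketch only - the field label "resource_id", the value "res-0042", and the exact v2 field-assignment syntax are illustrative, and jq is assumed):

        # stamp the item with our own permanent id in a custom text field,
        # since the 1Password item id changes when the item is moved
        op item edit 4y72y2yklphlpjerkcgn5t7zfi 'resource_id[text]=res-0042' --session my-session-key

        # later, re-resolve the item by that field - which is exactly the
        # "1 item list + N item get" pattern that hits the rate limits
        for id in $(op item list --format=json --session my-session-key | jq -r '.[].id'); do
            value=$(op item get "$id" --format=json --session my-session-key \
                | jq -r '.fields[]? | select(.label == "resource_id") | .value')
            if [ "$value" = "res-0042" ]; then
                echo "item is now: $id"
                break
            fi
        done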

    Is using --cache an option for you?

    No, it does not help. It just caches the item list output, but we need more data for each item, as CLI v2 item list provides only basic info. CLI v1 list items additionally provides ainfo (usually the login), the URL, etc., which stayed unchanged when items were moved between vaults. So upgrading from CLI v1 to CLI v2 makes us execute 1001 commands (1 item list and 1000 item get) instead of a single list items command for our 1000 items (and we'll have more).

    1Password API rate limits are unfortunately not documented at the moment.

    Having it undocumented is fine as long as you share the rate limit here :) That way we don't have to spend hours/days experimenting, risking getting blocked for an hour each time we hit the limit.
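
    Without knowing the limit, about all we can do is pace the calls and retry with a backoff, roughly like this (a sketch - the attempt count and delays are arbitrary, and it obviously won't help with an hour-long block):

        # retry a failed op call a few times with exponential backoff,
        # so a single rate-limited request does not abort the whole sync run
        op_retry() {
            local delay=5
            for attempt in 1 2 3 4 5; do
                op "$@" && return 0
                sleep "$delay"
                delay=$((delay * 2))
            done
            return 1
        }

        # usage: op_retry item get <item-id> --format=json --session my-session-key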

  • kovpack
    Community Member

    Ooooook... I spent half an hour on an answer, published it, decided to edit the formatting, and saved the edited comment. And the comment disappeared :) Nice. I hope it was sent for moderation (though I can't see it anywhere).

  • Hi @kovpack,
    thank you for writing such a detailed response! This is really helpful for us!

    I can see your reply above. Are you able to see it now too?

    In a perfect world I'd not expect such a behavior.

    I'll start here, with the underlying issue (moves that change the ID). I'll begin discussing and coordinating this internally. If/when we change this, it will need to be changed across all the apps (luckily we have a shared core now for 1Password 8 😀).

    Even if we made that change across all the apps, old client versions would still do a create + delete under the hood, so you'd either need to make sure everyone updates if/when that update becomes available, or keep the workaround in place for items moved in older versions.

    For your work-around:

    • Does op item get have the data available that you require?
      • Which fields in op item get do you need that are not in op item list?
    • Do you run into rate limits with op item get after op item list when the cache is enabled? Although op item get displays more data than op item ls does, op item ls currently already fetches this data, so it should already be available in the cache.
    • Is your problem only with hitting rate limits when executing a command for each item, or are there other problems with running multiple commands?

    I'm inquiring internally about the rate limits. Unfortunately, I believe they are not static (i.e. what I share today may not hold later), and there is currently no way to fetch your current or remaining rate limit. There are some plans around improving the rate limits in general, and I have attached this conversation as an upvote on planning that work.

    Thanks again for writing in with such a detailed reply, this is really helpful for us!

  • kovpack
    Community Member

    Ok, I'm finally back and have some time.

    Do you run into rate limits with op item get after op item list when the cache is enabled? Although op item get displays more data than op item ls does, op item ls currently already fetches this data, so it should already be available in the cache.

    Frankly speaking, I have doubts that --cache works at all (or I'm using it the wrong way somehow, or it only works under some specific conditions).

    Things I did:

    • Run op item list --cache --session my-session-key.
    • Found a single item in the result data set with my-item-id.
    • Updated that item via the web client (updated the title).
    • Run op item get my-item-id --session my-session-key.
    • As a result, I got the updated item (not the one that should have been cached).

    So it seems like the cache does not work, or I'm using it the wrong way.

    And one more question: let's imagine the cache does work - is there a way to limit the cache time? Based on the documentation, it caches data for 24 hours (which is definitely not cool). Is there a way to set a specific duration to cache the data for, e.g. an hour or so? And if I run op item list --cache once more, will it update the cached data right away, even if the 24 hours have not passed yet?

  • Hey @kovpack

    I don't think we do a great job of explaining how the cache works, and we want to do a better job of documenting it moving forward, but for now I can take a shot at some of your questions.

    Before I start my explanation, I would just like to note that the Windows op client does not support any caching and instead silently fetches all objects directly from the server even when the --cache flag is specified.

    The cache works by creating an in-memory store that holds encrypted vault items. We spawn a subprocess that manages it. The inactivity timeout of this process and its in-memory cache can be set with the op daemon command's --timeout option, and defaults to 24 hours if unspecified.
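
    For example (a sketch - the exact duration format accepted by --timeout may differ, so check the command's help output):

        # start the caching daemon with a shorter inactivity timeout than
        # the 24 hour default (the duration value here is illustrative)
        op daemon --timeout 1h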

    We use a lazy-sync method to determine whether the cached item is still up-to-date. That is, every time an item is retrieved with the --cache flag enabled, we check whether the cached version is the same as what is stored on our servers. If the server version has been updated, we fetch the new item and store it in the cache. So regardless of whether you specify the --cache flag, you will get the most up-to-date version of the item.

    When the item in the cache is up-to-date, we can deliver it faster than if we had to fetch the whole item from the server.

    I hope this makes sense, and please do not hesitate to reach out if you have any follow-up questions!

  • kovpack
    Community Member

    Yeap, now it makes way more sense :)

    P.S. I use both macOS & Linux clients.

  • Glad this got sorted out. :D Don't hesitate to let us know if we can help with anything else!

This discussion has been closed.