Autofill now immediately submits (enters) (feature becomes bug)
I'm using the 1Password browser extension in Brave on a Mac.
On a login page, when you click into the username or password field, the extension pops up with a suggestion for the matching entry.
In the past, when you selected the entry, the extension would fill the appropriate fields and stop. The user would then click "log in" or "continue", etc.
With the latest update, when the extension fills out the fields, it AUTOMATICALLY hits "submit" or "enter" to log you in.
I get that the intent is to save you a click.
BUT the unintended consequence of this change is that I am now UNABLE to tick/untick any boxes or make any selections after the extension fills out the fields. For example, Xero asks for the username and then the password. The next screen asks for the OTP from 1Password, and it is only on that screen that the option to "trust this browser" appears.
But because 1Password is now filling out the username and password, clicking "log in", then on the next screen immediately filling out the OTP and clicking "next", I have no opportunity to click "trust this browser." Annoying.
And just now on Amazon, there was some option ticked, but after 1Password filled out the username/password/OTP I didn't have any opportunity to select or deselect it, and now it seems I have created a passkey for Amazon, which I didn't intend to do.
Very annoying that the extension didn't give me time to review anything before submitting the OTP.
Please either make auto-submit after filling out the fields a user-selectable option, or revert to NOT automatically submitting after filling out the fields.
1Password Version: 8.10.34
Extension Version: 2.25.1
OS Version: Sonoma 14.5
Browser: Brave
Comments
-
Hello @markesyd! 👋
I'm sorry that the new autosubmit feature is getting in the way. You can turn off autosubmit by following these steps:
- Open your browser.
- Right-click on the 1Password icon in your browser's toolbar and click Settings.
- Click Autosave & fill.
- Turn off "Sign in automatically after autofill".
Are you able to tell me the address of the websites where you ran into this issue? I'd like to pass along your feedback to the team.
-Dave
-
Dave,
This started for me today as well. It was happening on every website. The browser extension's "Sign in automatically after autofill" setting was already deselected. I had to toggle it on and off to fix this behavior.
Perhaps this was related to the fact that I have "Offer to fill and save passwords" turned off. I had to temporarily enable that to gain access to the autofill control to do the on/off double toggle.
Barry
-
I'll add a +1 to this. The browser plugin auto-updated with this feature, and suddenly my corporate SSO is failing every single login when this feature is enabled.
-
@Dave_1P thanks for the clear instructions to disable this feature. I would rather you had shown a pop-up asking about it, or notifying users that it can be disabled, instead of just implementing it and inadvertently causing issues for some users.
The main sites I had issues with were Xero and Amazon.
-
I'm sorry that you're unable to sign in using your corporate SSO. Would you be able to share a few more details about the issue:
- On which website are you running into this issue?
- Do you see a specific error message?
- What SSO provider is your organization using?
-Dave
-
Thank you for the feedback. The first time that you use the autosubmit feature, you should see a pop-up that looks like this, which includes a button to turn the feature off:
Regarding Xero, would you be able to collect the structure of the page that has the "trust this browser" option after you've disabled autosubmit?
- Open the website in question and proceed until you can see the "trust this browser" option that you're referring to.
- Right-click on the page and click "1Password - Password Manager" > Help > Collect Page Structure.
Attach the resulting JSON file to an email message addressed to support+forum@1password.com. With your email, please include:
- A link to this thread: https://1password.community/discussion/146375/autofill-now-immediately-submits-enters-feature-becomes-bug
- Your forum username: markesyd
- A link to the affected webpage.
- Please do not post the file here on the forum.
You should receive an automated reply from our BitBot assistant with a Support ID number. Please post that number here. Thanks very much!
Regarding Amazon, you wrote:
I didn't have any opportunity to select/deselect, and now it seems I have created a passkey for Amazon, which I didn't intend to do.
A passkey won't be saved to 1Password without you choosing to save the passkey through a prompt that looks like this:
Do you recall seeing this prompt and choosing to save a passkey for Amazon?
-Dave
-
- Issue occurs for every site with auth going through SSO.
- The error page is the one shown when there are issues redirecting the auth back to the application; it's customized to warn users to bookmark only the desired application URL and not the login URL.
- The SSO setup uses a combination of Cloudflare interception redirecting to a custom Shibboleth solution. I'm not super familiar with the exact setup details. I can send contact info for the responsible team privately.
-
Sure. We'd be happy to hear from you. Please email us at support+forum@1password.com. Be sure to use the email address tied to the account in question.
-
Hi, I'd just like to report pretty much the same experience, with the added bonus that because the submission is happening on pages that require a captcha (especially hCaptcha and reCAPTCHA v2), it's creating a doomloop of captcha problems. I'm already, for whatever reason, bad at solving captchas, and this is making logins a complete nightmare. Once you get into the captcha doomloop, at least on the same device, even the workaround of completing the captcha first and then filling in the details doesn't stop the massive number of captchas I now have to fill. I don't know how long this persists, but even a typical Google search requires (as I counted) up to 9 separate captchas before it goes through. To avoid all that, I've temporarily switched browsers and am using an SSH tunnel to a completely different IP, which, because it's an IP on a server I'm renting for something else, still tends to trigger captchas, but at a far lower rate than my normal home connection, ironically.
I've isolated it to the autosubmission because I managed to get banned from a site that has such a login feature, and when I emailed to ask, the response was that I had a lot of failed logins sent without a captcha response. I've been unbanned thanks to the email exchange, but this is not really a sustainable solution for every site that has a setup like this. From the server side it would look a lot like a clumsy credential stuffing attack, and not every site has responsive operators - some would straight up ban the IP and/or account for a time period. On eCommerce sites that operate on a "drops"-like queue system to prevent bots, it would more or less prevent me from ordering the item, because this effectively mimics some poorly programmed browser automation behavior. I realize that front-end designers have a variety of login flows and it'd be difficult to test how the feature would interact with anything that isn't a straight-up template. As someone who works exclusively on the backend when it comes to projects involving any sort of web application, I've long attempted to get the message across - in vain - that for the most part these design choices don't really stop bots and attacks of that nature, because anything client-side can eventually be reverse-engineered, and anything a human can do can be emulated, with a first-mover advantage to boot. I don't know whether to laugh or cry that, as it turns out, the login flows do stop a certain type of accessor - just not a bot, but me. Now I have to seriously consider offloading captcha solving to one of the myriad services that outsource the manual work to someone in the global south for pennies, until my IP is no longer considered malicious by the black-box algorithms that determine the trustworthiness of my home connection.
I don't know if I'm an edge case or not, since it's hard to judge the ratio of complaints versus people actually affected, but it appears that at least some illustrative warning needs to be shown before users opt in to the feature. I'm fortunate that I can triage and diagnose the problem on my end fairly quickly, that browsing actual websites is not a significant part of my work, and that I can mitigate if not eliminate a lot of the pain points. But my first reaction when weird things start to happen is to pop an SSH tunnel and start eliminating causes by spinning up VMs; if it's my mom who starts to experience all this (she luckily is in China for now, where logins are kind of worthless because private property ultimately doesn't exist and so everything is tied to QR-code-based login systems, vulnerable for a whole other reason), I'm pretty sure I'll end up losing 2-3 hours of productivity trying to triage the problem without any vaguely jargon-like language, and possibly the ten years of pestering her to use a password manager might just go out the window. That would really be the worst result, no?
-
I'm sorry for the experience you had. You/she should see something like this the first time you open the extension. Is this happening with a specific site? If so, I can submit a report. I have shared your message here with the team verbatim.
-
@ag_tommy No worries about me; I turned the feature off, and over the last few days the number of captchas that I've had to fill has been decreasing pretty steadily. Google is a black box, so I couldn't confirm anything until I got the email that straight up said that failed logins were having an effect, but it was hard to miss the correlation after a few days of the feature going live. reCAPTCHA v2 is still prevalent, and the invisible v3 getting at least some adoption helps, but oddly Google's flagship product - search - still uses v2, which amplified the effect. However, as long as a site didn't happen to expire my session - and didn't offer a native mobile app that incorporated Face ID or biometrics as a login - I was fine. I saw the pop-up box but wasn't really sure what behavior to expect, and I guess I also assumed that it would wait for any captcha to be filled, since the captcha response would be part of the POST request for the login, even if the process really involved an additional request to obtain the token. I suppose I had simply assumed that by "credentials" it meant the entire request for the login flow, since logically, even if it happens under the hood, any anti-bot/credential stuffing mechanism, however ineffective it is in practice, would be part of the credentials in a sense for the login. It tends to be a Chekhov's gun for websites: if it's coded into the page, the login is likely going to use it, even if it makes little sense to do so.
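To make that assumption concrete, here's a minimal sketch of the kind of login POST I had in mind. The "g-recaptcha-response" field name follows the common reCAPTCHA v2 convention; the endpoint, function, and type names are purely illustrative, not any specific site's (or 1Password's) actual code. If auto-submit fires before the widget populates that token, the server sees an empty response and records a failed login attempt.

```typescript
// Illustrative only: a login request where the CAPTCHA token travels
// alongside the credentials. Names and URL are hypothetical.
interface LoginPayload {
  username: string;
  password: string;
  captchaToken: string; // populated only after the user solves the challenge
}

async function submitLogin(payload: LoginPayload): Promise<Response> {
  // If the form is auto-submitted before the widget writes the token,
  // captchaToken is "" and the server logs a failed attempt - repeated,
  // that starts to look like credential stuffing.
  return fetch("https://example.com/login", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      username: payload.username,
      password: payload.password,
      "g-recaptcha-response": payload.captchaToken,
    }),
  });
}
```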
Actually, as a side note, the fact that one small change can cause all sorts of inconveniences, mostly from systems designed to provide "security" in broad terms, shows in stark terms how poor the heuristic methods being used really are and how much they rely on assumptions. I'm sure there are assistive technologies for the disabled that are just as able to trip up technology that can be quite costly to implement, yet in the end is both easily circumvented and easily triggers false positives by the ton.
My background is a legal one, and out of everyone in my relatively small law school graduating class, I'm the only person I know who had a tech-centric skillset and made a conscious choice not to go into IP/in-house practice (mostly because I had moral objections to equating intellectual and real property rights; I went into criminal and public interest law instead, as they affect the liberty interests of my clients). It's pretty jarring how big a gulf exists between the two worlds and how much reliance on tech illiteracy versus legal illiteracy is involved in practice - as in, attribution made either in bad faith or in ignorance as to whether an IP address can be directly linked to a specific individual and their actions and intent at a specific time, or a blockchain address to a single person, backed by "expert" testimony that was clearly vetted by attorneys either too cynical or too lacking in knowledge of the technology to properly cross-examine. Even as a law student on spring break, I found myself essentially picking a jury for a first-degree murder trial by simply looking at public Facebook profiles and picking out those who claimed no relationship to law enforcement when their social networks stated otherwise; when it resulted in a hung jury, the prosecutor simply scheduled a retrial on the same evidence during the two weeks of my finals, and got a guilty verdict.
I'm no longer actively practicing - fortuitously, some stupid internet joke somehow enabled me to retire before most of my classmates have paid off their loans - but if nothing else it has only given me more time to look at how flimsy assumptions are treated as ironclad truths that set the groundwork for precedent, ranging from a series of cases where the court failed to note that jurisdiction was established based on geolocation of Cloudflare IPs (so of course they appear to be purposefully availed of a US location; the word 'Anycast' never appears in the docket), to the FBI taking a 2-year delay on blockchain analysis based on one objection, to obscure the fact that attribution data is effectively entirely crowdsourced, ending up with both sides arguing the wrong issue. Bots can't be charged with a crime, of course, as code is not a real person, but OFAC absolutely thought that a deployed smart contract was a person, and it took 5 months for them to figure out a way to explain how they appeared to sanction a smart contract that is literally two metaphors and exists in sync around the world without an operator. I'm curious whether this case, where a small, seemingly insignificant alteration can create a self-perpetuating misidentification loop, ever becomes an example of how far from "beyond a reasonable doubt" some of the evidence admitted can really be if understood correctly.
Since the admission of evidence happens in front of a judge and not a jury, it's very much down to whether the judge has a proper understanding of what tech is and isn't able to ascertain with near certainty. Bad forensic science has certainly led to innocent people being executed (Cameron Todd Willingham is perhaps the best-known example), but even with lower stakes, illustrative examples help far more than mere technical - or technical-sounding - explainers. Perhaps something helpful can come out of this at some point, since snake oil is so often treated as the genuine cure in cases of great consequence at trial whenever anything vaguely technical is part of the evidence. Here's to hoping, although I can't say I'm holding my breath. The best trial attorney I worked for had me print out his emails for reading as recently as 2014.
Cheers
-
@Dave_1P
I have sent the email as requested.
The support ticket ID from the bot is: NPZ-45529-153
I don't recall seeing the popup verifying the addition of the passkey for Amazon, but to be honest, it all happened so quickly I couldn't read everything as it refreshed. But when I check the entry for Amazon in 1Password, it shows a passkey created 26.Jun.24, which I did not specifically authorise.
Also, contrary to your comment and @ag_tommy's post, I did not receive a notice that the feature had been added or implemented. I just noticed that suddenly 1Password was autosubmitting. I didn't specifically enable it; it just happened. Potentially with the update on 11.Jun for macOS?? It seems others didn't receive the popup either...
-
Thanks for sharing the Support ID! I've located the ticket and can see that a member of the Support team has replied. Let's continue the conversation over there in order to prevent having the same conversation in two different places. 🙂
-David
ref: NPZ-45529-153
-
I just came to the forums to see if I was the only one taken by surprise by this. I'm quite sure I never got that notification popup about the new feature; the first few times it auto-submitted I thought maybe I was imagining things. But tonight, after I got stuck in a loop on a site with a CAPTCHA, I had to go looking for what I guessed must have been a new setting. (I found it, and I disabled it.)
I will leave it off, and that's fine. But even better would be the ability to enable or disable it on a site-by-site basis. Personally, I'd be likely to leave it off in the main setting because it trips up on too many sites. But it would be handy to enable it for some sites that I use a lot and that I know won't have a problem with it.
-
Thanks for sharing your experience, and I'm sorry that the auto-submit feature has caused some trouble when logging in on certain websites. Our development team is looking into improving how the auto-submit feature works with pages using CAPTCHA, so I've shared your experience with them.
I've also shared your request for the ability to enable or disable the feature on a site-by-site basis with our Product team, so that they can take it into consideration.
Let me know if there's anything else I can help with!
-David
ref: dev/core/core#29636
ref: PB-408254490