'Always allow' for external protocol / scheme handling should be a per-origin permission
Categories
(Firefox :: Security, enhancement, P2)
Tracking
Release | Tracking | Status
---|---|---
firefox84 | --- | verified
People
(Reporter: jonathan.leitschuh, Assigned: emz)
Details
(Keywords: reporter-external, sec-want, Whiteboard: [reporter-external] [client-bounty-form] [verif?][adv-main84-])
Attachments
(7 files)
This is following up on the discussion about the zoommtg URL handler that Zoom has registered with Chrome, Firefox & Safari.
Unfortunately, the 'always allow' checkbox lets users put themselves into a situation where, to this day, they can be dropped into Zoom meetings without their permission.
Just want to give a heads-up that many people are still exposed to this attack vector.
This is primarily because many users, when prompted whether they want Zoom to handle the URL protocol, have checked the box and said "always allow". I'm still seeing people regularly drop into my demo call with shocked looks on their faces because this attack still works.
Is it Mozilla's opinion that this is still a Zoom vulnerability, or is it a browser-vendor vulnerability?
Related: https://bugzilla.mozilla.org/show_bug.cgi?id=1545777
Discussion from the Chromium side of the world: https://bugs.chromium.org/p/chromium/issues/detail?id=982341
Comment 1•6 years ago
This is being publicly discussed, so keeping the bug hidden isn't going to help anyone.
Requiring user interaction and/or keeping the permissions per-origin (and, as you noted in the Chrome issue, being a bit careful about what happens if the page that opens the external protocol link isn't the top-level page) seem like reasonable mitigations here. Johann, who makes calls around this area?
Comment 2•6 years ago
Johann, who makes calls around this area?
We can probably make this call; maybe I'm missing something, but I'm not aware of anyone specific who would need to be consulted here. CC'ing Jimm in case he has an opinion or knows someone who does.
Personally, I'm a bit conflicted about this. "Don't ask me again" is a common pattern for all sorts of dialogs (also security- and privacy-related ones) and I'm sure there are people who find this useful. It's different from the original Zoom issue, where the user never consented to launching an application like that. As noted in the Chrome issue, there are some protocols like mailto where removing this would probably lead to user frustration.
The exploit you're describing hinges on a lot of preconditions by now (the user has the software installed, has seen the dialog before AND checked the checkbox, and the application has an exploitable feature without any security precautions whatsoever). From telemetry on other dialogs we know that users don't typically check checkboxes; they tend to ignore them. So I think the number of users who check this against their own interest is quite low.
I'm also slightly afraid that if we continue to make protocol handlers less usable, then more applications will start to do something like a localhost server.
Requiring user interaction seems to make no sense to me (i.e. we might as well remove the checkbox then), because the only point of the checkbox is to save an extra click.
But I think I would be okay with double-keying this on the origin of the page, like a permission for protocol handlers (again, maybe exempting a group of protocols like mailto). So pretty much what was suggested by the Chrome team. It might be worth somewhat aligning our behaviors.
Comment 3•6 years ago
(In reply to Johann Hofmann [:johannh] from comment #2)
But I think I would be okay with double-keying this on the origin of the page, like a permission for protocol handler, again, maybe if we exempt a group of protocols like mailto. So pretty much what was suggested by the Chrome team. It might be worth somewhat aligning our behaviors.
I think per-origin is a good step to make these attacks less effective. Note that tel: is a popular protocol handler on mobile, which might need whitelisting.
Comment 4•6 years ago
Origin-scoped seems really good and something we should do, but we'll need a safelist for mailto and tel and such, as Freddy notes.
Another thing I'd like to change for these non-fetch schemes is to only support them for top-level navigations. I.e., data:text/html,<iframe src=zoommtg:test> should result in a network error.
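For illustration only, here is a minimal sketch of that restriction; the function name and the scheme list are assumptions made for this sketch, not Gecko code:

```ts
// Sketch only: decide whether an external-protocol load should be blocked.
// FETCH_SCHEMES is a simplification; real browsers track "fetch schemes" differently.
const FETCH_SCHEMES = new Set(["http", "https", "data", "blob", "file", "about"]);

function shouldBlockExternalProtocolLoad(scheme: string, isTopLevelNavigation: boolean): boolean {
  if (FETCH_SCHEMES.has(scheme)) {
    return false; // ordinary web content, handled by the network stack as usual
  }
  // Non-fetch schemes (e.g. zoommtg:) would only be honored for top-level
  // navigations, so <iframe src="zoommtg:test"> becomes a network error.
  return !isTopLevelNavigation;
}

// Example: blocked in a subframe, allowed (i.e. prompted as usual) at top level.
console.log(shouldBlockExternalProtocolLoad("zoommtg", false)); // true
console.log(shouldBlockExternalProtocolLoad("zoommtg", true));  // false
```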
(Is there a bug for more restrictions on localhost in case applications do continue to go there?)
(Aside: Jonathan, I really appreciate your work on highlighting this issue and follow-up with everyone on preventing further abuse. Really solid work.)
Reporter
Comment 5•6 years ago
Thanks, Anne.
It really is my goal to protect users from being unknowingly exploited.
The only other thing you may want to consider is whether another site opened the tab for you, vs. the user opening the tab via a click from a different location.
If a site you're on launches another tab for you from JavaScript, this should trigger the "are you sure you want to have Zoom handle this" dialog.
Updated•6 years ago
Comment 6•6 years ago
(In reply to Johann Hofmann [:johannh] from comment #2)
But I think I would be okay with double-keying this on the origin of the page, like a permission for protocol handler, again, maybe if we exempt a group of protocols like mailto.
Having thought about this a while, I just wanted to clarify... this wouldn't really help the zoommtg case, right? The "whole point" of their setup is that they give you an https URI in calendar items / emails, so by definition that gets opened by either an external piece of software like a mail client or calendar app, or a non-same-origin page that is not <whatever>.zoom.us. The zoom.us page then trips the zoommtg URL without further user interaction, and it'd be that site that has the permission - but if the attacker's goal is to get you to automatically join a meeting, they can just redirect you from their site to the <whatever>.zoom.us page with the magic number for the room/call that Zoom is happy to give you. Keying the permission onto the zoom.us domain wouldn't protect users against that type of attack - or am I missing something here?
Comment 7•6 years ago
No, you're right, but
- It requires a top-level navigation, which forces the "attacker" to give up control and trust that the user has checked that checkbox (percentage-wise a rather unlikely case), unlike in a drive-by case where the attacker could keep the victim on their site.
- It enables websites to build security mechanisms to prevent such attacks. Right now, there's no point for Zoom in trying to prevent malicious first-party loads since anyone could spawn the protocol handler anyway.
Just my thoughts on it. I still personally think removing the checkbox goes a bit too far.
Reporter
Comment 8•6 years ago
It requires a top-level navigation, which forces the "attacker" to give up control and trust that the user has checked that checkbox (percentage-wise a rather unlikely case)
The number of Windows users joining my test call who had inadvertently opted in to being vulnerable is high enough to make this a valid attack vector worth exploiting.
Keying the permission onto the zoom.us domain wouldn't protect users against that type of attack - or am I missing something here?
I think I agree; that's why I argue that you need to differentiate between user interaction and automation-driven interaction when opening the zoom.us link.
Comment 9•6 years ago
(In reply to Jonathan Leitschuh from comment #8)
I think I agree; that's why I argue that you need to differentiate between user interaction and automation-driven interaction when opening the zoom.us link.
My point is that there's no way to do that. User interaction in their calendar app which opens zoom.us which opens zoommtg is indistinguishable from an "attack" where a user clicks a malicious link that then redirects to zoom.us to do the same.
Reporter
Comment 10•6 years ago
I don't really have a solution on that one unfortunately.
I do recommend that both the Chrome and Firefox teams attempt to unify on a solution here for a common user experience across browsers.
https://bugs.chromium.org/p/chromium/issues/detail?id=951540#c25
This Chromium issue spawned this discussion:
https://bugs.chromium.org/p/chromium/issues/detail?id=982341
Comment 11•6 years ago
(In reply to Jonathan Leitschuh from comment #10)
This Chromium issue spawned this discussion:
https://bugs.chromium.org/p/chromium/issues/detail?id=982341
To be clear, you can remove the "permanent allow" permission for a protocol handler from Firefox's options/preferences: search for "Applications" and then select "Always ask" for the protocol that you want it to prompt about. We also point this out when you check the "always allow" checkbox. It sounds like Chrome doesn't have a UI for this at the moment.
(Note: adding per-origin restrictions would likely change how this would work and would probably move such permissions to the identity popup that shows up when you click the (i)/lock icons and where you can currently control other permissions you've given a particular origin.)
I think the fundamental problem here is around the combination of intent and potential for harm:
- We can't know whether the user intended to let the external protocol handler run when it gets navigated to, or if this is a malicious attempt to have it run
- For any given protocol handler (which of course is ultimately just a mostly-arbitrary string) we can't know what the harm of opening it is. Opening mailto: and tel: is considered fine - but could in fact be harmful if they actually affected the outside world immediately. It's fine because they will always stop short of actually sending email or placing a call. For some protocols (IIRC magnet: used to be one of them) different apps do different things.
Restricting the permission per-origin might help some consumers (ie ones that don't expose the equivalent of an open redirect into "their" protocol handler the way zoom does) so it might still be worth doing. But the ultimate fix here would be for zoom to do what mail/phone apps do, and have another click through inside the app before connecting.
Reporter
Comment 12•6 years ago
But the ultimate fix here would be for zoom to do what mail/phone apps do, and have another click through inside the app before connecting.
They do now have this:
https://twitter.com/JLLeitschuh/status/1150857562117562370
That being said, is our goal here to prevent the next over-ambitious application, chasing product-management requirements around "Zero Click", from sacrificing users' security?
There may be a more fundamental question you seem to be asking: is this truly a vulnerability in browsers, or in the applications that register these handlers? For a while it seemed to be Zoom's opinion that this was a browser vulnerability, not theirs. They added this "staging" area as a way to protect users from this vulnerability that browsers have. This also seems to be the position of Apple's Safari team: they require users to acknowledge URL handlers every time before they are triggered in the external application.
Fundamentally, browsers are designed to be a sandbox against the truly horrifying amount of untrusted software that every single webpage serves up. Anything that allows any webpage to automatically jump that boundary out of the sandbox into another application should be considered a serious security risk. Just like the localhost webserver.
I don't know how closely you were all following that news story, but we later discovered that that localhost webserver held a very easy-to-exploit RCE vulnerability. The only prerequisite for RCE was that the user had uninstalled the application after a meeting. The potential impact surface of that vulnerability included the following 14 applications:
Zoom
RingCentral
Telus Meetings
BT Cloud Phone Meetings
Office Suite HD Meeting
AT&T Video Meetings
BizConf
Huihui
UMeeting
Zhumu
Zoom CN
EarthLink Meeting Room
Video Conferencia Telmex
Accession Meeting
Fundamentally, anything that is able to leap that sandbox boundary and interact with the user's local machine should require user interaction to confirm that interaction is intentional.
Comment 13•6 years ago
Fundamentally, anything that is able to leap that sandbox boundary and interact with the user's local machine should require user interaction to confirm that interaction is intentional.
It does, in Firefox, and I think we're already at a pretty good level of protection with that. The user has the option of disabling this confirmation for a single protocol, and arguably this should be per-origin.
Since "has the user triggered this link interaction", as Gijs mentioned, is not a reliable heuristic, the only option we're discussing that would comply with what you're asking is removing the checkbox.
I don't think we should remove the checkbox, because by limiting the protocol handler, a legitimate way of opening native apps, we would be increasing pressure on vendors to find hacky and vulnerable alternatives. Zoom has said quite specifically that they did this because of the Safari prompt that would pop up every time. We should incentivize vendors to build good, secure solutions with a good user experience, and per-origin protocol-handler permissions do that. It essentially turns this from a browser bug into a CSRF issue, i.e. in this scenario checking the Referer header would go a long way toward protecting Zoom from the hypothetical redirect attack.
There are a lot of bad things outside of our control that can happen when security or privacy related features are triggered by a GET request. Zoom wouldn't rationally expose changing your password without CSRF protection, and it is their decision if they want to do this for joining meetings. (Also, again, the user is asked by default).
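As a purely illustrative aside (not Zoom's actual code; the route, origin, and redirect target are made up), a site-side Referer check of the kind described above could look roughly like this:

```ts
import express from "express";

const app = express();

// Hypothetical first-party origins that are allowed to auto-launch the native app.
const TRUSTED_REFERRERS = ["https://meet.example.com"];

// Hypothetical "join meeting" route that normally redirects into a custom
// protocol handler. With a per-origin browser permission in place, the
// remaining redirect attack becomes ordinary CSRF hygiene for the site.
app.get("/j/:meetingId", (req, res) => {
  const referer = req.get("Referer") ?? "";
  const isFirstParty = TRUSTED_REFERRERS.some((origin) => referer.startsWith(origin));

  if (!isFirstParty) {
    // Cross-site entry: show an interstitial with an explicit "Join" button
    // instead of launching the native application automatically.
    res.status(200).send("Do you want to join this meeting? [Join]");
    return;
  }

  // First-party navigation: proceed to the protocol-handler redirect.
  res.redirect(`examplemtg://join?confno=${encodeURIComponent(req.params.meetingId)}`);
});

app.listen(3000);
```

The point is only that, once the browser keys the permission to the first-party origin, the site itself can decide whether a given entry point deserves the one-click path.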
Comment 14•6 years ago
There might be an alternative here, I think, where we consider redirects different from direct navigations to a whatever: URL that the user initiated.
Comment 15•6 years ago
(In reply to Anne (:annevk) from comment #14)
There might be an alternative here, I think, where we consider redirects different from direct navigations to a whatever: URL that the user initiated.
Gijs already commented on this. We consider implementing a heuristic that figures out whether a user initiated a navigation a very hard problem, and even then it's not an actual security measure; user interaction is easy to get.
Comment 16•6 years ago
I meant that on top of a top-level origin restriction on "remember this" to avoid needlessly exposing sites to CSRF.
Comment 17•6 years ago
It sounds like we all agree that we want to make the permission to avoid the prompt be per-origin. AFAIK to do that we'd need to:
- stop toggling the internal bool that is reflected by choosing "always ask" when changing the default action for a protocol in about:preferences (that is, you might still be able to continue to set a default from the prefs, but you'll be asked for confirmation anyway. We can keep the hidden about:config pref for idio... people who know what they're doing. And mailto/tel and any other protocols that are safelisted in that way today.)
- create per-site permission setup with the default being the hidden pref in question
- only show the checkbox for top-level navigations (potentially, with user interaction***), and have it write to the per-site permission setup
- update whatever code currently trips the prompt to check the per-site permission setup the usual way, instead of just checking the hidden pref (rough sketch below).
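As a very rough sketch of those last two bullets (illustrative names only; the permission key format and helper names are assumptions, not the actual Gecko API):

```ts
// Illustrative types; the real logic would live behind Gecko's permission manager.
type PermissionState = "allow" | "ask";

interface PermissionStore {
  get(origin: string, key: string): PermissionState | undefined;
}

// Protocols that keep today's behavior regardless of per-site permissions.
const SAFELISTED_SCHEMES = new Set(["mailto", "tel"]);

// Decide whether the external-application prompt can be skipped for this load,
// double-keyed on the top-level origin and the scheme instead of one global bool.
function maySkipProtocolPrompt(
  store: PermissionStore,
  topLevelOrigin: string,
  scheme: string,
  globalAlwaysAsk: boolean,
): boolean {
  if (SAFELISTED_SCHEMES.has(scheme)) {
    return !globalAlwaysAsk; // safelisted schemes still honor the old hidden pref
  }
  // Hypothetical per-site permission key, e.g. "open-protocol-handler^zoommtg".
  return store.get(topLevelOrigin, `open-protocol-handler^${scheme}`) === "allow";
}
```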
I think this is Johann's bailiwick for site permissions, so moving there and clearing prio.
The only problem with this plan is that it regresses to the current state. That is, a malicious actor who wants to abuse the current state can redirect directly to zoommtg URIs and if the user has configured those URLs to always open without asking (globally, in the current state), it will open immediately. After fixing this bug by making the permissions per-origin, and with a user setting the permission for whatever zoom's meeting redirect domain is, a malicious actor could just include a link to such a URL and incentivize the user to navigate, and the same thing would happen.
We should still do this, because it aligns (AIUI) with Chrome's plans, does make things slightly harder on the attacker in this scenario, and additionally has a positive impact on security for other protocols and their scenarios (e.g. if there's a vulnerability in a particular protocol implementation, the user granting permission to example.com will not allow exploitation by other sites unless there are open redirects on example.com). But I'm not sure how high a priority it should be given it only has a minor impact on the original problem here. Johann can decide...
*** I think this doesn't work. For one, because that's not currently what Zoom and others do: they navigate to a public http(s) URL and then (without further interaction) navigate to their custom protocol, so it'd never work for their use case. As Johann eloquently explained in comment #13, we then end up with perverse incentives, i.e. the vendor will claim that the permission "doesn't work" and so, to reduce perceived user-interaction issues, they'll use something else and inadvertently get worse security. Perhaps I'm misunderstanding Anne's proposal - more details about specifically what you think the restrictions should be might help.
Comment 18•6 years ago
I think you're saying that https://example/ wants a setup such that navigating to https://example/someID can load a document that navigates to example:someID without user interaction. To me it seems that would open up https://example/ to CSRF. This is also possible if https://example/ uses WebRTC. It's also possible if https://example/ has some kind of endpoint on the user's device it can get to (localhost, push).
I don't know to what extent we can mitigate those attacks, but we should think carefully about it as anything that results in unintended camera/microphone access is highly problematic and thus far applications have not always balanced this well.
One thing that comes to mind is that this only succeeds in foreground tabs, but ideally we confirm the user is there due to a prior interaction. Always prompting as Safari does might be another logical result here. (None of these help with all of the above scenarios unfortunately.)
Comment 19•6 years ago
(In reply to Anne (:annevk) from comment #18)
I think you're saying that https://example/ wants a setup such that navigating to https://example/someID can load a document that navigates to example:someID without user interaction. To me it seems that would open up https://example/ to CSRF.
I think that's correct. From https://example/ 's perspective, they want one-click, deeplink access from anywhere to their app. They could mitigate the CSRF with a CSRF token on their side, of course, if they know that they can time-limit the URLs, but that doesn't seem to be possible for Zoom and some of the other apps here (e.g. links for recurring meetings that get stored somewhere outside of the client's control).
This is also possible if https://example/ uses WebRTC. It's also possible if https://example/ has some kind of endpoint on the user's device it can get to (localhost, push).
Right. I think this is Johann's point here - we're going to be playing whack-a-mole if we introduce hurdles on the external protocol path (like requiring in-browser user interaction on the domain that has permission). If apps then end up switching to using custom localhost servers, we're usually worse off in that they expose a greater attack surface than a protocol handler (as was the case for Zoom). So there's a (perhaps perverse) incentive for us not to restrict the protocol handler case unduly, because it'll make the practical problems faced by consumers worse. :-(
I don't have a good answer to this conundrum. I don't think we can meaningfully distinguish unintentional ("malicious") cases from intentional ones. The user clicking a link in their calendar for a meeting they've attended the past 40 weeks and are happy to attend again this week looks the same to the browser as a click from a phishing email in their email client. Even for in-browser use, unless we want to penalize people using web-based calendars/e-mail clients (which, incidentally, I suppose are likely to insert their own "safelink" redirects for links clicked from emails/calendar invites, tripping the redirect checks suggested in comment #14), I don't think we have a lot of options that address the original issues, short of always prompting.
Even besides pushing people to use other mechanisms to contact local apps, if we wanted to always prompt, people using e.g. the iTunes web store or torrent websites or ... are going to be miffed, I imagine. Perhaps evangelism about requiring confirmation on the other end of the protocol handler before taking any actual actions (and/or ensuring there's no possibility of CSRF like there is with Zoom) is the best remedy...
Reporter
Comment 20•6 years ago
I worked with Stevoisiak to encourage him to send you guys bug 1573736. Glad he finally did.
One thing I do want to note is that Safari requires the user to approve every single use of the URL protocol handler, so there is a precedent already set there. It wouldn't be unreasonable to follow in their footsteps, although it's pretty clear that's not what most people are considering here.
Comment 21•6 years ago
The priority flag is not set for this bug.
:johannh, could you have a look please?
For more information, please visit auto_nag documentation.
Updated•6 years ago
Comment 22•5 years ago
Unfortunately this enhancement does not meet the criteria for our bug bounty program.
Comment 23•5 years ago
The Edge team has documented our recent change here: https://textslashplain.com/2020/02/20/bypassing-appprotocol-prompts/
The TL;DR is that we've changed the checkbox to "Always allow <protocol> from <origin>", and we plan to offer this change upstream to Chromium.
Updated•5 years ago
Comment 25•5 years ago
I just spent 30 minutes trying to find this bug, making the summary more suitable for that...
Reporter
Comment 26•5 years ago
(In reply to Jonathan Leitschuh from comment #20)
One thing I do want to note is that Safari requires the user to approve every single use of the URL protocol handler, so there is a precedent already set there. It wouldn't be unreasonable to follow in their footsteps, although it's pretty clear that's not what most people are considering here.
To follow up on this one, Chrome now also requires users to always approve URL protocol handlers. There is no 'remember' functionality anymore in Chrome. As I stated above, this was also the choice of the Apple Safari team.
Does Firefox really want to be the one browser that is different in this regard?
Comment 27•5 years ago
(In reply to Jonathan Leitschuh from comment #26)
To follow up on this one, Chrome now also requires users to always approve URL protocol handlers.
For context, https://textslashplain.com/2019/08/29/web-to-app-communication-app-protocols/ .
There is no 'remember' functionality anymore in chrome.
My understanding is that this is incorrect. The functionality is still there; it's available via policy, see e.g. the reply from Eric Lawrence and others at https://support.google.com/chrome/thread/14322141?hl=en, and comment #23 regarding making this permission per-origin (but still allowing the user to "remember" the decision for the domain). I don't know if the Edge change has been upstreamed into Chromium at this point.
Does Firefox really want to be the one browser that is different in this regard?
I already summarized the consensus in comment #17, which is that we want to make the permission per-origin. The only question is of time + expertise to actually implement this change, which is not trivial.
Comment 28•5 years ago
Yes, the Edge change has been upstreamed into Chromium and will ship in Chrome 84. The checkbox will be shown to exempt origin+scheme tuples for any secure context (HTTPS, localhost).
Assignee
Updated•5 years ago
Assignee
Comment 29•5 years ago
Assignee
Comment 30•5 years ago
Depends on D92946
Assignee
Comment 31•5 years ago
Depends on D92947
Assignee
Comment 32•5 years ago
Depends on D92945
Updated•5 years ago
Assignee
Comment 33•5 years ago
The dialog / permission UI follows the current UX draft. It's not final yet; happy to make adjustments in case UX changes.
Comment 34•5 years ago
Comment on attachment 9180445 [details]
Bug 1565574 - Added disabled field to SitePermissions gPermissionObject. r=gijs
Revision D92946 was moved to bug 1667781. Setting attachment 9180445 [details] to obsolete.
Assignee
Comment 35•5 years ago
Updated•5 years ago
Assignee
Comment 36•5 years ago
- Added a new permission dialog shown when the caller does not have permission to open a protocol
- Updated the appChooser dialog for the new UX
- Updated and moved l10n strings to fluent (fluent migration in the following patch)
Depends on D92945
Assignee
Comment 37•5 years ago
Depends on D94149
Updated•5 years ago
Updated•5 years ago
Comment 38•5 years ago
Comment 39•5 years ago
Backed out 6 changesets (bug 1565574) as per dev's request. CLOSED TREE
This lint failed; you might want to take a look at it:
https://treeherder.mozilla.org/logviewer.html#/jobs?job_id=320109952&repo=autoland&lineNumber=537
Push that was backed out:
https://treeherder.mozilla.org/#/jobs?repo=autoland&group_state=expanded&revision=fbe972f837d9f28a2e4260414eb893a67cf31a69
Backout:
https://hg.mozilla.org/integration/autoland/rev/3280f9dfc1cb1466e2db85f19f6b17ccb0f27c4d
Comment 41•5 years ago
Comment 42•5 years ago
Comment 43•5 years ago
bugherder
https://hg.mozilla.org/mozilla-central/rev/2cd180fc8bd3
https://hg.mozilla.org/mozilla-central/rev/dc317f2b785e
https://hg.mozilla.org/mozilla-central/rev/602e2626fa31
https://hg.mozilla.org/mozilla-central/rev/9f1a95e62342
https://hg.mozilla.org/mozilla-central/rev/9dc529fe7e3c
https://hg.mozilla.org/mozilla-central/rev/dffe38f56787
https://hg.mozilla.org/mozilla-central/rev/1a25820134c9
Updated•4 years ago
Updated•4 years ago
Comment 44•4 years ago
We did exploratory testing of the new implementation of the "Always allow" option for external protocols on a few services (Zoom, Steam, League of Legends, Microsoft Teams, Roblox) across platforms (Windows 10, macOS 11 and Ubuntu 18.04) and can confirm that the option works as expected. Playing with the actions from Applications (about:preferences) also does not reveal any issue, in either RTL or LTR.
Are the allowed domains stored somewhere (same as geolocation, maybe inside Permissions in about:preferences#privacy)? I was curious to test having them "Blocked" inside the site identity panel.
Updated•4 years ago
Assignee
Comment 45•4 years ago
Thanks!
We currently only show this permission in the permission list of the site identity popup. Adding it to about:preferences#privacy is Bug 1678994. So currently the user can't explicitly set this permission to blocked.
Assignee
Updated•4 years ago
Comment 46•4 years ago
Thanks, I will follow that bug and see when this is available. Feel free to request QE verification (by adding + to the qe-verify bug flag) if needed in that bug when the work is done.
Updated•11 months ago