Closed Bug 686201 (setImmediate) Opened 13 years ago Closed 5 years ago

implementation: setImmediate API

Categories

(Core :: DOM: Core & HTML, defect, P2)

Tracking

RESOLVED WONTFIX

People

(Reporter: bugzilla33, Unassigned)

Details

(Keywords: parity-edge)

User Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)

Steps to reproduce:

http://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html

IE 10 Preview 2 has implemented the setImmediate API
OS: Windows 7 → All
Hardware: x86 → All
It's not clear that this will end up in the spec (for now, Microsoft just stuck it in the editor's draft, but the WebKit folks seem opposed to it).

Given lack of use cases proposed by Microsoft and the fact that the spec as it stands is definitely wrong, I think we should hold off on implementing this until at least the spec issues are resolved, if not longer.
Assignee: general → nobody
Component: JavaScript Engine → DOM
QA Contact: general → general
Version: 7 Branch → Trunk
Wow, that spec needs a lot of help first before we can implement that. Windows 3.0 called and wants its numeric handles back. Any document.write-loaded ad can cancel immediates for code it doesn't own. Those handles should be proper capabilities/objects. Also, setImmediate is poor naming considering that the spec explicitly says it's not guaranteed that execution is immediate. Did IE at least prefix this?
This appears to be documented on MDC for some reason ... not really sure why.

https://developer.mozilla.org/en/DOM/window.setImmediate
> Windows 3.0 called and wants it's numeric handles back.

Yep, that was one of the concerns that was raised about the spec.

> Did IE at least prefix this?

I _think_ so.

> This appears to be documented on MDC for some reason

Because MDC is a wiki..

Eric, what's the status of that document, exactly?
Status: UNCONFIRMED → NEW
Ever confirmed: true
Perhaps the importance of this bug should be reduced considering the above comments.
Whiteboard: wontfix?
That's in the wiki since MSFT implemented it; we document stuff for any browser. We have noted, however, that it's only implemented in IE and is unlikely to be implemented elsewhere.

I've added a doc needed tag here to be sure we track this as it shakes out further.
Keywords: dev-doc-needed
I have to say there is a lot of interest in the web community for this. Mostly, those interested in promises implementations are looking for fast ways of executing stuff on the next tick of the event loop. Here are some examples:

https://github.com/NobleJS/setImmediate
https://gist.github.com/2802407
https://github.com/yui/yui3/pull/304

You can even see Douglas Crockford using it in his implementation and talks: http://youtu.be/dkZFtimgAcM?t=41m57s
Even if you guys don't like the spec, vendor prefix it while it's under discussion and I'll add the prefixed implementation to the setImmediate shim. That way, major promise implementations using the shim can be fast under Firefox as well as under IE10.

For the record postMessage hacks are about 4x-5x slower in Firefox than setImmediate is in IE10, as seen using e.g.

http://jphpsf.github.com/setImmediate-shim-demo/
> vendor prefix it while it's under discussion

The point is that we're not sure that shipping anything like this at all is a good idea.  Why would shipping it prefixed be useful while that's the case, exactly?

> postMessage hacks are about 4x-5x slower in Firefox than setImmediate is in IE10

That's not necessarily related to the lack of setImmediate; postMessage and setImmediate would likely have pretty identical overhead.

In fact, you can test exactly what would happen if we implemented setImmediate today by just changing the value of the "dom.min_timeout_value" preference to 0 in about:config.  That will make setTimeout behave just like setImmediate is specified to do, as far as I can tell.  I will bet you money that you will discover that the actual overhead involved, both in the event loop and in the timeout code itself, means the result is not faster than what you see with postMessage.
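For reference, a minimal measurement sketch (not the linked demo; the function name is made up for illustration) that times a chain of zero-delay timeouts. Running it once with the default dom.min_timeout_value and once with the pref set to 0 should show the difference being discussed:

```
function timeChainedTimeouts(iterations, done) {
  // Chain `iterations` zero-delay timeouts back to back and report the total
  // wall-clock time, so the per-callback overhead can be estimated.
  var start = Date.now();
  var remaining = iterations;
  (function tick() {
    if (remaining-- === 0) {
      done(Date.now() - start);
      return;
    }
    setTimeout(tick, 0);
  })();
}

timeChainedTimeouts(10000, function (ms) {
  console.log("10000 chained setTimeout(0) callbacks took " + ms + " ms");
});
```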
Oh, OK. Well, if Firefox is just plain slower than IE10 at scheduling things in the next turn of the event loop, I guess that's a separate issue. Is there a bug number for that?

(I'd assumed the overhead of scheduling something "0 seconds in the future" instead of in the next turn, or of using seemingly-heavyweight cross-document messaging mechanisms, would explain the slowness. Sad to hear it's just an inherent problem with Firefox's event loop architecture.)
Re: comment 10 -- I would not assume anything about performance. Why not do what comment 9 suggests and *actually measure*?

As for comment 2, if integer ids are good enough for setTimeout and setInterval, they're good enough for setImmediate (named in the same vein). These were my attempt at weak refs in 1995, and they "stuck".

/be
OK, tests run using

http://domenic.github.com/setImmediate-shim-demo/

which is a version of the above link with the setTimeout argument changed from 4 to 0.

Firefox with 0ms setTimeout: between 12 and 15 seconds
Firefox with postMessage hacks: between 1.5 and 6 seconds
IE with setImmediate: between 0.4 and 0.5 seconds
I'm sorry, I made a mistake editing the demo. The actual results are more like:

Firefox with 0ms setTimeout: between 1.8 and 7 seconds
Firefox with postMessage hacks: between 1.5 and 6 seconds
IE with setImmediate: between 0.4 and 0.5 seconds
Yeah, comment 13 matches my approximate numbers, for what it's worth.

I'm not aware of any existing bugs on this; no one has had a problem with it before.  Please do feel free to file one.
Chrome: 871ms
FirefoxNightly: 3469ms

Dominic: please file a separate bug, with URL linked to that test.

Boris, my two cents: this bug seems not-wontfix to me. What's the latest in w3 or webkit circles? I see stuff from July 2011 at a glance, nothing newer.

/be
> What's the latest in w3 or webkit circles?

As far as I know, radio silence, as you noted.
ES7 is likely going to acquire the event loop. If a setImmediate or equivalent was ever going to be introduced somewhere, I wish it was in ECMAScript; I and other people have already expressed this wish on es-discuss.

(In reply to Brendan Eich [:brendan] from comment #11)
> As for comment 2, if integer ids are good enough for setTimeout and
> setInterval, they're good enough for setImmediate (named in the same vein).
> These were my attempt at weak refs in 1995, and they "stuck".
Who said they were good enough? Empirically the weak id generator has never been abused to attack a webpage. That's probably because it'd be a very subtle and non-interoperable attack, and when you're an attacker on a webpage, there's a lot of lower-hanging fruit.

If it's worth anything, on Node.js:
> typeof setTimeout(Object, 0)
'object'
This API seems confusing and unnecessary.  Why not do the right thing instead and ensure that setTimeout(func, 0) triggers the callback immediately (after the browser takes a breath) rather than after some arbitrary short delay?
Probably because setTimeout currently has the same behavior in all browsers, and if it changed, the whole Web would break (all those setTimeout fixes for layout issues would stop working).
> and ensure that setTimeout(func, 0) triggers the callback immediately

If you bother to search the web, you'll find the answer: too many sites depend on that not happening and break in various ways.  The Chrome folks tried this back in 2008-9 and had to give up because it wasn't web-compatible.
(In reply to Brendan Eich [:brendan] from comment #15)
> Chrome: 871ms
> FirefoxNightly: 3469ms

Curious, which OS? I think there is a bug open to make the event loop access OS-level stuff less often, at least on OS X.
Filed bug #839816 for the performance issues.
Are other browsers than IE supporting this now? I should know, but I'm old and there are wolves after me ;-).

/be
For what it's worth, it's in Node.js.

http://nodejs.org/api/timers.html
Comment 7 describes one of the most important classes of use cases, and I notice nobody ever responded to it. Nicholas Zakas recently brought this up in a blog post:

    http://www.nczonline.net/blog/2013/07/09/the-case-for-setimmediate/

We should implement setImmediate, even if the API design is weak. I believe someone told me that there are disagreements between IE and Chrome as to whether the numeric codes mix with the numeric codes for setTimeout; that will have to be reconciled. I don't like numeric codes either, but there are far worse issues in web APIs to shed tears over than this one. This is strictly an improvement on setTimeout.

Dave
This post is a must-read on the topic: https://groups.google.com/a/chromium.org/d/msg/blink-dev/Hn3GxRLXmR0/XP9xcY_gBPQJ
Note the part about misinformation on what setTimeout does (4ms clamping only happens for setTimeout nestings several levels deep, not on the first call).

I also worry that misusages of setImmediate will inevitably lead implementors to do the exact same thing about clamping as they did with setTimeout. I don't see how it could be otherwise, given that the web is the web and people will inevitably misuse setImmediate.

(In reply to Dave Herman [:dherman] from comment #26)
> We should implement setImmediate, even if the API design is weak. I believe
> someone told me that there are disagreements between IE and Chrome as to
> whether the numeric codes mix with the numeric codes for setTimeout; that
> will have to be reconciled.
The above post highlights some deeper concerns.

A setImmediate-like API is important to have, and I imagine such work will have to happen in ES6 anyway while bringing the event loop into ECMAScript. It'd be a shame to duplicate the work by following a W3C spec (I really wish Microsoft had discussed this with others instead of randomly throwing out a spec draft).
> really wish Microsoft had discussed with others instead of randomly throwing a spec draft on this one

I just would like to dispel some myth about this. I was part of the web-perf working group when setimmediate was conceived and discussed and spec'd. I don't think it's accurate or fair to suggest that Microsoft didn't discuss it -- they definitely did.

The fact that other browsers haven't agreed to what the working group (and Microsoft) came up with is not the same thing as Microsoft just making something up in private.
(In reply to Kyle Simpson from comment #28)
> > really wish Microsoft had discussed with others instead of randomly throwing a spec draft on this one
> 
> I just would like to dispel some myth about this. I was part of the web-perf
> working group when setimmediate was conceived and discussed and spec'd. I
> don't think it's accurate or fair to suggest that Microsoft didn't discuss
> it -- they definitely did.
> 
> The fact that other browsers haven't agreed to what the working group (and
> Microsoft) came up with is not the same thing as Microsoft just making
> something up in private.
oh ok. Wasn't aware of this part of the story. My mistake.
And I apologize for my share of misinformation.
Over to jdm who will work on figuring out exactly what to implement and do the deed (with a likely delay due to some vacation time).
Assignee: nobody → josh
(In reply to Johnny Stenback (:jst, jst@mozilla.com) from comment #30)
> Over to jdm who will work on figuring out exactly what to implement and do
> the deed (with a likely delay due to some vacation time).
I don't understand the need to rush it, especially given the points in https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/Hn3GxRLXmR0/XP9xcY_gBPQJ and the one I raised.

To summarize:
1) The setImmediate spec is based on a false assumption [1]:
"However, setTimeout enforces a minimum delay, even with a specified period of zero, that isn't uniformly implemented across user agents. Removing this minimum delay from setTimeout runs the risk of causing existing webpages, that have come to rely on the minimum delay, to break by going into a seemingly hung state while also significantly increasing the power consumption of the browser. "
=> The minimum delay is only for deeply nested setTimeouts (see [2]). Webpages don't necessarily rely on this delay. It seems to be more of a mitigation mechanism implemented by browsers to avoid burning the CPU and leaving the page almost non-responsive if a page has something equivalent to:
  setTimeout(function f(){
    setTimeout(f, 0)
  }, 0)
(see the [4] of the above post for more details)
I would love to know why this sort of bug cannot happen with setImmediate and why browsers won't eventually be forced to implement the exact same mitigation, making setImmediate(f) effectively equivalent to setTimeout(f, 0) (and bringing no new feature at all).

2) From the post : "The big disappointment of setImmediate() for me is that it provides no additional information that the browser can use to perform task servicing more efficiently."
(the rest of the post is important to understand the context of that quote, but I won't copy/paste it here :-) )

3) Equivalent work is underway as part of ES6/7, where the event loop is being brought into ECMAScript (and eventually the DOM will move to hook into the ECMAScript event loop. Ian Hickson wrote somewhere that he was happy to do that; if that didn't happen, we'd have a major standards coordination failure), so setImmediate feels like it would duplicate a feature between two specs, and web authors don't need more of that.

Unless there is a good answer for these 3 points and especially the first one, I would recommend not implementing setImmediate at all.

[1] https://dvcs.w3.org/hg/webperf/raw-file/tip/specs/setImmediate/Overview.html#introduction
[2] https://gist.github.com/DavidBruant/6183351
Bringing some feedback from the discussion at the Chromium bug after having some time to consider it. We need two APIs for scheduling stuff that would serve different use cases:

1) A way to run a function after yielding back to the browser, like setImmediate does. This would be useful for those cases where we expect to do something after the browser had time to render, but as soon as possible.

2) A way to run a function after the current microtask and before yielding back to the browser, same as what Object.observe() does. This is what we need for promises and other library features that depend on some level of asynchronicity and deal with data. We want to work on our data before the browser renders again.
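A rough sketch of the two levels (the helper names are made up; Promise.resolve().then is used here as a stand-in for the Object.observe-style microtask):

```
// 1) Run after yielding back to the browser (what setImmediate would cover);
//    a zero-delay timeout is the closest unprefixed stand-in today.
function scheduleTask(fn) {
  setTimeout(fn, 0);
}

// 2) Run after the current task but before yielding back to the browser,
//    i.e. a microtask, approximated here with a resolved promise.
function scheduleMicrotask(fn) {
  Promise.resolve().then(fn);
}

console.log("sync code runs first");
scheduleTask(function () { console.log("task: after yielding to the browser"); });
scheduleMicrotask(function () { console.log("microtask: before yielding"); });
// Expected order: the synchronous log, then the microtask, then the task.
```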
> The minimum delay is only for deeply nested setTimeouts (see [2]).
> Webpages don't necessarily rely on this delay.

This is I think the missed detail. It's being cited that since the delay only happens in nested setTimeout calls, and no one sane ever does that, it's not a big deal.

There are very valid usages of setImmediate/setTimeout(..0) which, as far as I understand it, would in fact qualify as nested usage, and therefore suffer this delay.

Promises implementations necessarily have to insert a defer/delay between each step of a sequence, even if all the steps of that sequence are already fulfilled and would otherwise, if not wrapped in promises, execute synchronously. The async "delay" between each step is necessary to create a predictable execution order between sync and async usage.

So, if you are in a situation where you have a sequence of async steps queued up in promises, say 4 or 5 steps (or more!), and all of them are fulfilled and just ready to be executed, the promises library will run the first, then "delay", then the second, then "delay", then...

That necessarily results in the nested case:

```
step1(); // execute step 1

setImmediate(function(){ // "delay" before step 2
   step2(); // execute step 2

   setImmediate(function(){ // "delay" before step 3
      step3(); // execute step 3

      setImmediate(function(){ // "delay" before step 4
         step4(); // execute step 4

         ...
      });
   });
});
```

It's not that someone wants to author that sort of structure, it's that a promises sequence where all steps are fulfilled and ready to go will result in it as a side effect of the way promises implementations ensure predictable order.
(In reply to dopazo.juan from comment #32)

(In reply to Kyle Simpson from comment #33)

This is an interesting and important discussion to have, but if we're having it here, we're gonna get kicked out eventually :-p (because Bugzilla is usually for talking about implementation concerns. I'm even borderline myself in raising points which I consider should block the implementation.)

I moved the discussion to https://mail.mozilla.org/pipermail/es-discuss/2013-August/032617.html
(In reply to dopazo.juan from comment #32)
> 2) A way to run a function after the current microtask and before yielding
> back to the browser, same as what Object.observe() does. This is what we
> need for promises and other library features that depend on some level of
> asynchronicity and deal with data. We want to work on our data before the
> browser renders again.

FWIW the idea of providing an API that allows you to "skip" the event loop makes me very uneasy.  It seems like a footgun that would be very easy to misuse.
> I don't understand the need to rush it

The usual reason things get rushed: web developers deliberately writing code that only works well if you implement setImmediate.
> I don't understand the need to rush it

I don't understand the sentiment that this is being "rushed". I think that's based on an incomplete understanding of how `setImmediate()` came to be.

In any case, shifting from debating the merits of the feature to criticizing (incorrectly) how "quickly" it came about isn't going to help anyone.

-----------

> The usual reason things get rushed...

And the usual reason things like this get endlessly wound up and delayed (perhaps unnecessarily?) is people who abjectly don't agree with a feature just telling developers "you're doing it wrong" and putting it off on them to write much more complex code. Since you're not the one who has to deal with the lack of that feature, it's a bit easier for you to brush it off as "our" problem.

See, that door (and snark) swings both ways. :)
> [...] people who abjectly don't agree with a feature just telling developers "you're doing it wrong"

There are good features and bad "features"; making the distinction can be hard because you might be constrained by deadlines, angry customers, angry boss etc.—perhaps you need that feature Right Now™ and you don't want to think about what happens next.

Almost always, providing a bad "feature" will do more damage in the long run than not providing it, because people will use it (in the order of millions).  Think it through.  We can list tons of bad "features" in JavaScript and we know that it can easily take decades to clean up the mess.

If something doesn't *have* to be done, most probably it's best to not do it.  IMO this one's unnecessary, and as such, it shouldn't be done.
> If something doesn't *have* to be done, most probably it's best to not do it.

That definitely doesn't sound like a reasonable standard to apply. Maybe it's your first filter to weed out the noise, but that at most.

Just because a developer can do something in the harder and more complex way doesn't mean it's always best to force them down that path instead of providing a simpler path. For instance, if the harder path is also less efficient, or if the amount of code to go down the harder path moots the performance gains being attempted, or...

There's a whole host of heuristics that we should use to decide if something is in or out, and prematurely boiling the decision down to "it's not strictly needed, so ignore the request" isn't helpful.

I'm not providing the arguments *for* `setImmediate()` right now, but just pushing back on what seems an unreasonable standard/burden to getting to reasonable consensus.

There's plenty of things in JS that are not necessary, but are quite helpful, and just because they are not necessary, that doesn't make them mistakes for having been put in. If you need examples, I'm happy to go down that rabbit trail. :)
(In reply to Kyle Simpson from comment #37)
> > I don't understand the need to rush it
> 
> I don't understand the sentiment that this is being "rushed". I think that's
> based on an incomplete understanding of how `setImmediate()` came to be.
I was answering to jst and referring to an implementation in Firefox, not the feature itself.

(In reply to Kyle Simpson from comment #39)
> There's plenty of things in JS that are not necessary, but are quite
> helpful, and just because they are not necessary, that doesn't make them
> mistakes for having been put in.
We're beyond unnecessary. My reading of the latest setImmediate spec is that it has no semantic difference from setTimeout 0. So that's plain duplication.
In theory there is no clamping for nested setImmediate. In practice, I have already answered by describing what happened to setTimeout: people will misuse setImmediate, browsers will have to defend and add a mitigation mechanism. Clamping looks like the first candidate (there is already a hook for that in the spec: "This API does not guarantee that the callback will be run immediately. Delays due to CPU load, other tasks, etc, may occur.").
I have suggested another mitigation mechanism at https://mail.mozilla.org/pipermail/es-discuss/2013-August/032636.html

> and putting it off on them to write much more complex code.
I'm not sure what you're referring to. As far as I'm concerned, reading the latest version of the setTimeout spec, setImmediate can be replaced with setTimeout 0; that's not what I'd call "much more complex code".
> My reading of the latest setImmediate spec is that it has no semantic difference with setTimeout 0. So that's plain duplication

Of course, if there weren't actually any difference between the two, it *would* be needless duplication. But then again, there never would have been the whole kerfuffle over a microsoft-made demo using setImmediate() running much much slower in Chrome (and them getting offended that the "silly" polyfill for setImmediate was setTimeout(..0)).

If it has been demonstrated (and I think it has?) that the current implementation of setImmediate() (however microsoft did it) is observably different from setTimeout(..0), that's what I've been operating on.


> people will misuse setImmediate, browsers will have to defend and add a mitigation mechanism. Clamping...

I have never understood this logic, though I have heard this argument many times. Just because browsers did something in the past does not mean they have to do the same in the future.

As an analogy, requestAnimationFrame() is intended to be used for visual updates only, but there's nothing that stops devs from "abusing" it and using it like another setTimeout(..16ish) with nothing to do with visual updates, and in so doing, they unnecessarily burden the repaints and create poor performance.

Are browsers going to come along and neuter requestAnimationFrame in some way because developers are mis-using it? If they do, it'll be a shame and a bad thing for the web platform, worse than if they'd just let developers die by their own sword if they do things wrongly.

I personally see setImmediate() as a way to have a clean break with the clamping past of setTimeout, and it "ships" with the warning label: "if you burn yourself, don't come crying to us."


> setImmediate can be replaced with setTimeout 0; that's not what I'd call "much more complex code".

What I was referring to was a suggestion that promise implementations can get around the nested timeout situation by a more sophisticated mechanism that asynchronously executes the first promise fulfillment but then synchronously processes through the rest of the chain (no "delay"s in between). It's suggested that the DOM Promises spec does this and that this is a better way of implementing than relying on a non-clamped timeout. I was saying that, for my own purposes, it seems like implementing that approach is going to be quite a bit more complicated. My guess is it's more complicated (though to varying degrees) for all promises libs.
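A minimal sketch of that kind of scheduler (not from any particular promise library; the names are illustrative): only the first job in a burst pays the timer cost, and everything queued behind it runs back-to-back in the same turn:

```
var drainScheduler = (function () {
  var queue = [];
  var scheduled = false;

  function drain() {
    // Run every queued job, including ones enqueued while draining,
    // without going back through the event loop in between.
    while (queue.length) {
      queue.shift()();
    }
    scheduled = false;
  }

  return function schedule(fn) {
    queue.push(fn);
    if (!scheduled) {
      scheduled = true;
      setTimeout(drain, 0); // a shim could use postMessage here instead
    }
  };
}());
```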
(In reply to Kyle Simpson from comment #41)
> > My reading of the latest setImmediate spec is that it has no semantic difference with setTimeout 0. So that's plain duplication
> 
> Of course, if there weren't actually any difference between the two, it
> *would* be needless duplication. But then again, there never would have been
> the whole kerfuffle over a microsoft-made demo using setImmediate() running
> much much slower in Chrome (and them getting offended that the "silly"
> polyfill for setImmediate was setTimeout(..0)).
There might be a perf bug in Chromium https://twitter.com/DavidBruant/status/365414612880986112

> If it has been demonstrated (and I think it has?) that the current
> implementation of setImmediate() (however microsoft did it) is observably
> different from setTimeout(..0), that's what I've been operating on.
So they wouldn't be implementing their own spec?


> > people will misuse setImmediate, browsers will have to defend and add a mitigation mechanism. Clamping...
> 
> I have never understood this logic, though I have heard this argument many
> times. Just because browsers did something in the past does not mean they
> have to do the same in the future.
Regardless of what's been done in the past, how does a browser prevent the battery from being drained by setImmediate(function f(){ setImmediate(f) })?
I really wish someone would care to answer. What does IE10 do? (I don't have one at hand and I worry a VM would bias the result.)


> As an analogy, requestAnimationFrame() is intended to be used for visual
> updates only, but there's nothing that stops devs from "abusing" it and
> using it like another setTimeout(..16ish) with nothing to do with visual
> updates, and in so doing, they unnecessarily burden the repaints and create
> poor performance.
Calling a useless function once every 4ms is acceptable for browsers. Once every 16-ish even more so?

> Are browsers going to come along and neuter requestAnimationFrame in some
> way because developers are mis-using it? If they do, it'll be a shame and a
> bad thing for the web platform, worse than if they'd just let developers die
> by their own sword if they do things wrongly.
Note that requestAnimationFrame is self-protecting by definition. If a dev puts in a callback that takes too long to run, a frame is skipped. Browsers can't do much beyond making their JS fast if they want to compete.

 
> I personally see setImmediate() as a way to have a clean break with the
> clamping past of setTimeout, and it "ships" with the warning label: "if you
> burn yourself, don't come crying to us."
That's not how the web works. Let's say browser X drains your battery on a given page and Browser Y doesn't. Which browser will people choose?
Note that in the scenario I have described, web developers are completely out of the equation whether they want to cry or not.



> > setImmediate can be replaced with setTimeout 0; that's not what I'd call "much more complex code".
> 
> What I was referring to was a suggestion that promise implementations can
> get around the nested timeout situation by a more sophisticated mechanism
> that asynchronously executes the first promise fulfillment but then
> synchronously processes through the rest of the chain (no "delay"s in
> between). It's suggested that the DOM Promises spec does this and that this
> is a better way of implementing than relying on a non-clamped timeout. I was
> saying that, for my own purposes, it seems like implementing that approach
> is going to be quite a bit more complicated. My guess is it's more
> complicated (though to varying degrees) for all promises libs.
I see. A proposal for microtasks is described at https://mail.mozilla.org/pipermail/es-discuss/2013-August/032630.html Your input would be appreciated :-)
> and in so doing, they unnecessarily burden the repaints

No, they don't.  It's just some code running every 16ms.
> It's just some code running every 16ms.

OK, I'll play along.

< snark >
Right, because devs would never put code into a rAF callback that took longer than 16ms to run, thereby slowing down the framerate of the browser to a noticeable level, right?

And them doing that could never possibly be construed, as David suggested, as a problem with the browser, because it could never be that the "long-running" code was slower in one browser than in another, so it would never be that browser A has an acceptable framerate and browser B has a noticeably worse framerate... right?

And that sort of thing could never possibly be seen as abuse of the purpose of the function, even if that long running code had nothing to do with visual repaints, right?

And that sort of abuse could never possibly make a browser (who was getting adversely hurt in their public image) decide to neuter something about the way rAF works as a mitigation/face-saving strategy, right?

And that sort of potential devolution couldn't even remotely look like what happened with setTimeout, where browsers implemented clamping on timers so that battery levels were "saved" so as to prevent the browser from getting "undue" blame, right?

< /snark >
Snark aside (I don't know why it keeps getting introduced), the analogy I was making is that if a browser saw rAF as being abused and that was hurting the browser's reputation, they *could* (but I would hope, wouldn't) implement some "mitigation".

For example, I could conceive that such an afflicted browser, in the efforts to save face, might do something "crazy" like not calling the rAF callback on strictly every next frame, but might instead call it on every 4th frame, or something like that.

It would be a shame if they were to do such neutering mitigations. rAF is great, and it's great in spite of the fact that it could be "abused".

By that same token, again, I would have hoped that the introduction of a new feature whose primary feature was "no clamping" could, in fact, survive without losing that key characteristic. Either the browser would leave the "footgun" in there, or they could come up with some other mitigation besides timing clamping, such as what David suggested on the es-discuss thread he linked, where they refuse to go any further in nesting than some arbitrary (and high) level, like maybe 25 levels deep or whatever.

I'm not saying that's a desirable solution. I'm just suggesting that it seems like setImmediate() could be implemented without clamping and survive without having it added at some later time.
> And that sort of potential devolution couldn't even remotely look like what happened with
> setTimeout, where browsers implemented clamping on timers so that battery levels were
> "saved" so as to prevent the browser from getting "undue" blame, right?

This is such a fundamental misrepresentation of the history of clamping on setTimeout that I don't even know where to start responding to it other than by saying you should read up on that history...
I am only referencing the "reasons" David gave earlier for clamping of timers. I am not sure the actual history matters that much, because I'm not debating the fate of setTimeout (that ship has already sailed).

I'm only asking why David thinks that because setTimeout() ended up clamped, setImmediate is doomed to the same fate no matter what we do.
> I am only referencing the "reasons" David gave earlier for clamping of timers.

Those reasons are wrong.

> I'm only asking why David thinks that because setTimeout() ended up clamped

Dare I suggest the meta-discussion should not be happening in this bug?
(In reply to Vacation until Aug 19.  Do not ask for review. from comment #48)
> > I am only referencing the "reasons" David gave earlier for clamping of timers.
> 
> Those reasons are wrong.
I drew this analysis from the [4] of https://groups.google.com/a/chromium.org/forum/#!msg/blink-dev/Hn3GxRLXmR0/XP9xcY_gBPQJ (I used "draining battery" based on Adam Barth's mention of it, which I found relevant: https://groups.google.com/a/chromium.org/d/msg/blink-dev/Hn3GxRLXmR0/P9zOB1HIQ4IJ That's just the 2013 version of "overused CPU".)

I'd be happy to learn the right reasons.


> > I'm only asking why David thinks that because setTimeout() ended up clamped
> 
> Dare I suggest the meta-discussion should not be happening in this bug?
Sounds good. The es-discuss thread is a good place to continue.


Back to this bug, what should happen for "setImmediate(function f(){ setImmediate(f) })"? (An open question for jdm when he's back, I guess, based on jst's comment.)
I'm sorry to interject here, but I just wanted to mention that over in gaia we've found the need for something more responsive than setTimeout(0) and it looks like we are going to bring in a setImmediate() polyfill.

In addition to irc we discussed this a bit in the following places:

  https://groups.google.com/forum/#!topic/mozilla.dev.gaia/5TnnNZGzQVY
  https://bugzilla.mozilla.org/show_bug.cgi?id=910876
  https://github.com/mozilla-b2g/gaia/pull/11849

Obviously it would be nice to have something like this native in gecko, but we understand there are a lot of concerns here and it's unclear if this will move forward.  We just wanted to mention our use in gaia as a possible data point in your discussions.

Thank you!
Ben, what are the use cases you're using repeating 0-length timeouts for in gaia?
(In reply to Boris Zbarsky [:bz] from comment #51)
> Ben, what are the use cases you're using repeating 0-length timeouts for in
> gaia?

I just posted some in reply to David Bruant on the mailing list.  I'll reproduce some of the relevant parts here.

The use cases that immediately came to mind are:

  1) Breaking up large computations and yielding the main thread in order to avoid jank.
  2) Ensuring that callbacks are consistently asynchronous to avoid recursion loops that blow the stack. 

Specifically, I can think of the contacts app search code which examines 10 contacts at a time and then uses setTimeout(0) to schedule the next chunk.  With 1000+ contacts we are well into the deeply nested case with added delays that begin to be observable by the user.  Arguably this could be re-written to use workers or contacts API filtering at the cost of added complexity, but we would get an immediate win if we could just replace the setTimeout(0).

Another case would be the code I am working on now.  I want to build a cursor object that triggers a callback providing the subsequent contact when the next() function is called.  Typically, the callback itself will call next() synchronously which can end up in a recursive loop in some cases.  To prevent this I need to make the callback asynchronous using setTimeout(0). Since I might have 1000+ contacts, I'd like to avoid baking in a 4ms penalty on every callback if I can.  The postMessage() "next tick" technique seemed like a good way to achieve this.
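A rough sketch of that chunking pattern (not the actual gaia code; names are illustrative), showing where the nested 4ms clamp adds up:

```
function processInChunks(contacts, batchSize, processOne, done) {
  var index = 0;
  (function nextChunk() {
    var end = Math.min(index + batchSize, contacts.length);
    for (; index < end; index++) {
      processOne(contacts[index]);
    }
    if (index < contacts.length) {
      // Each yield here is a nested timeout, so it pays the 4ms clamp;
      // with 1000+ contacts in batches of 10 that's ~100 extra 4ms waits.
      setTimeout(nextChunk, 0);
    } else {
      done();
    }
  })();
}
```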
(In reply to Ben Kelly [:bkelly] from comment #52)
>   2) Ensuring that callbacks are consistently asynchronous to avoid
> recursion loops that blow the stack. 

A better explanation of what I was trying to describe here is in David Herman's "Effective Javascript" in Item 67: Never Call Asynchronous Callbacks Synchronously.  I wish I could link to it.
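A minimal sketch of that guideline (the cache and the DB helper are hypothetical): even when a result is already available, the callback is deferred so callers always see the same ordering:

```
var contactCache = {};

// lookupContactInDb(id, cb) is an assumed asynchronous helper (e.g. IndexedDB).
function getContact(id, callback) {
  if (id in contactCache) {
    // Calling back synchronously here would make the API sometimes sync and
    // sometimes async: callers that call next() from the callback could blow
    // the stack, and code after getContact() would run in a different order
    // depending on cache state. Defer instead (ideally with something cheaper
    // than a clamped setTimeout).
    setTimeout(function () { callback(contactCache[id]); }, 0);
    return;
  }
  lookupContactInDb(id, function (contact) {
    contactCache[id] = contact;
    callback(contact); // the DB lookup is already asynchronous
  });
}
```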
I have used setTimeout(..., 0) an incredible number of times in the last 4 years for Firefox mobile or Gaia.
Many times it was in a code path inside an event handler (click, mousedown, etc...). setTimeout(..., 0) is then a way to postpone something until the end of the event handling.
> Many times it was in a code path inside an event handler

That use isn't a nested timeout, so doesn't get any 4ms delay added: it executes next-tick.

Ben, thank you for describing the use cases you've had that were actually affected by the 4ms delay!
(In reply to Ben Kelly [:bkelly] from comment #52)
> (In reply to Boris Zbarsky [:bz] from comment #51)
> > Ben, what are the use cases you're using repeating 0-length timeouts for in
> > gaia?
> 
>   1) Breaking up large computations and yielding the main thread in order to
> avoid jank.
>   2) Ensuring that callbacks are consistently asynchronous to avoid
> recursion loops that blow the stack. 

Thanks Ben. It would be convenient if there was a browser-provided method for efficient asynchronous callbacks.

Unfortunately it sounds like efficiency is impossible due to the JS VM - see James Richardson's explanation in the blink-dev mailing list at https://groups.google.com/a/chromium.org/d/msg/blink-dev/Hn3GxRLXmR0/XP9xcY_gBPQJ

Apparently it is more efficient to roll your own callback dispatcher as illustrated in James' post. 

I guess this means Promises will also be inefficient. Not that it will stop people using them to try and emulate setImmediate ;)
"efficiency" is not a binary state.

If you want to do each arithmetic operation off a separate callback in a fluid dynamics simulation, you will have a bad time.

If you want to import 1000 contacts, but break it up into 20 callbacks of 50 contacts each, it'll probably be just fine, even though the cost of the callbacks is no less.
(In reply to Boris Zbarsky [:bz] from comment #57)
> "efficiency" is not a binary state.
> 
> If you want to do each arithmetic operation off a separate callback in a
> fluid dynamics simulation, you will have a bad time.
> 
> If you want to import 1000 contacts, but break it up into 20 callbacks of 50
> contacts each, it'll probably be just fine, even though the cost of the
> callbacks is no less.

If one is going to break up the import then it would be simplest to make it 1000 callbacks with one contact each. But most use cases would be more complex and less predictable. And probably better suited to something like Promises. 

It would be useful to have a vague idea on some of the timings involved, for instance:

- If setImmediate() or Promises were implemented, what would be the minimum overhead (time / cycles) of exiting from one JS callback and entering the next JS callback? 

- How would this compare to a JS managed callback dispatcher? James Richardson's post implied staying in the JS VM is significantly cheaper. 

- If implementing a JS managed callback dispatcher, how long is it reasonable to keep executing before yielding? 1ms / 4ms / more?
> If one is going to break up the import then it would be simplest to make it 1000
> callbacks with one contact each.

Simplest, sure.  But clearly slower than batching, in terms of overall throughput.

> what would be the minimum overhead (time / cycles) of exiting from one JS callback and
> entering the next JS callback? 

http://mozilla.pettay.fi/moztests/events/event_speed_3.html runs each test iteration in about 120ms for me in Firefox and Chrome, on a 1-year-old fairly high-end laptop (with a 2.7Ghz clock).  During that time, in addition to whatever event bubbling and whatnot it has to do (minimal compared to the cost of the callbacks in this case) it performs 100000 calls from the event dispatch code into the scripted listener.  So figure about 1.2us, or about 3300 CPU cycles, per call. 

Note that this case is also doing various work that a pure "pass through the arguments" API will not need to do (e.g. finding the right JS reflections for the event and event target objects).

We have plans to speed this up some more, but I doubt it'll get more than about 10x faster.  It has to do things (like microtask management) that pure JS code just doesn't have to do.

>- How would this compare to a JS managed callback dispatcher?

Write one and measure?

> If implementing a JS managed callback dispatcher, how long is it reasonable to keep
> executing before yielding?

It depends on your use cases.  If you're in a game, then I suspect anything over 2ms is too long because of the requirement to hit 16ms frame targets.  It basically depends on what code is running outside your dispatcher.
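For what it's worth, a sketch of such a JS-managed dispatcher (illustrative only): run queued callbacks back-to-back until a time budget is spent, then yield with a zero-delay timeout and continue on the next turn:

```
function makeDispatcher(budgetMs) {
  var queue = [];
  var scheduled = false;

  function run() {
    var deadline = Date.now() + budgetMs;
    // Drain as many callbacks as the budget allows before yielding.
    while (queue.length && Date.now() < deadline) {
      queue.shift()();
    }
    if (queue.length) {
      setTimeout(run, 0); // yield so the browser can paint and handle input
    } else {
      scheduled = false;
    }
  }

  return function dispatch(fn) {
    queue.push(fn);
    if (!scheduled) {
      scheduled = true;
      setTimeout(run, 0);
    }
  };
}

// e.g. a ~2ms budget per turn if you need to hit 16ms frame targets
var dispatch = makeDispatcher(2);
```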
FYI - setImmediate (and MessageChannel) is broken on IE10.  It's basically useless on IE10 Mobile, and somewhat unreliable on IE10 Desktop.  Just adding a spin.js spinner on our login page caused Q promises to stop working on IE10 Mobile, because it uses setImmediate.

I wrote a detailed description of the bug in my blog: http://codeforhire.com/2013/09/21/setimmediate-and-messagechannel-broken-on-internet-explorer-10/
Maybe their demo http://ie.microsoft.com/testdrive/Performance/setImmediateSorting/Default.html shows exactly why it's needed?

Why is the 4ms rule enforced in this case?
Let's not pretend I'm working on this.
Assignee: josh → nobody
What should a developer do in case she wants to process a lot of stuff, and she's willing to put some time into breaking it up into chunks to let the browser "do what it must do" between the chunks, but she doesn't want 4ms intervals imposed on her between the chunks?

Sure, if the browser has 10ms of own processing to do, go ahead, but if the browser is idle (apart from her pending chunks), she expects her next chunk to be invoked ASAP.

If such facility is not provided, and she still wants to be considerate and run in chunks, she'll just make each of the chunks run longer, in order to get less overhead on average.

E.g. if originally her chunks were taking 4ms each, which results in 50% overhead with 4ms intervals imposed by the browser, then she'll run in chunks of 40ms to get the overhead lowered to 10%. Or chunks of 200ms. Everybody loses.
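For illustration, the arithmetic behind those numbers (overhead = imposed interval / (chunk duration + imposed interval)):

```
function overhead(chunkMs, imposedMs) {
  return imposedMs / (chunkMs + imposedMs);
}

overhead(4, 4);   // 0.5   -> ~50% of wall-clock time lost to the 4ms clamp
overhead(40, 4);  // ~0.09 -> ~10% overhead with 40ms chunks
overhead(200, 4); // ~0.02 -> ~2% overhead with 200ms chunks
```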

Except for cases where asynchronicity is required (like Promise), setImmediate is more an act of courtesy by the developer IMO than anything else.

I think the main questions here are two:

1. Can the browser schedule a task to run right after "what the browser considers it must do in order to maintain its own usability/functionality" at all?

2. If it can, do we want to expose such functionality to developers, knowing that it can be abused to induce consistent 100% CPU usage, high battery drain, etc (albeit without hurting the browser's usability due to 1 above).

Since 100% CPU usage can be induced anyway (e.g. by very long running synchronous tasks, like the dev above with her 200ms chunks, or much worse), I tend to think that the answer to 2 should be "yes". I don't think it will abuse the browser more than it can already be abused, but it WILL give devs the chance to be considerate, and take as much CPU as they need, but without hurting the browser's usability.

If we still want to impose some limits on 2 (and the only reason I could think of is reducing power usage, since the browser IS given time to handle "what it has to"), then we could impose artificial idle times according to various conditions, e.g. battery level, average CPU usage over the last NN seconds, whatever, or just let the CPU throttle itself according to the BIOS definitions (which many times refer to power source and battery level as well).

The browser is abused anyway, setImmediate (or equivalent) will not allow more abuse IMO, but will allow less.

And of course, tasks which do need immediate asynchronous invocation (like promises) will benefit greatly without having to revert to workarounds to satisfy their need for speed.
And there's always the issue that once X amount of processing has to be completed, it's not obvious at all that completing it over a longer period of time will consume less CPU overall.
FWIW, I just tested Nightly a bit with dom.min_timeout_value=0, and it _seems_ to be satisfying 1 from comment 63.

I.e. it seems to have very low overhead between the chunks, while the browser seems to stay very responsive as long as the chunks are reasonably quick (I tested chunks of 4-100ms, where at 100ms it's clearly less responsive).

I compared it to where minimal timeouts are enforced, and the results are as expected. E.g. with 4ms chunks there's ~100% overhead with clipped timeout, compared to about 5% overhead when timeout isn't clipped.

I only tested with linear invocations (e.g. one timeout's handler fires the next one).

Since immediate async execution is useful, and a workaround can already be [ab]used with postMessage, I think it's possible to introduce setImmediate as an alias for "setTimeout 0 without 4ms clipping", without breaking the existing web (i.e. setTimeout clipping stays intact), and without adding potential for more abuse.

It's basically saying "we hope you've learned your lesson from setTimeout, please don't use it stupidly".

Though if my observation that it doesn't hurt usability of Firefox itself is correct, I don't think that the result of using it carelessly has potential to abuse anything but the battery.
There's a difference from setTimeout 0 though. If I interpret the spec correctly, I think that 2 callbacks should not be invoked without letting the browser handle UI/stuff in between.

I.e. unlike setTimeout, a synchronous loop which calls setImmediate a lot of times will give the control back to the browser between each callback invocation. The same UI responsiveness we get when using setTimeout 0 sequentially (where one callback executes the next setTimeout 0 - even while dom.min_timeout_value=0)

Invoking lots of setTimeout 0 synchronously, however, does hang the browser.

However, implementing setImmediate as a sequentialized-only version of setTimeout 0 (proper 0) shouldn't be hard I think, e.g.: (ignoring the optional arguments and clearImmediate)

var setImmediate = (function(){
  var immediates = [];

  function processPending() {
    setTimeout(function() {
      if (immediates.length) {
        immediates.shift()();
        if (immediates.length)
          processPending();
      }
    }, 0);
  }

  function _setImmediate(fn){
    immediates.push(fn);
    if (immediates.length == 1)
      processPending();
  }

  return _setImmediate;
}());


I tested this implementation with sequential or parallel setImmediate, while dom.min_timeout_value=0, and as long as the immediate execution duration is quick, the browser is fully responsive and the overhead is minimal.
Apologies for the spam. I posted an incorrect version. Hopefully this is my last post.

var setImmediate = (function(){
  var immediates = [];
  var processing = false;

  function processPending() {
    setTimeout(function() {
      immediates.shift()();
      if (immediates.length)
        processPending();
      else
        processing = false;
    }, 0);
  }

  function _setImmediate(fn) {
    immediates.push(fn);
    if (!processing) {
      processing = true;
      processPending();
    }
  }

  return _setImmediate;
}());
Interestingly, Firefox's native Promise.resolve().then(f) scheduling (at Firefox default settings) behaves very much like Firefox's setTimeout(f, 0) (with dom.min_timeout_value=0). I.e. parallel .then's are all executed before the browser UI is updated, and as such a loop of many synchronous .then's will hang the browser, but sequential(/nested) .then's don't get the 4ms clipping on one hand, while still letting the browser respond/update to user input between each invocation (and not in a sluggish way either - properly responsive, up to the duration for which each chunk executes).

So if one knows that only nested schedules are used, Promise.resolve().then(fn) suffices at the end of each chunk (using a single resolved promise and invoking its .then over and over didn't produce a measurable difference).

If synchronous scheduling can happen, then the function from comment 67 can be used, where setTimeout is replaced with the Promise scheduling.
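For concreteness, a sketch of that substitution (the comment 67 function with its setTimeout(..., 0) replaced by a single resolved promise's .then; this is the "shim67 using promise" scheduler measured below):

```
var setImmediateShim = (function () {
  var immediates = [];
  var processing = false;
  var resolved = Promise.resolve();

  function processPending() {
    resolved.then(function () {
      immediates.shift()();
      if (immediates.length)
        processPending();
      else
        processing = false;
    });
  }

  return function (fn) {
    immediates.push(fn);
    if (!processing) {
      processing = true;
      processPending();
    }
  };
}());
```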

I applied this promise based scheduling to the demo from comment 12, and it's the same speed as the postMessage based implementation for both Firefox and Chrome (Chrome is devilishly quick in this BTW - ~250ms compared to 1000+ ms in Firefox).

I also tested scheduling 1250 chunks of 4ms each = 5s overall (using a while loop until performance.now() progressed 4ms), and measured the overall time it takes to complete all the chunks, and whether or not the browser was responsive. Here are my results with the following schedulers (Firefox):


function direct (f) { f() }
function timeout(f) { setTimeout(f, 0) }    // with 4ms clipping
function post12 (f) { setImmediate(f) }     // postMessage from comment 12
function promise(f) { Promise.resolve().then(f) }
function shim67 (f) { setImmediateShim(f) } // comment 67 but using promise

// nested where each chunk schedules the next one:
//   direct  -  5003 ms - hanged
//   timeout - 10400 ms - responsive
//   post12  -  5200 ms - responsive
//   promise -  5150 ms - responsive
//   shim67  -  5150 ms - responsive

// synchronous where a loop schedules all the chunks and then waits for all of them to complete:
//   direct  -  5002 ms - hanged
//   timeout -  5050 ms - hanged
//   post12  -  5070 ms - hanged
//   promise -  5050 ms - hanged
//   shim67  -  5200 ms - responsive

The numbers with Firefox 32 and 35 Nightly were very similar.
The exact interaction of Promise stuff with the event loop will likely get pinned down in the spec at some point here; what we're doing right now won't match that and we'll need to change it.  In particular, the "nested" behavior you observe may no longer happen because afaict the spec requires pending Promise callbacks to be serviced before the main event loop.

> ~250ms compared to 1000+ ms in Firefox

Please file a separate bug with whatever testcase you were using here?
(In reply to Boris Zbarsky [:bz] from comment #69)
>
> > ~250ms compared to 1000+ ms in Firefox
> 
> Please file a separate bug with whatever testcase you were using here?

The testcase is to visit http://domenic.me/setImmediate-shim-demo/ and click the setImmediate button. On my system Firefox completes in 1000+ ms while chrome is 250ms or less.

I then downloaded the page and its scripts, replaced their shim with a plain Promise.resolve().then(f) and got similar results (4x slower and similar numbers).

I didn't analyze the code further than that.

I didn't file a bug because I think you referred to this performance difference someplace earlier and said that it's the drawings which take long.
I can't believe this issue still hasn't been resolved. Either setTimeout(() => { ... }, 0); needs to work like setImmediate(() => { ... }) or you all need to implement setImmediate. As I see it the former breaks poorly developed websites, so setImmediate seems to be preferable. Regardless of which is correct, people are building serious programs in the browser, and arguing either that this is unnecessary or that it's a foot cannon waiting to happen are both very bad arguments. I don't care if Timmy locks up chrome because he can't program. That's Timmy's problem. But I'm building a god dam rocket ship and I need proper tools to get the job done. As a side note there are already many APIs in the browser that can cause CPU thrashing. The clamping in setTimeout protects no one, it's simply annoying. You have to trust that there are engineers who know what they are doing. Stop trying to protect engineers from themselves, we can figure it out on our own.

TL;DR. Please pick setTimeout(() => { ... }, 0) without clamping or setImmediate(() => { ... }). We don't need you to protect us from ourselves, get it done.
Note that I posted the above message in the chromium bug tracker, but I feel it's also important to bring up my frustration here as the issue applies to firefox as well.

Sorry for the double post :(
> I don't care if Timmy locks up chrome because he can't program. That's Timmy's problem.

No, it's our users' problem when they load Timmy's page.

> You have to trust that there are engineers who know what they are doing.

We know there are.  There are also lots of people who create web pages via cargo-culting and copy/paste and they may not know what they're doing.

Removing the clamping on setTimeout is a non-starter, because we have measurements that show that it's a pretty common issue on real-world sites.  No comment on setImmediate.
> No, it's our users' problem when they load Timmy's page.

Sure, until it's Timmy's issue for building a **** poor site/app that no one wants to use. If you want people to take the web as a serious platform to build things like game engines then you need to stop trying to hand hold developers. There are people that need these APIs, and holding them back because you're worried someone will use it wrong is the wrong approach. If that's your philosophy then maybe we should rip out WebGL, or array buffers. Hell while we're at it maybe we can remove the while and for keywords so people can't create a runtime error that locks up the page.

> We know there are.  There are also lots of people who create web pages via
> cargo-culting and copy/paste and they may not know what they're doing.

Right, so because someone may not know what they are doing, we need to give up on this feature as it could be problematic for a copy paster. Don't you see how ridiculous that is?

> Removing the clamping on setTimeout is a non-starter, because we have
> measurements that show that it's a pretty common issue on real-world sites.

I agree, I read the tickets from when the Chromium crew tried to implement setTimeout(() => { ... }, 0); it turned out to be a painful experience. Implement setImmediate(() => { ... }) and be done with it.

It's a better option anyway as this is what is supported in Node; isomorphic code will work on both sides of the fence. No complex polyfills for setTimeout(() => { ... }, 0) for IE/Edge either.
Hi Robert,

I'm David Bruant, web developer and contributor to web standards. I read the words you're writing and remember having the same feeling. I'll try my best to answer your comments (only once, because bugzilla is not the right place for this sort of discussion. The WHATWG mailing list may be a better place https://whatwg.org/mailing-list )

(In reply to Robert Hurst from comment #74)
> > No, it's our users' problem when they load Timmy's page.
> 
> Sure, until it's Timmy's issue for building a **** poor site/app that no one
> wants to use. If you want people to take the web as a serious platform to
> build things like game engines then you need to stop trying to hand hold
> developers. There are people that need these APIs, and holding them back
> because your worried someone will use it wrong is the wrong approach. If
> that's your philosophy then maybe we should rip out WebGL, or array buffers.
> Hell while we're at it maybe we can remove the while and for keywords so
> people can't create a runtime error that locks up the page.

There is no absolute. The web is an ecosystem with lots of actors. Among the devs ("authors"), some are amazing engineers, others copy/paste from MDN/stackoverflow.
Web browsers must run all the websites as well as possible and find a balance between providing powerful APIs and mitigating misuses of these APIs (be it bugs or malicious code).

You're carrying only one voice among the thousands browser implementors have to balance. I'm very familiar with the voice you're carrying, I recognize myself. But if you don't understand the balance browsers have to achieve, you will never be able to convince them to move the topic to where you want to see it.
Embracing the balance or remaining in the frustration you're feeling right now are pretty much the only two options. I've been there, picked the former option and feel better now :-)

A while ago, I gave a talk where I tried to describe as best as I could the various stakeholders on the web, how they interact with one another (and how this sometimes leads to terrible APIs that cannot be fixed)
https://www.youtube.com/watch?v=7eNFQqMSxtU

 
> > We know there are.  There are also lots of people who create web pages via
> > cargo-culting and copy/paste and they may not know what they're doing.
> 
> Right, so because someone may not know what they are doing, we need give up
> on this feature as it could be problematic for a copy paster. Don't you see
> how ridiculous that is?

Solely from a highly-skilled engineer's point of view, it can seem ridiculous. Once you put yourself in the shoes of browser makers, you realize it's hard to make a better choice.
The web is the web because everyone is invited to make a website, even people who are not skilled engineers.

 
> > Removing the clamping on setTimeout is a non-starter, because we have
> > measurements that show that it's a pretty common issue on real-world sites.
> 
> I agree. I read the tickets from when the Chromium crew tried to
> implement setTimeout(() => { ... }, 0) without clamping; it turned out
> to be a painful experience. Implement setImmediate(() => { ... }) and
> be done with it.
> 
> It's a better option anyway, as this is what Node supports; isomorphic
> code will work on both sides of the fence. No complex polyfills for
> setTimeout(() => { ... }, 0) for IE/Edge either.

Bug trackers (any of them) are not the right forum for this sort of discussion. If you want to continue it, I invite you to answer me privately or continue on the WHATWG mailing list.
David, thanks for the response.

Thanks for extending the offer to chat. I'll take you up on that. Also, I did enjoy your presentation.

I agree, a conversation about stakeholders of the web is not appropriate for this issue thread, and I'm happy to have such a conversation on the WHATWG mailing list and/or privately.

That all said, I do wish to dig up this issue of setImmediate again. This is the correct place to talk about the implementation of setImmediate, if the comments above are anything to go by.

I acknowledge that my perspective is that of one camp here, but setImmediate is not meant to satisfy everyone. Its purpose is to enable better control of deferred execution timing. As I understand it, it's for engineers who specifically need that level of control. In my contrived example with this Timmy character, there is no reason that kind of tinkerer or copy-paster needs to use setImmediate. This is not a gun that will go off in the hands of beginners, because they need not use it. As an aside, I'd say a gun to the foot is a good way to learn how not to program from time to time. The browser shouldn't be the engineer's mum (or dad). That's what opinionated docs are for ;)

As you suggest, I'll raise this on the mailing list. Anyone else who has an opinion here, please bring it to that discussion if you would be so kind as to indulge me.

Personally, I think setImmediate is a rather important piece of the browser-as-a-platform puzzle.
For the aspect of discriminating between engineers and naive copy-pasters, how about setTimeout(func, -0)? :-)
Boris, now that we have code that adequately handles timer floods, would you be more open to supporting setImmediate() with similar protections?  For example, make setImmediate() use a ThrottledEventQueue, forcing it to yield to the main thread between callbacks.  We could also implement some callback coalescing over a limited time window, like we do for setTimeout(), if desired.
Flags: needinfo?(bzbarsky)
Maybe this doesn't matter as much now that we have microtasks with Promises.
Flags: needinfo?(bzbarsky)
As a library implementor, I find `setImmediate(..)` still matters and is, in some cases, quite preferable to a microtask.

Some thoughts:

1. `Promise.resolve().then(..)` doesn't offer the cancelation capability that `setImmediate(..)` / `clearImmediate(..)` do. That's quite limiting. Of course, you can "hack" a wrapper that does it with a boolean or something (see the sketch after this list), but that's not ideal.

2. `Promise.resolve().then(..)` would ostensibly create two promises that would then be thrown away. Dunno if JS engines are now smart enough to skip creating those or otherwise optimize that problem, but it seems awfully wasteful, especially for a mechanism that's supposed to offer lightweight, performance-optimized scheduling of "future" work.

3. AFAICT, `setImmediate(..)` schedules work on the next tick of the event loop, rather than on the current tick like microtasks. This matters a lot, not least because of potential starvation if you get into a microtask loop (like a promise that keeps recursively scheduling another resolution).
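
Here's roughly the boolean wrapper I mean for point 1 (just a sketch with hypothetical helper names; note it only prevents the callback from running, the microtask itself still fires):

    // Cancellable "microtask" built on Promise.resolve().then(..).
    function queueCancellableMicrotask(callback) {
      var cancelled = false;
      Promise.resolve().then(function () {
        if (!cancelled) {
          callback();
        }
      });
      return {
        cancel: function () {
          cancelled = true;
        }
      };
    }

    var task = queueCancellableMicrotask(function () {
      console.log("ran");
    });
    task.cancel(); // "ran" is never logged, but the queued microtask still runs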
Ben, I think at this point supporting setImmediate is a lot more palatable, yeah.  I would be ok with it, probably, assuming we make it clear in the spec that it's not so immediate in a lot of cases.

Kyle, the two extra promises are not optimized out, but I expect they're still cheaper than the work setImmediate has to do.

I agree that the difference between task and microtask is pretty important here, though.  We _want_ people to use setImmediate, which we can throttle, not microtasks, which we can't.
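
To make that concrete, here's a contrived example of the difference (not tied to any particular engine):

    // A microtask that reschedules itself never yields back to the event loop:
    // rendering, input, and timers are starved for as long as it keeps going.
    function microtaskLoop() {
      Promise.resolve().then(microtaskLoop);
    }

    // The equivalent task-based loop yields between iterations, so the browser
    // can throttle it and keep the page responsive.  A setImmediate()-style API
    // would slot in here in place of setTimeout.
    function taskLoop() {
      setTimeout(taskLoop, 0);
    }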
Domenic has a proposal for `self.queueMicrotask()` which would avoid the promise allocations:

https://github.com/whatwg/html/pull/2789

He's looking for implementer interest.
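
For reference, usage under that proposal would look something like this (per the PR above; just a sketch, and not shipped anywhere at this point):

    // Proposed: schedule a microtask directly, without allocating promises.
    self.queueMicrotask(() => {
      // runs after the current task completes, before the next task
    });

    // Today's rough equivalent allocates two throwaway promise objects.
    Promise.resolve().then(() => {
      // same timing, more garbage
    });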

FWIW, Domenic is still pretty strongly against setImmediate() as well.  Thread:

https://twitter.com/domenic/status/915944504867348480
Note, there are some setImmediate() adoption numbers here:

https://github.com/w3c/setImmediate/issues/3

"At the same time, there is massive adoption of setImmediate in the real world. I just checked and we see ~30% of navigations making use of it including Facebook, YouTube, Yahoo, Ebay, Roblox, many other Google properties and many Microsoft sites."

The issue also got me thinking more about how I would implement setImmediate() with abuse mitigations.  Some notes:

1. We could have a per-TabGroup `ImmediateScheduler` that is responsible for taking an immediate object and dispatching a Runnable.
2. Under normal conditions the ImmediateScheduler just dispatches the runnable to the main thread event loop.
3. When an immediate callback runs we measure the delay between when it was scheduled and when it executed.
4. If the execution delay exceeds a certain value the ImmediateScheduler will switch into a mitigation mode.
5. In mitigation mode runnables are dispatched to a ThrottledEventQueue instead of the main thread event queue.  This will force setImmediate() callbacks to yield to the event loop between consecutive executions.
6. When switching from normal mode to mitigation mode any pending immediates are canceled and requeued on the ThrottledEventQueue.
7. Once the ThrottledEventQueue empties and executes a non-delayed immediate, the ImmediateScheduler switches back to normal mode.
8. The threshold to switch from normal mode to mitigation mode would be higher than the 4ms we use in TimeoutManager.  Maybe more like 10ms to 16ms.
9. We could also only check for delayed execution of immediates queued while there is a previous outstanding immediate.  If the main thread is just slow, delaying the first immediate firing, and a second immediate is not queued up in the meantime, then there really is no point in switching modes.

So for most uses setImmediate() would be just like "post a task".  If multiple immediates are queued on top of each other and significant delays occur, then we start forcing yields to other work in between immediate callbacks.  A rough user-space model of this mode switching is sketched below.
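
(Illustrative model only; the real thing would be C++ in Gecko using a ThrottledEventQueue and Runnables, and the threshold and helper names here are made up.)

    const DELAY_THRESHOLD_MS = 16;  // assumed value; see note 8 above
    let mitigationMode = false;

    function scheduleImmediate(callback) {
      const queuedAt = performance.now();
      const run = () => {
        const delay = performance.now() - queuedAt;
        if (!mitigationMode && delay > DELAY_THRESHOLD_MS) {
          mitigationMode = true;   // notes 3-4: switch on excessive delay
        } else if (mitigationMode && delay <= DELAY_THRESHOLD_MS) {
          mitigationMode = false;  // note 7: switch back once we catch up
        }
        callback();
      };
      if (mitigationMode) {
        setTimeout(run, 0);        // note 5: force a yield between immediates
      } else {
        postTask(run);             // note 2: plain "post a task" in the common case
      }
    }

    // Hypothetical stand-in for posting an unclamped task from script.
    function postTask(fn) {
      const channel = new MessageChannel();
      channel.port1.onmessage = () => fn();
      channel.port2.postMessage(null);
    }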
After speaking with some folks at TPAC, there is perhaps an easier way to do this.  We could implement setImmediate(f) as setTimeout(f, 0), but without any 4ms clamping.  All our setTimeout() anti-flood and background throttling mitigations would automatically apply.

The reason we would do this instead of just changing setTimeout(f, 0) would be to maintain interop on the 4ms clamping behavior there.
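
In other words, from the page's point of view the behavior would be roughly this (a sketch of the intended semantics, not an implementation):

    // Both queue an ordinary task on the event loop; the only difference is
    // that nested setTimeout(f, 0) calls get clamped to >= 4ms, while
    // setImmediate(f) would not be.
    setImmediate(() => { console.log("next task, unclamped"); });
    setTimeout(() => { console.log("next task, clamped when nested"); }, 0);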
Alias: setImmediate
Whiteboard: wontfix? → wontfix? [parity-Edge]
I'm planning to send an intent to implement in the FF59 or FF60 time frame.  It just depends on when I can finish some other higher priority work.
Assignee: nobody → bkelly
Status: NEW → ASSIGNED
Whiteboard: wontfix? [parity-Edge] → [parity-Edge]
Mass bug change to replace various 'parity' whiteboard flags with the new canonical keywords. (See bug 1443764 comment 13.)
Keywords: parity-edge
Whiteboard: [parity-Edge]
Priority: -- → P2
I still think this would be good to implement, but I don't have time to work on it.  Based on feedback from other vendors and large sites we would probably get massive instant adoption.
Assignee: bkelly → nobody
Status: ASSIGNED → NEW
Component: DOM → DOM: Core & HTML

Closing this as WONTFIX, since the only implementation (pre-Chromium Edge) is obsolete and the proposed standard appears to be gone.

Status: NEW → RESOLVED
Closed: 5 years ago
Resolution: --- → WONTFIX