Closed Bug 1344174 (Opened 7 years ago, Closed 7 years ago)

[e10s-multi] Add measure for memory distribution among child processes

Categories: Toolkit :: Telemetry, enhancement, P3
Status: RESOLVED FIXED
Target Milestone: mozilla55
People: Reporter: gkrizsanits, Assigned: gkrizsanits
References: Depends on 1 open bug
Whiteboard: [e10s-multi:+][MemShrink:P2]

Attachments (2 files, 2 obsolete files):
* 10.27 KB, patch (chutten: review+, gchang: approval-mozilla-aurora+)
* 1.58 KB, patch (benjamin: review+)
We should monitor how balanced our current process selection algorithm is. It should be possible to get USS for each child process instead of their sum. Since we create and destroy processes during a session, it is not obvious to me how to do this right.
Assignee
Comment 1 • 7 years ago

Chris, you mentioned that we used to be able to do this somehow? I also wonder whether we really need anything fancier than a variance...
Flags: needinfo?(chutten)
Assignee
Updated • 7 years ago
Whiteboard: [e10s-multi:?]
Comment 2 • 7 years ago

I don't remember mentioning that we did do this, but I do remember having half a plan of how to measure it.

Basically it was to, at intervals (probably the same intervals at which we currently measure memory), measure the differences in memory allocations amongst the processes.

So, at time T:
  child A has 200MB
  child B has 300MB
  child C has 500MB
We accumulate 100 and 300 to a single histogram. This way, if we have one weird outlier process that's always bad, we can see it in the shape of the distribution. And if we shuffle things around, we should see some change in that shape (which alerts.tmo can pick up).

Of course there are plenty of options:
* Record the memory allocations themselves (200, 300, 500) (we lose information about how imbalanced they were at a particular time)
* Record only the current max split (300) (we lose information about smaller differences, but gain signal about the worst offenders)
* Normalize by number of processes (133) (gives a single number to think about instead of a histogram, but we miss out on severity)
* ...

Thoughts?
Flags: needinfo?(chutten)
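For concreteness, the candidate metrics above can be sketched as small helpers. This is only an illustration of the arithmetic in comment 2; the function names are invented and none of this is Telemetry code:

```javascript
// Illustration of the candidate imbalance metrics from comment 2.
// All names are invented for this sketch; none of this is Telemetry API.
const uss = [200, 300, 500]; // MB per child process at time T

// Differences from the smallest allocation (the values accumulated above).
function diffsFromMin(values) {
  const min = Math.min(...values);
  const diffs = values.map(v => v - min);
  diffs.splice(diffs.indexOf(0), 1); // drop the reference process itself
  return diffs;
}

// Option: record only the current max split (worst offender vs. smallest).
function maxSplit(values) {
  return Math.max(...values) - Math.min(...values);
}

// Option: normalize the accumulated differences by the number of processes.
function normalizedSplit(values) {
  const total = diffsFromMin(values).reduce((a, b) => a + b, 0);
  return Math.floor(total / values.length);
}

console.log(diffsFromMin(uss));    // [100, 300]
console.log(maxSplit(uss));        // 300
console.log(normalizedSplit(uss)); // 133
```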
Assignee
Comment 3 • 7 years ago

(In reply to Chris H-C :chutten from comment #2)
> I don't remember mentioning that we did do this, but I do remember having
> half a plan of how to measure it.
>
> Basically it was to, at intervals (probably the same intervals we currently
> measure memory), measure the differences in memory allocations amongst the
> processes.
>
> So, at time T:
> child A has 200MB
> child B has 300MB
> child C has 500MB
> We accumulate 100 and 300 to a single histogram.

I'm not sure I get this: what is the reference? The first child process? Is it a signed difference?

How about using the median as the reference and storing the difference with sign? So in your example that would be: -100, 0, 200. Or we can use the mean instead of the median (probably even better and simpler). Otherwise this sounds great to me.
Comment 4 • 7 years ago

The reference was the one with the smallest allocation, and it would be unsigned. Telemetry histograms do not support negative numbers.

Assuming we can come up with a scheme for negative numbers (just add 1GB to everything, or measure the absolute value, or whatever), median/mean are interesting. It assumes that the three processes using 1GB isn't inflated but is in fact the correct amount to allocate for the work being done; it's just the allocation of that 1GB across the processes that's incorrect. (Whereas mine assumed that "200MB" is the correct amount to allocate in a process; it's everyone else that's allocating too much.)

So, maybe accumulate the absolute value of the difference from the mean allocation across the process type? (So for the example, accumulate 133, 33, 167.)
Assignee
Comment 5 • 7 years ago

(In reply to Chris H-C :chutten from comment #4)
> The reference was the one with the smallest allocation, and it would be
> unsigned. Telemetry histograms do not support negative numbers.

Oh, I was not aware of that... Sorry, I'm quite a telemetry noob.

> Assuming we can come up with a scheme for negative numbers (just add 1GB to
> everything or measure the absolute value or whatever), median/mean are
> interesting. It assumes that the three processes using 1GB isn't inflated
> but is in fact the correct amount to allocate for the work being done, it's
> just the allocation of that 1GB across the processes that's incorrect.

I think that is mostly what we want to measure here. The memory total probes should capture whether we use too much memory in total. Maybe in the future we can use this data to verify that a new process selection algorithm works better than the previous one.

> So, maybe accumulate the absolute value of the difference from mean
> allocation across the process type? (so for the example accumulate 133, 33,
> 167)

There is a tiny detail why I would have preferred the signed value instead of the absolute value. Let's say we have 5 content processes and the following two cases:

allocated      | 1000| 100| 100| 100| 100
diff from mean |  720| 180| 180| 180| 180

allocated      | 1000| 1000| 1000| 1000| 100
diff from mean |  180|  180|  180|  180| 720

The first case is typically the scenario we want to avoid as much as possible. The second scenario is something we care less about, because it can just mean that the 5th process is still fresh (either a pre-allocated process we spin up in the background but don't use yet, or someone just closed all the tabs from one content process and opened a new tab).

Anyway, I think that without the signed value, if we focus on the mean of this histogram and somewhat ignore whether it has a long tail, this can work.

We also probably don't care much about reports from users with fewer tabs open than the max number of processes, since there is not a lot to balance if a user has, say, two tabs: one with a heavy page and one with a lightweight page. Filtering out reports under a certain number of open tabs would be best, I think. Let's ask Brad what he originally had in mind for this probe.

Brad, do we care about the memory balance among content processes if the number of open tabs is lower than the max number of content processes, or should we filter out reports with fewer than X open tabs?
Flags: needinfo?(blassey.bugs)
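The concern about losing the sign can be made concrete with a quick sketch (the helper below is illustrative only, not part of any patch): with absolute values, the two scenarios above accumulate exactly the same numbers, so only the sign tells them apart.

```javascript
// Signed difference from the mean for each content process.
// Illustrative helper only; not part of the patch.
function signedDiffsFromMean(values) {
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  return values.map(v => v - mean);
}

const oneHeavyOutlier = [1000, 100, 100, 100, 100];    // the case we want to avoid
const oneFreshProcess = [1000, 1000, 1000, 1000, 100]; // mostly benign

console.log(signedDiffsFromMean(oneHeavyOutlier)); // [720, -180, -180, -180, -180]
console.log(signedDiffsFromMean(oneFreshProcess)); // [180, 180, 180, 180, -720]
// Taking Math.abs() maps both cases to the same multiset {720, 180, 180, 180, 180},
// which is exactly the information the absolute value throws away.
```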
Updated • 7 years ago
Blocks: e10s-multi-beta
Whiteboard: [e10s-multi:?] → [e10s-multi:+]
Updated • 7 years ago
Flags: needinfo?(blassey.bugs)
Assignee
Comment 6 • 7 years ago
Brad would prefer to send the number of open tabs as well with the pings so we can do filtering manually if we want.
Comment 7 • 7 years ago

(In reply to Gabor Krizsanits [:krizsa :gabor] from comment #6)
> Brad would prefer to send the number of open tabs as well with the pings so
> we can do filtering manually if we want.

Flyby: the maximum number of open tabs (in a subsession) is already being sent using Scalars [1] in the main ping [2]. Aggregates are already available at [3].

[1] http://searchfox.org/mozilla-central/source/toolkit/components/telemetry/Scalars.yaml#20
[2] https://gecko.readthedocs.io/en/latest/toolkit/components/telemetry/telemetry/data/main-ping.html
[3] https://mzl.la/2mF7uvX
Comment 8 • 7 years ago
Unfortunately it doesn't pin the number of tabs to the memory measurement, which I think would be helpful in this case.
Assignee
Comment 9 • 7 years ago

(In reply to Chris H-C :chutten|pto until Mar 20 from comment #8)
> Unfortunately it doesn't pin the number of tabs to the memory measurement,
> which I think would be helpful in this case.

Sorry for dropping the ball here, I've got some other bugs to fix first. Anyway, is there a way to pin those two measurements together? So that every time we check the memory balance of the child processes, we also check the number of tabs and tag our measurement with it.
Assignee
Updated • 7 years ago
Blocks: TelemetryAdditions
Comment 10 • 7 years ago

In short: nope, not really. There is no way to, after the fact, figure out which accumulations to two separate histograms/scalars happened around the same time.

In previous cases we've allowed JSON objects for this sort of thing (like the JS GC telemetry), but they do not get the support "for free" that "normal" histograms and scalars get (ease of inclusion in derived datasets, aggregation by default, visibility on telemetry.mozilla.org, ...).
Updated • 7 years ago
Priority: -- → P3
Comment 11 • 7 years ago

I talked through this with Gabor on IRC and there are a few options here.

(1) Use one memory histogram and filter out "light tab users" or "extreme tab users" using the tab & window scalars. Those don't allow correlating exactly at each memory measurement point, but they let us see who uses "a lot of tabs" vs. who doesn't.
https://mzl.la/2mY7vYs
https://mzl.la/2mY55cl
https://mzl.la/2mY37J9
https://mzl.la/2mYdUmL

(2) Use "a few" memory histograms (one keyed histogram?), one each for say "0 - 10 tabs", "11 - 500 tabs", "more tabs". That allows filtering out low tab usage and high outliers for each individual memory data point.

(3) If you need to know exactly how many tabs were open for each memory data point, you could add temporary custom data for this. However, that's extra work and, as mentioned, you lose all the existing tooling benefits.
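Option (2) amounts to picking a histogram key from the tab count at accumulation time. A minimal sketch, with the bucket labels taken from this comment; the function name is invented:

```javascript
// Pick a keyed-histogram key from the current tab count (option 2).
// Bucket labels follow the suggestion above; the function name is invented.
function tabCountKey(tabCount) {
  if (tabCount <= 10) return "0 - 10 tabs";
  if (tabCount <= 500) return "11 - 500 tabs";
  return "more tabs";
}

console.log(tabCountKey(4));   // "0 - 10 tabs"
console.log(tabCountKey(42));  // "11 - 500 tabs"
console.log(tabCountKey(900)); // "more tabs"
```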
Updated • 7 years ago
Whiteboard: [e10s-multi:+] → [e10s-multi:+][MemShrink]
Assignee
Comment 12 • 7 years ago

This is a partially ready patch. I went with Georg's keyed histogram idea; I think it should be enough for filtering. And I'm using Absolute Error (abs(value - mean)).

The tricky part is testing, though. I cannot find any tests for MEMORY_TOTAL either. I could just notify some observer every time we finish collecting the data from the content processes and let the test wait for that, maybe... The problem is that even if I wait for quite a long time (see the current dummy test in the patch), I never get any pings. Chris, how did you test MEMORY_TOTAL? I tried about:telemetry as well, but nothing shows up under keyed histograms.

Assignee: nobody → gkrizsanits
Attachment #8853421 - Flags: feedback?(chutten)
Comment 13 • 7 years ago

Comment on attachment 8853421 [details] [diff] [review]
WIP

Review of attachment 8853421 [details] [diff] [review]:
-----------------------------------------------------------------

As for testing, emitting a memory event to observers would do, or, if easier, simply starting a cycle collection ought to kick things into gear.

I'm a little confused about why you aren't seeing your histogram showing up in about:telemetry. It should be in the parent process, keyed histograms (or look at the raw JSON and search directly), so long as there are any accumulations for MEMORY_TOTAL.

::: dom/ipc/tests/browser_memory_ae_telemetry.js
@@ +13,5 @@
> +  </html>`;
> +}
> +
> +/**
> + * Tests the FX_TAB_CLOSE_TIME_ANIM_MS probe by closing a tab with the tab

Nice comment :)

::: toolkit/components/telemetry/Histograms.json
@@ +1146,5 @@
> +    "expires_in_version": "never",
> +    "kind": "exponential",
> +    "keyed": true,
> +    "low": 256,
> +    "high": 16777216,

I like the symmetry with other memory measures, but I wonder if we could get better resolution with a lower "high" and an absent "low". I suppose we'll need some real-life accumulations first to see what "typical" values might be.

@@ +1148,5 @@
> +    "keyed": true,
> +    "low": 256,
> +    "high": 16777216,
> +    "n_buckets": 100,
> +    "description": "Absolute difference of each content process' USS and the mean of USS's"

What unit?

Attachment #8853421 - Flags: feedback?(chutten) → feedback+
Assignee
Comment 14 • 7 years ago

(In reply to Chris H-C :chutten from comment #13)
> As for testing, emitting a memory event to observers would do or, if easier,
> simply starting a cycle collection ought to kick things into gear.

Cycle collection did not work out, so I chose a more direct approach.

> I'm a little confused about why you aren't seeing your histogram showing up
> in about:telemetry. It should be in the parent process, keyed histograms (or
> look at the raw JSON and search directly) so long as there are any
> accumulations for MEMORY_TOTAL.

There was a pref I had to set to enable it.

> > + * Tests the FX_TAB_CLOSE_TIME_ANIM_MS probe by closing a tab with the tab
>
> Nice comment :)

Hah :) And this wasn't even the dumbest mistake in that WIP patch... I hope this version is a bit more mature.

> I like the symmetry with other memory measures, but I wonder if we could get
> better resolution with a lower "high" and an absent "low".
>
> I suppose we'll need some real-life accumulations first to see what
> "typical" values might be.

I changed this completely. The numbers in practice were not intuitive at all. Instead I just normalized by the mean and multiplied by 100 to get percentages. So 20 means that a content process is using either 20% more or 20% less than the mean of all the processes.

I chose 200 as the maximum because that seemed extreme enough with 4 content processes. As the maximum number of processes gets higher over time we might need a bigger maximum, but realistically, if we ever hit 200 it means that something is very wrong. What happens if we try to report a value that is higher than the maximum? I hope it will be reported as the max; if that is not the case, we might want to change this.

Attachment #8853421 - Attachment is obsolete: true
Attachment #8854378 - Flags: review?(chutten)
Comment 15 • 7 years ago

Comment on attachment 8854378 [details] [diff] [review]
Memory distribution telemetry. v1

Review of attachment 8854378 [details] [diff] [review]:
-----------------------------------------------------------------

I think we can probably skip the ipcCount==1 case when reporting this figure. It'll add nothing but a bunch of 0s. But I'm open to argument in favour of keeping it.

::: dom/ipc/tests/browser_memory_distribution_telemetry.js
@@ +1,5 @@
> +"use strict";
> +
> +var session = Cu.import("resource://gre/modules/TelemetrySession.jsm", {});
> +
> +function makeDataURI() {

Not sure why this is a function instead of just a const DUMMY_PAGE_DATA_URI = `...

@@ +48,5 @@
> +  for (var key in s) {
> +    is(key, "0 - 10 tabs");
> +    let fewTabsSnapshot = s[key];
> +    ok(fewTabsSnapshot.sum > 0, "Zero difference between all the content processes is unlikely, what happened?");
> +    ok(fewTabsSnapshot.sum < 80, "20 percentage difference on avarage is unlikely, what happened?");

s/avarage/average/

@@ +51,5 @@
> +    ok(fewTabsSnapshot.sum > 0, "Zero difference between all the content processes is unlikely, what happened?");
> +    ok(fewTabsSnapshot.sum < 80, "20 percentage difference on avarage is unlikely, what happened?");
> +    let c = fewTabsSnapshot.counts;
> +    for (let i = 10; i < c.length; i++) {
> +      is(c[i], 0, "None of the content processes should have more than 10*2 % difference");

This is a confusing note. You can call into fewTabsSnapshot.ranges[i] for the actual bucket lower bound, if that's what you mean.

::: toolkit/components/telemetry/TelemetrySession.jsm
@@ +709,5 @@
>   // A Set of outstanding USS report ids
>   _childrenToHearFrom: null,
>   // monotonically-increasing id for USS reports
>   _nextTotalMemoryId: 1,
> +  _USSForChildProcesses: null,

...isn't this _USSFromChildProcesses, not For them?

@@ +1696,5 @@
> +      // Mean of the USS of all the content processes.
> +      let mean = this._USSForChildProcesses.reduce((a, b) => a + b, 0) / length;
> +      // Absolute error of USS for each content process, normalized by the mean (*100 to get it in percentage).
> +      // 20% means for a content process that it is using 20% more or 20% less than the mean.
> +      let diffs = this._USSForChildProcesses.map(value => Math.floor(Math.abs(value - mean) * 100 / mean));

So for anyone with ipcCount 1, they'll be swarming us with 0s. Fair enough, it just means that telemetry.mozilla.org is going to be useless. I think we can just skip reporting if length < 1.

@@ +1698,5 @@
> +      // Absolute error of USS for each content process, normalized by the mean (*100 to get it in percentage).
> +      // 20% means for a content process that it is using 20% more or 20% less than the mean.
> +      let diffs = this._USSForChildProcesses.map(value => Math.floor(Math.abs(value - mean) * 100 / mean));
> +      let tabsCount = this.getOpenTabsCount();
> +      let bucket = tabsCount < 11 ? "0 - 10 tabs"

pedantic nit: maybe call it key

Attachment #8854378 - Flags: review?(chutten)
Assignee
Comment 16 • 7 years ago

(In reply to Chris H-C :chutten from comment #15)
> I think we can probably skip the ipcCount==1 case when reporting this
> figure. It'll add nothing but a bunch of 0s. But I'm open to argument in
> favour of keeping it.

Nice catch, thanks.

> > +      is(c[i], 0, "None of the content processes should have more than 10*2 % difference");
>
> This is a confusing note. You can call into fewTabsSnapshot.ranges[i] for
> the actual bucket lower bound, if that's what you mean.

What I'm trying to do here is check that all the buckets above index 10 are empty, so that each content process was in the 0-20% difference range in the test. I probably should have done this part more nicely...

> I think we can just skip reporting if length < 1.

Absolutely. I might also want to filter out the preallocated process, but since that patch has not even landed, I'll do that in a followup.
Updated • 7 years ago
Whiteboard: [e10s-multi:+][MemShrink] → [e10s-multi:+][MemShrink:P2]
Assignee
Comment 17 • 7 years ago

Apart from fixing all the review comments, I've just realized that this test should only run with e10s enabled, and should just pass by default if the max number of allowed content processes is less than 2. I fixed that too.

Attachment #8854378 - Attachment is obsolete: true
Attachment #8855312 - Flags: review?(chutten)
Comment 18 • 7 years ago

Comment on attachment 8855312 [details] [diff] [review]
Memory distribution telemetry. v2

Review of attachment 8855312 [details] [diff] [review]:
-----------------------------------------------------------------

Seems legit!

Attachment #8855312 - Flags: review?(chutten) → review+
Comment 19 • 7 years ago

Pushed by gkrizsanits@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/147cb5cd72fd
Add measure for memory distribution among child processes. r=chutten
Assignee
Comment 20 • 7 years ago

Comment on attachment 8855312 [details] [diff] [review]
Memory distribution telemetry. v2

Approval Request Comment
[User impact if declined]: We need this for the e10s-multi beta experiment.
[Is this code covered by automated tests?]: Yes.
[Has the fix been verified in Nightly?]: It's on inbound right now.
[Needs manual test from QE? If yes, steps to reproduce]: No.
[Is the change risky?]: No.
[Why is the change risky/not risky?]: It's an additional telemetry probe. As harmless as it can get.
[String changes made/needed]: None.

Attachment #8855312 - Flags: approval-mozilla-aurora?
Comment 21 • 7 years ago

bugherder
https://hg.mozilla.org/mozilla-central/rev/147cb5cd72fd

Status: NEW → RESOLVED
Closed: 7 years ago
status-firefox55: --- → fixed
Flags: in-testsuite+
Resolution: --- → FIXED
Target Milestone: --- → mozilla55
Updated • 7 years ago

status-firefox54: --- → affected
Comment 22 • 7 years ago

Comment on attachment 8855312 [details] [diff] [review]
Memory distribution telemetry. v2

It's for the e10s-multi beta experiment and includes a test. Aurora54+.

Attachment #8855312 - Flags: approval-mozilla-aurora? → approval-mozilla-aurora+
Comment 23 • 7 years ago

bugherder uplift
https://hg.mozilla.org/releases/mozilla-aurora/rev/a5df1ac45705
Assignee
Comment 24 • 7 years ago

Comment on attachment 8855312 [details] [diff] [review]
Memory distribution telemetry. v2

Benjamin, I thought a review for a new telemetry probe from someone like Chris or Georg was enough, and did not ask for a data review. Sorry about that, it's totally my fault. Could you take a look at this probe and let me know if anything is wrong with it? I can adjust it in a followup patch if needed.

The probe reports:

abs(USS_of_child_process - mean_of_USS_of_all_child_processes) / mean_of_USS_of_all_child_processes * 100

So in the case of (the unit is not important):

USS of child 1 = 100
USS of child 2 = 200
USS of child 3 = 300

it will report 50, 0, 50, which means that 2 processes are using 50% more or less than the mean, and 1 uses exactly as much as the mean.

The histogram is also keyed, based on the number of open tabs at the time of the report: "0 - 10 tabs", "11 - 500 tabs", "more-tabs". So the example above will get the "0 - 10 tabs" key. This will help us filter out the extreme cases, while we still get all the benefits of histograms.

Attachment #8855312 - Flags: feedback?(benjamin)
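The reported formula can be sketched directly. This is a standalone illustration of the arithmetic described in comment 24, not the actual TelemetrySession.jsm code:

```javascript
// Per-process value reported by the probe: abs(USS - mean) / mean * 100,
// floored to a whole percentage. Standalone sketch of the math above.
function ussDistributionPercentages(ussPerChild) {
  const mean = ussPerChild.reduce((a, b) => a + b, 0) / ussPerChild.length;
  return ussPerChild.map(uss => Math.floor(Math.abs(uss - mean) * 100 / mean));
}

console.log(ussDistributionPercentages([100, 200, 300])); // [50, 0, 50]
// The four-process example from comment 25, in MB:
console.log(ussDistributionPercentages([1000, 500, 1000, 1500])); // [0, 50, 0, 50]
```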
Comment 25 • 7 years ago

Comment on attachment 8855312 [details] [diff] [review]
Memory distribution telemetry. v2

I am only reading Histograms.json and not the bug, on purpose, so I can see whether the docs make sense. Questions/comments:

* It would be good to write down in the histogram description *when* this value is recorded. Once per session? Periodically? Or at some specific point in time triggered by some other action?
* My understanding of the values is that if there are four content processes, we record four separate values in the histogram; is that correct?
* If there is only one content process (e.g. the user has only ever opened a single tab), do we record a value or not?

So as an example, if I had four content processes with 1G, 500MB, 1G, 1.5G, it would record 0, 50, 0, 50?

Flags: needinfo?(gkrizsanits)
Comment 26 • 7 years ago
To be clear, collecting this data is fine, I'm just trying to get the data docs to best reflect what it's actually doing.
Assignee
Comment 27 • 7 years ago

(In reply to Benjamin Smedberg [:bsmedberg] from comment #25)
> * It would be good to write down in the histogram description *when* this
> value is recorded. Once per session? Periodically? Or at some specific point
> in time triggered by some other action?

Same as for all the other memory measurements: when gatherMemory is called. Sadly, none of those measures mention in Histograms.json when they record values, so I'm not sure that's descriptive enough if I refer to them like that. [*] What would be your preference in this case?

> * My understanding of the values is that if there are four content
> processes, we record four separate values in the histogram, is that correct?

Yes, that is correct. I will add a more detailed example.

> * If there is only one content process (e.g. the user has only ever opened a
> single tab), do we record a value or no?

No, at least 2 content processes are needed for this measure to make sense, so we only record it when 2 or more content processes are running (and available through the ppmm).

> So as an example, if I had four content processes with 1G, 500MB, 1G, 1.5G
> it would record 0, 50, 0, 50?

That is correct. I'll add a quick patch to address the docs in the JSON; I just wanted to double check [*] first.

(In reply to Benjamin Smedberg [:bsmedberg] from comment #26)
> To be clear, collecting this data is fine, I'm just trying to get the data
> docs to best reflect what it's actually doing.

Thanks, your help is much appreciated.

Flags: needinfo?(gkrizsanits) → needinfo?(benjamin)
Comment 28 • 7 years ago

> Same as for all the other memory measurement, when gatherMemory is called.
> Sadly, none of those measures are mentioning in histograms.json

Yeah. For now, just saying "recorded during gatherMemory" is fine. If you want to include a DXR or other doc link, that would be great but not necessary.

Flags: needinfo?(benjamin)
Assignee
Comment 29 • 7 years ago

The comment got quite long, so I did not add the DXR link in the end. (I wonder if we should break up these long descriptions at some point...)

Attachment #8857536 - Flags: review?(benjamin)
Comment 30 • 7 years ago

Comment on attachment 8857536 [details] [diff] [review]
follow-up: Additional comments to the probe's description.

data-r=me with this. Expect me to ping you periodically to make sure you are still monitoring this, because it's set to expires-never instead of expiring.

Attachment #8857536 - Flags: review?(benjamin) → review+
Updated • 7 years ago

Attachment #8855312 - Flags: feedback?(benjamin)
Comment 31 • 7 years ago

Pushed by gkrizsanits@mozilla.com:
https://hg.mozilla.org/integration/mozilla-inbound/rev/d02bc56b7025
Follow up to extend the memory distribution probe's comment for data review. r=bsmedberg
Comment 32 • 7 years ago

bugherder
https://hg.mozilla.org/mozilla-central/rev/d02bc56b7025