Friday, 16 May 2014

Re: errors.ubuntu.com and upgrade crashes

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Bjoern Michaelsen wrote on 14/05/14 01:47:
> ...
>
> On Tue, May 13, 2014 at 10:24:20AM +0100, Matthew Paul Thomas
> wrote:
>>
>> You can. Set "Most common of these errors from" to "the date
>> range", then enter the dates, for example 2013-10-16 to
>> 2013-10-20. The result is not as you remember: over that period,
>> bug 1219245 was not in the top 50 at all, whereas it was #42 for
>> the equivalent period around the 14.04 release.
>
> I tried that, just for fun with the range 2014-04-13 2014-04-19 and
> got:
>
> libreoffice-core all 25
>
> libreoffice-core 14.04 120

Oh dear. I've reported that as a bug. <http://launchpad.net/bugs/1320174>

> and went back from 14.04 to all and got a "the list of most common
> errors could not be loaded" -- from that it is a/ obvious that the
> numbers are not absolute, but normalized in a way[1] that makes
> them incomparable

Frequency is an absolute count. (Perhaps we should call it "Count"
instead?) If it were being normalized in any way, it would often be
fractional, but it never is. It seems far more likely to me that it's
a simple algorithm bug, for example forgetting to include 14.04 and
Utopic in "all".

> b/ these errors always tend to happen to me after two or three
> changes to the selection in general, which TBH made me mistrust
> this data for all but the most basic searches, as I am always unsure
> whether I am seeing real data or old data from some stalled JSON
> request.

That's a shame. The database server is unreliable, but I don't
remember a case where it failed *and* the Web site didn't communicate
this with an error message. When it does give an error, you can
usually work around it by changing one of the search parameters and
then changing it back.

> ...
>>
>> "Supported" is a weasel word. I've never understood why Ubuntu
>> lets people have apps running during an upgrade, because that
>> has many weird effects. But Ubuntu *does* let people do that. And
>> as long as it does, Ubuntu developers are responsible for the
>> resulting errors.
>
> This is hardly a LibreOffice issue -- any interactive application
> will have such issues, esp. if it has any kind of state. Thus the
> solution would be for the updater to search for such applications
> and warn the user to close them.

Sure. And if the Software Updater developers think otherwise, you
should get in a room together until you agree on a solution. Don't
just shrug and blame each other from a distance. :-)

>>> While those upgrade issues should be a concern too, as-is it
>>> seems to me they are overblown in their importance, and we don't
>>> have a good way to see whether they happen in regular production
>>> use after upgrade.
>>
>> With respect, I don't see that you have any justification in
>> deciding that this particular issue is "overblown". A crash in
>> LibreOffice is just as bad whether it happens during an upgrade,
>> during a full moon, or during the Olympic Games.
>
> All bugs are created equal? Not quite, as on errors.ubuntu.com we
> are ranking them esp. based on "frequency" -- with the implicit
> assumption that a high frequency bug will keep its high frequency
> throughout the lifecycle of the release. 'Upgrade-only bugs' break
> this assumption.

There is no such assumption.

If an error is more common during installation or upgrade than
otherwise, then the only way to judge its overall frequency fairly
would be to average its frequency over the lifetime of an
installation, however many months or years that is. That might be
interesting, but it would be too slow to make any judgements worth
acting on.
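To make that trade-off concrete, here is a minimal Python sketch. All
the numbers (machine counts, error counts, the 540-day installation
lifetime) are invented for illustration, not error-tracker data:

```python
# Why lifetime-averaged frequency would be fair but slow: an
# "upgrade-only" error dominates a short observation window, yet fades
# when averaged over the whole lifetime of an installation.

def window_rate(errors_in_window, machines, window_days):
    """Errors per machine per day over a short observation window."""
    return errors_in_window / (machines * window_days)

def lifetime_rate(total_errors, machines, lifetime_days):
    """Errors per machine per day averaged over the whole lifetime."""
    return total_errors / (machines * lifetime_days)

machines = 100_000
upgrade_errors = 5_000   # all occur in the 7 days around release
steady_daily = 20        # an everyday crash, 20 occurrences per day

week = window_rate(upgrade_errors, machines, 7)
steady = window_rate(steady_daily * 7, machines, 7)

# Averaged over a ~540-day installation lifetime, the upgrade-only
# error fades while the steady error keeps its rate.
week_avg = lifetime_rate(upgrade_errors, machines, 540)
steady_avg = lifetime_rate(steady_daily * 540, machines, 540)

print(week > steady)          # True: upgrade-only error tops the week
print(week_avg < steady_avg)  # True: but it fades over the lifetime
```

The catch, as above, is that you only get the fair (lifetime-averaged)
answer after many months, which is too slow to act on.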

Fortunately, we have no evidence that that's any more than a
prioritization problem for the engineers responsible for Ubiquity,
Software Updater, do-release-upgrade and so on. And specifically, we
have no evidence that it's a problem for you.

Maybe LibreOffice does have upgrade-only bugs. But there is nothing in
your example bug report, the corresponding error report, or even the
commit that fixed the bug upstream in February, suggesting that it is
upgrade-only. On the contrary, it seems highly unlikely that anyone
would write "crash while installing a font" in a bug report if the
button they'd actually clicked was labelled "Start Upgrade".

> ...
>
>> Unfortunately, this calculation goes to hell on release day. All
>> of a sudden there are a gazillion new machines with the new
>> version of Ubuntu on them. And of those, some fraction will
>> report their first error. But that fraction are the only ones we
>> know exist at all. So the denominator is much too low -- making
>> the calculated error rate much too high.
>
> Thus the nice charts we are plotting on the page are mostly useless,
> and help me find neither the most common issues of my package nor
> those of the distro in toto[2].

The charts have never been intended for finding the most common
issues. They don't even mention any particular issues. The charts are
intended for answering question #1 in the rationale
<https://wiki.ubuntu.com/ErrorTracker#Rationale>: "How reliable is
Ubuntu right now? (Compared with yesterday, compared with the previous
version, or compared with what it would be if everyone had installed
every update.)"

The tables are for listing the most common issues, and they do that.
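For what it's worth, the denominator artifact quoted above can be
sketched with invented numbers (nothing below comes from the real
tracker; the population and crash counts are assumptions):

```python
# The error tracker only learns a machine exists once it reports its
# first error. So right after release, the denominator (known machines)
# is dominated by machines that have already crashed, which inflates
# the calculated error rate.

def calculated_error_rate(errors_today, known_machines):
    """Naive rate: today's errors over machines known to exist."""
    return errors_today / known_machines

# Day 1: say 1,000,000 real machines run the new release, but only the
# 5,000 that crashed have ever phoned home.
day1 = calculated_error_rate(5_000, 5_000)      # 1.0 -- absurdly high

# Day 90: most machines have reported at least once by now, so the
# denominator approaches the real population and the rate corrects.
day90 = calculated_error_rate(5_000, 900_000)   # ~0.0056

print(day1, day90)
```

This is the whole spike-then-correct shape in miniature: the numerator
is right from day one, but the denominator takes about 90 days to
catch up.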

>> This is why the calculated error rate for every new release
>> spikes on release day, and corrects itself over the next 90
>> days. It's also why the calculated error rate for 13.10 plummeted
>> at the 14.04 release: lots of 13.10 machines were upgraded to
>> 14.04, and so from the error tracker's point of view they're
>> still 13.10 machines that suddenly became error-free.
>
> So what are the charts actually telling us? To me they show more
> artifacts of their normalization than useful information about the
> stability of a release:
>
> - for the first 90 days, there is no good normalization -- that's
> already 25% of a release cycle

And a similar problem:

- For the 90 days after 31 July 2013, the chart shows *every* Ubuntu
version descending from a spike like the new-release spike. This is
because of an incomplete database migration that is blocked on a
database server upgrade. We have error reports going back to about
March 2012, but they're not in the current database. If they were, the
chart would go back to March 2012, *and* the error rate calculations
from August through September 2013 would be much more accurate.

> - for the last month, people are already starting to migrate to the
> next release, so the normalization goes off again (another 16% of
> the release cycle)

I don't see any evidence of that. The Ubuntu 13.10 reported error rate
didn't plummet until April 18th, the day *after* the 14.04 release.
And because of the July 2013 cutoff, we can't see whether this
happened for any earlier releases.

> IMHO, _if_ errors.ubuntu.com plots anything, it should plot the
> months 4 and 5 of the life of each release cycle over each other.
> Likely that chart would be much more boring (and unfortunately
> rather too late for us to take action upon it), but it is the only
> sensible chart to create from the data.

That's an interesting idea, but we don't have enough data to do that
until the database migration is done.

>> If anyone would like to fix this, it's just a simple matter of
>> programming. ;-) <http://launchpad.net/bugs/1069827>
>
> I'm not exactly sure how normalizing this in a different way would
> help me identify high-frequency bugs faster, so fixing the charts
> is not too high on my priority list.

Sure. The charts are for comparisons, the tables are for identifying
high-frequency bugs. You care about identifying high-frequency bugs.

> Things that would be much more interesting to me would be stuff
> like:
>
> - get the absolute counts for a LibreOffice version and the distro
> release for a stacktrace and the estimated size of deployment
>
> - find correlations between the counts of multiple stacktraces:
>
> - hinting at two bugs caused by the same root cause
>
> - if one trace has a good reproduction scenario and the other does
> not, this would be very helpful etc.

If Launchpad's bug statuses weren't so messed up, we could do the last
of these by highlighting those errors that were linked to a bug report
in whichever status meant "reproducible".

> - much more stuff like that.
>
> Critical for that would be the ability to download the data and see
> what works and what does not for identifying issues, by fiddling
> around in some Python script or doing ad-hoc data mangling in a
> spreadsheet. I certainly won't program a solution "into the blind"
> if I haven't found it helpful in at least a few ad-hoc cases.

Then the Daisy Pluckers is the team for you.
<https://launchpad.net/~daisy-pluckers>

> ...
>
> [2] Robert's post at
> http://bobthegnome.blogspot.de/2014/05/errorsubuntucom.html
> confirms this; it finds:
>
> - people use their machines more on weekdays

Less, actually. <http://launchpad.net/bugs/1046269>

> - people don't run ubuntu+1 at Christmas

Interesting hypothesis, but not supported by the data. Firstly, the
dip happened from about January 11th to January 28th, well after
Christmas. And secondly, a screenshot taken when the database went
back to March 2012 shows no equivalent crater for Ubuntu 13.04 in
December 2012 *or* January 2013. <http://i.imgur.com/977NGUw.png> So
whatever happened, it was not Christmas-related.

> - people started to use the beta in March

That's not supported by the data either. Apart from a problem that
affected every release on March 27th, the calculated error rate stayed
within a fairly narrow range from January 29th to April 16th.

> - people migrated from 13.10 to 14.04 quickly
>
> - people migrated from 12.04 to 14.04 slowly

"Quickly" and "slowly" are relative measures. You can say that Ubuntu
13.10 users appear to have migrated to 14.04 more quickly than 12.04
users did. But both of those are extremely quick compared to Windows,
and extremely slow compared to iOS, for example.

> - people don't migrate from 12.10 much

Alternatively, most Ubuntu 12.10 users did upgrade, but they did so
long ago. For the ones who didn't, the availability of 14.04 makes
little difference.

> All of which are observations about deployment size and migration;
> none of them is a measure of stability/crash frequency, or helpful
> in identifying the most painful bugs -- even relatively.
>
> ...

And the better this chart becomes at its original purpose, the less
you will be able to see any of those artifacts anyway.

--
mpt
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1
Comment: Using GnuPG with Thunderbird - http://www.enigmail.net/

iEYEARECAAYFAlN2Hu4ACgkQ6PUxNfU6ecqstwCgxXezKbQK1XJV4b/epMmn4ID4
xDAAmQERMeUtPg7l4D0984ARZyiiNihR
=5WBn
-----END PGP SIGNATURE-----

--
ubuntu-devel mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel