Tuesday, 21 March 2017

SRU quality and preventing regressions

I'm fairly new to ~ubuntu-sru, and I noticed a couple of things that I
think we could do better in terms of preventing SRU regressions:

1) Actual consideration of how regressions might manifest in "Regression
Potential" paperwork.

2) A description of what was actually tested when marking
"verification-done".

Both of these are already requirements of the SRU process, but many
uploaders appear to be unaware of this. Since I started on ~ubuntu-sru,
I have been following the process and either fixing up or deferring SRUs
that do not follow the process. Please help to save everyone time by
making sure this is done properly the first time.

With my DMB hat on, we considered and accepted our first application to
~ubuntu-sru-developers last week. Since I also wear an SRU hat[1], I'd
like to see ~ubuntu-sru-developers carrying the torch for SRU quality,
but that's difficult when existing uploaders could themselves be doing
better.


1) "Regression Potential"

"Regression Potential" is supposed to describe:

...how regressions are most likely to manifest, or may manifest
even if it is unlikely, as a result of this change. It is
assumed that any SRU candidate patch is well-tested before
upload and has a low overall risk of regression, but it's
important to make the effort to think about what could happen in
the event of a regression.

Note that writing just "Low" or "None", or only an explanation of why
the risk is low, is insufficient and doesn't meet this requirement.

If I don't have enough information to be able to fill this in myself
quickly, or if I have a particularly big queue when I'm reviewing, I will
continue to bounce this back to the uploader and delay the SRU. I prefer
this over accepting something that will get insufficient verification
and risk regressing the stable release.

2) "verification-done"

When marking "verification-done", please describe which packages were
tested and at which versions. This is explicitly requested in the
acceptance message, but I see many people not doing it.

We have had at least one very severe regression because the version
tested was not the version released. To prevent this happening again, I
will continue to bounce back any "verification-done" that does not
explicitly state what package versions were tested.

Just switching the tags is similarly unacceptable. I will continue to
decline to release and just switch the tag back when I see this.

Automated testing is still fine - just arrange for the automated
testing to print out the package versions used, copy that detail into
the bug, and explain that it was tested automatically.
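One way to do this (a minimal sketch, not an official requirement; the package name `hello` is purely illustrative) is to have the test script query dpkg for the exact version under test and print it, so the line can be pasted straight into the verification comment:

```shell
#!/bin/sh
# Print the exact version of the package under test so the detail can
# be copied into the SRU verification comment. "hello" is only an
# illustrative package name.
PKG=hello
if command -v dpkg-query >/dev/null 2>&1; then
    VERSION=$(dpkg-query -W -f='${Version}' "$PKG" 2>/dev/null || echo "not-installed")
else
    VERSION="unknown (dpkg-query not available)"
fi
echo "Automated test run against $PKG $VERSION"
```

The `command -v` guard is just there to keep the sketch runnable outside Debian-based systems; in a real test it would be the bare `dpkg-query` call.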

If you are a sponsor, please hold contributors to these standards too.
Otherwise they'll only get frustrated later when their SRUs get delayed.



[1] currently I believe there are four people who wear both hats, so
this isn't unique to me