Thursday 25 May 2023

SRU shift report: 2023-05-24

It's been a while since I did one of these. Writing them up takes time
that I could be spending on reviewing more SRUs, but I also got feedback
that my last post was helpful. So I'll try to continue doing these now
and again.

As with last time, unfortunately most of the SRUs I looked at were
unable to make progress for some reason or other.

General themes:

* Missing status/explanation for releases subsequent to the one being
  SRU'd. To avoid users facing a regression on upgrade, we'd like to
  ensure that subsequent versions are already fixed, or, if it is
  necessary not to (e.g. development release closed, deliberately not
  doing an interim release), then an explanation would really help.
  Also see our policy on newer releases[1].

* Ambiguity on testing: what exact test steps were performed, and with
what versions? Was testing done with the actual builds that users
would use after this SRU lands? Does the testing cover general use of
the package, as well as the bug being fixed?

Today I started with the pending-sru report. I only considered entries
in the report that were fully green (ie. marked
"verification-done-<series>"). As long as the SRU queue is backlogged, I
consider it better to focus on these rather than try to progress the
ones that others can drive.

## rocr-runtime

This is finally verified in Kinetic - I remember looking at it again and
again over a number of shifts. But it's not aged yet for Kinetic even
though it is for Jammy, and unless there's some reason to hurry, I would
release Kinetic first to avoid users facing a regression when they
upgrade.

Outcome: SRU processing is pending ageing.

Feedback: we discussed what we should do about interim releases and SRUs
on this list, found a compromise, and expectations are documented[1]. We
could speed up review cycles if all uploaders were familiar with it.

## apport

Outcome: deferred for a second pass over the queue (but I didn't get to
it).

## autopkgtest

The bug task is New for Mantic, and there is no indication in the bug
comments as to whether it is fixed in Mantic or not. Asked in a comment.

Outcome: SRU processing is blocked.

Feedback: please make sure your upload is marked Fix Released in the
development release before uploading, or, if that's not possible, that
there's an explanation in the bug.

## oss4

This is blocked on Andreas' request (and followup) for the status for
newer releases.

I'm not sure why Andreas' question has not been addressed. Are the
appropriate people subscribed to the bug? It looks like the sponsor is
not.

To help with cases like this, I am waiting on an MP that will
automatically subscribe uploaders to SRU bugs that they sponsored[2]. In
the meantime, it would help if sponsors did this themselves.

Feedback: if you're driving an SRU, or sponsored it for someone else,
please track it for followups.

Outcome: SRU processing is blocked.

## alsa-ucm-conf

There has been discussion/confusion over what kernel this should be
tested against. I don't want to release this without being certain that
the version being released has been verified against the appropriate
kernels from the archive, but the verification reports do not explicitly
state what versions were used. I remember a prior case of a severe
regression caused by this kind of ambiguity (the version actually tested
wasn't appropriate and resulted in a false negative), so I think it's
appropriate to push hard for an unambiguous verification comment,
especially when, as in this case with the kernels, there are multiple
versions floating about, including verification reports using locally
built kernels.

Outcome: SRU processing is blocked.

Feedback: please ensure that when you verify SRUs you specify exactly
what you did and state the relevant version numbers. Saying something
like "I followed the Test Plan and the test passed with version $v" is
fine since that unambiguously states what you did (in this case stating
kernel versions would also be necessary). Just flipping the bug tag
leaves ambiguity. We also expect SRUs to be tested entirely against the
archive, not against local or other builds.
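As a sketch of what an unambiguous verification might capture, the
relevant version facts can be pulled straight from the test machine (the
package name here is just this bug's example; the commands assume a
Debian/Ubuntu system):

```shell
# Capture the facts an unambiguous verification comment needs.
# The package name is just an example; substitute the one under test.
pkg=alsa-ucm-conf

# The version actually installed -- confirms the -proposed build was
# tested, not a local or PPA build.
dpkg-query -W -f='${Package} ${Version}\n' "$pkg"

# Where that version came from; the -proposed pocket should appear as
# the install source.
apt-cache policy "$pkg"

# The kernel the test ran against -- essential in a case like this one,
# where multiple kernel versions were floating about.
uname -r
```

Pasting that output into the verification comment removes the ambiguity
entirely.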

## iptables

I think there are two things that we should be doing in SRU
verification:

1) Verifying that the bug we're trying to fix really is fixed, to avoid
having to push another update with further regression risk and
inconvenience to users.

2) Verifying that we haven't regressed unrelated functionality of the
package. A smoke test is about the most that is usually practical if the
package doesn't already have automated tests.

Usually the Test Plan implicitly performs the latter as part of the
former, or there's an automated test that covers the latter. But
sometimes this isn't the case and that's when I look for the latter to
be done explicitly.

In this case, I looked to see how the Test Plan might cover the latter,
and I didn't find anything. There are autopkgtests but they don't seem
broad enough to test basic functionality of iptables itself, and the
build log doesn't seem to mention a build-time test suite.

The result of releasing a regression in iptables would be severe, as
users' networking might be impacted such that they cannot
straightforwardly receive a regression fix. I'm not comfortable
releasing this without some confidence that somebody or some thing has
actually checked that iptables still works in proposed! So I asked for
that in a comment.

Outcome: SRU processing is blocked.

Feedback: documentation would have helped.

Followup: Andreas mentioned that he thought the latter case was
adequately covered by the autopkgtests. I didn't see any documentation
on that at the time, though, and I had spent a bit of time looking
around including a look at the source tree.

## swtpm

This was correctly marked Fix Released in Mantic, but I couldn't find
any explanation of how this was the case. I was concerned the status
change was an accident, so I dug into this and found that the relevant
code had substantially changed, the issue didn't appear present in the
new code and so the status was likely correct.

Outcome: SRU released.

Feedback: there was nothing wrong procedurally here. There's no
requirement for an explanation. But it's common either to see the bug
automatically closed with a changelog entry, or for someone to leave a
note stating which upstream version the bug was fixed in, and I find
these useful!

## dovecot

This FTBFSed on amd64 according to the pending-sru report, but was
marked verification-done. It looks like it was verified on arm64 only.
Asked in a comment for the amd64 side to be looked at.

Outcome: SRU processing is blocked.

Feedback: there's nothing procedurally wrong here. But perhaps the amd64
build failure hadn't been noticed? The pending-sru report[3] collates
the status of everything together, and will flag build failures,
autopkgtest failures, and any outstanding SRU verifications. Please keep
an eye on that - usually the (backlogged) SRU team won't consider
releasing an SRU until it is green and has no warnings in this report.

## mokutil

This one took me some time to understand. FTBFSes were noted in the
pending-sru report, but those architectures were not built previously,
so there is no regression. The verification seemed to start in the
middle of the Test Plan, with the previous steps not documented as
having been followed, but eventually I concluded that what was done is
equivalent, so I released it.

Outcome: SRU released.

Feedback: precise documented adherence to the Test Plan would make it
easier to confirm that the package is ready for release.

## wordpress

There was no mention of Kinetic, but I looked at the sources and found
that the patch is already applied in Kinetic. Released to Jammy. I added
a bug task for Kinetic and marked it Fix Released with a comment
explaining why.
Feedback: it would be helpful to explicitly state the status of
subsequent releases with a reason - this would save the SRU reviewer
the time needed to investigate to find the answer to this question. See
also our policy on newer releases[1].

Outcome: SRU released.

## software-properties

OK to release. Kinetic not mentioned but I happen to know it's not
relevant for the cloud archive.

Feedback: it would be helpful to explicitly state the status of
subsequent releases with a reason - in the general case this would save
the SRU reviewer the time needed to investigate to find the answer to
this question, although in this case I happened to know the answer. See
also our policy on newer releases[1].

Outcome: SRU released.

## tepl

This one was a bit puzzling, since it was uploaded to Lunar originally
but actually landed in Mantic first, and looks like a copy back to Lunar
to fix the issue. That all seems fine and correct, so I released it to
Lunar.

Feedback: none

Outcome: SRU released.

## python-tz

This is still waiting for autopkgtest failure investigations.

Outcome: SRU processing is blocked.

Feedback: none

## curl

This is still waiting for autopkgtest failure investigations.

Outcome: SRU processing is blocked.

Feedback: none


That's the end of the SRUs that looked like they were in a releasable
state.

Next, I started looking at the unapproved queues, but I didn't get very
far here before the end of my shift.

## yaru-theme

This included changes that had no accompanying documentation or
explanation, and there was no special case documented[4].

I asked around SRU team members and nobody else seemed to know either.
It seemed to me that adding documentation would require a re-upload
anyway, since there would need to be a changelog change, so I rejected
this upload.

There was further discussion in #ubuntu-release, and the final outcome
of how we're going to fix this package in Jammy has not yet been
decided.

Outcome: rejected from queue.

Feedback: regardless of the eventual outcome, please note that the SRU
team cannot handle special cases without an explanation that we can
discover through the SRU documentation. We would like SRU team decisions
and expectations to be consistent, but we cannot achieve that without
documentation and a pointer to that documentation. Even if some SRU team
member handled something a particular way in the past, if we don't know
whether and how it was handled, we cannot be expected to handle it the
same way this time. And even if we did know who previously handled it,
during an SRU shift that person is often not available.