Thursday, 6 March 2014

Re: Changing default CFLAGS on i386

hi,

On Thu, Mar 6, 2014, at 15:12, Dimitri John Ledkov wrote:
> I don't think that's the right approach here. We do have correct code
> in our packages, and where needed packages do raise standards version
> / target cpu features etc. to generate performant / correct or
> optimized code.

It sounds like you are advising that the correct solution here is to
apply CFLAGS on a package-by-package basis, as needed; i.e., if a
package relies on the C compiler producing correct floating point code
then we should add -mexcess-precision=standard to the CFLAGS for that
package?
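For concreteness, I assume a per-package fix would look something like
this debian/rules fragment (my illustration, using dpkg-buildflags'
maintainer-append hook, which exists for exactly this kind of per-package
flag):

```make
# Hypothetical debian/rules fragment: append the excess-precision
# flag for this one package only, via dpkg-buildflags'
# maintainer-append mechanism.
export DEB_CFLAGS_MAINT_APPEND = -mexcess-precision=standard
```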

> """
> As a programmer, my primary concern is that when I type "cc -o x x.c", I
> get correct output, as per the specification. That's not currently
> happening on Ubuntu.
> """
>
> whilst we do a lot with our toolchain to produce hardened and correct
> code, executing compiler without any flags is not going to guess the
> programmer's desires/expectations.

I think a reasonable assumption about the programmer's expectations is a
standards-compliant compiler.

> Currently we still default to gnu89
> standard, whereas c11/c++11 is the default in the current Xcode
> (clang/llvm).

It's worth noting that I've yet to be able to get llvm/clang to exhibit
this problem, perhaps because of its adherence to C99 by default.

> On the other hand Visual Studio only conforms to c89 yet
> uses and supports c++11 features (selectively).

Appealing to MSVC when arguing about the acceptability of non-standard
behaviour is not a good path to go down. I suffer more than most people
I know at the hands of MSVC's missing C99 features. At least they
don't claim to have a C99-compliant compiler...

> Similarly, Intel
> compiler also applies floating point optimisations not allowed by the
> standards.

If true, I would have a similar argument against this, but I've never
used the Intel compiler.

It's worth noting that this entire problem seems to be more or less
unique to Linux: http://www.vinc17.org/research/extended.en.html

> Floating point computation is not precise, unless special care is
> taken, but most things do not require nor assume standards compliance
> down to the very nitty gritty details.

We're talking about an issue that causes this code to fail:

#include <stdlib.h>

int x = 1;

double
get_value (void)
{
  return x / 1e6;
}

int
main (void)
{
  double a, b;

  a = get_value ();
  b = get_value ();

  /* On i386, gcc may keep one result in an 80-bit x87 register
     while spilling the other to a 64-bit double, so two calls
     to the same function can compare unequal. */
  if (a != b)
    abort ();

  return 0;
}

I don't consider "basic determinism" to be reliance on "nitty gritty
details". I just consider it to be common sense.

> Neither dropping support for chipsets, or having slower operations
> sounds attractive nor so far justified.

The justification is that it wasted several hours of my time (and the
time of other desktop team members due to failed package uploads)
tracking this issue down. The 50+ duplicates of this bug filed against
GCC suggest that we're not the only ones...

Regardless of your beliefs about whether the C99 spec is correct in
specifying this behaviour, you must surely agree that gcc is in
violation of the spec. You could argue (as I believe you have) that gcc
has no obligation to follow the spec. I believe that the wording in the
C99 spec was chosen for a reason, and a very good one: failing to follow
this rule produces extremely bizarre and difficult-to-debug
non-deterministic behaviour.

Cheers

--
ubuntu-devel mailing list
[email protected]
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-devel