Thursday, 6 March 2014

Re: Changing default CFLAGS on i386

On 6 March 2014 14:44, Ryan Lortie <[email protected]> wrote:
> hi,
> On Thu, Mar 6, 2014, at 2:23, Adam Conrad wrote:
>> I wouldn't be entirely against this option, if the performance hit is
>> measurably not awful in general purpose usage.
> I'd rather approach the problem from the point of "we must have correct
> code".

I don't think that's the right approach here. We do ship correct code
in our packages, and where needed individual packages raise the
standards version, target CPU features, etc. to generate correct or
optimized code.

As a programmer, my primary concern is that when I type "cc -o x x.c", I
get correct output, as per the specification. That's not currently
happening on Ubuntu.

Whilst we do a lot with our toolchain to produce hardened and correct
code, executing the compiler without any flags is not going to guess
the programmer's desires or expectations. Currently we still default
to the gnu89 standard, whereas c11/c++11 is the default in the current
Xcode (clang/llvm). On the other hand, Visual Studio conforms only to
c89 yet selectively uses and supports c++11 features. Similarly, the
Intel compiler applies floating point optimisations not allowed by the
standard.
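To make the gnu89 default concrete, here is a minimal sketch (the file and function names are my own, for illustration): a C99 loop-scoped declaration is rejected when gcc is left at its gnu89 default, so the same translation unit compiles or fails purely depending on -std=.

```c
/* std_demo.c -- a minimal sketch of how the default -std= matters.
 * Under -std=gnu89 (the old gcc default) the declaration inside the
 * for loop below is an error; under -std=gnu99 or later it compiles:
 *
 *   gcc -std=gnu89 -c std_demo.c   (rejected)
 *   gcc -std=gnu99 -c std_demo.c   (accepted)
 */

/* Sum 1..n using a C99 loop-scoped counter. */
int sum_to(int n)
{
    int s = 0;
    for (int i = 1; i <= n; i++)  /* C99: declaration in for-init */
        s += i;
    return s;
}
```

With -std=gnu89 the build stops with an error about 'for' loop initial declarations (the exact wording varies between gcc versions); the code itself is unremarkable once a C99 or later mode is selected.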

Imho, the expectation upon executing "cc -o x x.c" is that the
resulting binary will work on any other Ubuntu installation of the
same architecture, with reasonable forward/backward compatibility. If
one is coding or testing against a standards specification, one should
set the appropriate -std= flag for the target specification revision,
with or without GNU extensions.
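A portable way to see which standard revision a translation unit was actually built against is the __STDC_VERSION__ macro; the std_name helper below is hypothetical, my own sketch for illustration:

```c
/* Map a __STDC_VERSION__ value to a human-readable standard name.
 * The macro is left undefined by c89/gnu89, and expands to 199901L
 * under c99 and 201112L under c11, so code can check at build time
 * which -std= it was compiled with. */
const char *std_name(long v)
{
    if (v >= 201112L)
        return "c11 or later";
    if (v >= 199901L)
        return "c99";
    return "c89/gnu89";
}
```

For example, a source file built with -std=c11 could do `printf("%s\n", std_name(__STDC_VERSION__));`, guarded by `#ifdef __STDC_VERSION__` so that c89/gnu89 builds (where the macro is undefined) still compile.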

Floating point computation is not precise unless special care is
taken, but most things neither require nor assume standards compliance
down to the very nitty-gritty details.
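As a sketch of what that "special care" can look like (Kahan compensated summation is my example here, not something proposed in the thread): accumulating 0.1 ten times with plain additions drifts away from 1.0, while the compensated loop recovers the correctly rounded result.

```c
/* Naive left-to-right summation: each addition rounds, and the
 * rounding errors accumulate. */
double naive_sum(const double *a, int n)
{
    double s = 0.0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Kahan compensated summation: carry the rounding error of each
 * addition in 'c' and feed it back into the next term. */
double kahan_sum(const double *a, int n)
{
    double s = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = a[i] - c;   /* corrected next term */
        double t = s + y;      /* low-order bits of y are lost here... */
        c = (t - s) - y;       /* ...and recovered into c */
        s = t;
    }
    return s;
}
```

Summing ten copies of 0.1, naive_sum returns 0.9999999999999999 while kahan_sum returns exactly 1.0 (assuming default IEEE semantics; -ffast-math-style reassociation, of the kind the Intel compiler applies by default, can optimise the compensation step away).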

Neither dropping support for chipsets nor accepting slower operations
sounds attractive, and so far neither is justified.



ubuntu-devel mailing list
[email protected]