Post by Niall Douglas
Well, sure. But I really wish the authors of the relevant papers
before WG21 had described in their motivation why they think that a
library approach is clearly superior to an already published
standard. That's a fairly high bar, in my opinion, to meet when
essentially proposing "I don't think the standardised way is
sufficient for reasons A, B and C. Here's what I propose instead
...". And I don't remember such explanatory text in their motivation
sections.
I was hoping somebody could link me to such a text so I could read
it and, as someone without domain expertise in fixed-point
arithmetic, go away feeling satisfied that WG21 is on the right
track on this.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0037r5.html#N1169
I'm not sure it's ever come up before. I assumed the advantages of
parameterizing the exponent were fairly obvious.
Thanks for the links.
Sure, I get that freeform exponents are useful, but to my inexperienced
and untrained eye the fixed exponent choices in N1169 were made because
the codegen would come out much cleaner, and I would therefore assume it
is faster and/or more predictable.
Now if I'm totally wrong on that, then that's great to learn. But I
don't think P0037 or P0106 can just hand-wave N1169 away like they do.
N1169 ought to be /refuted/ as being empirically inferior to whatever
approach is being proposed.
One of the biggest problems with the N1169 approach is that it has types
with fixed but implementation-defined sizes. This is completely and
totally useless to embedded programmers (or any other programmers
interested in fixed point). It does not matter how efficiently they are
implemented if no one knows what they are!
If you are programming an embedded system, you want fixed point
numbers in the "Q" format. (Sometimes people use different names, but
it is the same thing.) You want something like Q4.12, which is a 16-bit
signed integer divided by 2^12. Or you want UQ0.8, which is an 8-bit
unsigned integer divided by 2^8 - i.e., a number between 0 and just
under 1. You need to know the exact range, the exact precision, and the
exact size - N1169's "signed short _Fract", etc., are totally
meaningless. It is /infinitely/ more important that you have the exact
sizes you want than that you have a type that can be implemented
efficiently on the target hardware.
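To make that concrete, here is a minimal sketch of what those formats
look like when spelled out with exact-width integers. The names and
helpers are mine, purely for illustration - they are not taken from any
of the papers:

#include <cstdint>

// Illustration only: a Q-format value is just an exact-width integer
// plus a scaling exponent that the programmer knows at compile time.

// Q4.12: 16-bit signed, 12 fractional bits, range roughly [-8, +8),
// step 2^-12.
using q4_12_rep = std::int16_t;

// UQ0.8: 8-bit unsigned, 8 fractional bits, range [0, 1), step 2^-8.
using uq0_8_rep = std::uint8_t;

// Convert a real constant to Q4.12 (no rounding refinements here).
constexpr q4_12_rep to_q4_12(double x)
{
    return static_cast<q4_12_rep>(x * 4096.0);
}

// Multiply two Q4.12 values: widen, multiply, shift back down by 12.
constexpr q4_12_rep mul_q4_12(q4_12_rep a, q4_12_rep b)
{
    return static_cast<q4_12_rep>((static_cast<std::int32_t>(a) * b) >> 12);
}

static_assert(mul_q4_12(to_q4_12(1.5), to_q4_12(2.0)) == to_q4_12(3.0),
              "1.5 * 2.0 == 3.0 in Q4.12");

The point is that every bit of the representation is pinned down by the
source code, on every target.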
For ordinary integers, embedded programmers use int16_t, uint32_t - they
don't use "int" or "short".
A solution that does not give you this level of control and this
information should be rejected out of hand.
I also think you are misunderstanding the state of N1169. It is a TR
proposal that no one uses, with only a small part of it implemented on a
few toolchains. It is not part of the C standards. Rejecting it does
not mean throwing away useful work or implementations.
I haven't yet read through the linked C++ fixed point proposals. But
one thing that would be nice to have in them is encouragement for
implementations to provide specialised cases of the templates with
better-optimised code for their targets. For example, on the AVR
microcontroller you would expect Q2.6 multiplication to be implemented
with shifts and masks from the generic code, but Q1.7 to use inline
assembly for the "fractional multiply" instruction that many AVR
devices support. Then developers can choose the formats with the most
efficient implementations - but their primary motivation for the choice
remains the format needed for correct code.
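To show the shape I have in mind, here is a rough sketch. The class and
function names are invented for this post, not taken from P0037 or
P0106, and the AVR inline assembly is only indicated in a comment:

#include <cstdint>

// Illustrative fixed-point wrapper: an exact-width integer plus a
// compile-time count of fractional bits.
template <typename Rep, int FractionalBits>
struct fixed
{
    Rep raw;
};

// Generic multiply: widen to 32 bits (enough for the 8- and 16-bit
// formats discussed here), multiply, shift back down.
template <typename Rep, int F>
fixed<Rep, F> multiply(fixed<Rep, F> a, fixed<Rep, F> b)
{
    return { static_cast<Rep>((static_cast<std::int32_t>(a.raw) * b.raw) >> F) };
}

#if defined(__AVR__)
// An AVR implementation could specialise the one format the hardware
// multiplies directly: Q1.7 maps onto the FMULS instruction on devices
// that have it.  The inline assembly is omitted here - the point is
// only the shape of the specialisation hook.
template <>
fixed<std::int8_t, 7> multiply(fixed<std::int8_t, 7> a, fixed<std::int8_t, 7> b)
{
    return { static_cast<std::int8_t>((static_cast<std::int16_t>(a.raw) * b.raw) >> 7) };
}
#endif

A library specification does not need to mandate any of this - it just
needs to leave room for implementations to do it.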
Post by Niall Douglas
I'll put this another way. If the Elsewhere Memory SG is approved, it's
on that SG to explain, in its proposed changes to the C++ memory model
to support mapped memory, why the multiple address spaces feature of
N1169 was not adopted (if that's what the SG ends up choosing). After
all, LLVM and other compilers already implement N1169, there is plenty
of empirical experience, and /it's an ISO standard/. Not following the
currently standard way of doing things in a standards proposal seems to
me a strong claim to make - you need to /refute/ the current approach,
ideally empirically.
Does that make sense? If it does, that's my concern. I'd like to see a
side-by-side godbolt comparison with clang showing N1169 output on one
side and proposed standard output on the other, in which the N1169
output is obviously no better. Then I'd accept that having freeform
exponents has no cost, and all is rosy and dandy.
Niall