Discussion:
[std-proposals] Language large integers draft 1
Niall Douglas
2018-11-05 18:58:34 UTC
Permalink
I know everybody is currently at the San Diego WG21 meeting, but I just
posted the attached draft 1 paper to WG14 C programming language committee
for feedback.

The attached paper is NOT intended to replace P0539. Rather, it proposes
implementing a subset of P0539 directly into the C programming language
(and thus, by extension, C++).

The attached paper already received considerable WG14 input before I wrote
it, and hence its design, which is very easy for compilers to implement.

I don't expect this paper to go before WG21, but I nevertheless welcome
feedback and input from std-proposals.

Thanks,
Niall
--
You received this message because you are subscribed to the Google Groups "ISO C++ Standard - Future Proposals" group.
To unsubscribe from this group and stop receiving emails from it, send an email to std-proposals+***@isocpp.org.
To post to this group, send email to std-***@isocpp.org.
To view this discussion on the web visit https://groups.google.com/a/isocpp.org/d/msgid/std-proposals/236435db-c255-48a6-940a-6b0ddf0481aa%40isocpp.org.
Arthur O'Dwyer
2018-11-06 02:12:46 UTC
Permalink
Post by Niall Douglas
I know everybody is currently at the San Diego WG21 meeting, but I just
posted the attached draft 1 paper to WG14 C programming language committee
for feedback.
The attached paper is NOT intended to replace P0539. Rather, it proposes
implementing a subset of P0539 directly into the C programming language
(and thus, by extension, C++).
it, and hence its very-easy-to-implement-for-compilers design.
I don't expect this paper to go before WG21, but I nevertheless welcome
feedback and input from std-proposals.
I don't 100% buy the direction of the paper, but nevertheless, kudos for
putting together Table 1 (page 2) with a Godbolt link! That's good
research right there, and definitely shows that P0539's reference
implementation is troubling.

I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library. For
example, if we had something like _addcarry_u64, but without the pointer
parameter. These primitives will be generally useful; and then things like
std::uintN_t<256> can be built on top of them. Right now it feels like
P0539 (and John McFarlane's P0554/P0828) are trying to build uintN_t
without any access to the appropriate primitives.
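For illustration, a pointer-free add-with-carry primitive of the kind described above can be sketched in portable C. The name `addcarry_u64` and the result struct are hypothetical, chosen to mirror Intel's `_addcarry_u64` intrinsic; this is a sketch of the shape of such a primitive, not a proposed API:

```c
#include <stdint.h>

/* Hypothetical pointer-free add-with-carry: returns both the sum and
 * the carry-out instead of writing the sum through a pointer. */
struct u64_carry { uint64_t sum; unsigned carry; };

static struct u64_carry addcarry_u64(unsigned carry_in,
                                     uint64_t a, uint64_t b)
{
    uint64_t s = a + b;
    unsigned c = (s < a);       /* carry out of a + b */
    uint64_t s2 = s + carry_in;
    c |= (s2 < s);              /* carry out of adding carry_in */
    return (struct u64_carry){ s2, c };
}
```

A 256-bit add would then chain four of these, feeding each call's `carry` into the next.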

Nit on the WG14 proposal: Surely the proper name for the new header is
<stdwide.h>, not <stdwint.h>? Or if we must have <stdwint.h>, can we get
<stdkidd.h> <https://en.wikipedia.org/wiki/Mr._Wint_and_Mr._Kidd> to go
with it?

Nit on the WG14 proposal: Several instances of "twos-power" should be
"power-of-two".

Major issue on the WG14 proposal: The proposed "**" widening multiplication
operator comes out of nowhere. Not only would this be inconsistent with
every other programming language (where "**" generally means
"exponentiation"), but it would be a lexer-breaking change for C and C++
because two consecutive stars can already appear in valid code. Why not
just propose a library function `z = widening_multiply(x, y)` that does the
same thing but without breaking any existing lexers? C already has _Generic
functions (as of C11) which means you don't need to create a new operator
to form an overload set in C. You can just do the same thing that e.g.
`sin(x)` does.

(I think `_Generic` works only on closed sets of types, not open-ended
sets; but the set of all `_Wide(N) int` types is closed in practice,
because implementations will not allow anything larger than about `_Wide(1
<< 30)`. So there are only at most 30 wide types — well within the
capabilities of C11 `_Generic` as far as I know.)
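As a sketch of that `sin(x)`-style technique: since no shipping compiler has `_Wide(N) int`, the example below forms a `widening_multiply` overload set over two of the standard fixed-width types with C11 `_Generic`. The function names are made up for the illustration:

```c
#include <stdint.h>

/* 16x16 -> 32 and 32x32 -> 64 widening multiplies; a real proposal
 * would list one association per _Wide(N) int type instead. */
static uint32_t widening_multiply_u16(uint16_t a, uint16_t b)
{
    return (uint32_t)a * b;
}
static uint64_t widening_multiply_u32(uint32_t a, uint32_t b)
{
    return (uint64_t)a * b;
}

/* One generic name dispatching on the type of the first operand,
 * the same mechanism <tgmath.h> uses for sin(x). */
#define widening_multiply(a, b) _Generic((a),  \
    uint16_t: widening_multiply_u16,           \
    uint32_t: widening_multiply_u32)((a), (b))
```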

Question/issue on the WG14 proposal: Does there exist any space-efficient
algorithm for printing a `_Wide(1 << 20) int` in base 10? Or are you
proposing to allow printing them only in bases 8 and 16? The wording in
section 3.11 is too vague to be sure.

–Arthur
Niall Douglas
2018-11-06 19:01:38 UTC
Permalink
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library. For
example, if we had something like _addcarry_u64, but without the pointer
parameter. These primitives will be generally useful; and then things like
std::uintN_t<256> can be built on top of them. Right now it feels like
P0539 (and John McFarlane's P0554/P0828) are trying to build uintN_t
without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.

And besides, *it solves the wrong problem*. Don't tell the compiler how to
implement bigint. Tell it you want bigint. Let it sort out an
implementation.
Post by Arthur O'Dwyer
Nit on the WG14 proposal: Surely the proper name for the new header is
<stdwide.h>, not <stdwint.h>? Or if we must have <stdwint.h>, can we get
<stdkidd.h> <https://en.wikipedia.org/wiki/Mr._Wint_and_Mr._Kidd> to go
with it?
Well, I wouldn't want to consume the name for something standard and wide
in the future. stdwint is at least consistent with stdint. Bond villains
aside, of course.
Post by Arthur O'Dwyer
Major issue on the WG14 proposal: The proposed "**" widening
multiplication operator comes out of nowhere. Not only would this be
inconsistent with every other programming language (where "**" generally
means "exponentiation"), but it would be a lexer-breaking change for C and
C++ because two consecutive stars can already appear in valid code. Why not
just propose a library function `z = widening_multiply(x, y)` that does the
same thing but without breaking any existing lexers? C already has _Generic
functions (as of C11) which means you don't need to create a new operator
to form an overload set in C. You can just do the same thing that e.g.
`sin(x)` does.
It *may* be the case that ordinary multiply can reliably do widening
multiplication with compiler-unknown values, and so no extra operator is
needed.

I specifically included the widening multiplication operator because these
wide integers are a bit different from normal integers, specifically the
compiler is not required to understand them (they are just a bunch of bits
as far as the compiler knows). That means no constant folding, no common
subexpression elimination, and so on (though implementations can optionally
choose to implement those).

I had been assuming that this would require the programmer to tell the
compiler what to do. But a compiler writer on WG14 seems to think it'll be
okay to just cast the inputs to the larger output type and multiply. The
compiler, even without knowing anything about the multiply, should correctly
avoid calling the twice-too-large multiply routine.
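That suggestion can be illustrated with today's types: cast the inputs to the larger output type and use plain `*`. Mainstream compilers already recognise this pattern at 32x32 -> 64 and typically emit a single widening multiply instruction rather than a full 64-bit multiply; the expectation is that the same recognition would apply to the wide types:

```c
#include <stdint.h>

/* Widening multiply expressed as "cast, then ordinary multiply".
 * Compilers generally lower this to one widening MUL instruction. */
static uint64_t mul_32x32_to_64(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}
```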
Post by Arthur O'Dwyer
Question/issue on the WG14 proposal: Does there exist any space-efficient
algorithm for printing a `_Wide(1 << 20) int` in base 10? Or are you
proposing to allow printing them only in bases 8 and 16? The wording in
section 3.11 is too vague to be sure.
WG14 want the maximum size of these to be no more than the compiler's
maximum alignment for the target architecture, which I find reasonable.
That's currently 1024 bits for GCC on x86-64.

Niall
John McFarlane
2018-11-08 08:18:33 UTC
Permalink
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library. For
example, if we had something like _addcarry_u64, but without the pointer
parameter. These primitives will be generally useful; and then things like
std::uintN_t<256> can be built on top of them. Right now it feels like
P0539 (and John McFarlane's P0554/P0828) are trying to build uintN_t
without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.

(If GCC/Clang's equivalent intrinsics are anything to go by, I don't see
how pointers pose a problem. Yes, it's an ugly interface but should be
constexpr-friendly in latest tools and language revision.)

Regarding the efficiency of P0539, don't judge it by the implementation
which, AFAIK, only implements bytewise arithmetic. I still think it's the
wrong API but for more reasons than just performance.
Post by Arthur O'Dwyer
Nit on the WG14 proposal: Surely the proper name for the new header is
<stdwide.h>, not <stdwint.h>? Or if we must have <stdwint.h>, can we get
<stdkidd.h> <https://en.wikipedia.org/wiki/Mr._Wint_and_Mr._Kidd> to go
with it?
Well, I wouldn't want to consume the name for something standard and wide
in the future. stdwint is at least consistent with stdint. Bond villains
aside, of course.
Post by Arthur O'Dwyer
Major issue on the WG14 proposal: The proposed "**" widening
multiplication operator comes out of nowhere. Not only would this be
inconsistent with every other programming language (where "**" generally
means "exponentiation"), but it would be a lexer-breaking change for C and
C++ because two consecutive stars can already appear in valid code. Why not
just propose a library function `z = widening_multiply(x, y)` that does the
same thing but without breaking any existing lexers? C already has _Generic
functions (as of C11) which means you don't need to create a new operator
to form an overload set in C. You can just do the same thing that e.g.
`sin(x)` does.
It *may* be the case that ordinary multiply can reliably do widening
multiplication with compiler-unknown values, and so no extra operator is
needed.
I specifically included the widening multiplication operator because these
wide integers are a bit different from normal integers, specifically the
compiler is not required to understand them (they are just a bunch of bits
as far as the compiler knows). That means no constant folding, no common
subexpression elimination, and so on (though implementations can optionally
choose to implement those).
I had been assuming that this would require the programmer to tell the
compiler what to do. But a compiler writer on WG14 seems to think it'll be
okay to just cast the inputs to the larger output type and multiply. The
compiler, even without knowing anything about the multiply, should correctly
avoid calling the twice-too-large multiply routine.
I am not surprised to hear that. I would not pursue `**`.
Niall Douglas
2018-11-08 08:59:21 UTC
Permalink
Post by John McFarlane
Post by Niall Douglas
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
Can you show me a library solution which causes the compiler to generate
optimum codegen for extended precision integers, and which does not use
non-standard C++?

(Funnily enough, a "stupid" fixed-size library implementation of my
proposal based on _Generic is, I think, very close to possible, but I am
unaware of any implementation in existence. You have a lot more experience
in this domain than I do, though.)

Niall
John McFarlane
2018-11-08 15:56:49 UTC
Permalink
Post by Niall Douglas
Post by John McFarlane
Post by Niall Douglas
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
Can you show me a library solution which causes the compiler to generate
optimum codegen for extended precision integers, and which does not use
non-standard C++?
Perhaps your idea of sufficient is different to mine. Yes, if you want a
library implementation that doesn't use magic, then you'll either need
native wide integers, or we'll have to standardize the built-ins that
already exist in compilers, e.g. __builtin_add_overflow.
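To make the suggestion concrete, here is a sketch of a two-limb (128-bit) wrapping add built on `__builtin_add_overflow`, which GCC and Clang ship today; the `u128` struct and its little-endian limb layout are invented for the example:

```c
#include <stdbool.h>
#include <stdint.h>

/* Two 64-bit limbs, least significant first. */
typedef struct { uint64_t limb[2]; } u128;

/* 128-bit wrapping add; the carry out of the top limb is discarded,
 * matching unsigned wraparound semantics. */
static u128 u128_add(u128 a, u128 b)
{
    u128 r;
    bool c = __builtin_add_overflow(a.limb[0], b.limb[0], &r.limb[0]);
    (void)__builtin_add_overflow(a.limb[1], b.limb[1], &r.limb[1]);
    r.limb[1] += c;   /* propagate the low limb's carry */
    return r;
}
```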
Post by Niall Douglas
(Funnily enough, a "stupid" fixed-size library implementation of my
proposal based on _Generic is, I think, very close to possible, but I am
unaware of any implementation in existence. You have a lot more experience
in this domain than I do, though.)
I'm working on it currently. I've just got on to the division operator. Ask
me again next year. :S
Arthur O'Dwyer
2018-11-10 16:31:37 UTC
Permalink
Post by John McFarlane
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library. For
example, if we had something like _addcarry_u64, but without the pointer
parameter. These primitives will be generally useful; and then things like
std::uintN_t<256> can be built on top of them. Right now it feels like
P0539 (and John McFarlane's P0554/P0828) are trying to build uintN_t
without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
(If GCC/Clang's equivalent intrinsics are anything to go by, I don't see
how pointers pose a problem. Yes, it's an ugly interface but should be
constexpr-friendly in latest tools and language revision.)
Regarding the efficiency of P0539, don't judge it by the implementation
which, AFAIK, only implements bytewise arithmetic. I still think it's the
wrong API but for more reasons than just performance.
"Don't judge this numerics library by its performance" doesn't sound
acceptable to me. I mean, performance seems like just about the only thing
we *should* be judging. Would you (or anyone) say you're confident that the
performance problems can be overcome? Do we have a proof-of-concept
somewhere that shows we can get perfect codegen for *any one* test case?

I trust the inliner on all modern compilers (except maybe MSVC). If someone
can provide C++ code (standard or non-standard, I don't care) that
implements 256-bit and 512-bit arithmetic with ugly C++ but perfect codegen
on at least one compiler, then I'll gladly donate a day of my time to
refactoring the C++ code into something more ergonomic.

Here's a perfect-codegen proof of concept for 128-bit operator+.
https://godbolt.org/z/UlwPA6
Here's an intrinsic-less proof of concept for 128-bit operator+, but it
falls down pretty badly on 256-bit operator+.
https://godbolt.org/z/Y3_Z89

Since I can't even get to 256-bit operator+ on my own, I'm by default
skeptical that anyone can get to N-bit all-the-operators. That's why I want
to see a proof of concept, and why I think it's so important to look at the
codegen of the existing libraries and demonstrate (as Niall has
demonstrated) that they're all horrible. We shouldn't standardize something
that's horrible.

Why are current libraries horrible? If it's because their authors are
"digging with spoons instead of shovels," then we should ask them what
shovels they'd need in order to do a good job. (For example, would it help
to have "widening add" and "widening multiply" algorithms in the Standard
Library?)
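As one concrete candidate for such a shovel, a "widening multiply" over `uint64_t` can be written portably with the schoolbook halves algorithm; the struct and function names here are hypothetical:

```c
#include <stdint.h>

/* 128-bit product of two 64-bit values, as hi/lo halves. */
typedef struct { uint64_t hi, lo; } u64x2;

static u64x2 widening_mul_u64(uint64_t a, uint64_t b)
{
    uint64_t a_lo = (uint32_t)a, a_hi = a >> 32;
    uint64_t b_lo = (uint32_t)b, b_hi = b >> 32;

    /* Four 32x32 -> 64 partial products. */
    uint64_t p0 = a_lo * b_lo;
    uint64_t p1 = a_lo * b_hi;
    uint64_t p2 = a_hi * b_lo;
    uint64_t p3 = a_hi * b_hi;

    /* Sum the middle column; this sum cannot overflow 64 bits. */
    uint64_t mid = (p0 >> 32) + (uint32_t)p1 + (uint32_t)p2;

    u64x2 r;
    r.lo = (mid << 32) | (uint32_t)p0;
    r.hi = p3 + (p1 >> 32) + (p2 >> 32) + (mid >> 32);
    return r;
}
```

A standard facility along these lines would let wide-integer libraries stop reimplementing exactly this, and would give the optimiser a recognisable idiom to lower to one hardware multiply.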

–Arthur
Brian Bi
2018-11-11 01:58:39 UTC
Permalink
Post by Arthur O'Dwyer
Post by John McFarlane
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library.
For example, if we had something like _addcarry_u64, but without the
pointer parameter. These primitives will be generally useful; and then
things like std::uintN_t<256> can be built on top of them. Right now it
feels like P0539 (and John McFarlane's P0554/P0828) are trying to build
uintN_t without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
(If GCC/Clang's equivalent intrinsics are anything to go by, I don't see
how pointers pose a problem. Yes, it's an ugly interface but should be
constexpr-friendly in latest tools and language revision.)
Regarding the efficiency of P0539, don't judge it by the implementation
which, AFAIK, only implements bytewise arithmetic. I still think it's the
wrong API but for more reasons than just performance.
"Don't judge this numerics library by its performance" doesn't sound
acceptable to me. I mean, performance seems like just about the only thing
we *should* be judging. Would you (or anyone) say you're confident that
the performance problems can be overcome? Do we have a proof-of-concept
somewhere that shows we can get perfect codegen for *any one* test case?
I don't think anyone is saying that it's acceptable for the performance to
be terrible. But having bignums standardized as a library component as in
P0539 doesn't prevent the implementation from treating std::wide_integer
types as magic types implemented using efficient intrinsics.
--
*Brian Bi*
Niall Douglas
2018-11-13 10:28:33 UTC
Permalink
Post by Brian Bi
I don't think anyone is saying that it's acceptable for the performance to
be terrible. But having bignums standardized as a library component as in
P0539 doesn't prevent the implementation from treating std::wide_integer
types as magic types implemented using efficient intrinsics.
My view is that as soon as a library implementation needs compiler hooks to
get the compiler to do the right thing, it's time to strongly consider
modifying the language to be more expressive so you don't need those hooks.

For example, a DSEL for telling the compiler exactly how to do math is long
overdue in C++. Such a DSEL could let you specify bigint, safe int, SIMD,
etc. without the compiler messing with what you told it to do. You could
form blocks of implementation which then appear in C++ as if builtin types,
and you could rock on without any need for library changes.

Niall
Arthur O'Dwyer
2018-11-11 06:54:06 UTC
Permalink
Post by Arthur O'Dwyer
Post by John McFarlane
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library.
For example, if we had something like _addcarry_u64, but without the
pointer parameter. These primitives will be generally useful; and then
things like std::uintN_t<256> can be built on top of them. Right now it
feels like P0539 (and John McFarlane's P0554/P0828) are trying to build
uintN_t without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
(If GCC/Clang's equivalent intrinsics are anything to go by, I don't see
how pointers pose a problem. Yes, it's an ugly interface but should be
constexpr-friendly in latest tools and language revision.)
Regarding the efficiency of P0539, don't judge it by the implementation
which, AFAIK, only implements bytewise arithmetic. I still think it's the
wrong API but for more reasons than just performance.
"Don't judge this numerics library by its performance" doesn't sound
acceptable to me. I mean, performance seems like just about the only thing
we *should* be judging. Would you (or anyone) say you're confident that
the performance problems can be overcome? Do we have a proof-of-concept
somewhere that shows we can get perfect codegen for *any one* test case?
I trust the inliner on all modern compilers (except maybe MSVC). If
someone can provide C++ code (standard or non-standard, I don't care) that
implements 256-bit and 512-bit arithmetic with ugly C++ but perfect codegen
on at least one compiler, then I'll gladly donate a day of my time to
refactoring the C++ code into something more ergonomic.
Here's a perfect-codegen proof of concept for 128-bit operator+.
https://godbolt.org/z/UlwPA6
Here's an intrinsic-less proof of concept for 128-bit operator+, but it
falls down pretty badly on 256-bit operator+.
https://godbolt.org/z/Y3_Z89
Okay, here's a proof of concept for 512-bit operator+.
https://godbolt.org/z/hpcC9t
It still stumbles on operator- and operator< due to codegen issues in Clang.
I have filed the codegen issues under existing issue
https://bugs.llvm.org/show_bug.cgi?id=24545 .

–Arthur
John McFarlane
2018-11-11 22:10:14 UTC
Permalink
Post by Arthur O'Dwyer
Post by John McFarlane
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library.
For example, if we had something like _addcarry_u64, but without the
pointer parameter. These primitives will be generally useful; and then
things like std::uintN_t<256> can be built on top of them. Right now it
feels like P0539 (and John McFarlane's P0554/P0828) are trying to build
uintN_t without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
I agree that carry flags can (hopefully) remain an implementation detail.
I'm not sure why a library solution won't suffice -- other than that it's
of no use to WG14.
(If GCC/Clang's equivalent intrinsics are anything to go by, I don't see
how pointers pose a problem. Yes, it's an ugly interface but should be
constexpr-friendly in latest tools and language revision.)
Regarding the efficiency of P0539, don't judge it by the implementation
which, AFAIK, only implements bytewise arithmetic. I still think it's the
wrong API but for more reasons than just performance.
"Don't judge this numerics library by its performance" doesn't sound
acceptable to me.
By 'implementation', I mean the reference library and by P0539 I mean the
API. The library is indeed of little use in judging the performance of the
API. All I was saying was not to spend time trying. Again, it's not the API
I'd choose. Nevertheless, I think that it's unhelpful to use a *bytewise*
arithmetic implementation to measure its potential.

Post by Arthur O'Dwyer
I mean, performance seems like just about the only thing we *should* be
judging.
Correctness is more important than performance. Usability probably comes a
close third.

Post by Arthur O'Dwyer
Would you (or anyone) say you're confident that the performance problems
can be overcome?
No.

Post by Arthur O'Dwyer
Do we have a proof-of-concept somewhere that shows we can get perfect
codegen for *any one* test case?
The proof-of-concept is not geared toward quality of codegen. It's designed
to be easy to change in response to changes in the API. I've mentioned to
the authors that performance might be something to consider also.
Post by Arthur O'Dwyer
I trust the inliner on all modern compilers (except maybe MSVC). If
someone can provide C++ code (standard or non-standard, I don't care) that
implements 256-bit and 512-bit arithmetic with ugly C++ but perfect codegen
on at least one compiler, then I'll gladly donate a day of my time to
refactoring the C++ code into something more ergonomic.
Here's a perfect-codegen proof of concept for 128-bit operator+.
https://godbolt.org/z/UlwPA6
Here's an intrinsic-less proof of concept for 128-bit operator+, but it
falls down pretty badly on 256-bit operator+.
https://godbolt.org/z/Y3_Z89
Minor nitpick: are you talking about the fundamental types or a library
component? If the latter, I'd advise against defining aliases such as
`uint256_t`. IIRC, SG6 decided against this in Toronto when P0539 was first
presented.

Post by Arthur O'Dwyer
Since I can't even get to 256-bit operator+ on my own, I'm by default
skeptical that anyone can get to N-bit all-the-operators. That's why I want
to see a proof of concept, and why I think it's so important to look at the
codegen of the existing libraries and demonstrate (as Niall has
demonstrated) that they're all horrible. We shouldn't standardize something
that's horrible.
Why are current libraries horrible? If it's because their authors are
"digging with spoons instead of shovels," then we should ask them what
shovels they'd need in order to do a good job. (For example, would it help
to have "widening add" and "widening multiply" algorithms in the Standard
Library?)
Have you tried using the GCC overflow built-ins? Perhaps these are the
'shovels' we need to be using. If not, then I'd turn to compiler
implementers to provide us with the right ones. It's tricky to know for sure
which ones need standardization until they've been proved out. And we don't
need to standardize them in order to implement `wide_integer`.

So yes, we should prove a library type can be as fast as the best
fundamental types before adopting it. But no, I see no reason why its
machinery needs to also be standardized to be useful.

John
o***@gmail.com
2018-11-12 11:25:07 UTC
Permalink
Post by Arthur O'Dwyer
I tend to think that the "library-only" approach *would* work fine, *if
only* we could get the proper primitives into the standard library. For
example, if we had something like _addcarry_u64, but without the pointer
parameter. These primitives will be generally useful; and then things like
std::uintN_t<256> can be built on top of them. Right now it feels like
P0539 (and John McFarlane's P0554/P0828) are trying to build uintN_t
without any access to the appropriate primitives.
I think C++ will never standardise support for carry and overflow
arithmetic. Too many CPUs don't support it.
And besides, *it solves the wrong problem*. Don't tell the compiler how
to implement bigint. Tell it you want bigint. Let it sort out an
implementation.
Wouldn't such operations be useful in safe numerics as well?