Description
Somewhat in the spirit of #29, I have been wondering if it would make sense to support a heap-backed integer type (cc @elichi)
The idea would be that integers are still fixed-width and would otherwise work the same as their const generic counterparts, but the width could be chosen at runtime/initialization time rather than at compile time. So they're not really "arbitrary precision" in that they don't grow or shrink; they're still fixed-width, just fixed at a width determined at runtime.
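As a rough sketch of the shape this could take (the name `BoxedUint` and the exact API below are placeholders, not a concrete proposal, and a real implementation would use the crate's `Limb` type rather than raw `u64`):

```rust
/// Hypothetical heap-backed unsigned integer. The width is chosen
/// once at construction and never changes afterwards.
pub struct BoxedUint {
    /// Limbs stored least-significant first in a fixed-length
    /// heap allocation.
    limbs: Box<[u64]>,
}

impl BoxedUint {
    /// Allocate a zero value of the given width in bits, rounded up
    /// to a whole number of limbs.
    pub fn zero(bits: usize) -> Self {
        let nlimbs = (bits + 63) / 64;
        Self {
            limbs: vec![0u64; nlimbs].into_boxed_slice(),
        }
    }

    /// Width in bits, fixed at construction time.
    pub fn bits(&self) -> usize {
        self.limbs.len() * 64
    }
}
```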
One place where this would be applicable is the `rsa` crate. While it'd be nice for microcontroller users to be able to instantiate RSA with e.g. 2048-bit or 4096-bit keys using only the stack, other use cases like OpenPGP implementations may need to support arbitrary key sizes running upwards of 16384 bits. For use cases like these, where key sizes can vary wildly, it'd be nice to choose the integer size at runtime. The `Integer` and other traits in this crate could be written to accept either representation. (cc @dignifiedquire)
Internally this would involve implementing the core algorithms in terms of slices rather than fixed-size arrays. This could have other benefits: it could reduce the monomorphization penalty of supporting many different const generic sizes, by having each of them pass a slice to a single shared function that operates over any size.
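For example, a carrying addition written over limb slices could back both representations (a sketch using raw `u64` limbs and ignoring constant-time concerns):

```rust
/// Carrying addition over limb slices, usable by both stack- and
/// heap-backed integers of the same width. Returns the final carry.
fn add_limbs(lhs: &mut [u64], rhs: &[u64]) -> u64 {
    debug_assert_eq!(lhs.len(), rhs.len());
    let mut carry = 0u64;
    for (a, b) in lhs.iter_mut().zip(rhs) {
        // Add limb plus incoming carry; at most one of the two
        // additions can overflow, so the new carry is 0 or 1.
        let (sum, c1) = a.overflowing_add(*b);
        let (sum, c2) = sum.overflowing_add(carry);
        *a = sum;
        carry = (c1 as u64) + (c2 as u64);
    }
    carry
}
```

Both a const generic `Uint<LIMBS>` and a heap-backed type could then be thin wrappers passing their limbs as slices, so only the wrappers get monomorphized and the bulk of the code is shared across sizes.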
There are some potential drawbacks: we don't presently unroll loops, but in some cases doing so would generate better code, and moving to a slice-based core could work against that. I think the solution there would be to duplicate some of the loop-unrolled code between the stack- and heap-backed implementations: the stack-allocated version would always unroll its loops ahead-of-time, while the heap-backed version would loop over a slice. See the sketch below.
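Concretely, that duplication might look something like this (again, names are illustrative): the stack-backed version keeps a loop whose bound is the const generic parameter, which the optimizer can fully unroll, while the heap-backed version reuses the slice-based routine above.

```rust
pub struct Uint<const LIMBS: usize> {
    limbs: [u64; LIMBS],
}

impl<const LIMBS: usize> Uint<LIMBS> {
    /// Add with carry. The loop bound is known at compile time, so
    /// the optimizer is free to unroll it completely.
    pub fn adc_assign(&mut self, rhs: &Self) -> u64 {
        let mut carry = 0u64;
        for i in 0..LIMBS {
            let (sum, c1) = self.limbs[i].overflowing_add(rhs.limbs[i]);
            let (sum, c2) = sum.overflowing_add(carry);
            self.limbs[i] = sum;
            carry = (c1 as u64) + (c2 as u64);
        }
        carry
    }
}

impl BoxedUint {
    /// Heap-backed counterpart: the same algorithm, but looping over
    /// a runtime-length slice via the shared routine.
    pub fn adc_assign(&mut self, rhs: &Self) -> u64 {
        add_limbs(&mut self.limbs, &rhs.limbs)
    }
}
```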