[L2Ork-dev] Precision of [sqrt~] for double-precision

Matt Barber brbrofsvl at gmail.com
Mon Jun 11 16:38:38 EDT 2018


Good to 8 because of the sqrt operation. If you compare the results of sqrt
computed with only 10 mantissa bits retained against sqrt with the full 23
mantissa bits, they will differ at the 7th bit, or the 8th when you count
the implied bit.

On Mon, Jun 11, 2018, 3:27 PM Jonathan Wilkes <jon.w.wilkes at gmail.com> wrote:

> On Mon, Jun 11, 2018 at 10:38 AM, Matt Barber <brbrofsvl at gmail.com> wrote:
> >
> >
> > On Mon, Jun 11, 2018 at 12:43 PM Jonathan Wilkes <jon.w.wilkes at gmail.com> wrote:
> >>
> >> On Mon, Jun 11, 2018 at 9:00 AM, Matt Barber <brbrofsvl at gmail.com> wrote:
> >> >
> >> >
> >> > On Sun, Jun 10, 2018 at 10:56 PM Jonathan Wilkes <jon.w.wilkes at gmail.com> wrote:
> >> >>
> >> >> On Wed, Jun 6, 2018 at 9:57 PM, Matt Barber <brbrofsvl at gmail.com>
> >> >> wrote:
> >> >> > Even with double precision numbers stored in those tables, you're
> >> >> > still
> >> >> > restricting your input to the 254 exponent values and 10 bits of
> >> >> > mantissa.
> >> >> >
> >> >> > The rough outline of the algorithm hinges on the idea that
> >> >> > sqrt(a*b) = sqrt(a) * sqrt(b). So, since floating point is
> >> >> > represented exponent*mantissa, you can get sqrt(e*m) by doing
> >> >> > sqrt(e)*sqrt(m). That means if you can pre-compute all possible
> >> >> > sqrt(e) except NaNs, infs, and denormals, and enough mantissa
> >> >> > values to whatever significant bits you want, you can get really
> >> >> > efficient. The reason to precompute the reciprocal first is
> >> >> > because you save the divide in the perform function. Instead, you
> >> >> > can rely on f*(1/sqrt(f)) = sqrt(f), so it's a simple multiply by
> >> >> > the float you already have.
> >> >> >
> >> >> > The rest of it is just bit paperwork. All the << 23 you see are
> >> >> > to get to the exponent bits. The mantissa table holds rsqrt
> >> >> > values for 1.0 + intervals of 1./1024. (that initial 1.0 is
> >> >> > implied in normalized float representation).
> >> >> >
> >> >> > So basically, you're accepting double-precision values for
> >> >> > inputs truncated to 18 bits of precision. One problem is that
> >> >> > you'll get very bad values for very high input exponents because
> >> >> > you're bashing everything to single-precision exponent range.
> >> >> > Might not be the worst thing, but something to consider.
> >> >>
> >> >> When PD_FLOATSIZE == 64, could we type-pun between double and
> >> >> uint64_t and make DUMTAB1SIZE 512? (Getting the relevant bitmath
> >> >> right, of course.)
> >> >>
> >> >
> >> > Sometimes type-punning is easier with uint32_t array[2] than it is
> >> > with uint64_t, and it may also be more universal – I haven't checked
> >> > about the availability of uint64_t everywhere.
> >> >
> >> > If you wanted the full range of 64-bit exponents you'd make
> >> > DUMTAB1SIZE 2048 (2^11). Not sure what the best thing is for
> >> > DUMTAB2SIZE – a similar proportion of bits would be 23, which would
> >> > be a 64MB table that might have some cache-miss lookup overhead, and
> >> > I'm not certain it would actually be more efficient than just
> >> > calculating sqrt(double) directly. Would need some benchmarking for
> >> > sure. But if you brought that down to 20, 8MB might be a good
> >> > compromise. I can think about this some more since it involves so
> >> > many magic numbers and it will be annoying to have to recode it.
> >>
> >> I forgot about sqrt inside [expr].
> >>
> >> So what if we keep the current algo with PD_FLOATSIZE == 64? That
> >> should be fast for [sqrt~] at both t_float sizes, with an escape hatch
> >> of [expr] for higher precision math. (At least it appears [expr] will
> >> compute sqrt at double precision by default, and it seems to use
> >> t_float properly.)
> >>
> >> Then if a user comes up with a case that requires both higher
> >> precision and better performance than [expr~] can provide, we can
> >> create a benchmark patch and start trying out various magic numbers
> >> for those tables.
> >>
> >> -Jonathan
> >>
> >
> > If it were up to me I'd try to do the analogous thing for doubles
> > since the rest of the math in double precision expands in all the
> > other objects and we want to keep it as seamless as possible.
> > Otherwise, if it's really documented well that these particular
> > objects have only float precision in single- and double-precision Pd,
> > it could be not too bad. But if we wanted to go for it, bit precision
> > in the exponents is going to be most important. 16-20 bits of mantissa
> > precision (0.5-8MB table) would probably work out OK, but it would be
> > worth testing.
>
> Quick sanity check--  from d_math.c in both Purr Data and Pd Vanilla:
>
> /* sigrsqrt - reciprocal square root good to 8 mantissa bits  */
>
> That should read "good to 10 mantissa bits," right?
>
> right-shift by 23 plus bitmask to filter the sign gets us the 8
> exponent bits.
>
> right-shift by 13 plus bitmask to filter out the exponent/sign bits
> gets us the 10 most significant mantissa bits.
>
> -Jonathan
> _______________________________________________
> L2Ork-dev mailing list
> L2Ork-dev at disis.music.vt.edu
> https://disis.music.vt.edu/listinfo/l2ork-dev

