[L2Ork-dev] Precision of [sqrt~] for double-precision
Matt Barber
brbrofsvl at gmail.com
Tue Jun 19 20:51:52 EDT 2018
OK, so I'd embarrassingly been looking at legacy code. There's now an extra
step, in the *out++ line here:
{
    /* table lookup: the exponent bits and the top mantissa bits index
       two tables whose product g approximates 1/sqrt(f) */
    t_sample g = rsqrt_exptab[(u.l >> 23) & 0xff] *
        rsqrt_mantissatab[(u.l >> 13) & 0x3ff];
    /* one Newton-Raphson refinement of g, multiplied by f to give sqrt(f) */
    *out++ = f * (1.5 * g - 0.5 * g * g * g * f);
}
This seems to remove some of the error; Pd says "about 120dB" or 20ish bits
of accuracy, on average.
I made a program and found that, reliably, if we use 19 bits of input
mantissa we get a minimum of 14 bits of rsqrt accuracy in the mantissa,
and about 41 bits on average. For 16 bits of input mantissa, the minimum
is about 6 mantissa bits, with 34-35 bits on average. This is all for
random bit patterns where the input exponent = 0.
So, the exponent table is 2^11 = 2048 points long, and the mantissa table
could IMO be anywhere from 2^11 to 2^20 points with fully satisfactory
results. I think 2^16 would be a good starting point.
Matt
On Mon, Jun 18, 2018 at 12:42 AM Matt Barber <brbrofsvl at gmail.com> wrote:
> Hi,
>
> Still working on this. I was preparing for and attending a memorial for my
> dad this week.
>
> Matt
>
> On Mon, Jun 11, 2018 at 9:11 PM Matt Barber <brbrofsvl at gmail.com> wrote:
>
>>
>>
>> On Mon, Jun 11, 2018 at 5:51 PM Jonathan Wilkes <jon.w.wilkes at gmail.com>
>> wrote:
>>
>>> On Mon, Jun 11, 2018 at 5:14 PM, Matt Barber <brbrofsvl at gmail.com>
>>> wrote:
>>> > Oops, that's 8th and 9th
>>> >
>>> > On Mon, Jun 11, 2018, 4:38 PM Matt Barber <brbrofsvl at gmail.com> wrote:
>>> >>
>>> >> Good to 8 because of the sqrt operation. If you compare results of
>>> sqrt
>>> >> with 10 mantissa bits on and then with 23 mantissa bits on, they will
>>> differ
>>> >> at the 7th bit, or 8th when you count the implied bit.
>>>
>>> Ah, I see. So it shaves the input mantissa down to 10 bits to look up
>>> the result in the
>>> table, and therefore the result has a mantissa good to 8 bits.
>>>
>>> 2048 makes sense for the exponent table.
>>>
>>> As for the mantissa-- is there some fancy DSP algorithm we can use to
>>> grope toward a sensible number of bits? Perhaps something that places
>>> a [sqrt~] upstream in a chain that reads indices in a really big table?
>>>
>>> Something where n bits sounds bad but n + x bits sounds acceptable...
>>>
>>
>> I worry a little about using "how it sounds" as a heuristic just
>> because the use case is not always audio-domain. But with that, I think
>> that we could shoot for "CD quality" and do well enough. The way floats
>> work for normalized audio -1.0 to 1.0 is that the worst resolution is in
>> the range 0.5 < abs(x) < 1.0, which is a 23-bit range. So if you imagine it
>> being quantized to integers, the maximum resolution would be 2^23, times 2
>> for the 0 < abs(x) < 0.5 range (sacrificing some available precision in
>> that range, but basically using 2^23 numbers there where some of the
>> resolution is in the exponents instead of the mantissa), and then times 2
>> for the sign bit, or 2^23 * 2 * 2 = 2^25, or 25-bit audio. So by analogy, if
>> we shoot for "accurate through the 14th mantissa bit," that will give us 16
>> bits, of which we'll only be using 15 because the sign bit won't matter in
>> sqrt(). So we could find where the cutoff is on the input side and use that
>> as a starting point. I'll look at that shortly.
>>
>> Matt
>>
>