[L2Ork-dev] damage-control scheduler

Ivica Bukvic ico at vt.edu
Thu May 21 09:24:17 EDT 2020

As always, I enjoy your thoughtful emails, Jonathan. Please see below for
my 5c worth:


> So-- could the scheduler be changed to resume from a dropout over a
> longer period of time than zero, by interpolating among that 1 second
> of lost blocks to output an accelerated version of them until the
> engine catches back up with the present?

Of course it could, by building a persistent phase vocoder that compresses
saved buffer data whenever an xrun occurs, forcing the output to process
any missed samples. But this would sap precious CPU cycles unnecessarily
most of the time (since we are not relying on a streaming codec, which is
the likely culprit here and which may even offload to a GPU or a dedicated
part of the CPU, as is the case with ARM chips). On top of that, it could
permanently introduce additional latency, since the vocoder would need to
sit in the DSP chain; it would partially mangle an additional n seconds of
sound after each xrun; and it could even cause cascading xruns due to the
newfound spike in CPU usage. Undoubtedly, the implementation would be
laborious and could cause significant parity issues with the vanilla
codebase. So, while we could do all this, the question is: why would we
want to?
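To make the catch-up idea concrete, here is a minimal sketch of the playback side. It deliberately substitutes naive linear-interpolation resampling (which shifts pitch) for the phase vocoder discussed above, purely to show the catch-up logic and its extra per-sample cost; `catch_up`, the 1.25x overspeed factor, and the 48 kHz backlog are all hypothetical choices, not anything from Pd's scheduler.

```python
# Hypothetical sketch: after an xrun drops a backlog of samples, play the
# saved buffer at a faster-than-realtime rate until the engine catches up
# with the present. Linear interpolation stands in for the phase vocoder,
# so this version audibly shifts pitch; a vocoder would avoid that at a
# much higher (and permanent, once in the DSP chain) CPU cost.

def catch_up(buffer, speed=1.25):
    """Resample `buffer` at `speed` > 1 so it plays in less wall time.

    Returns roughly len(buffer)/speed output samples.
    """
    out = []
    pos = 0.0
    while pos < len(buffer) - 1:
        i = int(pos)
        frac = pos - i
        # Linear interpolation between adjacent saved samples.
        out.append(buffer[i] * (1.0 - frac) + buffer[i + 1] * frac)
        pos += speed
    return out

# One second of backlog at 25% overspeed is emitted in ~0.8 s of output:
backlog = [float(n) for n in range(48000)]
resampled = catch_up(backlog, speed=1.25)
```

Note that even this toy version does per-sample work on top of the normal DSP load, which is exactly the post-xrun CPU spike the paragraph above warns could cascade into further xruns.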

A thing that in my opinion is much more relevant than this is the ability
to seamlessly transition from the old DSP tree to a new one when deleting
and/or creating objects and connections, so as to allow for xrun-free
live coding. IIRC the Nova project, a Pd-like piece of abandonware, may
have done this in the early 2000s.
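The seamless-transition idea can be sketched as running the old and new trees in parallel for one block and crossfading between their outputs. Everything here is an assumption for illustration: `old_dsp`/`new_dsp` as callables returning one block each, the linear ramp, and the 64-sample block size (Pd's default) are not taken from any actual Pd or Nova implementation.

```python
# Hypothetical sketch of xrun-free graph swapping: compute one block from
# both the old and the new DSP tree, then ramp the mix from old to new so
# the listener never hears a discontinuity at the switch.

BLOCK = 64  # Pd's default block size

def crossfade_block(old_block, new_block, start_gain, end_gain):
    """Linearly ramp from the old tree's output to the new tree's."""
    n = len(old_block)
    out = []
    for i in range(n):
        g = start_gain + (end_gain - start_gain) * i / n
        out.append(old_block[i] * (1.0 - g) + new_block[i] * g)
    return out

# Over one block the gain ramps 0 -> 1: the first sample is entirely the
# old tree, and by the last sample we are almost entirely on the new one.
old = [1.0] * BLOCK  # stand-in output of the old DSP tree
new = [0.0] * BLOCK  # stand-in output of the new DSP tree
faded = crossfade_block(old, new, 0.0, 1.0)
```

The cost is one extra block of computation for both trees during the transition, which, unlike the vocoder approach, is paid only at edit time rather than permanently.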



