ML New writing strategy – “variable buffering”

Magic Lantern (ML) is continuing to optimise its Canon HDSLR RAW recording on various camera models.
a1ex, one of the leading figures on the development team, has now decided to revise the buffering strategy to eliminate or reduce dropped frames while shooting high-resolution RAW on fast cards.
He calls the new method “variable buffering” and here is what he has to say:

If you ever looked at the comments in raw_rec.c, you may have noticed that I’ve stated a little goal: 1920×1080 on 1000x cards (of course, 5D3 at 24p). Goal achieved and exceeded – even got reports of 1920×1280 continuous.

During the last few days I took a closer look at the buffering strategy. While it was near-optimal for continuous recording (large contiguous chunks = faster write speed), there was (and still is) room for improvement for those cases when you want to push the recording past the sustained write speed, and squeeze as many frames as possible.

So, I’ve designed a new buffering strategy (I’ll call it variable buffering), with the following ideas in mind:

* Write speed varies with buffer size (thanks to the testers who ran the benchmarks for hours on their cameras).

* Since the speed drop is small, it’s almost always better to start writing as soon as we have one frame captured. Therefore, the new strategy aims for a 100% duty cycle of the card writing task.

* Because large buffers are faster than small ones, these are preferred (a simple selection loop is sketched after this list). If the card is fast enough, only the largest buffers will be touched, and therefore the method is still optimal for continuous recording. Even better – adding a bunch of small buffers will not slow it down at all.

* This algorithm will use every single memory buffer that can contain at least one frame (because small buffers are no longer slowing it down).

* Another cause of stopping: when the buffer is about to overflow, it’s usually because the camera is trying to save a huge buffer (say a 32MB one), which takes a long time (say 1.5 seconds on slow SD cameras at 21MB/s). So I’ve added a heuristic that limits the buffer size: in this case, if we predict the buffer will overflow after only 1 second, we’ll save only 20MB out of 32, which will finish at 0.95 seconds. At that moment, the capturing task will have a 20MB free chunk that can be used for capturing more frames (see the second sketch after this list).

* Buffering is now done at frame level, not at chunk level. This finer granularity allows me to split buffers on the fly, in whatever configuration I believe is best for the situation.

* The algorithm is designed to adjust itself on the fly; for this, it makes some predictions, such as when the buffer is likely to overflow. If it predicts well, it will squeeze a few more frames. If not… not.
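
To make the largest-buffer-first idea a bit more concrete, here is a minimal sketch in C. This is not the actual raw_rec.c code: the frame_buffer struct and the pick_write_buffer() function are hypothetical stand-ins for whatever bookkeeping the module really uses.

```c
#include <stddef.h>

/* Hypothetical bookkeeping for one memory buffer (not the real raw_rec.c structs). */
struct frame_buffer
{
    size_t capacity;        /* total size of the buffer in bytes         */
    size_t used;            /* bytes already filled with captured frames */
    int    frames_queued;   /* complete frames waiting to be written     */
};

/*
 * Largest-first selection: any buffer holding at least one complete frame
 * is a candidate, and among the candidates the one with the most queued
 * data wins, because large contiguous writes are faster on the card.
 */
static struct frame_buffer * pick_write_buffer(struct frame_buffer * bufs, int count)
{
    struct frame_buffer * best = NULL;
    for (int i = 0; i < count; i++)
    {
        if (bufs[i].frames_queued < 1)
            continue;                   /* nothing complete to write yet */
        if (best == NULL || bufs[i].used > best->used)
            best = &bufs[i];            /* prefer the largest pending chunk */
    }
    return best;                        /* NULL: nothing to write, wait for a frame */
}
```

Because the selection works on whole frames rather than fixed chunks, the writer can start as soon as a single frame is complete (the 100% duty-cycle goal), and small buffers never hurt throughput: they are simply picked less often.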
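
The buffer-limiting heuristic can also be illustrated with the numbers from the post: a 32MB chunk on a 21MB/s card needs roughly 1.5 seconds, but if the capture side is predicted to overflow after 1 second, writing only about 20MB finishes at roughly 0.95 seconds and hands that memory back in time. The sketch below is only an assumption of how such a limit could be computed; limit_write_size() and its parameters are made up for illustration, not taken from raw_rec.c.

```c
#include <stddef.h>

/*
 * Hypothetical helper: decide how many bytes of the pending chunk to write
 * in one go, so the write is predicted to finish just before the capture
 * buffers would overflow (names and parameters are illustrative only).
 *
 *   pending_bytes    - bytes currently queued in the chosen buffer
 *   write_speed      - measured card write speed, bytes per second
 *   time_to_overflow - predicted seconds until capture runs out of memory
 *   safety_margin    - fraction of the deadline left unused, e.g. 0.05
 */
static size_t limit_write_size(size_t pending_bytes,
                               double write_speed,
                               double time_to_overflow,
                               double safety_margin)
{
    double full_write_time = (double) pending_bytes / write_speed;

    /* If the whole chunk fits before the deadline, just write all of it. */
    if (full_write_time <= time_to_overflow)
        return pending_bytes;

    /* Otherwise write only what fits in the time budget (minus a margin). */
    double budget = time_to_overflow * (1.0 - safety_margin);
    size_t limited = (size_t)(budget * write_speed);
    return (limited < pending_bytes) ? limited : pending_bytes;
}
```

Called with 32MB pending, a 21MB/s write speed and a 1-second overflow prediction (5% safety margin), this returns about 20MB, matching the 0.95-second figure a1ex mentions.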

Read his full post or try the new buffering for yourself and comment here: LINK
