With the advent of digital tapeless cameras, the amount of video footage produced by both amateurs and professionals has increased dramatically – cheap digital cameras are now ubiquitous, and more cameras simply means more footage being shot.

However, the massive amounts of footage being shot all the time haven't made editing much easier or faster. On the contrary, I believe there is still much room for innovation in the field of organizing and editing footage. The editing industry still relies on almost the same methodology it used 20 or 30 years ago, and while NLE software has evolved, it has mainly gotten faster – it still rests on the very same principles as back in the day when it was invented, and that doesn't help much with the media management requirements of the 21st century. So it's nice to see innovation in the editing world that reduces the work an editor has to do.

At SIGGRAPH, Disney Research published a new scientific paper describing software that automatically edits multi-camera footage of a single event. It analyzes video footage from several angles, applies cinematography rules like the 180-degree line, determines the shooters' main area of interest, and then cuts between the cameras automatically – or so the House of Mouse says. In the video, they compare it to other products that cut between cameras randomly, and even to the results of a professional editor.

Of course, software like this won't be able to replace editors anytime soon, but it can save editors hours of work in the editing booth if it is at least able to deliver a half-decent edit of a multi-camera event shoot. Naturally, software like this won't work for boring presentations on a stage where not much is happening, but it might add a lot of value to concert shoots or recorded sports games.
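To make the idea concrete, here is a minimal toy sketch of rule-based cut selection – this is not Disney's actual algorithm (which is in their paper), just an illustration of the general principle: per time window, prefer the camera with the highest "interest" score, but penalize cutting too often so shots don't become frantic. The function name and parameters (`select_cuts`, `min_shot_len`, `cut_penalty`) are my own invention.

```python
def select_cuts(interest, min_shot_len=3, cut_penalty=0.5):
    """Toy multi-camera cut selector (illustrative only, not Disney's method).

    interest: list of per-window score lists; interest[t][c] is how
              "interesting" camera c's view is during time window t.
    Returns the chosen camera index for each window.
    """
    chosen = []
    # start on whichever camera scores highest in the first window
    current = max(range(len(interest[0])), key=lambda c: interest[0][c])
    shot_len = 0
    for scores in interest:
        best = max(range(len(scores)), key=lambda c: scores[c])
        # only cut if the current shot is long enough AND the new camera
        # beats the current one by more than the cut penalty
        if (best != current and shot_len >= min_shot_len
                and scores[best] > scores[current] + cut_penalty):
            current = best
            shot_len = 0
        chosen.append(current)
        shot_len += 1
    return chosen
```

For example, feeding it two cameras where the action shifts from camera 0 to camera 1 halfway through yields a single clean cut at the transition. A real system would also need the 180-degree-line constraint the article mentions, which would require knowing the cameras' positions relative to the action axis.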
Combine that with Microsoft’s smooth first-person hyperlapse software (which we recently covered) and we might have a winner on our hands. Can you think of projects on which this software might be helpful? And where do you see room for more innovation in the field of editing and post production in general? Let us know in the comments. via NoFilmSchool
Microsoft is hardly known as a player in the video and film industry, but their Research division has turned out some pretty spectacular image processing innovations before (incorporated into Bing Maps, for example). Now they have found a way to make point-of-view / first-person video more watchable.

An inherent problem with the typical GoPro strapped to your head is that humans do not move steadily – it’s our brains that make us think we move smoothly, when in fact there is a lot of shake (hence the need for handheld gimbals, as pioneered by Freefly Systems with the MoVi). That means our POV GoPro shots almost NEVER look anything like what the GoPro marketing department would have us believe.

In comes Microsoft with a demo of yet-unpublished software that can not only de-shake these first-person videos, but actually make them extremely smooth. In their demo video below, they show the different looks of the input material, the sped-up timelapse version of it (unbelievably shaky), and the Microsoft hyperlapse version after processing.

The downside is that it only seems to work for sped-up (i.e. timelapse) versions of those point-of-view videos, not realtime recordings, which would certainly be more useful in day-to-day use (I am currently personally deeply involved in a large first-person project that would greatly benefit from just that!). However, the results of Microsoft’s hyperlapses are nothing short of amazing: using some kind of 3D camera path mapping, the route becomes much smoother, and the software actually seems to reconstruct footage at the edges of the frame. They also show what normal stabilization looks like on the same footage, and it doesn’t come even remotely close.

They are working on putting all of this goodness into a Windows app (yeah, I know … come on, it’s 2014, please give us a Mac app too!). Until then, head over to the Microsoft Research page, where you can download the technical paper, supplemental material and a high-res video demo.
From their site:

We present a method for converting first-person videos, for example, captured with a helmet camera during activities such as rock climbing or bicycling, into hyper-lapse videos, i.e., time-lapse videos with a smoothly moving camera. At high speed-up rates, simple frame sub-sampling coupled with existing video stabilization methods does not work, because the erratic camera shake present in first-person videos is amplified by the speed-up.

Scene Reconstruction – Our algorithm first reconstructs the 3D input camera path as well as dense, per-frame proxy geometries. We then optimize a novel camera path for the output video (shown in red) that is smooth and passes near the input cameras while ensuring that the virtual camera looks in directions that can be rendered well from the input.

Proxy Geometry – Next, we compute geometric proxies for each input frame. These allow us to render the frames from the novel viewpoints on the optimized path.

Stitched & Blended – Finally, we generate the novel smoothed, time-lapse video by rendering, stitching, and blending appropriately selected source frames for each output frame.

We present a number of results for challenging videos that cannot be processed using traditional techniques.