Show Focus uses AR and LiDAR to give film crews a new sense of space, visualising the focus plane in real-time to eliminate costly reshoots and lost content.
Design and production by Wolf in Motion, 2013–2022.
Simplify camera operation and eliminate costly out-of-focus errors through spatial visualisation of the focus plane.
A hardware and software system using LiDAR to see and control the camera focus plane in 3D.
Pioneered in 2013, this spatial paradigm is now an industry standard globally, reducing reshoots and enhancing manual craft.
Show Focus is a system using augmented reality to show and control the focus of the cinema camera it is paired with. Primarily, it addresses a persistent and expensive issue in the film industry: out-of-focus footage. Such errors often lead to costly reshoots or, in the case of documentaries and live interviews, the permanent loss of unique, unrepeatable content. Designed for a new breed of film professionals, Show Focus bridges the gap between high-end cinematography and emerging spatial computing. By turning the invisible focus plane into a visible, interactive 3D ‘artefact’ on set, the system empowers crews to extend their craft through intuitive spatial control, prioritising human agency over automation.
In the film industry, the focus puller is expected to perform flawlessly on every take. While almost every aspect of an image — from colour to exposure — can be adjusted or even corrected in post-production, a soft focus is arguably the only mistake for which there is no known cure. Paradoxically, the rapid pace of technological progress has made this task significantly harder. Modern lenses with wide apertures and large camera sensors induce an incredibly shallow depth of field. Simultaneously, the industry’s move towards 4K, 8K, and beyond makes even the slightest focusing error glaringly obvious to the viewer.
Traditional preparation involves a laborious process of physical marking: placing coloured tape on the ground and adding corresponding marks to the lens. However, these marks cannot always be relied upon; focus pullers must keep their eyes on the action to anticipate the movements of actors, meaning they often have to guess the distance in space. They trust the kinaesthetic memory of their fingers to find the right angle at the right pace. While manufacturers have invested heavily in autofocus solutions that use eye tracking and machine learning, these lack the stylistic control and human touch required for cinematic fiction. The challenge was to create a tool that extends the human craft rather than replacing it with an algorithm.
Our studio’s approach was informed by our deep experience in 3D animation and virtual camera work. We initiated Show Focus by exploring how light detection and ranging (LiDAR) systems could be used to augment the human sense of space. In 2013, we created our first focusing mechanism using a spatial point cloud visualisation to represent the focus plane. While we achieved a precision of approximately one centimetre, the hardware available at the time was too cumbersome and delicate for use on a high-pressure film set.
The breakthrough came in 2020 with the release of the first LiDAR-equipped consumer tablets. This provided the opportunity to revisit our earlier work and design a portable, professional-grade solution. We developed a custom camera mount that acts as a bridge between the physical focus ring and our Show Focus software, which we built as a dedicated iOS application.
Through interviews with focus pullers, vloggers, and directors of photography, we learned that filmmakers are results-oriented and value functionality, yet they often rely on unpolished prototypes or ‘hacked’ equipment to get the job done. This insight drove us to create an interface that is both straightforward and intuitive. We went through successive phases of prototyping and testing, moving from a simple manual ‘follow focus’ mechanism to a sophisticated system using Bluetooth Low Energy (BLE) to encode ring rotation into real-time spatial coordinates.
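To make that last step concrete, the sketch below illustrates one plausible way an encoder packet received over BLE could be decoded and interpolated into a focus distance. The packet layout, tick range, and the CalibrationMark type are illustrative assumptions made for this write-up, not the shipped Show Focus protocol.

```swift
import Foundation

// Hypothetical decoding of a follow-focus encoder packet received over BLE,
// and interpolation of the ring angle into a subject distance. The packet
// layout and calibration values are assumptions, not the actual protocol.

/// A calibration point pairing a focus-ring angle (degrees) with a measured
/// subject distance (metres), captured per lens during setup.
struct CalibrationMark {
    let ringAngle: Double
    let distance: Double
}

/// Decodes an assumed 2-byte little-endian encoder reading
/// (0...4095 ticks across a 300 degree ring throw) into a ring angle.
func ringAngle(from packet: Data) -> Double? {
    let bytes = [UInt8](packet)
    guard bytes.count >= 2 else { return nil }
    let ticks = UInt16(bytes[0]) | (UInt16(bytes[1]) << 8)
    return Double(ticks) / 4095.0 * 300.0
}

/// Converts a ring angle into a focus distance by interpolating between
/// the nearest calibration marks for the mounted lens.
func focusDistance(forAngle angle: Double, marks: [CalibrationMark]) -> Double? {
    let sorted = marks.sorted { $0.ringAngle < $1.ringAngle }
    guard let first = sorted.first, let last = sorted.last,
          angle >= first.ringAngle, angle <= last.ringAngle else { return nil }
    for (a, b) in zip(sorted, sorted.dropFirst()) where angle <= b.ringAngle {
        let t = (angle - a.ringAngle) / (b.ringAngle - a.ringAngle)
        return a.distance + t * (b.distance - a.distance)
    }
    return last.distance
}
```

In a CoreBluetooth-based app, the raw packet would typically arrive in the characteristic-update callback, and the interpolated distance would then drive both the motor position and the AR overlay described below.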
The final version of Show Focus is an AR application that turns the invisible focus plane into a visible 3D ‘artefact’ within the scene. By using the LiDAR sensor on a tablet or smartphone, the system meshes the environment and precisely displays the position of the focus plane relative to the subjects. This allows the crew to ‘know’ the focus rather than ‘guess’ it, all while keeping their eyes on the action.
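As a rough illustration of that loop, the sketch below assumes ARKit scene reconstruction and RealityKit on a LiDAR-equipped iPad; the FocusPlaneController type and its details are hypothetical stand-ins rather than the actual application code.

```swift
import ARKit
import RealityKit
import UIKit

// Minimal sketch: mesh the environment with LiDAR and keep a translucent quad
// positioned at the current focus distance in front of the camera.
final class FocusPlaneController {
    let arView: ARView
    let planeAnchor = AnchorEntity(world: SIMD3<Float>(0, 0, 0))
    let planeEntity: ModelEntity

    init(arView: ARView) {
        self.arView = arView

        // Scene reconstruction gives the AR view real surfaces against which
        // the focus plane can be judged.
        let config = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
            config.sceneReconstruction = .mesh
        }
        arView.session.run(config)

        // A semi-transparent quad stands in for the plane of sharp focus.
        let mesh = MeshResource.generatePlane(width: 2.0, height: 1.2)
        let material = SimpleMaterial(color: UIColor.green.withAlphaComponent(0.4),
                                      isMetallic: false)
        planeEntity = ModelEntity(mesh: mesh, materials: [material])
        planeAnchor.addChild(planeEntity)
        arView.scene.addAnchor(planeAnchor)
    }

    /// Re-positions the virtual plane `distance` metres in front of the camera.
    func update(focusDistance distance: Float) {
        guard let camera = arView.session.currentFrame?.camera else { return }
        let t = camera.transform
        // In ARKit's convention the camera looks down its -Z axis.
        let forward = -SIMD3<Float>(t.columns.2.x, t.columns.2.y, t.columns.2.z)
        let origin = SIMD3<Float>(t.columns.3.x, t.columns.3.y, t.columns.3.z)
        planeAnchor.position = origin + forward * distance
        planeAnchor.orientation = Transform(matrix: t).rotation
    }
}
```

Note that the tablet's own camera stands in here for the cinema camera; in practice the offset between the two would need to be calibrated.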
The system features a motorised hardware kit that allows for two-way interaction. A focus puller can rotate the focus ring manually and watch the digital plane move in the AR view, or they can simply touch and drag the representation of the focus plane on the screen to move the physical lens. This synergy of hardware and software means that complex rack focuses, which once took years of training to master, can now be executed with spatial confidence.
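For the screen-to-lens direction, a drag on the tablet can be resolved against the reconstructed scene with a raycast, and the resulting distance sent to the lens motor. In the sketch below, LensMotor and handleDrag are invented placeholders for whatever interface actually drives the motorised follow focus.

```swift
import ARKit
import RealityKit
import UIKit
import simd

/// Assumed stand-in for the BLE-driven follow-focus motor.
protocol LensMotor {
    /// Drives the focus ring so the plane of sharp focus sits at `metres`.
    func setFocusDistance(_ metres: Float)
}

/// Raycasts a touch point into the meshed scene and retargets the lens to the
/// distance of whatever surface the user dragged onto.
func handleDrag(at point: CGPoint, in arView: ARView, motor: LensMotor) {
    guard let hit = arView.raycast(from: point,
                                   allowing: .estimatedPlane,
                                   alignment: .any).first,
          let camera = arView.session.currentFrame?.camera else { return }

    let cameraPosition = SIMD3<Float>(camera.transform.columns.3.x,
                                      camera.transform.columns.3.y,
                                      camera.transform.columns.3.z)
    let hitPosition = SIMD3<Float>(hit.worldTransform.columns.3.x,
                                   hit.worldTransform.columns.3.y,
                                   hit.worldTransform.columns.3.z)

    // Command the motorised follow focus to match the new plane position.
    motor.setFocusDistance(simd_distance(cameraPosition, hitPosition))
}
```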
Furthermore, Show Focus addresses the complexities of modern CGI-heavy productions. In scenes involving virtual elements, actors and crews often struggle with ‘blocking’ — essentially imagining where a virtual object will be in the final frame. Because Show Focus is an integrated communication system, it allows for 3D compositing and scene blocking to be managed directly on set as the scene is being shot. With the rise of mixed reality glasses, we see a future where the lens angle, depth of field, and 3D elements are seen and adjusted naturally within the physical environment.
We tested Show Focus with a select group of camera professionals. Beyond simple precision, the system revealed unexpected creative benefits. Users found it much easier to judge the distance between multiple objects in a frame, allowing them to choreograph complex movements involving several points of focus simultaneously. It allows the focus puller to visualise the ‘near limit’ and ‘far limit’ of acceptable focus, giving them total control over the aesthetic of the shot.
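Those limits follow from standard thin-lens depth-of-field arithmetic rather than anything proprietary; the sketch below shows the conventional approximation, with example figures chosen purely for illustration.

```swift
import Foundation

/// Near and far limits of acceptable focus, in metres. `far` is nil at or
/// beyond the hyperfocal distance, i.e. effectively infinity.
func focusLimits(focalLength f: Double,        // metres, e.g. 0.05 for a 50 mm lens
                 fNumber n: Double,            // e.g. 1.4
                 circleOfConfusion c: Double,  // metres, e.g. 0.00003 for full frame
                 subjectDistance s: Double     // metres
) -> (near: Double, far: Double?) {
    let hyperfocal = f * f / (n * c) + f
    let near = s * (hyperfocal - f) / (hyperfocal + s - 2 * f)
    let far: Double? = s < hyperfocal ? s * (hyperfocal - f) / (hyperfocal - s) : nil
    return (near, far)
}

// Example: a 50 mm lens at f/1.4 focused at 2 m on a full-frame sensor holds
// roughly 1.94 m to 2.07 m in focus, about 13 cm of usable depth.
let limits = focusLimits(focalLength: 0.05, fNumber: 1.4,
                         circleOfConfusion: 0.00003, subjectDistance: 2.0)
print(limits)
```

At wider apertures or longer focal lengths that margin shrinks further, which is exactly why seeing the limits in space, rather than reading them off a chart, matters on set.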
Having pioneered the first spatial focusing concepts in 2013, we have seen the industry evolve to meet our early vision; LiDAR-based systems are now used on film sets globally, transforming from experimental prototypes into an essential standard for high-end production. Show Focus earns its keep by preserving the human element in a world of increasing automation. It serves as a paradigm-shifting example of how augmented reality can be applied to the creative industries — not as a gimmick, but as a professional tool that enhances a craft perfected over the 20th century. By reducing the waste associated with failed takes and providing a more intuitive way to manage 3D space, we are creating a more sustainable and efficient production environment for the next generation of filmmakers.
The project’s innovation and technical excellence were celebrated globally, earning an iF Design Award 2022 in the Product category and a place on the D&AD Shortlist 2022 for Digital Design. These accolades confirm the system’s potential to redefine professional cinematography through the thoughtful application of spatial computing.