3D Scanning and Mesh Reconstruction
The aim of this project is to create a mid-resolution human character from 3D scanning data, finding the best compromise between the resources and time required.
My workflow offers a practical and portable solution that requires only minor adaptations. I used Occipital's Skanect software to collect multiple 3D scans and, after a few trials, brought them together in Pixologic ZBrush, where I merged the diffuse maps and generated a new UV map.
The point cloud data is likely to generate mesh polygons beneath the top surface, which requires cleanup and remeshing.
Capturing the arms requires supports for the actor to improve precision, and this data also needs cleanup and remeshing.
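As a rough illustration of what this cleanup and remeshing step involves (outside the Skanect/ZBrush workflow itself), the sketch below uses the open-source Open3D library in Python to strip duplicated and floating geometry from a raw scan and decimate it to a mid-resolution polygon budget. The file names and numeric settings are assumptions for illustration, not values taken from my pipeline.

    # Hedged sketch only: cleaning and remeshing a raw scan with Open3D.
    # File names and numeric settings are illustrative assumptions.
    import numpy as np
    import open3d as o3d

    # Load a raw scan exported from the scanning software (e.g. as PLY)
    mesh = o3d.io.read_triangle_mesh("arm_scan_raw.ply")

    # Basic cleanup: remove duplicated, degenerate and non-manifold geometry
    mesh.remove_duplicated_vertices()
    mesh.remove_duplicated_triangles()
    mesh.remove_degenerate_triangles()
    mesh.remove_non_manifold_edges()

    # Keep only the largest connected component, discarding the small
    # "floating" patches that end up underneath the top surface
    clusters, cluster_sizes, _ = mesh.cluster_connected_triangles()
    clusters = np.asarray(clusters)
    cluster_sizes = np.asarray(cluster_sizes)
    mesh.remove_triangles_by_mask(clusters != cluster_sizes.argmax())
    mesh.remove_unreferenced_vertices()

    # Decimate to a mid-resolution polygon budget and save the result
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=50000)
    o3d.io.write_triangle_mesh("arm_scan_clean.ply", mesh)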
Industry Standards and Final Considerations
Currently, the industry makes extensive use of LiDAR technology for the fast reconstruction of large real-world environments such as cities, landscapes and architectural features. 3D scanning of objects and people is also common, providing detailed digital copies of real actors.
‘Light Stage X’ is an example of one of the most advanced systems deployed in the VFX industry for recreating highly detailed human faces and bodies, capturing additional information such as specularity.
At present, the level of detail in the shaders and animation applied to digital doubles of human faces is higher than ever. The best example is the digital recreation of Peter Cushing and Carrie Fisher (Grand Moff Tarkin and Princess Leia in the recently released feature film ‘Rogue One: A Star Wars Story’), where every available piece of visual data was gathered to bring to the screen actors who could not be filmed in person.
Returning to my project, the expected standards are tied to a considerably lower budget.
The final digital asset is most likely to be used for wide shots or crowd-simulation shots.
I used an Xbox 360 Kinect, Occipital Skanect and Pixologic ZBrush. This bundle is available on the market for around £700, with perpetual licenses for the two software packages.
A full digital reconstruction takes between two and three working days, including the multiple scanning sessions.
Looking at similar solutions at around the same price, in my research I found a piece of software comparable to Occipital Skanect called ReconstructMe, which is slightly pricier and offers fewer functions in the reconstruction process.
Another way to develop a mid-resolution 3D-scanned human character, which I found during my research, is presented in a series of video tutorials explaining how to recombine a full-body mesh from multiple 3D scanning sessions with MeshLab and Blender, two open-source packages.
Although this solution is extremely appealing from a financial point of view, it was not clear to me whether the recombined character can be delivered in the so-called ‘T-pose’, ready for animation.
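For readers curious what that open-source route can look like in practice, here is a hypothetical Blender (bpy) sketch of joining several scan parts and fusing them with a voxel remesh. The object names and the voxel size are assumptions chosen for illustration and are not taken from the tutorials mentioned above.

    # Hypothetical Blender (bpy) sketch: join scan parts and voxel-remesh them.
    import bpy

    # Select the imported scan parts (object names are assumed)
    part_names = ["torso_scan", "arm_left_scan", "arm_right_scan", "head_scan"]
    bpy.ops.object.select_all(action='DESELECT')
    for name in part_names:
        bpy.data.objects[name].select_set(True)
    bpy.context.view_layer.objects.active = bpy.data.objects[part_names[0]]

    # Join the parts into a single object
    bpy.ops.object.join()
    body = bpy.context.active_object

    # A voxel Remesh modifier fuses the overlapping surfaces into one shell
    remesh = body.modifiers.new(name="Remesh", type='REMESH')
    remesh.mode = 'VOXEL'
    remesh.voxel_size = 0.005  # in scene units (metres); assumed value
    bpy.ops.object.modifier_apply(modifier=remesh.name)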
At the same time, as shown in my video blog, the process of merging the different parts and rebuilding UV maps and textures inside ZBrush is extremely fast and reliable.
During my research I also found that Adobe has released a piece of software called Fuse CC that enables the quick creation and animation of 3D human characters. This software currently requires a monthly subscription fee for the license and suffers from major limitations.
In fact, character customization is limited to presets, and any external character imported into the software for animation automatically becomes the property of Adobe.