I’ve been really excited about this release, ever since I saw the technology in action at last year’s AU. The first release of the technology was interesting, but this version really takes things to a new level: you can generate a fully textured mesh from a set of photographs.
A lot of work has clearly gone into addressing scalability: I have a set of close to 500 images of a local monument that the Photofly service (for which the Photo Scene Editor is the main client application) previously could not handle. Now, though, the pictures do get processed and even generate a mesh.
Let’s start with the initial steps. After installation, the main start-up screen is much cleaner:
And a new “quick registration” process has been put in place when you first select “Compute Photo Scene” (after having chosen to create a new Photo Scene and selected your images):
As before, image processing can already start once a certain number of images have been uploaded:
But the tracking of the various stages (should you choose to wait for them to occur, rather than being notified by email) has also been made more elegant, with completed phases being collapsed:
On loading your scene, there may well be images that could not be stitched (which was certainly the case with mine):
And the initial mesh quality is set, by default, to “draft”:
There’s a lot I like about the new interface. For instance, when you select a particular navigation mode (Zoom, Pan or Orbit), helpful tips are displayed at the bottom of the screen, explaining how to perform the action using the mouse in combination with keyboard shortcuts.
My initial scene did need some work – unsurprising given the number of images in the scene, as well as the number that could not be stitched – but the fact is I got a mesh, and one I could improve by editing the scene.
And if we look at the mesh with its texture applied, the results are really impressive. I deleted some erroneous points to improve the mesh, but didn’t even have to stitch additional images manually to get very decent results.
I’m not going to go into greater depth, at this stage – my expertise is more with programming than with demonstrating product functionality, and there are plenty of others blogging about this product’s capabilities – but I do encourage you to check this out. This type of functionality is really going to change the way we think about the design process by drastically reducing the cost of 3D model capture.