Hello

Hello, this is the less formal but way more exciting part of my website where I post all of my tech experiments.


Monday, March 8, 2021

Perspective Correction for XR using UV hacks (virtual-production-ish stuff)

Here is the end result, and you can read more if you think it's interesting.


Why I find this interesting
Firstly, because of its relevance to virtual-production techniques like the ones used in filming The Mandalorian, and secondly because I think perspective correction offers a zero-barrier opportunity for a single viewer to experience mixed reality.

An idea of application
In my head I am picturing a gallery filled with mostly traditional art, but one frame holds a screen with a 3D scene displayed inside it. As you walk by, the screen morphs so that the scene looks approximately correct from the angle you are viewing it, as though the screen were a window to another world rather than a flat image.

Some limitations to this method are:

- you can only have one primary viewer at a time (fine with the current social distancing)
- the lack of stereoscopy can be disorienting (the perspective correction is averaged between the eyes)

How to UV hack it
Here is a basic overview of how to achieve this using TouchDesigner. I will not be going over every button click in detail, just the main concepts.

You will need:
- an Xbox Kinect and adapter for PC
- TouchDesigner
- a human head (could be yours)
- a general understanding of how UVs work

Step 1: Setting up the virtual scene
Add a box with the dimensions of your screen.
Separate it into two objects: one with just the 'screen object' itself and one with the 'box back'.
The 'box back' will eventually be replaced with whatever you want to be viewed through the screen, but for now it is a stand-in for us to test whether the perspective correction is working.

Step 2: Calibration
Set up the Kinect in TouchDesigner and isolate the head and one hand as inputs. The hand will be used for calibration.

Create a placeholder object and feed the hand location into it.
The object should mirror your movements in virtual space.
Place your hand on a corner of the computer screen and match the transforms of the 'box back' and 'screen object' to it.
Then do the same for another corner.
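Here is a minimal sketch of the idea behind that calibration, written as plain Python: assuming you record the hand position at two opposite corners, the virtual screen's center and size fall out of the difference (the sample numbers are made up).

```python
# Minimal sketch (plain Python, hypothetical values): given two opposite
# screen corners captured from the tracked hand, derive the transform to
# apply to the 'screen object' and 'box back' in the virtual scene.

def screen_transform(corner_a, corner_b):
    """corner_a / corner_b are (x, y, z) hand positions in Kinect space."""
    center = tuple((a + b) / 2 for a, b in zip(corner_a, corner_b))
    width = abs(corner_b[0] - corner_a[0])   # assumes corners differ in x...
    height = abs(corner_b[1] - corner_a[1])  # ...and y; depth is ignored
    return center, width, height

# Example with made-up calibration samples (metres, Kinect space)
center, w, h = screen_transform((-0.30, 0.10, 1.20), (0.30, -0.25, 1.20))
print(center, w, h)  # -> roughly (0.0, -0.075, 1.2) 0.6 0.35
```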

Step 3: UV hacking
Once the screen is properly calibrated in the virtual world, add a camera and reference your head location to the camera's transform. Then set the 'screen object' as the look-at object for the camera.
Only include the 'box back' in the renderable objects; do not include the screen.
Then project UVs onto the 'screen object' from the camera's perspective.
You will need to subdivide your 'screen object' before you do this because UVs are linearly interpolated between vertices.
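TouchDesigner does the projection for you, but if it helps to see the idea spelled out, here is a rough numpy sketch: every subdivided vertex of the 'screen object' gets the UV of where the head camera sees it. The look-at setup, the 60-degree FOV, and the positions are all made-up values, not anything exported from my network.

```python
# Sketch of the "UV hack": project each screen-mesh vertex through a
# camera placed at the head position and turn its NDC position into a UV.
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    eye, target, up = map(np.asarray, (eye, target, up))
    f = target - eye; f = f / np.linalg.norm(f)      # forward
    r = np.cross(f, up); r = r / np.linalg.norm(r)   # right
    u = np.cross(r, f)                               # true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye                # translate into view space
    return view

def project_uvs(vertices, head_pos, screen_center, fov_deg=60.0):
    view = look_at(head_pos, screen_center)
    tan_half = np.tan(np.radians(fov_deg) / 2)
    uvs = []
    for v in vertices:
        cam = view @ np.append(v, 1.0)               # vertex in camera space
        x = cam[0] / (-cam[2] * tan_half)
        y = cam[1] / (-cam[2] * tan_half)
        uvs.append(((x + 1) / 2, (y + 1) / 2))       # NDC -> 0..1 UV
    return uvs

# head half a metre to the left of a small screen centred at the origin
print(project_uvs(np.array([[-0.3, -0.2, 0.0], [0.3, 0.2, 0.0]]),
                  head_pos=(-0.5, 0.0, 1.0), screen_center=(0.0, 0.0, 0.0)))
```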

Then set up another camera that is pointed directly at the 'screen object' and output that render.

And that's it.












Thursday, March 4, 2021

Training a ML Network to recognize Lucky Charms using Synthetic Data in Blender

**This exploration was completed with the help of Immersive Limit's Blender Synthetics Course**

First, I gathered a small data set to test against by photographing 3 different kinds of Lucky Charms marshmallows with my phone against a white background.



I split them into folders by class and created a Photoshop action that reduced their resolution to 224 × 224 and saved the images.
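If you would rather skip Photoshop, the same batch resize could be scripted with Pillow; the folder names here are placeholders for however you organize the classes.

```python
# Equivalent batch resize in Python with Pillow (a stand-in for the
# Photoshop action): shrink every photo in a class folder to 224x224.
from pathlib import Path
from PIL import Image

SRC = Path("real_photos/Love")        # hypothetical folder layout
DST = Path("real_photos_224/Love")
DST.mkdir(parents=True, exist_ok=True)

for photo in SRC.glob("*.jpg"):
    with Image.open(photo) as img:
        img.resize((224, 224), Image.LANCZOS).save(DST / photo.name)
```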

Then, I opened a new file in Blender to create the synthetic data.
First, I created a base mesh for the Love marshmallow.

Then, I created a geometry nodes modifier to subdivide and randomize the shape a bit so that each render's shape would be slightly different. The geometry nodes distort the shape in 2 ways: one large distortion on the base mesh to vary the shape of the whole and a second displacement to vary the subdivided surface to create texture.



One benefit of using geometry nodes is that, similar to materials, a set of geometry nodes can be applied to any number of objects, and modification of that set will be universally applied.

Next, I created a material to approximate the surface of the marshmallow.

While not photoreal, I was intentional about including variation in the surface color and displacement that could easily be controlled by an external script.
I then created 2 more base meshes for Luck and BlueMoon and linked them to the same set of geometry nodes for shape variation.



I duplicated the material from Love and made a variation for the other two.


I then created a simple environment with a white surface and an area light. I also created a simple camera rig using nulls so that I could vary the angle for each shot.
I also parented all of the marshmallows to a null so that I could easily control their location and rotation.

I created a script that would render images into folders, modifying the scene each time.
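The real script follows the Immersive Limit course, but a stripped-down sketch of that kind of render loop looks roughly like this (the object names and angle ranges are placeholders):

```python
# Sketch of a randomized render loop (run inside Blender); "CameraRig" and
# "Marshmallows" are placeholder names for the control nulls in my scene.
import math
import random
import bpy

OUT = "//synthetic/train/Love/"            # relative to the .blend file
rig = bpy.data.objects["CameraRig"]        # null the camera is parented to
charms = bpy.data.objects["Marshmallows"]  # null all marshmallows follow

for i in range(300):
    # vary the camera angle and the marshmallow orientation for every render
    rig.rotation_euler = (math.radians(random.uniform(20, 70)), 0.0,
                          math.radians(random.uniform(0, 360)))
    charms.rotation_euler.z = math.radians(random.uniform(0, 360))

    bpy.context.scene.render.filepath = f"{OUT}love_{i:04d}.png"
    bpy.ops.render.render(write_still=True)
```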


The object variations are crucial to creating a model that can generalize.
I rendered 300 training images, 80 validation images, and 1 test image; the test image was just there to create the folder that I would populate with my photographs of real marshmallows to test against.




I then used my data set to retrain an ImageNet-pretrained network using TensorFlow in Jupyter Notebooks.

I retrained the network for 7 epochs.
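For reference, a condensed version of that kind of transfer-learning setup looks something like the following. Using MobileNetV2 as the ImageNet base, and these folder names, are my assumptions for the sketch, not necessarily what the course prescribes.

```python
# Condensed transfer-learning sketch: freeze an ImageNet base and train a
# small classification head on the synthetic marshmallow renders.
import tensorflow as tf

IMG_SIZE = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic/validation", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False                       # keep the ImageNet features

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNet expects -1..1
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(3),                # Love, Luck, BlueMoon
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=7)
```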


Once trained, I ran the test photographs through the network with a 100% success rate, although I would need a larger sample size with more variation to really test its limits. For my first ML network, I am satisfied with what I have learned.
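For completeness, running one of the real photos through the trained network in the notebook looks roughly like this (the file path is a placeholder, and `model` and `train_ds` come from the training cell above):

```python
# Classify a single real phone photo with the retrained model.
import numpy as np
import tensorflow as tf

img = tf.keras.utils.load_img("real_photos_224/Love/IMG_0001.jpg",
                              target_size=(224, 224))
batch = np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
probs = tf.nn.softmax(model.predict(batch)[0])
print(train_ds.class_names[int(np.argmax(probs))])  # expected: 'Love'
```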


























Monday, February 15, 2021

Blender to Houdini USD pipeline experimentation

In Blender, my "Team" has created 3 scenes:

A block out scene created by the concept artist


A scene with modeled display screens created by the hard-surface modeler



And a box modeled by my boss's 10-year-old son that must be included if I value my job.



To export these scenes as USD: File>Export>Universal Scene Description


This dialogue will appear.

For this file, I will uncheck UV maps and materials as I have not created any.

From my tests, USD, USDC, and USDA all work with Houdini Solaris. Blender defaults to USDC but, if you want to change it, just type the extension you want in the dialogue and then hit 'Export USD'.
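If you would rather script the export than click through the dialogue, something along these lines should work; the operator's parameter names can vary between Blender versions, so double-check them against the tooltips in your build.

```python
# Scripted version of the same export (run in Blender's Python console).
import bpy

bpy.ops.wm.usd_export(
    filepath="//usd/block_out.usdc",  # .usd / .usda also work in Solaris
    export_uvmaps=False,              # unchecked, same as in the dialogue
    export_materials=False,
    selected_objects_only=False,
)
```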

Now I will bring the files into Houdini Solaris.



Switch the workspace to Solaris, which should look like this.

Add a file node.

I get this error message, but it hasn't caused any problems, so I have been ignoring it until I have time to look up the issue.

In the properties panel, I can import a USD scene to the file node by clicking the paper with a mouse icon.


If I hit Accept, I can see my scene graph and my 3D scene.

Because Blender is a silly program that uses the Z-axis as the up axis, everything comes in sideways. To fix this, I added a transform node and rotated it -90 degrees in X.


This could easily be put into a subnetwork and turned into an HDA so that you only have to do it once, but for now I will just copy it 3 times and import the other files.
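If you did want to script that fix, a rough sketch in Houdini's Python shell might look like the following; the 'xform' node-type name and the parm names are assumptions worth verifying in your Houdini version.

```python
# Sketch: drop a transform LOP after an existing File LOP and rotate
# -90 degrees in X so the Blender Z-up scene reads as Y-up.
import hou

stage = hou.node("/stage")
file_node = stage.node("file1")                 # the existing File LOP

fix = stage.createNode("xform", "blender_up_axis_fix")
fix.setFirstInput(file_node)
fix.parmTuple("r").set((-90, 0, 0))             # -90 degrees in X
fix.setDisplayFlag(True)
```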


I will add a null as a buffer, in case any of the artists create future versions that they would like to merge in, before I merge the different artists' versions together.

As I select each null, I can watch the scene graph change to reflect the selected scene. If I add a sublayer node between the 'Block In' and 'Display Model' scenes and change the sublayer type to 'Sublayer Inputs', then I can layer the two scenes on top of each other.
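Outside of Houdini, the same sublayering can be expressed directly with the USD Python API; the file names below are stand-ins for the exported Blender scenes.

```python
# Sublayer two exported scenes into one composite layer with the USD API.
from pxr import Sdf, Usd

root = Sdf.Layer.CreateNew("composite.usda")
# earlier entries are stronger, so the display-model layer wins over the block-out
root.subLayerPaths.append("display_model.usdc")
root.subLayerPaths.append("block_out.usdc")
root.Save()

stage = Usd.Stage.Open("composite.usda")
print(stage.ExportToString())     # inspect the composed scene graph
```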




This is the step where all of my first-attempt errors became visible.

At first, I made the mistake of keeping the object namespace the same but not the mesh namespace. Here you can see the mistake I've made.




While the Xform objects were given the right names, the meshes that belong to them were not (except for the first one, which I fixed to show how the merging would occur differently).


First Layer / Second Layer



Composite scene


First, notice that the 3 that had the wrong mesh name for the object now have 2 meshes per transform, while the one that was named correctly has replaced the mesh that was in the block-out scene.

Also, notice how in the composite scene the layout cubes are facing the same way? This is because the scene with the more complete display screens had the rotation applied so that their local axis was in line with the global axis. Because the Xform objects still had the same name, the transformations were overridden even though both meshes were kept. 

As soon as the namespaces were fixed, the layering worked as expected.
The first two scenes layered similar scenes. What if I just want to layer in a single asset? I can use a reference node to select just the mesh from the 'Boss Kids' scene.

Once I have the mesh selected, I can place it within the transform and disable the placeholder.
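The same single-asset reference can also be authored with the USD Python API; the prim paths below are guesses at what Blender exported, so check them against the scene graph tree first.

```python
# Reference just one mesh from another file onto a prim in the composite.
from pxr import Usd

stage = Usd.Stage.Open("composite.usda")
target = stage.OverridePrim("/root/Placeholder_Box/BossKidBox")
target.GetReferences().AddReference("boss_kid_box.usdc",   # asset path
                                    "/root/Cube")          # prim inside it
stage.GetRootLayer().Save()
```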



These are my explorations with USD so far. More to come :)