**This exploration was completed with the help of Immersive Limit's Blender Synthetics Course**
First, I gathered a small data set to test against by photographing three different kinds of Lucky Charms marshmallows with my phone against a white background.
I split the photos into folders by class and created a Photoshop action that reduced each image's resolution to 224 pixels and saved the result.
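For anyone without Photoshop, the same batch resize can be scripted. Here is a minimal sketch using Pillow; the folder names are hypothetical, and it assumes the photos are already sorted into class subfolders:

```python
from pathlib import Path
from PIL import Image

# Hypothetical folder names; mirror the class subfolders while resizing.
src, dst = Path("photos"), Path("photos_224")

for img_path in src.rglob("*.jpg"):
    out = dst / img_path.relative_to(src)
    out.parent.mkdir(parents=True, exist_ok=True)
    # 224x224 is the standard input size for many ImageNet-style classifiers.
    Image.open(img_path).resize((224, 224)).save(out)
```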
Then, I opened a new file in Blender to create the synthetic data.
I started by creating a base mesh for the Love marshmallow.
Then, I created a geometry nodes modifier to subdivide the mesh and randomize its shape so that each render would be slightly different. The geometry nodes distort the mesh in two ways: a large-scale distortion on the base mesh to vary the overall shape, and a second, finer displacement on the subdivided surface to create texture.
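I built this in the geometry nodes editor rather than in code, but the same two-layer idea can be sketched with Blender's Python API using classic Subdivision Surface and Displace modifiers instead; the object and modifier names here are illustrative, not the ones in my file:

```python
import bpy

obj = bpy.data.objects["Love"]  # assumed object name

# Subdivide so there is enough geometry to displace.
obj.modifiers.new("Subdiv", type='SUBSURF').levels = 3

# Large-scale noise to vary the overall shape.
big = bpy.data.textures.new("BigNoise", type='CLOUDS')
big.noise_scale = 1.0
mod = obj.modifiers.new("BigDistort", type='DISPLACE')
mod.texture, mod.strength = big, 0.3

# Fine noise to roughen the subdivided surface.
fine = bpy.data.textures.new("FineNoise", type='CLOUDS')
fine.noise_scale = 0.1
mod2 = obj.modifiers.new("SurfaceTexture", type='DISPLACE')
mod2.texture, mod2.strength = fine, 0.02
```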
One benefit of geometry nodes is that, like materials, a single node group can be applied to any number of objects, and any modification to that group is applied everywhere it is used.
Next, I created a material to approximate the surface of the marshmallow.
While the material is not photoreal, I intentionally included variation in the surface color and displacement that could easily be controlled by an external script.
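As a rough sketch of what that external control looks like, a script can reach into the material's node tree and randomize the inputs that drive color and displacement. The material, node, and socket names below are assumptions, not the actual names in my file:

```python
import bpy
import random

mat = bpy.data.materials["Marshmallow"]  # assumed material name
nodes = mat.node_tree.nodes

# Randomize the noise driving surface color and the displacement strength
# so every render gets a slightly different marshmallow surface.
nodes["ColorNoise"].inputs["Scale"].default_value = random.uniform(2.0, 6.0)
nodes["Displacement"].inputs["Scale"].default_value = random.uniform(0.01, 0.05)
```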
I then created two more base meshes, for Luck and BlueMoon, and linked them to the same set of geometry nodes for shape variation.
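Linking works the same way from a script; here is a minimal sketch, assuming the node group and objects are named as shown:

```python
import bpy

# Assumes a geometry node group named "MarshmallowDistort" already exists.
node_group = bpy.data.node_groups["MarshmallowDistort"]

# Give every base mesh a Geometry Nodes modifier that points at the same
# group; editing the group later updates all three marshmallows at once.
for name in ("Love", "Luck", "BlueMoon"):  # assumed object names
    mod = bpy.data.objects[name].modifiers.new("GeoDistort", type='NODES')
    mod.node_group = node_group
```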
I then created a simple environment with a white surface and an area light, plus a simple camera rig built from empties (Blender's equivalent of nulls) so that I could vary the camera angle for each shot.
I also parented all of the marshmallows to an empty so that I could easily control their location and rotation as a group.
I created a script that renders images into folders, modifying the scene before each shot.
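My script isn't reproduced here, but a minimal sketch of that kind of driver looks like the following, assuming the camera and the marshmallows are each parented to an empty with the names shown (one class and split are shown for brevity):

```python
import bpy
import math
import os
import random

scene = bpy.context.scene
rig = bpy.data.objects["CameraRig"]       # assumed name of the camera's parent empty
group = bpy.data.objects["Marshmallows"]  # assumed name of the marshmallows' parent empty
out_dir = bpy.path.abspath("//renders/train/Love")  # one class/split for brevity

def randomize_scene():
    # New camera angle and marshmallow orientation for every shot; the
    # material and shape randomization sketched above would also go here.
    rig.rotation_euler = (random.uniform(0.0, math.radians(60.0)), 0.0,
                          random.uniform(0.0, math.tau))
    group.rotation_euler[2] = random.uniform(0.0, math.tau)

for i in range(300):
    randomize_scene()
    scene.render.filepath = os.path.join(out_dir, f"{i:04d}.png")
    bpy.ops.render.render(write_still=True)
```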
These per-render variations are crucial for training a model that can generalize from synthetic renders to real photographs.
I rendered 300 training images and 80 validation images, plus a single test image just to create the folder structure that I would populate with my photographs of real marshmallows to test against.
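The training step itself isn't shown in this post; as a sketch of one common approach to a three-class classifier at 224×224, here is a transfer-learning setup using PyTorch and torchvision. The framework, model, and hyperparameters are assumptions, not what I necessarily used:

```python
import torch
from torch import nn
from torchvision import datasets, models, transforms

# Assumes the renders/ layout has one class subfolder per split.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("renders/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

# Start from ImageNet weights and replace the head with a 3-class output.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count is arbitrary for this sketch
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```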
Once the network was trained, I ran the test photographs through it with a 100% success rate, although I would need a larger sample size with more variation to really test its limits. For my first ML network, I am satisfied with what I have learned.