WebGL, GenAI and the brave new era of Internet Expression.

Tristan John
6 min read · May 26, 2024

As an old Disco Diffusion kid, and probably one of the first 40 people on the planet (outside of Stability) to run inference on a Stable Diffusion model, I had no qualms about having the central visual feature of my website be an AI-generated, whimsical and oddly "not-quite-accurate" diorama. After all, it did indeed represent the state of the art in generative AI for that short time (and yes, it required some really "informed prompting" to pull off).
Alas, it's a year later and average users in the tens of millions now have some incarnation of DALL-E, Stable Diffusion or Midjourney at their fingertips, all of them capable of creating solid, instant and hyper-contextual works of ar… imagery. Needless to say, all this has left the old landing-page centerpiece seeming a little uninspired and antiquated these days. Time to spruce it up!

Art-Ception: Life becomes art, art becomes life.

The interesting thing about the isometric diorama (especially if done in a minimal style) is its ability to pull just a little interest out of the mundane while staying one hundred percent in context, creating a piece that doesn't entirely steal the show (if it's not the main attraction). Sort of like those knolling arrangements in photography.
So I knew I would be staying with the concept of a diorama; that, and its strong ability to convey meaning without words is effectively the embodiment of the Shape Epic "Shaepic" Designs moniker, or "what gets me all giddy". So it was mentally decided that I would just whip up a new diorama, hand-made in Blender, because AI image gen is in fad status (plus it would realistically only take a couple of hours out of my life).
It was T-minus 2 hours in when the nuclear bomb went off. I had just finished modeling the "heart and soul" objects of my workspace and was about to instance the cube to start modeling a low-poly "Me" when I thought, "it would be cool if the image were animated, it would really bring it to life"… and then the earth shook.
The inclination came: why not use the very technology at the basis of all these projects I'm working on (WebGL) to (subversively) shine attention on them… on the same medium… that the audience may not realize is the same driving force.

The scene in Wonderland Engine

And that's it, that's the article…
Ok, well, at least that's the "Why". Here's the "How".

Exit Blender, enter Wonderland (and, somehow, another use case for generative AI).

The beauty of Blender (and being OK enough at it) is still, to this day, that it is a system designed with special attention to delivering great final renderings in BOTH the 3D and 2D space. So even though I'd spent time working on a scene meant for 2D representation, I still had the freedom of full 3D model information to play with and export… into something else… like a wickedly fast WebGL engine. But more on that soon!

A week prior to this impromptu project, I'd noticed a few massive leaps being made by a handful of startups in the "AI for motion capture" scene and had made a few bookmarks with the intention of checking them out later for XR projects. This seemed as good a time as any to revisit those bookmarks, and luckily it didn't take very long to discover which were the more solid of the bunch.

Finally, AI we can get behind.

The looming curse that undermines most of today's AIaaS is the dreaded facet of unwanted artifacts, be it in the form of "hallucinations" or just plain slips in context and execution. As someone who'd been on the beta floor watching possible ideas for monetization fly, you realize how tricky things can be when, on one hand, you need to keep compute costs in check, but on the other, you don't want customers to feel like they're gambling with outcomes.
One motion capture company, however, seems to have fine-tuned its generative process to that "golden zone": one decent shot in, one very, very workable result out. Credits well spent.
After a little YouTube research, I was eager to give "Move One" by Move.ai a first try. After all, I had all the requirements to operate it: a single iPhone (with an internet connection), something to hold said phone, and a vague idea of the motion I wanted to portray.

Move One Output VS Source

After three tries at believable VR-miming, I submitted my first clip and within a few short minutes, I got back one surprisingly impressive package.

Without delving too much into my hypothetical thoughts on sample resolution, steps and parameters, I'll simply say that the level of detail reproduced in the final animation far, far surpassed my expectations, even after absorbing demos of its use on YouTube.
It's something you appreciate more when you take the armature into a suite like Blender and realize what was captured: finger movements, the expansion and contraction of the core due to breathing, and other natural nuances, all while maintaining form, accurate root position, leaning and tilting.
My goal wasn't (and isn't) to stress-test Move One or any of these AI apps, but to see if I could get a usable result for this amateur exercise in bringing a model of myself to life. Yet, besides my laziness in setting up a good loop point for the animation, there's nothing that felt amateur about this result. +1 for the GenAI front and +100 for my plans to use it in XR work! Which brings us to the next phase of bringing this project to WebGL life: one hyper-performant rendering suite.

Long past the days of the WildTangent Web Driver.

I was just a kid when I first dabbled with accelerated 3D for the web, some twenty years ago now, on the family Windows 98 machine. Though much time has passed between then and now, oddly enough it's only in the last six years or so that we've finally begun seeing real solutions in this sphere, as the democratization of processing power grew and real standards started being drawn up.

Enter the Fastest Engine on the Web:

I had much the same emotions upon discovering what the folks at Wonderland Engine were building, but with the added "oh sh**" factor of 21st-century parallel processing: a solution that ushers in remarkable power and simplicity, tailored uniquely for 3D, 2D and XR interactivity on the Web.
A truly fully-faceted suite for the artist, game developer or world-builder to bring their creations to life, and the platform I'd opted to build my XR experiences on for just over a year. This time, however, it's just serving as a way to run an animated, non-immersive 3D scene on a 2D webpage, which… guess what… it's pretty great at too!

Setting up the scene was as easy as exporting the models (including the animated model) from Blender and dragging them into the Wonderland Editor.
Setting the tone took placing some lights and enabling shadows.
I then set emissive textures for the screens, toggled on WebGL bloom, and enabled the auto-play and loop animation features on the character model.
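For anyone who prefers scripting over editor checkboxes, my reading of the @wonderlandengine/api docs is that the same auto-play and loop behavior can be reproduced in a tiny component. Treat the sketch below as an assumption-laden illustration rather than gospel; the editor toggles do all of this for you without a line of code.

```ts
// Sketch only: replicating the editor's auto-play/loop toggles in a script.
// AnimationComponent, play() and playCount reflect my reading of the
// @wonderlandengine/api docs; verify against the current API reference.
import {Component, AnimationComponent} from '@wonderlandengine/api';

export class LoopCharacter extends Component {
    static TypeName = 'loop-character';

    start() {
        // Grab the animation component attached to the same object.
        const anim = this.object.getComponent(AnimationComponent);
        if (!anim) return;
        anim.playCount = 0; // 0 = repeat indefinitely (assumed convention)
        anim.play();
    }
}
```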
It also helped to position the default 2D camera and set its FOV to 40 degrees for that tight, isometric-feeling perspective.
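Why 40 degrees? A narrow field of view forces the camera further back to frame the same scene, which flattens perspective convergence toward that isometric look. A quick back-of-the-napkin sketch of the trig (the 6-unit scene height below is a made-up number):

```ts
// How far back a perspective camera must sit to exactly frame a scene of a
// given height, for a vertical field of view in degrees:
// d = (h / 2) / tan(fov / 2)
function framingDistance(sceneHeight: number, fovDegrees: number): number {
    const halfFovRadians = (fovDegrees * Math.PI) / 360;
    return sceneHeight / 2 / Math.tan(halfFovRadians);
}

// A hypothetical 6-unit-tall diorama:
framingDistance(6, 90); // ~3.0 units away: strong perspective convergence
framingDistance(6, 40); // ~8.2 units away: flatter, near-isometric look
```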
Hit the package button and I was done. I now had a 3D scene that could be placed on just about any webpage!
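For the curious: the package step produces a self-contained web bundle, so one low-effort way to drop the scene into an existing page is a plain iframe. A minimal sketch, where the bundle location and element id are placeholder names of my own:

```ts
// Minimal embed: drop the packaged scene into an existing page via an iframe.
// '/scene/index.html' and 'diorama-slot' are placeholders for this sketch.
const frame = document.createElement('iframe');
frame.src = '/scene/index.html';
frame.style.width = '100%';
frame.style.height = '480px';
frame.style.border = 'none';
frame.loading = 'lazy'; // defer loading until the frame nears the viewport

document.getElementById('diorama-slot')?.appendChild(frame);
```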

All in all, a few hours' work and a decent boost for that old landing pad!

You can check it out here: tristanjohn.com

Also don’t forget to check out:
Wonderland Engine
Blender and
Move One!
