The ideation process can be summarized as a series of shifts in focus: beginning with artificial intelligence, moving to mental illness and imagination, then to anxiety and imaginative play, and finally settling on creative visualization and storytelling. To form the narrative, ideas were constantly added and modified in the script and in sketches. Research into the science of how creative visualization works was also conducted and taken into consideration.

The interactions needed were then extracted from the narrative and translated into code using C# and Unity. The development process began with research on how to create VR experiences using game engines and VR plugins, to identify limitations and useful assets.

Storyboarding the user's virtual journey.
Thinking about how to translate certain concepts such as "connectedness" into a VR-assisted visualization world.
Creating the narrative script.
Modifying the C# scripts.


In the beginning, early prototypes were developed in Unity for the Gear VR, which offered three degrees of freedom (3DoF). Vision of the Artist is aimed at assisting users with creative visualization: the VR environment prompts users to first imagine certain scenarios, and then enables them to draw their imaginations in virtual space, in metaphysical ways that elicit a sense of awe. To test out principles linked to boosting mental health, the main VR environment would be filled with several smaller worlds, each targeting a specific principle through its narrative and functionality.

Translating it into VR.
Implementing basic drawing functionality in Gear.
Experimenting with different worlds in Unity.
Nearing Halloween - drew a test witch in the virtual sky!

The user has the ability to draw several meters away with their “laser wand” when using the controller.

Ready for playtesting!

Switching to the Quest

Developing for the Gear proved to be difficult: every drawing interaction depended on raycasting. At the same time, server errors prevented me from accessing the Gear VR at a critical time. It was clear that the next step was to switch to the Quest.

Prototype 2

A Tricky Transition

Upon acquiring the Quest, I attempted to transfer what was already working on the Gear VR to the Quest. While I was able to transfer the entire project, there were errors that completely changed the experience. For instance, the drawing functionality came out of the headset rather than the Go Controllers, and so users were drawing with their heads – like unicorns – rather than with the controllers. That could not happen.

I decided it was time to start from scratch.

Prototyping Process V2

Starting from scratch meant redesigning the entire experience, now with the new possibility of six degrees of freedom (6DoF). From discussions with an advisor, I also learned that hand tracking was now possible on the Quest, and I couldn’t wait to explore it.


Ideas also needed to be revisited. The key was to make the user feel like they were living out their own heroic tale, accompanied by unique interactions and a narrative that would encourage positive affect and creative visualization.

The most exciting part was coming up with a story that would tie all of the scenes together. For inspiration, I looked to unique forms in the history of storytelling, which led me to cave paintings, ancient Chinese and Japanese paintings, and comic books.


The early idea was tied to the “Face of Fear,” in which the user is asked to draw what they fear in the form of a creature, which they would then face and “defeat.” The question was how to tie this together more naturally, in a way that tells a story. The answer: fictional beings called Dream Eaters. The player becomes the hero who “defeats” these Dream Eaters by increasing their positive affect.

The original idea was to figure out how to export the drawing that the user creates as an FBX model that would appear later in the game. As time passed, I realized this would be extremely difficult and risky to pull off. This led to the notion of a single manifestation of Dream Eaters – a combination of all the Dream Eaters in the world – that the player would face instead. I limited the user to painting in the colours of the Dream Eater, namely black and cyan, to match the model I would use to represent the Dream Eaters: a zombie bunny from Unity’s legacy tutorial series.

Brainstorming and prioritizing ideas.
Dream Eater idea starting to take form. Thinking about how the user can enter the comic book world.
Sketching out how the speech bubbles will look in the battle.
Re-sketching how the encounter will look.
Designing the Slay to Hug button interactions and storyline.
Thinking about how the battle scene and experience will end.

Programming & Development

Learning Version Control

Working with Unity in the past, there were many times where I had to start all over again because one modification would ruin the entire project. I used to duplicate the Unity folders, which ended up taking too much space. It was time to learn how to use version control with GitHub Desktop. This saved me from having to redo the entire project, as there were moments where I needed to undo something that could not otherwise be undone.

Learning Hand Tracking and Programming for the Quest

Using Fuse VR’s “How to Create Tilt Brush From Scratch” tutorial, Dilmer’s VR drawing tutorial series, and Valem’s hand tracking tutorial series, I combined and modified the scripts to test out multiple functionalities: changing colour, changing line width, drawing with a line renderer, and drawing with a custom line renderer that produces double-sided, paper-like lines – all driven by hand tracking.
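The core pinch-to-draw loop can be sketched roughly as below. This is a minimal illustration, assuming the `OVRHand` component from the Oculus Integration package; field names like `lineMaterial` are placeholders rather than the project's actual scripts.

```csharp
using UnityEngine;

// Minimal pinch-to-draw sketch (assumes the Oculus Integration package's OVRHand).
public class PinchDrawer : MonoBehaviour
{
    public OVRHand hand;            // hand-tracking source
    public Material lineMaterial;   // brush material (placeholder)
    public float lineWidth = 0.01f;

    private LineRenderer currentLine;

    void Update()
    {
        bool pinching = hand.GetFingerIsPinching(OVRHand.HandFinger.Index);

        if (pinching)
        {
            if (currentLine == null)
                StartLine();
            AddPoint(hand.PointerPose.position); // draw at the pinch pointer
        }
        else
        {
            currentLine = null; // finish the stroke on release
        }
    }

    void StartLine()
    {
        var go = new GameObject("Stroke");
        currentLine = go.AddComponent<LineRenderer>();
        currentLine.material = lineMaterial;
        currentLine.startWidth = currentLine.endWidth = lineWidth;
        currentLine.positionCount = 0;
    }

    void AddPoint(Vector3 point)
    {
        currentLine.positionCount++;
        currentLine.SetPosition(currentLine.positionCount - 1, point);
    }
}
```

Each new pinch spawns a fresh `LineRenderer`, so strokes stay separate; swapping the material or width between strokes gives the colour and line-width variations described above.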

Through developing a custom dialog manager – combining lessons from Blackthornprod’s dialog system tutorial and Omnirift’s Fade UI Elements tutorial – I learned how to use coroutines in my scripts, which would prove to be extremely useful later.

A level loader was also created to switch between scenes after a set time, with help from Brackeys’ scene transitions tutorial.
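A timed level loader in that style looks roughly like this sketch; the scene name, delay, and the "Start" fade trigger follow the Brackeys tutorial pattern and are illustrative, not the project's exact values.

```csharp
using System.Collections;
using UnityEngine;
using UnityEngine.SceneManagement;

// Loads the next scene after a set delay, optionally playing a fade animation first.
public class LevelLoader : MonoBehaviour
{
    public string nextSceneName;    // scene to load (placeholder name)
    public float delaySeconds = 30f;
    public Animator transition;     // optional fade-out animator

    void Start()
    {
        StartCoroutine(LoadAfterDelay());
    }

    IEnumerator LoadAfterDelay()
    {
        yield return new WaitForSeconds(delaySeconds);

        if (transition != null)
        {
            transition.SetTrigger("Start");      // play the fade-out animation
            yield return new WaitForSeconds(1f); // wait for it to finish
        }
        SceneManager.LoadScene(nextSceneName);
    }
}
```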

Testing drawing using hand tracking.
Testing different line widths using finger pinch.
Testing both switching colours and line widths using different finger pinches.
Testing the face of fear – the drawing that allowed the Dream Eater manifestation idea to take form.
Adding a new script that draws using quads, so the drawings appear flat and two-sided.

Fading Sprites and Text

More custom scripts had to be written, this time combining Alexander Zotov’s fade out/in game objects tutorial with TextMeshPro, so that both the speech bubble sprites and the text would fade in and out at the same time, after a delay. The delay before fading was achieved with a delay coroutine nested within another coroutine, as advised on forums. Delaying before fading was a key component that allowed elements in the experience to fade in and out at the right times.
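The delay-then-fade pattern can be sketched as below: one coroutine waits, then hands off to a second coroutine that animates both the sprite's colour alpha and the TextMeshPro label's alpha in lockstep. Class and field names are illustrative, not the project's actual scripts.

```csharp
using System.Collections;
using TMPro;
using UnityEngine;

// Fades a speech-bubble sprite and its TextMeshPro label together after a delay.
public class BubbleFader : MonoBehaviour
{
    public SpriteRenderer bubble;
    public TMP_Text label;
    public float fadeDuration = 1f;

    public void FadeInAfter(float delay)
    {
        StartCoroutine(DelayThenFade(delay));
    }

    IEnumerator DelayThenFade(float delay)
    {
        yield return new WaitForSeconds(delay);    // the delay coroutine...
        yield return StartCoroutine(Fade(0f, 1f)); // ...wrapping the fade coroutine
    }

    IEnumerator Fade(float from, float to)
    {
        for (float t = 0f; t < fadeDuration; t += Time.deltaTime)
        {
            float a = Mathf.Lerp(from, to, t / fadeDuration);
            var c = bubble.color; c.a = a; bubble.color = c; // sprite alpha
            label.alpha = a;                                  // TMP exposes alpha directly
            yield return null; // advance one frame
        }
    }
}
```

Driving both alphas from the same loop variable is what keeps the sprite and text perfectly in sync.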

Creating the Comic Scenes

For the comic scene, I used Unity’s terrain-building functionality. I created the 3D world, took screenshots of the scene, and implemented them as comic panels by attaching speech bubble visuals and text on top. It was simpler to create the 3D world and base the 2D images off of it than the other way around.

Unfortunately, the system broke when I updated the materials to the Lightweight Render Pipeline, and even the version control method no longer worked. The project with the terrain was lost, and all that remained were these screenshots. It was time to start again from a point before this terrain was even made!

Sculpting the terrain.
Texturing the terrain and adding trees and water.
Final result of the terrain, with an oil painting filter shader. The actual image used is a flipped version.
Rewriting the script for the new narrative.
Implementing the comic scene.
Adding fade transitions for the comic scene speech bubbles and images.
Used an example prefab of a zombie bunny model from an old Unity Survival Shooter tutorial package for the appearance.
Adding models and fog to the Dream Eater scene using particle systems.
After much testing, the terrain turned out to cause major lag in the battle scene, and was omitted from it entirely.
Adding speech bubble sprites and buttons to the battle scene. Much readjusting was needed.

Designing and Executing Interactions

The hover and pinch functionality was achieved with help from Valem’s hand tracking tutorial. Every element that required hand tracking interactions needed a Proximity game object, a Contact game object, and an Action game object; changing the valid tools from All to Ray allowed it to work. Upon detecting contact, the speech bubble would turn on the “Hover Highlight” mesh renderer, while its default state had it turned off. Upon the action state, the speech bubble would turn on the “Selected Highlight” mesh renderer instead, which allowed the light to stay on.

One big problem was that the items were still being highlighted on hover before they even appeared on screen. For this, more custom code had to be written. By tagging each of the speech bubbles “InteractBubble,” and assigning each script the game object carrying all of the hover and selected mesh renderers, I was able to keep the highlights off until after the object fades in.
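The gating idea can be sketched as a small component that disables the highlight renderers at startup and only honours hover events after the fade-in completes. Class, field, and method names are illustrative.

```csharp
using UnityEngine;

// Keeps the hover/selected highlight renderers off until the bubble has faded in.
public class BubbleHighlightGate : MonoBehaviour
{
    public MeshRenderer hoverHighlight;
    public MeshRenderer selectedHighlight;

    private bool interactable;

    void Start()
    {
        // Force both highlights off before the bubble is visible.
        hoverHighlight.enabled = false;
        selectedHighlight.enabled = false;
    }

    // Called once the fade-in coroutine completes.
    public void EnableInteraction()
    {
        interactable = true;
    }

    // Hooked up to the hover (contact) and action callbacks.
    public void OnHover(bool hovering)
    {
        if (interactable) hoverHighlight.enabled = hovering;
    }

    public void OnSelect()
    {
        if (interactable) selectedHighlight.enabled = true; // light stays on
    }
}
```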

Using Unity’s tag function, I was able to detect whether all of the interactable speech bubbles had been activated, which would allow the Slay button to appear. When it is hovered over, a glowing light appears around the Dream Eater. When the Slay button is activated, the Dream Eater replies in speech bubbles, and a Hug button appears instead, with the glowing light now pink. When the Hug button is activated, the Dream Eater fades away, and the glowing orb appears.
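The all-bubbles-activated check can be sketched with `FindGameObjectsWithTag`, polling every tagged bubble until none remain unselected. The `InteractBubble` component and its `selected` flag are hypothetical stand-ins for the project's actual scripts.

```csharp
using UnityEngine;

// Hypothetical per-bubble state component.
public class InteractBubble : MonoBehaviour
{
    public bool selected; // set true when the bubble's action state fires
}

// Shows the Slay button once every "InteractBubble"-tagged bubble is selected.
public class BattleProgress : MonoBehaviour
{
    public GameObject slayButton;

    void Update()
    {
        if (slayButton.activeSelf) return; // already revealed

        foreach (var go in GameObject.FindGameObjectsWithTag("InteractBubble"))
        {
            var bubble = go.GetComponent<InteractBubble>();
            if (bubble == null || !bubble.selected)
                return; // at least one bubble still unselected
        }
        slayButton.SetActive(true); // every bubble has been activated
    }
}
```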

The Dream Eater fading away was achieved by first creating a duplicate game object of the prefab model, changing the materials to a Transparent/Specular shader, and animating the alpha values. When the action state of the Hug button is detected, the original model with regular shading is turned off, while the transparent-shader model is activated. Then the fade animation is triggered to play.
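That model swap amounts to a few lines; the sketch below assumes the fade animation lives on an `Animator` with a "FadeOut" trigger, and all names are illustrative.

```csharp
using UnityEngine;

// Swaps the opaque Dream Eater for its transparent-shader duplicate and plays
// the alpha fade-out animation.
public class DreamEaterFade : MonoBehaviour
{
    public GameObject opaqueModel;      // regular shaded prefab instance
    public GameObject transparentModel; // duplicate with Transparent/Specular materials
    public Animator fadeAnimator;       // animates the materials' alpha values

    // Hooked up to the Hug button's action state.
    public void OnHug()
    {
        opaqueModel.SetActive(false);
        transparentModel.SetActive(true);
        fadeAnimator.SetTrigger("FadeOut"); // hypothetical trigger name
    }
}
```

Keeping two models means the regular scene lighting is untouched until the exact moment the fade begins.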

Testing the buttons and making sure it lights up on hover and selection.
The speech bubbles had to be constantly reoriented in order to face the viewer properly.
Dream Eater fades away.

Testing & Recording

The Quest’s native recording and screen capture functionality was limited to a 1:1 aspect ratio, which was not ideal. Using SideQuest and ADB, I was able to change it to capture recordings in 16:9 widescreen.

Finishing Touches

The “Dream Pursuer” was added to the final scene, along with multiple speech bubble narratives that would close off the story. Finding music, adding crossfading animations and music, and modifying each scene with the right details (e.g. paint colours) was the final step.

Creating the glowing orb that would represent the Dream Pursuer using multiple particle systems.
Audio crossfading was done using volume animations on each audio source, driven by code.
Creating the final recording with live interaction using Premiere Pro.


Through working on Vision of the Artist, I was able to learn a vast amount about programming and Unity in a short time. It allowed me to develop my game development skills substantially as I was faced with obstacle after obstacle of not knowing how to execute my ideas in the engine. I attribute the success to all of the tutorials and articles out there that made Vision of the Artist possible.

Working on Vision of the Artist made me understand that designing interactive experiences is something I want to continue to do, and I wish to keep improving to become a more adept game designer across all of the pipelines, from concept art and programming to animation. (Though I don’t want to touch another VR project for a while.)

Future Ideas

User feedback (gathered from home) showed there is more to do. In the future, player choice will matter: players will answer questions throughout the experience, which will dictate the narrative of the story. This allows the story to cater more to the player and their own type of Dream Eater. For instance, one storyline could focus on the fear of what other people think, and all of the interactions and text would reflect this. In addition, each of the drawing scenes would no longer be timed; instead, the player would have the option to hover over and pinch a “Move forward” button that fades in after a timer, once they are finished practicing their awareness. Finally, it is important to make sure the drawing functionality is deactivated when the hands cannot be tracked in the middle of drawing, to avoid unwanted line strokes that surprise the player.