A lot of effort went into modelling the three worlds in Placeholder - the Cave World, the Hoodoos World, and the Waterfall World.
Two SGI VGX computers were used with Alias architectural design tools to lay out the environments' geometries and to apply textures to the resulting wireframes; this effort was accomplished by Rachel Strickland working with Catherine McGinnis, Raonull Conover, Douglas McLeod, Michael Naimark, and Rob Tow. Video capture of the environments was done by Rachel Strickland and Michael Naimark; subsequent video digitization on Macintosh-based equipment was done by Catherine McGinnis and Rachel Strickland, with image enhancement in Photoshop by Catherine McGinnis and Rob Tow. The first complete assembly of the worlds into a form viewable in the VR helmets, as opposed to the distorted views on workstation screens, was done by Rob Tow.
The effort was greatly hampered by the tools used. Due to budget constraints, an old C compiler for an earlier SGI model was used instead of the proper optimizing compiler for the Reality Engine. Debugging was largely accomplished with "printf" statements and required a minimum of three people working in real time: one in the VR helmet, one running the Onyx Reality Engine, and one running the sound processors. The design of the worlds proceeded in a non-immersive way, on workstation screens; the projective geometry of the Alias design software differed greatly from the immersive experience in the VR helmet, which led to tedious difficulties in world construction, and to many arguments. We suffered greatly from not designing while immersed in the medium itself.
One notable exception occurred near the end of the process, when we did a small piece of world layout from inside the virtual environment. This was the placement of the uninhabited Critter icons; we placed a set of Voiceholders randomly in the worlds, then Brenda donned a helmet and moved the Voiceholders to where she wanted the various Critters to be - and we replaced the Voiceholders with the Critters. This was a small presage of what it might be like to fluidly design from within an immersive environment, as opposed to painfully and explicitly calculating coordinates at a desk.
Another difficult part of the construction was the process of capturing the environments and turning them into data structures. A tremendous amount of imprecise hand work was involved, from camera positioning to wireframe design. Images traveled from film or video to digital form, through Photoshop for color correction and other cosmetic surgery, into Alias, and finally were pumped through MR Toolkit into the helmet - and only then could we determine whether we were on the right track. Automating this process is clearly possible, using computer-controlled cameras and such techniques as deriving depth information from stereo imagery. We are indebted to Michael Naimark for providing the conceptual basis for our approach.

We worked with three natural environments: a cave with a hot spring inside, a stand of hoodoos on a mountainside overlooking the Bow River, and a waterfall at Johnston Canyon, all in Banff National Park. We attempted to match capture and representation techniques to the salient features of each landscape. Since the cave was primarily an auditory experience, it was modeled and rendered most simply, without texture-mapping, so that machine cycles could be devoted to supporting an extremely rich auditory environment. The sense we wished to capture at the hoodoos (tall, penile structures that are the product of millennia of erosion) was one of looming surround, so we tiled a virtual dome with digitized video images. The overwhelming sense at the waterfall was one of flow in both visual and auditory dimensions. After much experimentation we settled on the idea of texture-mapping motion video onto a virtual relief model of the waterfall's geometry.
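The depth-from-stereo idea mentioned above can be sketched in a few lines. This is a hypothetical toy illustration, not the pipeline used on Placeholder: for a rectified stereo pair, a feature's horizontal shift between the two images (its disparity) is found by block matching, and depth follows from triangulation, Z = f * B / d. All function names and the numbers below are invented for illustration.

```python
# Toy sketch of depth from stereo imagery (hypothetical, not the
# actual Placeholder pipeline). Assumes a rectified stereo pair:
# a point at column x in the left image appears at column x - d
# in the right image, where d is the disparity in pixels.

def best_disparity(left, right, x, win=1, max_d=3):
    """Find the disparity at column x of a 1-D image row by
    minimising the sum of absolute differences (SAD) over a
    small window - the simplest form of block matching."""
    def sad(d):
        return sum(abs(left[x + i] - right[x - d + i])
                   for i in range(-win, win + 1))
    return min(range(0, max_d + 1), key=sad)

def depth_from_disparity(d, focal_px, baseline_m):
    """Triangulate depth (metres) from pixel disparity:
    Z = f * B / d, with f in pixels and baseline B in metres."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d

# Toy rows: the right row is the left row shifted 2 pixels.
left_row = [0, 1, 5, 9, 5, 1, 0, 0]
right_row = [5, 9, 5, 1, 0, 0, 0, 0]
d = best_disparity(left_row, right_row, x=4)       # -> 2
z = depth_from_disparity(d, focal_px=800, baseline_m=0.3)  # -> 120.0 m
```

With calibrated, computer-controlled cameras, a dense map of such depths could replace much of the hand-built wireframe work described above.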
More details about the Hoodoos and Waterfall models are below.
Hoodoos models
Waterfall models
Rachel Strickland spent a great deal of effort trying to develop a model of the Waterfall World that expressed her visual esthetic. Ultimately this effort was abandoned, due to the press of time, limitations of the hardware, and the need to begin work on other elements of Placeholder such as the Critters and Voiceholders. The models in the links below are archived from her effort, which she discusses in her essay Troll Trials and Tribulations.
I disagree strongly with her characterizations of my views in that essay; they give me no credit for the hundreds of hours I labored to create the working models she discusses, and she reports statements in my voice which I never uttered - the history she constructs is factually untrue in several regards. For example, she writes:
"Rob Tow held certain laws of optics and visual perception to be inviolable. Any deviation from "correct" linear perspective was inadmissable in Rob's view. As a cinematographer, I respond to the play of light on things in the myriad shades and eccentricities that its occurrence manifests. Light is the phenomenon that I capture with a camera. Rob echoed the opinion of those nineteenth century scientists who cautioned that landscape painters should not attempt sunlit scenes."
Neither of these attributions is true - I helped design and build non-linear projections for Placeholder, and I certainly would never utter anything as laughably foolish as the second statement. I have been puzzled, ever since she began to present this "history," by Rachel's need to project these views onto my voice and to deny my work.

Rob Tow