Formal cinema, as practiced by structural filmmakers in the final quarter of the 20th century, was an expression of the mechanical tools available to them: the film camera, the optical printer, the splicer, the A/B rolls. Today’s formal practice recalibrates the practitioner as a cinematician: fixed neither as cinematographer nor editor, neither special-effects artist nor projectionist. The cinematician of today spans the entire cinema gestational process in a single gesture: digitizing, processing, analyzing, playing, editing, presenting, deconstructing, discovering and experimenting without the imposition of hardware and processing compartmentalization. In this developing environment, the experimental cinematician programs a cinema engine that defines a new relationship to post-mechanical cinema, and spirals into a truly experimental and developmental relationship in which the medium and the self can no longer be differentiated. The cinematician develops both the sensible experience and the generative media with every new piece.
The cinematician is an adaptation of independent cinema to the speed and power of 21st century digital media.
I create and employ software engines to examine mediated artifacts forged at my zone of proximal development. My Simultaneous Opposites engine (2008-present) is a performance/navigation system for real-time traversal of existing video files, sorting through the audio and video a single frame at a time in an arrhythmic spiraling motion. The center frame between the two ends of the spiral becomes a temporal focal plane, with the length of the jump acting as a temporal depth of field. The navigational path is the result of a preprogrammed algorithm interrupted during traversal by triggers and modulation from computer keyboard, mouse, and MIDI guitar.
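The spiral traversal described above can be sketched as follows. This is a minimal illustration, not the engine's actual code; the function name, the fixed step size, and the sample frame numbers are all hypothetical stand-ins.

```python
def spiral_frames(center, depth, step=1):
    """Yield frame indices spiraling outward from a temporal focal plane.

    center: the focal frame between the two ends of the spiral
    depth:  the maximum jump length (the "temporal depth of field")
    step:   fixed here for clarity; in performance it would be modulated
            live (keyboard, mouse, MIDI), producing the arrhythmic motion
    """
    yield center
    offset = step
    while offset <= depth:
        yield center + offset  # jump ahead of the focal plane
        yield center - offset  # then mirror behind it
        offset += step

# Traversing around frame 100 with a temporal depth of 3 frames:
order = list(spiral_frames(100, 3))  # [100, 101, 99, 102, 98, 103, 97]
```

In a live setting the incoming control events would alter `step` and `center` mid-traversal, so the spiral never settles into a regular rhythm.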
On my website (www.robertedgar.com) are over 50 examples of the output of the developing Simultaneous Opposites engine. Each sequential video file exemplifies a stage in the ongoing experiment. During early work at Synapse Video in the 1970s, I integrated the strategies of experimental filmmakers with those of the newly developing medium of post-television video. In the early 1980s, I sold my 16mm Beaulieu camera and bought an Apple //e. Teaching myself programming, I produced Memory Theatre One, an early seminal work of interactive computer art. Supported by the positive reaction of the small but growing group of artists who programmed, I set about developing systems for extending Eisenstein’s montage categories to include the attribute of real-time cinema generation.
The first of these was Living Cinema, which blended video footage collected diary-style on video discs, texts from musings, audio recordings, cels for short animation loops, graphics, etc., with programmed software supporting the selection and combination of all the elements in real time in performance. I was also able to save the performance decisions and cursor moves, and re-inject them into the system later during the same performance, for a spiraled revisitation. Living Cinema was performed throughout the United States.
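The saving and re-injection of performance decisions described above amounts to a timestamped event log that can be replayed into the running system. The sketch below is a hypothetical illustration of that idea, not Living Cinema's actual implementation; the class and event names are invented.

```python
import time

class PerformanceLog:
    """Record performance events as they happen, then re-inject them
    later in the same session -- a sketch of 'spiraled revisitation'."""

    def __init__(self):
        self.events = []                    # (elapsed_seconds, event) pairs
        self.start = time.monotonic()

    def record(self, event):
        """Log a decision (a clip selection, a cursor move, ...)."""
        self.events.append((time.monotonic() - self.start, event))

    def replay(self, handler):
        """Feed saved events back through the same handler that
        processed them live, so old decisions mix with new ones.
        (Original timing is ignored in this simplified sketch.)"""
        for _, event in self.events:
            handler(event)

log = PerformanceLog()
log.record(("select", "clip_a"))
log.record(("cursor", (120, 45)))
log.replay(print)
```

Routing the replay through the same handler as the live input is what lets the revisited material be combined, on equal footing, with decisions being made in the moment.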
After a stint as a multimedia specialist with Commodore Business Machines working with the video-oriented Amiga, I programmed and integrated a new system, SAND. From my website:
“The central theme of SAND: OR HOW COMPUTERS DREAM OF TRUTH IN CINEMA comes from the well-known quote that cinema is truth 24 times a second. I grabbed stills (and sometimes generated images using 3D animation programs) of a sequence 6 frames long, changing things in front of the camera between the shots. I then took the separate images and allowed the computer to imagine what happened between the frames, using a morph program. Morph programs are usually used to change one object into another, but in SAND I used the morph to create a visual explanation for the changes that occurred between the frames the computer had (it had only the separate stills). Of course, it didn’t always guess right: things might move from left to right when they had actually been pulled apart, and so on.
“The mistakes, the artifacts from the morph, create their own poems out of the difference between what actually happened and what the computer “guessed” happened. My goal in the piece was to invent a new poetic space, which I believe I succeeded in doing here. And of course, there are parallels among all the media channels in the piece, as remembrances, arguments, occurrences, and repeating riffs appear and advance in time.”
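At their simplest, the “imagined” in-between frames can be approximated by a linear cross-dissolve between two stills. A real morph also warps geometry, which is where the expressive mistakes described above come from. The sketch below is illustrative only, not the morph program used in SAND; frames are modeled as flat lists of pixel values.

```python
def inbetween(frame_a, frame_b, steps):
    """Interpolate pixel values linearly between two stills.

    A real morph warps geometry as well as blending pixels; this
    cross-dissolve is the crudest possible 'guess' at what happened
    between the frames the computer was given.
    """
    frames = []
    for i in range(1, steps + 1):
        t = i / (steps + 1)  # blend weight for this in-between frame
        frames.append([(1 - t) * a + t * b
                       for a, b in zip(frame_a, frame_b)])
    return frames

# Two tiny one-dimensional "frames", one in-between frame requested:
mid = inbetween([0, 0, 0, 0], [10, 10, 10, 10], steps=1)[0]
# mid is the halfway blend: [5.0, 5.0, 5.0, 5.0]
```

Because the interpolation sees only pixel values, an object that was pulled apart between the two stills will appear to slide or dissolve instead, which is precisely the kind of wrong guess the piece turns into poetry.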
Unfortunately, soon after a working system was completed, I had to sell the Amiga-based system in order to purchase a business-oriented computer for my new Silicon Valley-based company Iconceptual. Only a few examples of SAND’s output exist.
At that same time, my video camera was stolen. I used the next few years to focus on music and musical systems. My next compositional system (consisting of software, MIDI guitar, looping pedal, and computer) generated a single piece. My “Duchamp Examination” demonstrates how a composer can dissect, examine and re-produce a precomposed (but not prerecorded) piece through a combination of analog and digital mastication and redigestion, in real-time solo performance before an audience. The Duchamp Prelude is a kind of electronic alapana, in which I explore the chordal sequence of the source piece “If Duchamp Drew Beautifully” through an overdriven TM-2 bandpass filter. I performed The Duchamp Examinations live online for an international audience through Electro-Music.com, as well as live in California and New York.
Simultaneous Opposites, then, is itself the current evolution of a series of performance systems that I’ve programmed and integrated. A specific precursor is found in my 1972 16mm film of the same title. The 1972 Simultaneous Opposites was single-framed from a single tripod position. A second exploration of the same approach, Intersticies, blended similar footage with video manipulation, and was produced at Synapse in 1975. Digital videos of these earlier works can be seen at www.robertedgar.com.