
Past week:

Over the break, I initially focused on implementing Kinect skeleton tracking. I spent a few days working through the book Making Things See and was able to get skeleton tracking working in Processing. However, I wasn't satisfied with how well I could integrate it with the other generative elements I've been working on, so I eventually decided to stay with face recognition (OpenCV) as the primary mode of interactivity.
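For reference, the core of the face-detection loop is small; a minimal sketch assuming the OpenCV for Processing library (illustrative only, not the exact code from my sketch):

```java
// Minimal face-detection loop, assuming the OpenCV for Processing library
import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480);
  opencv = new OpenCV(this, 640, 480);
  // load the frontal-face Haar cascade bundled with the library
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
}

void draw() {
  opencv.loadImage(video);
  image(video, 0, 0);

  // each Rectangle is a detected face
  noFill();
  stroke(0, 255, 0);
  Rectangle[] faces = opencv.detect();
  for (Rectangle face : faces) {
    rect(face.x, face.y, face.width, face.height);
  }
}

void captureEvent(Capture c) {
  c.read();
}
```

The position of each detected rectangle is what can then drive the generative elements.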

I further developed the generative graphics of my Processing sketch, trying to visually reinforce the themes of recursion and memory. I'm not sure how successful I was in this regard, but I'm happier with the overall aesthetic. I also spent a day experimenting with how the Processing sketch works as a projection in real space, which let me troubleshoot the code and adapt it to an installation context.
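As a toy illustration of the kind of recursion I mean (not my actual sketch), a branching form where each limb redraws itself at a smaller scale:

```java
// Toy recursive drawing: each branch redraws itself smaller until a depth
// limit is hit; mouseX stands in for an interactive input like face position.
void setup() {
  size(640, 480);
  stroke(255);
}

void draw() {
  background(0);
  translate(width/2, height);
  branch(120, map(mouseX, 0, width, PI/8, PI/3));
}

void branch(float len, float angle) {
  if (len < 4) return;          // recursion bottoms out
  line(0, 0, 0, -len);
  translate(0, -len);
  pushMatrix();
  rotate(angle);
  branch(len * 0.67, angle);    // right limb, scaled down
  popMatrix();
  pushMatrix();
  rotate(-angle);
  branch(len * 0.67, angle);    // left limb
  popMatrix();
}
```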

I also played around with my recursive audio Max patch in the installation space. I'm happier with how this represents the original theme, though at present the audio effect it generates is subtler than I'd like. I had some trouble avoiding feedback and other unwanted audio side effects, but was able to minimize these by adjusting equipment and placement.

Next week:

I may continue experimenting with Kinect skeleton tracking, as I still feel it can afford more options for interactivity. Ideally, I'd like to further integrate the Processing and Max elements so that they communicate with each other directly and inform each other's output. I would also like to experiment with other possibilities for the installation space.
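The most likely route for that Processing/Max communication is OSC over UDP. A minimal sketch, assuming the oscP5 library on the Processing side and matching [udpreceive]/[udpsend] objects in the Max patch; the ports and address patterns are placeholders:

```java
// Minimal Processing<->Max bridge over OSC, assuming the oscP5 library and
// [udpreceive 12000] / [udpsend 127.0.0.1 9000] in the Max patch.
import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress maxPatch;

void setup() {
  size(200, 200);
  oscP5 = new OscP5(this, 9000);                 // listen for Max on port 9000
  maxPatch = new NetAddress("127.0.0.1", 12000); // Max's [udpreceive 12000]
}

void draw() {
  // send a tracked position to Max each frame (mouse stands in for tracking)
  OscMessage msg = new OscMessage("/face/center");
  msg.add(mouseX);
  msg.add(mouseY);
  oscP5.send(msg, maxPatch);
}

// oscP5 calls this whenever Max sends a message back
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/audio/level")) {
    float level = msg.get(0).floatValue();
    println("audio level from Max: " + level);
  }
}
```

On the Max side, [udpreceive 12000] into [route /face/center] would unpack the message.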

 


(1) what you accomplished this week

– Experimented with the recursive audio Max patch I developed last week

– Further explored face recognition + generative visuals possibilities – may use Kinect instead

– Went over Kinect skeleton tracking in Processing with Jason – feel more confident with the possibilities here
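For reference, the basic tracking flow from Making Things See looks roughly like this (a sketch assuming the SimpleOpenNI library and its pose-calibration callbacks, as used in the book):

```java
// Skeleton tracking sketch following the Making Things See flow,
// assuming the SimpleOpenNI library version used in the book.
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
}

void draw() {
  kinect.update();
  image(kinect.depthImage(), 0, 0);

  IntVector userList = new IntVector();
  kinect.getUsers(userList);
  if (userList.size() > 0) {
    int userId = userList.get(0);
    if (kinect.isTrackingSkeleton(userId)) {
      // pull one joint (the head) and map it into screen space
      PVector head = new PVector();
      kinect.getJointPositionSkeleton(userId, SimpleOpenNI.SKEL_HEAD, head);
      PVector projected = new PVector();
      kinect.convertRealWorldToProjective(head, projected);
      fill(255, 0, 0);
      ellipse(projected.x, projected.y, 25, 25);
    }
  }
}

// OpenNI's calibration dance: wait for the psi pose, then start tracking
void onNewUser(int userId) {
  kinect.startPoseDetection("Psi", userId);
}

void onStartPose(String pose, int userId) {
  kinect.stopPoseDetection(userId);
  kinect.requestCalibrationSkeleton(userId, true);
}

void onEndCalibration(int userId, boolean successful) {
  if (successful) {
    kinect.startTrackingSkeleton(userId);
  } else {
    kinect.startPoseDetection("Psi", userId);
  }
}
```

Other joints (hands, torso, etc.) come from the same getJointPositionSkeleton call with different constants.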

(2) problems, unexpected discoveries, etc

– Wasn't able to secure all the equipment I needed, so I couldn't do much physical/hands-on experimentation for the real installation context.

– However, I am feeling better about the project after Alex's reassurance that the final does not have to be a conceptually cohesive, finished art piece; it's OK to see it more as a well-realized technical exploration.

(3) goals for next week

– Secure equipment over the Thanksgiving break for experimentation toward the installation context

– Refine how Kinect skeleton tracking informs the generative elements

– Narrow down which of my previous experiments to use in the final iteration and integrate them: 1. Kinect skeleton tracking, 2. face-recognition-informed generative visuals, 3. recursive audio Max patch, 4. generative visuals that respond to ambient sound (in Flash)


Developments:

I was able to automate a recursive audio process using Max/MSP. The Max patch records 5-10 seconds of audio, plays back the recording while simultaneously recording that playback, then repeats. In effect, each iteration plays back every sound made since the patch was initiated and folds all new sound in the vicinity into the next recording. I also got in touch with a couple of dancers who are willing to be a part of the project, although dance may no longer be an element I pursue.
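The patch itself is visual, but the loop logic is simple enough to sketch in code. Here is an approximation in Processing using the Minim library; the file naming and the 7-second take length are arbitrary stand-ins, and the mic is assumed to pick up the speaker playback acoustically:

```java
// Approximation of the Max patch's record/playback loop in Processing + Minim.
import ddf.minim.*;

Minim minim;
AudioInput in;
AudioRecorder recorder;
AudioPlayer player;
int take = 0;
int takeMillis = 7000;   // the patch uses 5-10 s per iteration
int takeStart;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  in = minim.getLineIn();
  startTake();
}

void startTake() {
  // record the room: the mic hears new sound plus playback of the last take,
  // folding everything so far into this recording
  recorder = minim.createRecorder(in, "take" + take + ".wav");
  recorder.beginRecord();
  if (take > 0) {
    if (player != null) player.close();
    player = minim.loadFile("take" + (take - 1) + ".wav");
    player.play();
  }
  takeStart = millis();
}

void draw() {
  background(0);
  if (millis() - takeStart > takeMillis) {
    recorder.endRecord();
    recorder.save();   // finish writing the take to disk
    take++;
    startTake();
  }
}
```

In the actual patch this role is presumably played by [record~]/[play~] with a [buffer~] (or [sfrecord~]/[sfplay~]), plus gain staging to keep acoustic feedback in check.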

 

Issues:

Jason and I have continued to explore the possibility of collaboration, but have not come to a satisfying resolution. We will likely draw on some of each other's research, but may no longer work on the same final project. Also, I now feel that the dance performance layer may be a distraction from the idea of degrading/mutating memory that was my original base inspiration. In general, I'm struggling to reconcile the seemingly disparate technical elements/effects I've been able to execute within a cohesive conceptual framework. I would very much like each element to be justified and integrated in a way that supports the whole, not just presented as a mélange of "cool" effects in a generically interactive installation.

 

Upcoming:

The next week will be about jumping into hands-on work. I agree with the feedback I got earlier today: I need to push through the current frustration/doubt/indecision by synthesizing the pieces I have now before preemptively making value judgments. As such, I plan on reserving equipment for extensive experimentation over this coming weekend.


The original concept and ongoing foundation of this project is an immersive, interactive installation that incorporates recursive/generative processes in real time, on both visual and auditory levels. The concept has since evolved with the prospect of a collaboration with classmate Jason Rabie. We have been discussing the possibility of developing a responsive performance space that informs, and is informed by, improvisational dance. Sounds and movements made within the installation will be recorded and fed into generative processes that propagate the next iteration of music and visuals. In turn, these generative elements inform the performer's improvisational movement, creating a continuous feedback loop.

 

Technical Specs:

– Processing: generative visuals projected on walls, face recognition

– Max/MSP: skeleton/motion tracking that informs generative music, recursive audio recording/playback, possible video feedback elements (see the sketch below)

– Equipment: computer, Kinect, HD projector(s), omnidirectional mic, speakers, video camera(s)
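The video feedback element could be prototyped in Processing alone; a toy sketch of the effect, redrawing a zoomed copy of the previous frame beneath a faint layer of live camera:

```java
// Toy video-feedback sketch: each frame redraws a slightly zoomed copy of the
// previous frame, then blends the live camera on top at low opacity.
import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  // grab the last frame and redraw it zoomed: a receding tunnel of copies
  PImage prev = get();
  image(prev, -5, -5, width + 10, height + 10);
  // overlay the live image faintly so new input feeds the loop
  tint(255, 60);
  image(cam, 0, 0);
  noTint();
}
```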
Revised Timeline:

week 7:
- further develop face recognition in Processing (how can it better inform generative/recursive visual elements?)
- figure out recursive audio patch element in Max/MSP
- explore how to best implement these layers of interactivity in an installation context
week 8:
- research installation site options
- secure performers
- develop generative music informed by Kinect skeleton tracking
- explore recursive live video/video-feedback possibilities (best in Max or Processing?)
- focus on integrating different types of interactivity (how can they inform each other?)
- research how to interface Processing with Max
week 9:
- secure installation space, design site-specific layout
- refine interactivity and generative elements
week 10:
- test out equipment choices: omnidirectional/PZM mic, speaker, HD projector(s), other?
- practice with performers
- fine tune generative visual and audio elements
week 11:
- set up installation
- troubleshoot technical details
- video documentation
week 12:
- give final presentation