audio enlightenment

NOTES: Taken from “Body Navigation” by Ole Kristensen
The system was created using Processing for infrared blob-tracking of the dancers and for drawing the OpenGL graphics. During the performance, the system was controlled live by a person from an Isadora-based interface via OSC.
(Note to self: “Oh dear”)
The floor was covered with white vinyl.

Things to look into after the meeting with Simon…
– What kind of sequencer will I be using?
– What kind of amp will I be using?

REF: “The Wilderness Downtown” in Google Chrome by Arcade Fire … Pure Awesomeness…

– What is the bridging software between Processing and the sequencer… (This may have to be Isadora)
– Reference: http://www.native-instruments.com
– How is the MIDI data interpreted? (a minimal sketch follows these notes)

– Look into ‘Pressure zone microphones’ that could be used to amplify a heartbeat
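Back on the MIDI question above: here is a minimal sketch of how incoming MIDI data could be read and interpreted inside Processing. I'm assuming The MidiBus library here (one of several Processing MIDI libraries), and the device indices are machine-specific – MidiBus.list() prints the real ones.

```
// Minimal MIDI-in sketch, assuming The MidiBus library is installed.
import themidibus.*;

MidiBus midi;

void setup() {
  size(400, 200);
  MidiBus.list();                 // prints available MIDI inputs/outputs to the console
  midi = new MidiBus(this, 0, 1); // input device 0, output device 1 – indices vary per machine
}

void draw() {
  // nothing to draw; incoming events are just logged below
}

// called by MidiBus whenever a note-on message arrives
void noteOn(int channel, int pitch, int velocity) {
  println("note on  ch " + channel + "  pitch " + pitch + "  vel " + velocity);
}

// continuous controller data (e.g. from a sensor-to-MIDI digitizer) arrives here
void controllerChange(int channel, int number, int value) {
  println("cc ch " + channel + "  #" + number + "  -> " + value);
}
```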

electric beats

Reference: http://kenfrederick.blogspot.com/2010/07/schhplttlr.html
the premise
“Using movements inspired by ‘Schuhplattler’ (a traditional Bavarian folk dance), the dancers generate beats and light via sensors. Musicians, media artists, dancers and choreographers experiment with these movements during a three-day interdisciplinary workshop, in which the audience may freely participate. The results were performed on the opening evening on July 15, 2010.” (taken from the blog documentation)

enter soundman

I spoke to Andrew today (the sound-wiki). He was quite helpful in directing me toward some new investigations into what I have to do to get my sound-design ball rolling.

Here are the minutes of our brief meeting:

– Look at drum triggers, audio triggers, and Processing audio libraries that use MIDI

Reference: http://www.ddrum.com/

A basic example of what someone made using audio triggers and the Minim library in Processing:
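In the same vein, here is a bare-bones sketch of my own using Minim (it ships with Processing): it listens to the line-in and flashes on every detected energy spike – essentially a software drum trigger. The sensitivity value is a guess to tune by ear.

```
// Beat-triggering with Minim: flash the window whenever the line-in spikes.
import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
BeatDetect beat;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  in = minim.getLineIn(Minim.MONO, 1024);
  beat = new BeatDetect();    // default energy-only mode suits percussive hits
  beat.setSensitivity(300);   // ms before another beat can register (tune by ear)
}

void draw() {
  beat.detect(in.mix);
  background(beat.isOnset() ? 255 : 0); // white flash on every detected hit
  // this is where a MIDI note or a sample could be fired instead
}
```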

– Investigate the emotion of sounds and chords. Possibly conduct a quick user survey of how people react to the various orchestral chords and sounds I am thinking of sampling, to better understand how they are perceived.

– Speaker set-up… The speakers will have to be mounted high in the corners of the room. If I decide to go with 4 tracks, or even more than 1 track, it would be worthwhile to look into how blob detection could drive the velocity or volume of the sound in response to the positioning of the players (a rough sketch follows these minutes). I may need a sound encoder like surround-sound systems use. (I hope not)

– Listen to the various effects of clipping tracks. Look into the 2 types of clipping that can be achieved: digital (hard truncation of the waveform at full scale) vs. analogue (gradual saturation).

– Andrew thought of Street Fighter 2 the more I talked about my project… hmm…

– Once I have done my homework, one of the editors of Audio Technology might be able to impart some know-how to my humble project

Reference: http://www.audiotechnology.com.au/
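On the blob-detection idea in the speaker note above, here is the rough sketch I promised. The mouse stands in for a tracked blob, and Minim's pan/gain controls stand in for whatever the real speaker rig ends up being; all the numbers are placeholders.

```
// Hypothetical mapping from a tracked position to volume and pan.
import ddf.minim.*;

Minim minim;
AudioPlayer player;

void setup() {
  size(500, 500);                      // the window stands in for the performance floor
  minim = new Minim(this);
  player = minim.loadFile("loop.wav"); // placeholder sample in the sketch's data folder
  player.loop();
}

void draw() {
  background(0);
  float blobX = mouseX;                // stand-in for a tracked dancer
  float blobY = mouseY;

  // pan follows the blob left/right across the space (if the output line supports it)
  player.setPan(map(blobX, 0, width, -1, 1));

  // gain drops as the blob moves away from the centre of the room
  float d = dist(blobX, blobY, width / 2, height / 2);
  player.setGain(map(d, 0, width / 2, 0, -20)); // roughly in dB

  ellipse(blobX, blobY, 20, 20);
}
```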

And finally… unrelated to the meeting we had, here is a little sound-data visualisation that I stumbled across. Made in Processing and freshly uploaded to Vimeo, the visuals are driven by the audio… p.s. 31fps has a pretty awesome website, and he’s from Hamburg (add another brownie point!)

location scout

I had a look at the LDG Gallery today. The A1 photography studio was being used, but Craig let me have a look and take some snaps of the A2 studio. He says the dimensions are pretty much the same, and the ceiling is rigged with lights, which could make it easier to rig a projector to one of the rails. I roughly measured out the space that could be used for the performance. The entire space is about 7×8 meters, but if I were to use the center of the room, then 5×5×5 meters is what I would be looking at. It’s a very nice space, and the wooden floors could give the projection the texture that I discussed with Will in my last presentation. Using the natural textures of the room could work very well for the final projection of digital cracks in the floor – making the folding of the physical and the digital space more seamless in the performance.

Here are some images — terrible quality because I forgot my camera, but my little mobile did the job for today. I will go back and take a look at the other photographic studio to see its potential.

how the hardware started working…

Setting up is a headache in itself. From day 1 I have been pulling out my hair just getting any sort of response or connection between the sensors, my computer, and Processing. It has been one brick wall after another, as you may or may not have read in my posts earlier this year.

Here is my setup for the CoFA digitizer, which requires a power source:

Connection to digitizer

Sensor to digitizer connection

Midi connection

The setup with the Wi-microDig is slightly different:

sensor to Bluetooth

blue light = Bluetooth connection is good

Bluetooth connection in the iCubeX editor

yay! data being fed into the iCubeX editor

There appear to be 3 available channels for data collection from the GForce-3D sensors. I assume it is one for each axis: x, y, and z. I noticed that when I held the sensor flat and then flipped it 180 degrees, the numbers moved from 0 to 124 (the maximum on the iCubeX editor interface). The other 2 channels moved more dramatically when I moved the sensor along the other two axes.
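To make sense of those three channels once they reach Processing, something like the following could normalise each axis and make a crude tilt estimate. I'm assuming The MidiBus again, and the controller numbers (20, 21, 22) are guesses – the iCubeX editor shows the real assignments.

```
// Reading the three GForce-3D axes as MIDI controller changes (CC numbers are placeholders).
import themidibus.*;

MidiBus midi;
float x, y, z; // normalised 0..1 per axis

void setup() {
  size(300, 300);
  midi = new MidiBus(this, 0, 1); // device indices vary; see MidiBus.list()
}

void controllerChange(int channel, int number, int value) {
  float v = value / 124.0; // 124 was the observed maximum in the editor
  if (number == 20) x = v;
  if (number == 21) y = v;
  if (number == 22) z = v;
}

void draw() {
  background(0);
  fill(255);
  text("x " + nf(x, 1, 2) + "   y " + nf(y, 1, 2) + "   z " + nf(z, 1, 2), 10, 20);
  // crude roll estimate: centre two axes around zero and take the angle
  float roll = degrees(atan2(y - 0.5, z - 0.5));
  text("roll ~ " + nf(roll, 1, 1) + " deg", 10, 40);
}
```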

I am actually kicking myself for not having taken a screenshot the first time my sensors started working! The problem I am facing now is, yet again, carrying the data into Processing. After using the Bluetooth digitizer (Wi-microDig), the settings for the CoFA digitizer seem to have been reset, and I have zero input into Processing. Although it isn’t totally back to nil – Processing still recognizes the MIDI-to-USB connection and the hardware – it is only the actual values from the sensor that are not coming through again. I’m going to have to give it another bash… Let this be a very good lesson: ALWAYS write down every single step (painfully document the ENTIRE process, not only what I think the “main points to remember” are). In fact, NEVER… EVER… trust my memory. If I take notes on every little detail, my stress levels should remain in check, and my hair will not fall out as much!

During the first major breakthrough, I did manage to save the link to the site that brought me so much happiness after hitting so many dead ends. After I had installed the drivers for my MIDI-to-USB connection, the sensor was still not being read by Processing. The tricky part is that Mac OS sometimes has issues with accessing MIDI or sound hardware and requires another plugin: “mmj”
–> First: Java for Mac OS X 10.6 Update 1 (Dec. 3rd, 2009) changes things in terms of JVM MIDI support. With this update, the world’s most advanced operating system no longer needs a third-party MIDI service provider like mmj for javax.sound.midi to access hardware. Here are Apple’s release notes; see Radar #3261490 under JavaSound.
–> Second: You may still want it, though.
Apple’s Java MIDI implementation appears a bit half-hearted. It seems to ignore timestamps on MIDI events, device names default to only the port’s name (without hints about the device the port belongs to), and other things may be missing besides.
–> Third: You might even need it.
If your OS version is not supported by the update, you still need a service provider to access MIDI hardware. (Yes, deploying a Java application has just become even more fun.)
Reference and link to download mmj

Currently, my problem is that even with all the necessary plugins installed, Processing is yet again not receiving any data from the sensors… Time to have another bash before the presentation so I can do a little demo. I think it’s about time!! … No sweat on my brow… :-/ … I just hope my computer will be cooperative!
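One sanity check worth running before anything else: ask Java itself which MIDI devices it can see. If the MIDI-to-USB interface doesn't appear in this list, no library on top of it will ever get data. This uses only the standard javax.sound.midi API.

```
// List every MIDI device the JVM can see, then quit.
import javax.sound.midi.MidiSystem;
import javax.sound.midi.MidiDevice;

void setup() {
  MidiDevice.Info[] infos = MidiSystem.getMidiDeviceInfo();
  for (int i = 0; i < infos.length; i++) {
    println(i + ": " + infos[i].getName() + " - " + infos[i].getDescription());
  }
  exit();
}
```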

whitelaw, transmateriality, and tangents

“The cultural imagination of data is crucial in a society increasingly enmeshed in the datasphere.” – Whitelaw

New media art provides a venue for the transformation and translation of the technical and conceptual artefacts of artificial life into cultural objects – conglomerates of rhetoric, metaphor, and aesthetics. Such translations are important… because of the terms they articulate; at a time of rapid and dramatic technological change, the process of assimilating, debating, contesting and reflecting on that change within cultural domains is crucial. The interface of artificial life and cultural practice is particularly significant for all these reasons; it opens a space for creative experimentation and debate around the increasing technologization of living matter as well as broader issues of life and autonomy, agency and evolution, genetics, code and matter. (Reference: Mitchell Whitelaw, Metacreation: Art and Artificial Life, MIT Press, 2004.)

One might take the extreme position that a significant interaction between an artwork and a spectator cannot be said to have taken place unless both the spectator and the artwork are in some way permanently changed or enriched by the exchange. A work that satisfied this requirement would have to include some sort of adaptive mechanism, an apparatus for accumulating and interpreting its experience….

… The navigable structure can be thought of as an articulation of space, either real, virtual or conceptual. The artist structures this space with a sort of architecture, and provides a method of navigation. Each position within the conceptual space provides a point-of-view, defined and limited by the surrounding architectural structure. Exploring this structure presents the spectator with a series of views of the space and its contents. The sequence in which the spectator experiences these vistas forms a unique reading of that space. In Virtual Reality systems, the architectural metaphor can be taken quite literally. In other works, the architecture is more like a conceptual paradigm, a method of organisation of intellectual perspectives, opinions or emotions.” (David Rokeby, Transforming Mirrors: Subjectivity and Control in Interactive Media)

cracks – 1st prototype, processing sketch

The first iteration of my sketch was inspired by a variety of sketches; I created it by combining two of these “inspirational sketches” into a (very) rough functional prototype of the type of generative design I would like my final sketch to mimic.

“Pollen” by Kyle McDonald

“Perlin Noise Particle” by Daan Van Hasselt

“Harmony web remake” by Mitchell Whitelaw…

…whose inspiration was Mr.doob’s Harmony sketch tool

My sketch is a combination of “Perlin Noise Particle” and “Harmony web remake”:
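Not the prototype itself, but the core of the combination can be sketched minimally like this: particles wander on Perlin noise (the “Perlin Noise Particle” part), and each new position is joined to nearby older points with faint strokes (the “Harmony web remake” part), which is where the web-like build-up comes from.

```
// Perlin-noise wanderer with harmony-style webbing.
int n = 0;
int maxPoints = 2000;
float[] px = new float[maxPoints];
float[] py = new float[maxPoints];
float x, y, t;

void setup() {
  size(600, 600);
  background(255);
  stroke(0, 20);       // faint strokes so the web builds up gradually
  x = width / 2;
  y = height / 2;
}

void draw() {
  // the step direction comes from a slowly drifting Perlin noise field
  float a = noise(x * 0.005, y * 0.005, t) * TWO_PI * 2;
  x = constrain(x + cos(a) * 2, 0, width);
  y = constrain(y + sin(a) * 2, 0, height);
  t += 0.003;

  // connect the current position to any stored point closer than 30 px
  for (int i = 0; i < n; i++) {
    if (dist(x, y, px[i], py[i]) < 30) line(x, y, px[i], py[i]);
  }
  if (n < maxPoints) { px[n] = x; py[n] = y; n++; }
}
```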

week 1 done

First week of Session 2, and I feel so far behind my schedule. But all there is to do is keep going.

To re-cap my thoughts from the week before… I have been reading a lot of the writing about the works of Rafael Lozano-Hemmer and about how he develops and produces the ideas within his work. It has been very interesting, and I’m not sure whether it is the 4 months that preceded this that have brought my thinking to this point. But when I step back, it seems like a whole heap of mental rubbish from which (finally) a solid idea has emerged and presented itself as a realistically viable way of representing my data.

The following somewhat describes where my mind wandered after reading a couple of the essays from “Some Things Happen More Often Than All of the Time” – Rafael Lozano-Hemmer.
I have been thinking about the visuals, and I was reading an essay about spatial expressivity (“the edge of a cliff supplies a walking animal with a risk, the risk of falling, and the sharp edges of the rocks below, the risk of piercing its flesh; a layout of rigid surfaces facing inward, like a hole on the side of a mountain, supplies an animal with a place to hide, either to escape from a predator or, on the contrary, to conceal its presence from its unsuspecting prey…”)

I won’t cite the whole passage, but I was thinking of using the data from the accelerometers to transform the space into a kind of “chasm from above”, where the cliff-face edges are generated and projected vertically downward as a result of the rhythms of the players.
The gap of the chasm would be reliant on 2 states of the player:
– determination
– desperation

These values could be determined through the comparison of the individual data streams and represented in the gap of the “chasm”. I’m also looking into the possibility of projecting a sense of depth through echo…
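A toy sketch of that mapping, with the mouse standing in for the two players' smoothed activity values (in the real thing these would come from the accelerometers): the closer the two energies, the narrower the gap.

```
// Difference between two activity values drives the width of the chasm's gap.
float actA, actB;

void setup() {
  size(600, 400);
  noStroke();
}

void draw() {
  background(30);
  actA = mouseX / float(width);   // stand-in for player A's activity
  actB = mouseY / float(height);  // stand-in for player B's activity

  // a close fight (similar energy) narrows the gap; a one-sided one widens it
  float gap = map(abs(actA - actB), 0, 1, 20, 300); // pixel range is a guess

  fill(200);
  rect(0, 0, width / 2 - gap / 2, height);         // left cliff face
  rect(width / 2 + gap / 2, 0, width / 2, height); // right cliff face
}
```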

The 3rd state of the players, vulnerability, I was thinking could be mapped to the brightness of lights casting shadows on the back wall of the space – sort of like the shot in films when approaching danger is coming.


Ref: Still from Woody Allen’s “Shadows and Fog” (1991)

This led to further questions from Petra:

in what way do you imagine determination and desperation to influence the appearance of the chasm?
and would you be after showing which player is determined and which one is desperate – how can you show the different states of the player in/through the chasm?
do you imagine the chasm to ‘evolve’ during the fight (that is, change in shape, crackle, etc.)? (i must admit, i wouldn’t know how to program that)
if you are interested in visuals that change, grow, etc. in accordance with the incoming data input, it might be easier to look for/develop a generative pattern or something alike (e.g. crackling (of the ground)), where the sensor data drives the pattern’s evolution.

This got me thinking… It was true that a “chasm” of sorts might not be expressive enough to represent the different states within a fight. So, continuing with the “crack” metaphor, it came to me that it would make more sense to illustrate the reactivity with a collection of cracks, which led me to thinking about floating icebergs, broken glass and the assorted types of cracks produced in different materials.



spatial expressivity

“Spatial expressivity has another aspect: the relational one. The ecological space inhabited by an animal expresses, through the arrangement of surface layouts, the capacities it has to affect, and be affected by, the animal. To put this differently, solid objects present an animal with opaque surfaces the layout of which supplies it with opportunities and risks: a cluttered space supplies a walking animal with the possibility of locomotion in only some directions, those exhibiting openings or passages, but not in others; the edge of a cliff supplies a walking animal with a risk, the risk of falling, and the sharp edges of the rocks below, the risk of piercing its flesh; a layout of rigid surfaces facing inward, like a hole on the side of a mountain, supplies an animal with a place to hide, either to escape from a predator or, on the contrary, to conceal its presence from its unsuspecting prey. These spatial capacities to affect and be affected are fully objective: the animal may perceive them incorrectly and miss an opportunity or run an unnecessary risk. They are nevertheless relational: the surface of a lake does not supply a large animal with the opportunity to walk, but it affords this opportunity to small insects that can move on it because of its surface tension. Perceptual ecologists and behavioural roboticists have a name for these opportunities and risks supplied by surface layouts: affordances.”
– Manuel DeLanda, “The Expressivity of Space”


“Some might agree with the Tate that the work, Doris Salcedo’s “Shibboleth,” concerns “the divisions between creed, color, class and culture that maintain our social order, precariously balanced as it is on the precipice of a chaotic void of hatred.” Others might feel that, as a visitor named Peter Lord said the other day, “there’s some kind of meaning behind it, although I don’t know what.” ”
Reference: New York Times article, 11/12/2007

Food for (thought) Processing

In mathematics, a Voronoi diagram is a special kind of decomposition of a metric space determined by distances to a specified discrete set of objects in the space, e.g. by a discrete set of points. It is named after Georgy Voronoi, and is also called a Voronoi tessellation, a Voronoi decomposition, or a Dirichlet tessellation (after Lejeune Dirichlet).

In the simplest case, we are given a set of points S in the plane, which are the Voronoi sites. Each site s has a Voronoi cell, also called a Dirichlet cell, V(s) consisting of all points closer to s than to any other site. The segments of the Voronoi diagram are all the points in the plane that are equidistant to the two nearest sites. The Voronoi nodes are the points equidistant to three (or more) sites.
Ref: http://en.wikipedia.org/wiki/Voronoi_diagram
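That definition translates almost literally into a (slow) brute-force Processing sketch: colour every pixel by its nearest site, and each coloured region is exactly the Voronoi cell V(s) of its site.

```
// Brute-force Voronoi: O(pixels x sites), purely to illustrate the definition.
int numSites = 12;
float[] sx = new float[numSites];
float[] sy = new float[numSites];
color[] cols = new color[numSites];

void setup() {
  size(400, 400);
  for (int i = 0; i < numSites; i++) {
    sx[i] = random(width);
    sy[i] = random(height);
    cols[i] = color(random(255), random(255), random(255));
  }
  loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int nearest = 0;
      float best = Float.MAX_VALUE;
      for (int i = 0; i < numSites; i++) {
        float d = dist(x, y, sx[i], sy[i]);
        if (d < best) { best = d; nearest = i; }
      }
      pixels[y * width + x] = cols[nearest];
    }
  }
  updatePixels();
  fill(0);
  for (int i = 0; i < numSites; i++) ellipse(sx[i], sy[i], 5, 5); // mark the sites
}
```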

more about Voronoi…
Voronoi diagrams were considered as early as 1644 by René Descartes and were used by Dirichlet (1850) in the investigation of positive quadratic forms. They were also studied by Voronoi (1907), who extended the investigation of Voronoi diagrams to higher dimensions. They find widespread applications in areas such as computer graphics, epidemiology, geophysics, and meteorology. A particularly notable use of a Voronoi diagram was the analysis of the 1854 cholera epidemic in London, in which physician John Snow determined a strong correlation between deaths and proximity to a particular (and infected) water pump on Broad Street.


Ref: http://mathworld.wolfram.com/VoronoiDiagram.html

Mesh library // http://www.leebyron.com/else/mesh/
“Mesh is a library for creating Voronoi, Delaunay and Convex Hull diagrams in Processing. After searching online for a Java package for creating Voronoi diagrams, and failing to find anything simple enough to fit my needs, I decided to make my own, as simple as possible. I did find the wonderfully useful QuickHull3D package, on which the algorithms for creating these diagrams are based. These complete in O(n log n) time.” (from the library’s page)
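Going by the library’s documentation, using it for the cracks could be as simple as this: hand it a set of points (eventually these could come from the sensor data) and draw the returned edges as crack lines. The points here are random placeholders.

```
// Drawing Voronoi edges with the Mesh library (megamu.mesh).
import megamu.mesh.*;

void setup() {
  size(400, 400);
  background(255);

  float[][] points = new float[15][2];
  for (int i = 0; i < points.length; i++) {
    points[i][0] = random(width);
    points[i][1] = random(height);
  }

  Voronoi voronoi = new Voronoi(points);
  float[][] edges = voronoi.getEdges(); // each row: startX, startY, endX, endY

  stroke(0);
  for (int i = 0; i < edges.length; i++) {
    line(edges[i][0], edges[i][1], edges[i][2], edges[i][3]);
  }
}
```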