Fascinating article about Neil Harbisson's use of an 'eyeborg' to convert colours into sound frequencies, which are then transmitted through his bones.
As part of the TEDGlobal Conference in Edinburgh, Harbisson explains to the journalist from New Scientist how he can transform a colour into a note...
"Colour is basically hue, saturation, and light. Right now, I can see light in shades of grey, but I can’t see its saturation or hue. This gadget detects the light’s hue, and converts the light into a sound frequency that I can hear as a note [wavelength is inversely proportional to frequency, so the wavelength of the light can easily be converted into a sound frequency].
It also translates the saturation of the colour into volume. So if it’s a vivid red I will hear it more loudly."
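The mapping described in the quote - hue to pitch, saturation to volume - can be sketched in code. This is only an illustration of the principle under assumed values; the frequency bounds and the linear mapping are my own hypothetical choices, not the eyeborg's actual scheme.

```python
import colorsys

# Hypothetical frequency range for the mapping (not the eyeborg's real values).
F_MIN, F_MAX = 120.0, 1000.0

def colour_to_tone(r, g, b):
    """Map an RGB colour (components as 0-1 floats) to (frequency_hz, volume).

    Hue is mapped linearly onto a frequency range; saturation becomes
    volume, so a vivid red would sound louder than a washed-out one.
    """
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    frequency = F_MIN + h * (F_MAX - F_MIN)  # hue 0..1 -> F_MIN..F_MAX Hz
    volume = s                               # saturation 0..1 -> volume
    return frequency, volume

# Vivid red: hue 0 and full saturation -> lowest frequency at full volume.
print(colour_to_tone(1.0, 0.0, 0.0))  # → (120.0, 1.0)
```

A real device would presumably quantise hue into discrete notes rather than a continuous frequency sweep, but the saturation-to-volume idea carries over directly.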
I'm wondering, though, how the sounds work together - I presume he can hear colours together as chords? But how many colours does the 'eyeborg' convert into notes? There can't be too many 'pixels' of colour, and it can't be too sensitive, as I imagine the notes would drown each other out. Or he would always hear every colour at once.
Don't usually link to wiki, but can't seem to find this anywhere else - http://en.wikipedia.org/wiki/Neil_Harbisson
This interests me in how I can relate individuals' experiences to the shared arena through communication. Things such as tinnitus are just individual ways of looking at/hearing/responding to the world.
Perhaps I can do something with this idea of sounds and colours - find the right hue of my tinnitus and get to see what I hear all the time!