The Music of the Brain

First of all, remember that post I wrote on the serotonin theory of depression, and how it was probably wrong? I was right: it is, at the very least, incomplete. Another one bites the dust. It’s sad, as we are so desperate to find SOME theory on which those of us who like to study depression can hang our hats. But the serotonin one was not to be. Check out the blog coverage; it is incisive. I don’t know that we should be THAT hard on the researchers who came up with the idea. After all, it was a good idea at the time, and the good news is that everyone is willing to accept better evidence and move on. The scientific method at work.

Ok, I’ll admit, when Sci first saw this publication, she went “LOL wut?!” Why would anyone DO this? I mean, cool, but WHY? Kind of like putting a really sensitive measurement apparatus for brain wave activity in a freely-flying bat. Cool? Yes. Useful? Well…it’s COOL!
But this paper IS cool, and the more I think about it, the more I think there might be something to this, with some more refinement and development down the line.

Wu et al., “Scale-Free Music of the Brain,” PLoS ONE, 2009.


The important part of this paper isn’t the figures. It’s the audio files. I’ll be including them in the links, and I definitely recommend a listen. And the sounds are what I’m going to focus on, because this paper is REALLY math heavy, and Sci can’t do this kind of math justice.
Basically, there have been experiments trying to convert brain waves (from EEGs) into sound since about 1934. Electroencephalograms, or EEGs, are still the only way we really have to watch the brain in real time, as fMRI and PET work on too slow a timescale to give good temporal resolution.
But the question is: why convert brain waves into sound? Well…because it’s cool. No really, there’s another reason. Humans actually hear quite well over a wide range of frequencies. More importantly, we can hear very small changes in pitch and rhythm. And sound patterns (because of our extensive use of language) may be easier for us to distinguish than really complicated visual patterns. So the idea is to turn brain activity into sound, and see if you can come up with anything. Perhaps, for example, people could compare a normal brain with an epileptic one, and hear differences. Of course, differences during a seizure would be pretty obvious, but it’s possible, if the technique got refined enough, that people could be trained to “hear” differences resulting from things like schizophrenia or Alzheimer’s, which could aid in diagnosis, and thus in treatment.
Suffice it to say that the methods contain a lot of equations. I could go into what each of them means, but Sci is tired and in the lab late. Rather, she will show you what it ended up looking like:
[Image: music brain1.jpg, showing the EEG trace (top), the amplitude-to-pitch conversion (middle), and the resulting notation]
Pretty cool, huh? You can see they took the amplitude of each wave (top panel) and translated it into a pitch (middle panel), which then corresponds to a note. They even took the duration of the waves and translated it into rhythm. And they got something rather…abstract. One might wonder why they put it only in bass clef, but I’m not going to be picky (c’mon, be scientists! Use tenor clef!).
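For anyone who wants the flavor without the equations, here is a toy sketch in Python of that kind of mapping. This is Sci’s own illustration, NOT the paper’s actual math (theirs is fancier): each detected wave becomes one note, its amplitude (scaled logarithmically, my assumption) sets the pitch, and its duration sets the note length. The function name and numbers are made up.

    import numpy as np

    # Toy illustration only: each detected EEG wave becomes one note.
    # Log-amplitude is scaled onto a MIDI pitch range (36 = C2, 84 = C6),
    # and the wave's duration is carried along as the note length.
    def waves_to_notes(amplitudes_uv, durations_s, low_midi=36, high_midi=84):
        log_amp = np.log10(np.asarray(amplitudes_uv, dtype=float))
        norm = (log_amp - log_amp.min()) / (log_amp.max() - log_amp.min() + 1e-12)
        pitches = [int(round(low_midi + n * (high_midi - low_midi))) for n in norm]
        return list(zip(pitches, durations_s))

    # Four pretend waves: amplitudes in microvolts, durations in seconds.
    print(waves_to_notes([12.0, 45.0, 30.0, 80.0], [0.12, 0.25, 0.18, 0.40]))
    # -> [(36, 0.12), (69, 0.25), (59, 0.18), (84, 0.4)]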
And how did it end up sounding? Well, go here and check out the supporting information. And it turns out that your brain sounds, not like a Mozart symphony, but rather like a cat on a keyboard.

Ok, maybe not even that organized.

That’s more like it.
Now, this doesn’t really give you a picture of the thousands of neuron firings taking place every second; rather, it shows you the overall activity of the brain over time.
Of course, the scientists performed several experiments with this, comparing recordings with eyes closed, eyes open, and during REM or slow-wave sleep. They found that REM, or rapid eye movement sleep, looked very active (described as “a lively melody”), almost like an awake brain:
[Image: music brain2.jpg, the score generated from REM sleep EEG]
Slow-wave sleep, on the other hand, was not only slower, it was also lower in amplitude, resulting in a lower-pitched tune:
[Image: music brain3.jpg, the score generated from slow-wave sleep EEG]
But the real test is this: can ordinary people distinguish, when hearing brain waves made into music, between different states? It turns out that they can, and very reliably. Now granted, they only used a few sets of clips, but it’s conceivable that people could be trained to distinguish particular brain activity types based on the music, regardless of whether they had heard or identified the clip before.
There is one thing, though, that I wish they had done with this paper. Basically, they matched amplitude with pitch, put the whole thing on a scale, and made it play on a piano. That’s all well and good, but I don’t know that the exact pitches come across realistically. I think, instead of a piano, they should have used an instrument that can play intervals finer than half and whole tones. For example, a great deal of Middle Eastern music uses quarter tones as well as half and whole tones, which humans are still perfectly capable of distinguishing (though it’s REALLY hard to sing if you’re not used to it), and which might give more options for how the “music of the brain” might really sound.
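If anyone wanted to try that, the arithmetic is no harder than for semitones: you just divide the octave into 24 equal steps instead of the piano’s 12. A tiny sketch (again, Sci’s own illustration, not anything from the paper):

    # Frequency of a pitch `steps` quarter-tone steps above A4 (440 Hz),
    # using 24 equal divisions of the octave instead of the usual 12.
    def quarter_tone_freq(steps):
        return 440.0 * 2 ** (steps / 24)

    print(quarter_tone_freq(0))   # 440.0 Hz (A4)
    print(quarter_tone_freq(1))   # ~452.9 Hz (the quarter tone between A4 and A#4)
    print(quarter_tone_freq(2))   # ~466.2 Hz (A#4, a regular semitone up)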
This paper, if the technique is refined and studied further, could provide a new way for people to “look” at brain activity patterns by “listening” for them. It would be pretty easy to train humans to professionally distinguish between different types of brain activity patterns to help diagnose disease. And it’d be something that someone trained in music might be able to do really well. For example, I am classically trained in music, and I ALWAYS know Bach when I hear it. It would be a good job for an out-of-work classical musician. At least one who studied a lot of Schoenberg. 🙂
Wu, D., Li, C., & Yao, D. (2009). Scale-Free Music of the Brain. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005915

23 Responses

  1. As someone who enjoys noisy music, I’m wondering why they used discrete notes at all.

  2. Hmmm now… this has got to be the most interesting research post I’ve read in a while. To a layman like myself, this sounds (pardon the pun) like there’d be positively oodles of potential for further study.
    To put it briefly (which I didn’t do anyway, but still): too cool.

  3. Very interesting indeed. I haven’t read the actual paper yet, but I don’t understand why they’ve done this (what seems to me) very arbitrary thing of ascribing pitches to amplitudes. Why don’t they just plug the EEG signal into a speaker and see what it sounds like? You might have to put it through an amp first admittedly. Those signals look quite different to me (the two blue wave patterns). I would be surprised if most people could not tell the difference between them without all this faffing around with the converting of amplitude to pitch.
    Changing the speed at which it is played back may bring out further details; for example, repetition becomes very obvious when the speed is increased. Unfortunately, I can’t find the raw EEG data files anywhere in the supplementary material. I’m sure they could easily be converted to .WAV files or something similar. Once you have that there are all sorts of things you could do to enhance or isolate elements of the signal that might lead to something useful. Harmonics perhaps, the list is pretty much endless. I would be trying to set up collaborations with signal processing people and/or professionally trained musicians.
    BTW the cats are great, although I’m slightly worried about the informed consent of the first one 🙂
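    To make the .WAV idea concrete, something like this would do it (purely a sketch, since the raw data isn’t posted anywhere; the sampling rate and speed-up factor below are made up for illustration):

        import numpy as np
        from scipy.io import wavfile

        # Stand-in for the raw EEG we don't have: 30 s of noise "recorded" at 256 Hz.
        eeg = np.random.randn(256 * 30)
        eeg_rate = 256    # Hz, a typical clinical EEG sampling rate
        speedup = 60      # play back 60x faster, so slow rhythms become audible

        # Normalize to the 16-bit range and write with an inflated sample rate.
        scaled = np.int16(eeg / np.max(np.abs(eeg)) * 32767)
        wavfile.write("eeg_playback.wav", eeg_rate * speedup, scaled)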

  4. Hold on. Let me put my undergraduate degree from Major American Conservatory to good use.
    It looks to me like they converted the frequency data to MIDI and then ran it through off-the-shelf notation software. The quantization clearly doesn’t match the durations shown in the timeline diagrams. No human transcriptionist would’ve notated it like that. Those rests have no business in a continuous signal, and by choosing MM. 120 as an arbitrary tempo and 4/4 as an arbitrary meter, they’re stuck with all these unnecessary ties and beams and hemiolas. The second example, in particular, is full of crazy enharmonic spellings (a major second written as a double-diminished third [C# – Eb], a major third written as a double-augmented second [A# – Gb]) — this is the kind of crap you’d expect from Windows freeware apps. And, as you noted, proper use of clefs would eliminate all those ledger lines. This is the musical equivalent of round-tripping someone’s IM transcript through Google Translate and trying to pass it off as poetry.
    I wonder whether Wu, et al, can actually read music. I call shenanigans.
    What they needed to do was to convert the frequency data to sound files, and then hand that off to a qualified transcriptionist. But if they’d done that, the result would’ve been much easier to understand, and, as near as I can tell from their nearly unreadable notation, rather banal.

  5. AA’s BH: because just playing the brain waves would probably sound a lot like static, and therefore not pretty. C’mon, they’re trying to create ART here!
    HP: FTW!!!! I think I love you. I thought I was the only one offended by the major second written as the double-diminished third. I mean, COME ON. I agree, they basically just put it through some software. I think the 4/4 was just because they felt they had to have a meter (which I think is a little silly). The tempo, I would imagine, is slowed down a LOT from what they recorded. It’d be interesting to hear the gamish you’d get from REAL real time.
    Sad to know that our brain activity might otherwise come out rather banal, though. 🙂
    But I think there could be a cool place for this in the world. I want to see EEG in real time performed by Rasputina.

  6. Wow, HP clearly knows a LOT more than me about music theory. Not worthy and all.
    I guess my approach is more from an instrumentation and signal processing point of view (MSc in Instrumentation). My music knowledge is restricted to a passion for listening to it. I should point out that I own (and even occasionally listen to) stuff by Merzbow and the like, so a little bit of static would not frighten me off. Cleans the sinuses out too 🙂 Besides, from a signal processing point of view it would be a doddle to subtract white noise from this signal and listen to what you have left. Certainly it ought to be possible for people to tell the difference between different brain states after all of that. Not sure I’d fancy it as a day job. There is probably software already available that could easily identify such differences without the need for anyone to listen to it though.
    My vote for real time performance of EEGs would go to the Hun Hangar Ensemble (at least it would today anyway):
    http://www.myspace.com/hunhangarensemble

  7. AA’s BH: ok, that music is REALLY cool. Is your brain activity movin’ like that? Must be some killer folk dancing in there. What does that come out as? Science? Math? Literature?
    Have you heard these guys? http://www.balkanbeatbox.com/ They will CHANGE YOUR LIFE!

  8. Cool post, interesting stuff. I can imagine that one day you could get brain electrodes stuck on your head, run them through some very expensive software, and end up with diagnostically relevant music – very cool… Plus you wouldn’t need an ipod, you could just plug into your brain.
    Man, you guys know a lot about music! I don’t even know wtf those words mean!

  9. AA’s BH: I had to read that last comment twice — I think of “Instrumentation” as the study of the ranges, playability, and tonal characteristics of musical instruments (it’s a sub-field of Orchestration), so I thought, wow, what a weird combination.
    I got a big, fancy degree from a big, fancy music school nearly twenty years ago. Now I only use it to play old-timey jazz in saloons for beer and tips. I make a living doing technical communications and information architecture, but I still love playing around with music theory and composition. And I get cranky when I see people outside the field doing sloppy, redundant work out of ignorance. You should see my face when Dave Munger does one of his posts about music cognition — I can’t even comment on those, because that whole field is so outrageously parochial and misdirected.

  10. Have they considered using the waves as models for synth sounds? Seeing as notes give us a lot less emotional information than timbres do (think about hearing a G4 – the G above middle C – played on a cello, then on a trombone, then on a marimba – that’s a huge emotional difference there), it might help us to better hear the difference if we converted the waves into timbres rather than notes.

  11. Sorry to make a double comment, but I’d like to reply to HP’s post:
    You’re right when it comes to some of the strange intervals, but if I’m not mistaken, this is not uncommon, particularly in atonal music, where there are no constraints of a scale or mode (and this isn’t tonal, given the title of the article, “Scale-Free Music of the Brain”). I will agree, though, that a meter is absolutely unnecessary (why would one be needed, especially in music with no defined rhythmic figure?) and that the quantization is absolutely atrocious. As for the rests, if I were to make a guess, they may be due to notes jumping to inaudible frequencies.
    I think a more accurate transcription would not only rid itself of the unnecessary meter, but also include microtones, which I’m pretty sure brain-music would contain.

  13. HP: Sorry about that, didn’t occur to me. Different kind of instrumentation. Ours were more likely to be oscilloscopes, interferometers or phase-locked loops rather than pianos, bassoons or trombones. I’m back in molecular biology now, which is where I belong. A lot of the stuff in my master’s went way over my head. At least I understand how our gel doc system works now.
    Sci: Maybe my brainwaves did look like that. I certainly had a hankering after some lively bouncy stuff like that yesterday. I was actually listening to a mix I found online:
    http://tnieuwewerck.blogspot.com/2007/10/t-nieuwe-werck-095-of-mataklap.html
    It’s mainly eastern European folk with a smattering of Rai thrown in and even a Venetian Snares track for good measure (from his “Hungarian” album). Looking at the tracklisting I see there isn’t any Hun Hangar Ensemble but I always remember them when listening to this sort of stuff. I particularly recommend the album they did with A Hawk And A Hacksaw. Saw them live once, randomly, and they were fantastic, difficult to keep still 🙂

  14. But I think there could be a cool place for this in the world. I want to see EEG in real time performed by Rasputina.

    When I first saw that bass clef I tried to imagine fingerings for that on cello and realized that if I tried it, I’d totally break either the cello or my fingers. But if anyone could pull it off, it’d be Melora Creager, Zoe Keating, or Eicca Toppinen.
    1) Why the piano!? Piano is so arbitrary and so closely bound to Bach’s “The Well-Tempered Clavier” tonality! It seems to me that it’d be far more sciencey to plug it through a pure sine wave.
    2) Why only one note at any given time? If I’m not mistaken, aren’t there 3 different brain wave types, and as such wouldn’t each give a discrete note that would then combine into a chord? They’re not doing a Fourier transform, so there should be chords.
    3) The REM file reminded me of Szerencsetlen by Venetian Snares. It’s begging for a break beat!

  16. Converting brain waves into sound? Looks Cool..

  17. The deep sleep waves actually sound pretty good, might steal it for a song.

  18. So if iTunes can successfully match a high percentage of audio files to the correct song title, then a database of audio brainwaves could be matched to the correct mental state? Cool if so. It’s just waveform analysis to the computer, and the pattern recognition software is already developed.
    Brain wave music – does it have a beat and can you dance to it?

  19. Well, you made Utne, Sci, so you are on your way to fame and riches.
    On the music from EEGs, wouldn’t it be better to use a function to map normal brain functions to known musical concepts, and pick out anomalies from the cacophony that would ensue? But even so, how do you actually distinguish a specific region from all the information that results from an EEG? Is there a way to isolate particular functions/regions? Or are you ending up with the equivalent of a microphone in a traffic jam, trying to guess why the cars are not moving?
    Also: I think there is an embedded dimension to EEGs. What would that yield?

  20. You can all scrap your science papers and bad audio-to-MIDI conversion programs. Composer Alvin Lucier has been converting brain-waves into music for years. Check out “Music for Solo Performer” (to be reissued sometime soon; presently out of print on LP) and “Clocker.”
    http://lovely.com/bios/lucier.html
    cheers…

  22. Camilo, Sci knows not what Utne is…what is that? Fame and riches, you say? Sci could use some riches…

  23. I heard these sounds this morning while I dozed off to sleep. For a brief moment, I could clearly hear something like the first “REM sleep” passage. This was accompanied by a mid range “wowowowowow” at ~130 bpm. I also felt my eyes twitching beneath my eyelids. The whole experience was so strange that I couldn’t “sleep” very long. I ran downstairs and had to tell my wife what a crazy thing I had just heard. That was the whole reason I started looking for this topic.
    All of which is to say, yes: I have heard REM sleep and it sounds much like what they say it does. Thanks for posting! My wife will get a kick out of hearing a better rendition than “Beep-op-tib-beepityobpopity…”
