Marcelo Magnasco, head of the Laboratory of Mathematical Physics at Rockefeller University, talks about how we distinguish between noise frequencies.
Marcelo Magnasco on Noise Frequencies

Hearing is a difficult sense to understand from a theoretical viewpoint because it is unlike the other senses in many relevant ways. For instance, across both of our retinas we have about two hundred million photoreceptors, so each of our eyes is a hundred-megapixel camera. We have about a hundred million olfactory receptors in our noses. We have well over ten to twenty million receptors for touch, pain, and temperature on our skin. Yet the total number of auditory receptors we have in both of our cochleae is something like seven to eight thousand. So there is a minuscule number of cells receiving input from sound, and the nervous system really needs to extract as much information as it can from each one of them. The information density being coerced out of each of these detectors is much higher, which puts a great demand on the nervous system's capacity to process information.

In addition, we do not understand the geometry of sound in the way we understand vision. We hear a few words, a few seconds of phonation, and these give rise to a multitude of very different percepts in the brain. On one side, one stream you get is the actual text being spoken. Then you also hear the accent of the speaker, the emotional stance of the speaker, and many features related to the identity of the speaker: if you know somebody, you can easily recognize their voice, but even if you don't know them, you immediately know whether it's a male, a female, a child, and so on. All of these impressions are separated by the brain into different percepts that do not travel in the same stream, so it is difficult to understand, in a unified sense, what exactly our hearing does.
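The receptor counts quoted above imply a striking disparity in how much information each auditory cell must carry. As a rough illustration only (using the transcript's order-of-magnitude figures, with midpoints assumed where a range is given), the comparison can be sketched as:

```python
# Rough receptor counts quoted in the transcript (order-of-magnitude figures).
receptors = {
    "vision (both retinas)": 200_000_000,
    "smell (olfactory)": 100_000_000,
    "touch/pain/temperature": 15_000_000,  # transcript says 10-20 million; midpoint assumed
    "hearing (both cochleae)": 7_500,      # transcript says 7-8 thousand; midpoint assumed
}

auditory = receptors["hearing (both cochleae)"]
for sense, count in receptors.items():
    # Ratio suggests how much more information each auditory receptor
    # must convey relative to a receptor for the other senses.
    print(f"{sense}: {count:,} receptors ({count / auditory:,.0f}x the auditory count)")
```

On these figures, vision has on the order of tens of thousands of times more receptors than hearing, which is the sense in which each auditory receptor is being "coerced" to carry far more information.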
Then there are all the spatial aspects of hearing that we normally attribute to our sight but that actually, in large part, derive from hearing. When you hear somebody speaking, you know perfectly well whether they are talking toward you or toward a wall. You know whether they are turned around, simply because of the muffling that happens when the speaker is looking away from you. You know the position of the speaker with a fair amount of accuracy. We have very precise models in our brain of how the human voice sounds when you are yelling or when you are whispering, and of course there is the measured volume of the sound at the ear, so you can distinguish somebody whispering in your ear from somebody shouting far away, even though the volume at the ear would be precisely the same, simply because you can interpret whether the pattern of the sound is that of a voice stressed by shouting or the muffled sound of somebody whispering. You also have a clear impression of the space in which the conversation takes place; the very space I am in now, for instance, is a very quiet space because echoes have been suppressed, though not entirely. Everybody is probably familiar with somebody in their house calling them from a different room and knowing, from the sound of the voice alone, exactly which room they are in. If somebody calls you from the bathroom, you recognize the shininess of the bathroom walls. You recognize the more muffled sound of somebody calling you from the bedroom, where the mattress and other furnishings absorb the sound. So there is this whole variety of different percepts, and it is difficult to integrate the auditory stimulus into a single category, because the brain itself is separating the different attributes of the source, and of the space in which the communication takes place, in a very rapid and dramatic fashion.
Yes, we keep an interest in human perception of fairly complex and somewhat idiosyncratic sounds, like the perception of music, because that's ultimately