In the clink

My father-in-law used to tape family mealtime conversations. When played back, the background noise - like silverware hitting plates and doors closing - is surprisingly prominent. Why is it that we filter these sounds out as they happen, but seem unable to filter them out when we listen to the recording?

Microphones are wonderfully objective devices. They detect variations in absolute pressure or the pressure gradient on a particular axis and faithfully transduce these into electrical signals. In contrast, our ears have a brain attached, and between them they do a much more subjective job - interpreting our acoustic environment, not just recording it.

Our ears themselves are simple pressure transducers, but we also have ways of working out where sounds are coming from. For this we make use of the relative levels, phase and arrival times of sounds. In addition, the shape of our head distorts the local audio field in a way that we are personally familiar with, and this aids the location of sound sources, particularly when we can move our heads.

We detect not only direct sounds but also reverberant ones. The space we are in significantly colours and adds to sounds, mostly in the form of delayed noise from random reflections. This would severely reduce the intelligibility of sound if the brain were not adept at adjusting to these conditions. It works out when the noises arrive, and where from, and can largely ignore them if it so chooses.

When the sound objectively recorded by a microphone is replayed through a single loudspeaker, or even a stereophonic system, the random cutlery-clanging reverberant sound that should be all around us is now directly in front of us. Directional and timing cues that the brain would normally use for filtering are now inconsistent or just plain wrong.

Chris Woolf, Liskeard, Cornwall, UK

From issue 2684 of New Scientist magazine, page 73 (http://www.newscientist.com/article/mg20026842.900-in-the-clink.html?full=true&print=true)
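
To put a rough number on the arrival-time cue mentioned above, the short Python sketch below estimates the interaural time difference using the Woodworth spherical-head approximation. The head radius, speed of sound and source angles are illustrative assumptions for this sketch, not figures from the article.

import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C (assumed)
HEAD_RADIUS = 0.0875     # m; a commonly used average head radius (assumed)

def interaural_time_difference(azimuth_deg: float) -> float:
    """Estimate the interaural time difference (seconds) for a source at
    azimuth_deg degrees from straight ahead (valid for 0-90 degrees),
    using the Woodworth rigid-sphere approximation."""
    theta = math.radians(azimuth_deg)
    # Extra path length to the far ear around a rigid sphere: r * (theta + sin(theta))
    return HEAD_RADIUS * (theta + math.sin(theta)) / SPEED_OF_SOUND

if __name__ == "__main__":
    for azimuth in (0, 30, 60, 90):
        itd_us = interaural_time_difference(azimuth) * 1e6
        print(f"Source at {azimuth:2d} degrees: ITD of roughly {itd_us:3.0f} microseconds")

For a source 90 degrees to one side this gives a difference of roughly 0.65 milliseconds between the two ears - a tiny interval, but one of the directional cues the brain uses to place a sound, and one that a single microphone recording cannot preserve.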