Audition is often considered the most important sense in humans because of its fundamental role in learning, language, and communication. Its use in robotics, however, is fairly recent compared with other exteroceptive sensing modalities. Nevertheless, its complementarity to vision and its potential for Human-Robot Interaction have been widely acknowledged. Besides binaural approaches, array processing constitutes a relevant way to endow robots with audition: the idea is to exploit the redundancy of the data sensed by an array of microphones so as to design robust and efficient functions. Most contributions in robotics have been rooted in the Computational Auditory Scene Analysis (CASA) framework, which in turn has raised original requirements. To cite a few, any solution must be easily embeddable (in terms of geometry and energy consumption), perform in real time, handle wideband signals (e.g. the human voice), and cope with noise and reverberation. In this context, LAAS-CNRS has developed an integrated auditory sensor named EAR (“Embedded Audition for Robotics”), built on a linear array of eight microphones, a fully programmable acquisition board, an FPGA processing unit, and USB communication. This paper describes its design and its exploitation in a robotics experiment.