(This dissertation is written in French.) The auditory system provides humans with a wealth of information about their acoustic environment. For instance, we can precisely localize the origin of a sound and interpret its meaning, so that coping without these auditory cues in our dynamic, evolving world would be very difficult. Yet mobile robotics has seldom integrated the auditory modality, even though it is essential for complementing the information delivered by other exteroceptive sensors such as cameras, laser range-finders, or ultrasonic detectors. This doctoral thesis deals with the design of an artificial auditory system for sound source localization, composed of an array of 8 omnidirectional microphones and an acquisition/processing board. This problem has already been studied in signal processing and acoustics. However, the unusual embeddability and real-time constraints imposed by the robotics context limit the applicability of such methods to wideband signals such as the human voice. After a thorough bibliographical study of the localization approaches proposed in robotics, the synthesis of frequency-invariant beamformers is considered. An original solution is proposed, based on convex optimization and relying on the modal representation of array directivity patterns. Compared with classical methods, its resolution at low frequencies is enhanced, whether the source lies in the far field or in the near field, despite the small size of the array. These optimized beamformers are then exploited to compute an acoustic power map of the environment. The resulting localization turns out to be more precise than with conventional approaches, and can handle wideband signals in the 400 Hz-3 kHz frequency range. Finally, a recent beamspace extension of the high-resolution MUSIC method, which could cope with the robotics constraints, is assessed.
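To illustrate the general idea of an acoustic power map obtained by steering a beamformer over candidate directions, the sketch below uses a plain far-field delay-and-sum beamformer, not the thesis's optimized frequency-invariant design; the 4-microphone linear array, sampling rate, and synthetic 1 kHz test signal are all hypothetical parameters chosen for the example.

```python
# Minimal sketch of an acoustic power map via delay-and-sum beamforming.
# All parameters (array geometry, sampling rate, source signal) are
# illustrative assumptions, not the thesis's 8-microphone setup.
import numpy as np

C = 343.0   # speed of sound (m/s)
FS = 16000  # sampling rate (Hz)
# 4-microphone linear array along the x-axis, 5 cm spacing
MICS = np.array([[i * 0.05, 0.0] for i in range(4)])

def steering_delays(theta):
    """Far-field propagation delays (s) at each mic for direction theta (rad)."""
    direction = np.array([np.cos(theta), np.sin(theta)])
    return MICS @ direction / C

def simulate(theta, n=2048, f0=1000.0):
    """Synthesize a plane-wave tone at frequency f0 arriving from theta."""
    t = np.arange(n) / FS
    return np.stack([np.sin(2 * np.pi * f0 * (t + d))
                     for d in steering_delays(theta)])

def power_map(signals, thetas):
    """Steered response power: phase-align channels per candidate angle,
    sum them coherently, and measure the output energy."""
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spectra = np.fft.rfft(signals, axis=1)
    powers = []
    for theta in thetas:
        # remove each mic's expected phase advance for this steering angle
        phase = np.exp(-2j * np.pi * np.outer(steering_delays(theta), freqs))
        aligned = (spectra * phase).sum(axis=0)
        powers.append(np.sum(np.abs(aligned) ** 2))
    return np.array(powers)

# scan candidate directions; the peak of the map is the source estimate
thetas = np.deg2rad(np.arange(0, 181, 5))
p = power_map(simulate(np.deg2rad(60.0)), thetas)
estimate = np.rad2deg(thetas[np.argmax(p)])
```

With a coherent sum, the energy peaks when the steering delays match the true propagation delays, so `estimate` lands on the 60° grid point. With only four microphones the main lobe is broad, especially at low frequencies, which is precisely the resolution limitation the thesis's convex-optimization beamformers aim to mitigate on a small array.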