In the context of robot perception, a new class of methods for self-supervised sensorimotor learning has recently emerged. These methods aim to extract robot and environment configuration information from a set of sensorimotor cues, without any a priori knowledge. This paper is concerned with such methods in the context of binaural robot audition. An incremental algorithm is proposed, relying on an auditory evoked behavior that allows a robot to orient its head toward a sound source. During the learning process, this evoked behavior is used to gather auditory and proprioceptive data before and after the head moves to face the sound source. An auditorimotor map can then be constructed. Thereafter, when the source plays again near a set of previously learned configurations, the robot can use the auditorimotor map to infer a motor command that would make it face the source. In other words, the robot has learned from past sensorimotor experiences how to localize a sound source in the space of its own motor azimuths. In the present article, we provide an experimental validation of the evoked behavior and test an offline version of the algorithm. The auditory evoked behavior implementation proves sufficiently accurate, and our results show good localization performance using the learned auditorimotor map.
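The learning loop described above can be sketched as a simple associative memory: each evoked head movement yields one (pre-movement auditory cue, final motor azimuth) pair, and localization later amounts to a lookup near previously learned configurations. The sketch below is a minimal illustration under strong assumptions; the scalar interaural cue, the class name, and the nearest-neighbor inference are all hypothetical choices, not the representation used in the paper.

```python
class AuditorimotorMap:
    """Toy auditorimotor map (illustrative, not the paper's model):
    associates an auditory cue observed before the evoked head movement
    with the motor azimuth the head reached once facing the source."""

    def __init__(self):
        self.entries = []  # list of (cue, motor_azimuth) pairs

    def add_experience(self, cue, motor_azimuth):
        # One learning step: the evoked behavior has turned the head
        # toward the source; store the pre-movement cue together with
        # the final proprioceptive (motor) azimuth.
        self.entries.append((cue, motor_azimuth))

    def infer_motor_command(self, cue):
        # Later, when a source plays near a learned configuration,
        # return the motor azimuth of the nearest stored cue.
        if not self.entries:
            raise ValueError("map is empty")
        nearest = min(self.entries, key=lambda e: abs(e[0] - cue))
        return nearest[1]

# Example with a hypothetical scalar interaural cue and azimuths in degrees.
amap = AuditorimotorMap()
for cue, az in [(-0.8, -40.0), (-0.3, -15.0), (0.0, 0.0), (0.4, 20.0)]:
    amap.add_experience(cue, az)

print(amap.infer_motor_command(0.35))  # nearest learned cue is 0.4
```

An incremental version would simply interleave `add_experience` calls with inference, which matches the offline/online distinction drawn in the abstract.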