PhD offer

Naive Learning of Causal World Models


The proposed PhD thesis will be conducted within the ASIMOV team at the ISIR lab. This team studies the perception and control of robots, or more generally of intelligent systems, in open-ended environments. In traditional robotics, the common approach is to program robots with predefined models of their sensors, actuators, and environment. In contrast, the field known as developmental robotics, at the crossroads of machine learning and artificial intelligence, aspires to endow robots with the capacity to autonomously learn about themselves and their surroundings, mirroring the developmental processes observed in humans and other living organisms.

In Piaget’s theory of cognitive development, the first stage is the one in which infants (in our context, robots and intelligent systems) progressively construct knowledge and understanding of the world by coordinating sensory experiences with physical interaction with their surroundings. The work carried out within the ASIMOV team on these subjects initially involved formalizing, mathematically and algorithmically, the processes that allow a completely naive agent – that is, one without any prior knowledge of its kinematic organization or its environment – to extract the structure of the physical space in which it is immersed. This extraction is made possible by the correlation between the agent’s motor actions and the information obtained by its sensors. In the context of the sensorimotor theory of perception described by Kevin O’Regan [1], such a structure can be extracted from raw, uninterpreted sensations and can naturally lead to the notion of space perception. Since then, this developmental and sensorimotor approach has enabled the construction of naive agents capable of discovering other properties of their environment, such as physical space [2], body schema [3], action structure [4], or the notion of objects [5].


The proposed thesis seeks to explore the autonomous learning of adaptive world models through sensorimotor interaction with the environment. The term world model [6] encompasses both the identification of meaningful latent variables describing the environment (i.e., finding a disentangled state representation that separates individual factors of environmental variation) and the prediction of the evolution of those variables over time in response to the agent’s actions (i.e., learning the transition function of a Markov decision process). Such models reflect the agent’s understanding of its environment and can be used in model-based reinforcement learning algorithms to guide its behavior. Research has indicated that models effectively incorporating the causal structure of the environment demonstrate enhanced robustness to distribution shifts, thus improving the adaptability of robots and intelligent systems [7]. This thesis aims to contribute to the domains of causal discovery and causal representation learning in the context of developmental robotics.
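To make the two components of a world model concrete, the following minimal sketch illustrates them in a toy linear setting: a latent basis is learned from observations alone (here via an SVD, standing in for a representation learner), and a transition model is fit on the learned latents. All dimensions, dynamics, and the linearity assumption are illustrative choices for this example, not the method to be developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hidden ground truth (known only to the simulator, not to the agent):
# latent state s in R^2, observation o = M s, and a transition in which
# the action a shifts the first latent factor only.
M = rng.normal(size=(6, 2))          # latent -> observation mixing

def step(s, a):
    return s + np.array([a, 0.0])

# Collect (o_t, a_t, o_{t+1}) transitions by acting randomly.
S = rng.normal(size=(500, 2))
A = rng.normal(size=500)
S_next = np.array([step(s, a) for s, a in zip(S, A)])
O, O_next = S @ M.T, S_next @ M.T

# (i) Representation: learn a 2-D latent basis from observations alone.
# The top-2 right singular vectors span the observation subspace.
U = np.linalg.svd(O, full_matrices=False)[2][:2]
Z, Z_next = O @ U.T, O_next @ U.T     # learned latent codes

# (ii) Transition model: least-squares fit of z_{t+1} from (z_t, a_t).
X = np.hstack([Z, A[:, None]])
W, *_ = np.linalg.lstsq(X, Z_next, rcond=None)
pred = X @ W

mse = np.mean((pred - Z_next) ** 2)
print(f"transition prediction MSE: {mse:.2e}")
```

Because the toy dynamics are linear in the learned latents and the action, the fitted transition model predicts the next latent code essentially exactly; in the thesis, both components would instead be nonlinear and learned jointly with neural networks.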

Initial research directions will involve studying the connections between causal [7] and other disentangled representation learning frameworks [8], and exploring how the agency of intelligent systems can be leveraged to enhance the learning of world models, for instance through specially designed intrinsic motivations [9, 10]. The proposed algorithms will be based on artificial neural networks for function approximation, and validated in simulated environments.
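As an illustration of one family of intrinsic motivations surveyed in [10], a prediction-error ("surprise") signal can be derived from the agent's own forward model: the agent is rewarded where its model predicts poorly, and the reward naturally decays as the model improves. The dynamics, learning rule, and reward definition below are illustrative assumptions for a minimal sketch, not the thesis method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown (to the agent) linear dynamics: s' = 0.9 s + 0.5 a
def env_step(s, a):
    return 0.9 * s + 0.5 * a

# Online forward model s' ~ theta @ [s, a], trained by stochastic
# gradient descent on the squared prediction error.
theta = np.zeros(2)
lr = 0.1

rewards = []
s = 0.0
for t in range(200):
    a = rng.normal()                      # random exploratory action
    s_next = env_step(s, a)
    x = np.array([s, a])
    err = theta @ x - s_next
    rewards.append(err ** 2)              # intrinsic reward: surprise
    theta -= lr * err * x                 # SGD step on the forward model
    s = s_next

early, late = np.mean(rewards[:20]), np.mean(rewards[-20:])
print(f"mean surprise: first 20 steps {early:.3f}, last 20 steps {late:.2e}")
```

The surprise signal collapses once the deterministic dynamics are learned, which is the habituation effect that motivates richer intrinsic motivations (e.g., learning-progress measures) in stochastic or non-stationary environments.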

[1] O’Regan, J.K. and Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24:939-1031.

[2] Laflaquière, A., O’Regan, J. K., Gas, B., & Terekhov, A. (2018). Discovering space—Grounding spatial topology and metric regularity in a naive agent’s sensorimotor experience. Neural Networks, 105, 371-392.

[3] Marcel, V., Argentieri, S., & Gas, B. (2019). Where do I move my sensors? Emergence of a topological representation of sensors poses from the sensorimotor flow. IEEE Transactions on Cognitive and Developmental Systems, 13(2), 312-325.

[4] Godon, J. M., Argentieri, S., & Gas, B. (2020). A formal account of structuring motor actions with sensory prediction for a naive agent. Frontiers in Robotics and AI, 7, 561660.

[5] Le Hir, N., Sigaud, O., & Laflaquière, A. (2018). Identification of invariant sensorimotor structures as a prerequisite for the discovery of objects. Frontiers in Robotics and AI, 5, 70.

[6] Ha, D., & Schmidhuber, J. (2018). World models. arXiv preprint arXiv:1803.10122.

[7] Richens, J., & Everitt, T. (2024). Robust agents learn causal world models. arXiv preprint arXiv:2402.10877.

[8] Lachapelle, S., Rodriguez, P., Sharma, Y., Everett, K. E., Le Priol, R., Lacoste, A., & Lacoste-Julien, S. (2022, June). Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In Conference on Causal Learning and Reasoning (pp. 428-484). PMLR.

[9] Caselles-Dupré, H., Garcia Ortiz, M., & Filliat, D. (2019). Symmetry-based disentangled representation learning requires interaction with environments. Advances in Neural Information Processing Systems, 32.

[10] Oudeyer, P. Y., & Kaplan, F. (2007). What is intrinsic motivation? A typology of computational approaches. Frontiers in Neurorobotics, 1, 108.

[11] Annabi, L. (2022). Intrinsically Motivated Learning of Causal World Models. Fifth International Workshop on Intrinsically-Motivated Open-ended Learning (IMOL2022).

Profile required

  • A passion for fundamental research
  • Education (Master's degree or engineering degree) in computer science or robotics
  • Knowledge in machine learning / artificial intelligence
  • Proficiency in Python programming
  • Fluent in English

General information

How to apply

Interested candidates are invited to send their CVs, covering letters and transcripts to
Louis Annabi (annabi(at) and Sylvain Argentieri (sylvain.argentieri(at)

Official applications must be uploaded to ADUM:

This article was updated on April 17, 2024