Mehdi Khamassi

ORCID: 0000-0002-2515-1046

According to our database, Mehdi Khamassi authored at least 50 papers between 2005 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Artificial consciousness. Some logical and conceptual preliminaries.
CoRR, 2024

Purpose for Open-Ended Learning Robots: A Computational Taxonomy, Definition, and Operationalisation.
CoRR, 2024

2023
Computational Model of the Transition from Novice to Expert Interaction Techniques.
ACM Trans. Comput. Hum. Interact., October, 2023

Zero-shot model-free learning of periodic movements for a bio-inspired soft-robotic arm.
Frontiers Robotics AI, October, 2023

Reducing Computational Cost During Robot Navigation and Human-Robot Interaction with a Human-Inspired Reinforcement Learning Architecture.
Int. J. Soc. Robotics, August, 2023

Editorial: Neurorobotics explores the human senses.
Frontiers Neurorobotics, June, 2023

2022
Editorial: Computational models of affordance for robotics.
Frontiers Neurorobotics, September, 2022

Model-Based and Model-Free Replay Mechanisms for Reinforcement Learning in Neurorobotics.
Frontiers Neurorobotics, September, 2022

Reproduction of Human Demonstrations with a Soft-Robotic Arm based on a Library of Learned Probabilistic Movement Primitives.
Proceedings of the 2022 International Conference on Robotics and Automation, 2022

2021
Task Driven Skill Learning in a Soft-Robotic Arm.
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, 2021

Stability Analysis of Bio-inspired Source Seeking with Noisy Sensors.
Proceedings of the 2021 European Control Conference, 2021

2020
A Novel Reinforcement-Based Paradigm for Children to Teach the Humanoid Kaspar Robot.
Int. J. Soc. Robotics, 2020

Special Issue on Behavior Adaptation, Interaction, and Artificial Perception for Assistive Robotics.
Int. J. Soc. Robotics, 2020

DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics.
CoRR, 2020

Modeling awake hippocampal reactivations with model-based bidirectional search.
Biol. Cybern., 2020

Adaptive Coordination of Multiple Learning Strategies in Brains and Robots.
Proceedings of the Theory and Practice of Natural Computing - 9th International Conference, 2020

Coping with the variability in humans reward during simulated human-robot interactions through the coordination of multiple learning strategies.
Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication, 2020

How to Reduce Computation Time While Sparing Performance During Robot Navigation? A Neuro-Inspired Architecture for Autonomous Shifting Between Model-Based and Model-Free Learning.
Proceedings of the Biomimetic and Biohybrid Systems - 9th International Conference, 2020

Periodic movement learning in a soft-robotic arm.
Proceedings of the 2020 IEEE International Conference on Robotics and Automation, 2020

2019
Using Reinforcement Learning to Attenuate for Stochasticity in Robot Navigation Controllers.
Proceedings of the IEEE Symposium Series on Computational Intelligence, 2019

A Deep Learning Approach for Multi-View Engagement Estimation of Children in a Child-Robot Joint Attention Task.
Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019

2018
Robot Fast Adaptation to Changes in Human Engagement During Simulated Dynamic Social Interaction With Active Exploration in Parameterized Reinforcement Learning.
IEEE Trans. Cogn. Dev. Syst., 2018

Interactions of spatial strategies producing generalization gradient and blocking: A computational approach.
PLoS Comput. Biol., 2018

Adaptive reinforcement learning with active state-specific exploration for engagement maximization during simulated child-robot interaction.
Paladyn J. Behav. Robotics, 2018

Sequential Action Selection and Active Sensing for Budgeted Localization in Robot Navigation.
Int. J. Semantic Comput., 2018

Toward Self-Aware Robots.
Frontiers Robotics AI, 2018

A Framework for Robot Learning During Child-Robot Interaction with Human Engagement as Reward Signal.
Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication, 2018

Prioritized Sweeping Neural DynaQ with Multiple Predecessors, and Hippocampal Replays.
Proceedings of the Biomimetic and Biohybrid Systems - 7th International Conference, 2018

Computational Model of the User's Learning Process When Cued by a Social Versus Non-Social Agent.
Proceedings of the 6th International Conference on Human-Agent Interaction, 2018

2017
Sustainable computational science: the ReScience initiative.
PeerJ Comput. Sci., 2017

Adaptive coordination of working-memory and reinforcement learning in non-human primates performing a trial-and-error problem solving task.
CoRR, 2017

Reinforcement Learning for Bio-Inspired Target Seeking.
Proceedings of the Towards Autonomous Robotic Systems - 18th Annual Conference, 2017

Active Exploration and Parameterized Reinforcement Learning Applied to a Simulated Human-Robot Interaction Task.
Proceedings of the First IEEE International Conference on Robotic Computing, 2017

Sequential Action Selection for Budgeted Localization in Robots.
Proceedings of the First IEEE International Conference on Robotic Computing, 2017

A drift diffusion model of biological source seeking for mobile robots.
Proceedings of the 2017 IEEE International Conference on Robotics and Automation, 2017

2016
Active exploration in parameterized reinforcement learning.
CoRR, 2016

2015
Which criteria for autonomously shifting between goal-directed and habitual behaviors in robots?
Proceedings of the 2015 Joint IEEE International Conference on Development and Learning and Epigenetic Robotics, 2015

Respective Advantages and Disadvantages of Model-based and Model-free Reinforcement Learning in a Robotics Neuro-inspired Cognitive Architecture.
Proceedings of the 6th Annual International Conference on Biologically Inspired Cognitive Architectures, 2015

2014
Modelling Individual Differences in the Form of Pavlovian Conditioned Approach Responses: A Dual Learning Systems Approach with Factored Representations.
PLoS Comput. Biol., 2014

Design of a Control Architecture for Habit Learning in Robots.
Proceedings of the Biomimetic and Biohybrid Systems - Third International Conference, 2014

2012
Which Temporal Difference Learning Algorithm Best Reproduces Dopamine Activity in a Multi-choice Task?
Proceedings of the From Animals to Animats 12, 2012

Neuro-inspired Navigation Strategies Shifting for Robots: Integration of a Multiple Landmark Taxon Strategy.
Proceedings of the Biomimetic and Biohybrid Systems - First International Conference, 2012

2011
Robot Cognitive Control with a Neurophysiologically Inspired Reinforcement Learning Model.
Frontiers Neurorobotics, 2011

2010
Principal component analysis of ensemble recordings reveals cell assemblies at high temporal resolution.
J. Comput. Neurosci., 2010

A Computational Model of Integration between Reinforcement Learning and Task Monitoring in the Prefrontal Cortex.
Proceedings of the From Animals to Animats 11, 2010

2008
Analyzing Interactions between Navigation Strategies Using a Computational Model of Action Selection.
Proceedings of the Spatial Cognition VI. Learning, 2008

2007
Complementary roles of the rat prefrontal cortex and striatum in reward-based learning and shifting navigation strategies. (Rôles complémentaires du cortex préfrontal et du striatum dans l'apprentissage et le changement de stratégies de navigation basées sur la récompense chez le rat).
PhD thesis, 2007

2006
Combining Self-organizing Maps with Mixtures of Experts: Application to an Actor-Critic Model of Reinforcement Learning in the Basal Ganglia.
Proceedings of the From Animals to Animats 9, 2006

2005
The Psikharpax project: towards building an artificial rat.
Robotics Auton. Syst., 2005

Actor-Critic Models of Reinforcement Learning in the Basal Ganglia: From Natural to Artificial Rats.
Adapt. Behav., 2005

