Artem Lykov
Orcid: 0000-0001-6119-2366
According to our database, Artem Lykov authored at least 21 papers between 2022 and 2025.
Collaborative distances:
Bibliography
2025
UAV-CodeAgents: Scalable UAV Mission Planning via Multi-Agent ReAct and Vision-Language Reasoning.
CoRR, May, 2025
CoRR, March, 2025
UAV-VLPA*: A Vision-Language-Path-Action System for Optimal Route Generation on a Large Scales.
CoRR, March, 2025
CognitiveDrone: A VLA Model and Evaluation Benchmark for Real-Time Cognitive Task Solving and Reasoning in UAVs.
CoRR, March, 2025
CoRR, February, 2025
Proceedings of the 20th ACM/IEEE International Conference on Human-Robot Interaction, 2025
GestLLM: Advanced Hand Gesture Interpretation via Large Language Models for Human-Robot Interaction.
Proceedings of the 20th ACM/IEEE International Conference on Human-Robot Interaction, 2025
2024
CoRR, 2024
Industry 6.0: New Generation of Industry driven by Generative AI and Swarm of Heterogeneous Robots.
CoRR, 2024
CoRR, 2024
Co-driver: VLM-based Autonomous Driving Assistant with Human-like Behavior and Understanding for Complex Road Scenes.
CoRR, 2024
CognitiveOS: Large Multimodal Model based System to Endow Any Type of Robot with Generative AI.
CoRR, 2024
Bi-VLA: Vision-Language-Action Model-Based System for Bimanual Robotic Dexterous Manipulations.
Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 2024
Proceedings of the IEEE International Symposium on Mixed and Augmented Reality Adjunct, 2024
CognitiveDog: Large Multimodal Model Based System to Translate Vision and Language into Action of Quadruped Robot.
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024
DogSurf: Quadruped Robot Capable of GRU-based Surface Recognition for Blind Person Navigation.
Proceedings of the Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, 2024
LLM-BRAIn: AI-driven Fast Generation of Robot Behaviour Tree based on Large Language Model.
Proceedings of the 2nd International Conference on Foundation and Large Language Models, 2024
Proceedings of the 2nd International Conference on Foundation and Large Language Models, 2024
VLM-Auto: VLM-based Autonomous Driving Assistant with Human-like Behavior and Understanding for Complex Road Scenes.
Proceedings of the 2nd International Conference on Foundation and Large Language Models, 2024
2023
LLM-MARS: Large Language Model for Behavior Tree Generation and NLP-enhanced Dialogue in Multi-Agent Robot Systems.
CoRR, 2023
2022
DeltaFinger: A 3-DoF Wearable Haptic Display Enabling High-Fidelity Force Vector Presentation at a User Finger.
Proceedings of the Haptic Interaction - 5th International Conference, 2022