Playing games between humans and robots has become a widespread human-robot confrontation (HRC) application. Although many approaches have been proposed to enhance tracking accuracy by combining different sources of information, the intelligence of the robot and the anti-interference ability of the motion capture system remain open problems. In this paper, we present an adaptive reinforcement learning (RL) based multimodal data fusion (AdaRL-MDF) framework that teaches a robot hand to play the Rock-Paper-Scissors (RPS) game with humans. It includes an adaptive learning mechanism to update the ensemble classifier, an RL model providing intellectual wisdom to the robot, and a multimodal data fusion structure offering resistance to interference. The corresponding experiments verify these functions of the AdaRL-MDF model. Comparisons of accuracy and computation time show the high performance of the ensemble model, which combines a k-nearest neighbor (k-NN) classifier with a deep convolutional neural network (DCNN). In addition, the depth vision-based k-NN classifier achieves 100% identification accuracy, so the predicted gestures can be regarded as ground truth. A demonstration illustrates the real-world feasibility of the HRC application. The theory underlying this model offers a path toward developing HRC intelligence.
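The ensemble of a k-NN classifier and a deep network described in the abstract can be sketched as a soft-vote fusion of two probabilistic classifiers. This is a minimal illustration, not the paper's implementation: the synthetic features, the `MLPClassifier` standing in for the DCNN, and the probability-averaging fusion rule are all assumptions made for the sketch.

```python
# Hedged sketch: fuse a k-NN classifier with a neural network
# (an MLP stands in here for the paper's DCNN) by averaging their
# predicted class probabilities, i.e. a soft-vote ensemble.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for gesture features (3 classes: rock/paper/scissors)
X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Soft-vote fusion: average the two models' class-probability outputs
proba = (knn.predict_proba(X_te) + net.predict_proba(X_te)) / 2
pred = proba.argmax(axis=1)
accuracy = (pred == y_te).mean()
print(f"ensemble accuracy: {accuracy:.2f}")
```

Averaging probabilities rather than hard labels lets a confident model outvote an uncertain one; the paper's adaptive mechanism would additionally update how the members are weighted over time.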
Wen Qi, Haoyu Fan, Hamid Reza Karimi, Hang Su (2023). An adaptive reinforcement learning-based multimodal data fusion framework for human–robot confrontation gaming. Neural Networks, 164, pp. 489-496, DOI: 10.1016/j.neunet.2023.04.043.
Type: Article
Year: 2023
Authors: 4
Datasets: 0
Total Files: 0
Language: English
Journal: Neural Networks
DOI: 10.1016/j.neunet.2023.04.043