Publication year: 1385 (Solar Hijri)

Published in: 12th Annual Conference of the Computer Society of Iran

Number of pages: 8

Author(s):

Farzad Rastegar – Control and Intelligent Processing Center of Excellence, Electrical and Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran
Majid Nili Ahmadabadi – Computer Eng. Department, University of Tehran, North Karegar, Tehran, Iran; School of Cognitive Sciences, Institute for Studies in Theoretical Physics and Mathematics, Niavaran, Tehran, Iran

Abstract:

In this paper, we propose a novel approach in which a reinforcement learning agent attempts to understand its environment through meaningful temporally extended concepts in an unsupervised way. Our approach is inspired by findings in neuroscience on the role of mirror neurons in action-based abstraction. Because in many cases the best decision cannot be made from instantaneous sensory data alone, we seek a framework for learning temporally extended concepts from sequences of sensory-action data. To direct the agent toward gathering informative experience for concept learning, we propose a reinforcement learning mechanism that exploits the agent's own experience. Experimental results demonstrate the capability of the proposed approach in retrieving meaningful concepts from the environment. The concepts, and the way they are defined, are designed so that they not only ease decision making but can also be used in other applications, as elaborated in the paper.
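The abstract describes an agent that learns temporally extended concepts from sequences of sensory-action data gathered while learning by reinforcement. As a loose illustration only (the corridor task, the fixed window length, and every name below are assumptions for this sketch, not the paper's actual method), one could record the agent's (state, action) stream during tabular Q-learning and treat frequently recurring subsequences as candidate concepts:

```python
import random
from collections import Counter, defaultdict

# Hedged sketch: a tiny tabular Q-learning agent on a 1-D corridor.
# While learning, it records its (state, action) experience stream;
# recurring fixed-length subsequences of that stream stand in for
# candidate temporally extended "concepts". This is an illustrative
# assumption, not the mechanism proposed in the paper.

def run(episodes=200, length=4, window=3, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)              # tabular Q-values: q[(state, action)]
    actions = (-1, +1)                  # step left / step right
    stream = []                         # full (state, action) experience stream
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            # epsilon-greedy action selection; ties broken at random
            if rng.random() < 0.1:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: (q[(s, b)], rng.random()))
            s2 = min(max(s + a, 0), length)
            r = 1.0 if s2 == length else 0.0
            # one-step Q-learning update
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, b)] for b in actions)
                                - q[(s, a)])
            stream.append((s, a))
            s = s2
            if r:
                break                   # episode ends at the goal state
    # candidate "concepts": the most frequent length-`window` subsequences
    subseqs = Counter(tuple(stream[i:i + window])
                      for i in range(len(stream) - window + 1))
    return q, subseqs.most_common(3)
```

Calling `run()` returns the learned Q-table together with the three most frequent length-3 experience windows; in this toy setup the dominant windows are typically runs of right-moving steps, playing the role of a temporally extended "move toward the goal" concept. The paper's actual concept-learning and experience-directing mechanisms are more elaborate than this frequency count.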