Future Work


Currently, the user's facial features are localized, but their exact shapes are not extracted. In the future, we want to incorporate precise tracking methods such as snakes (active contour models) to analyze the shapes of facial features and recognize emotions. We also plan to exploit the binocular view for depth computation to improve the accuracy of the visual system.
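As a minimal illustration of the planned binocular depth computation, the sketch below applies the standard pinhole stereo relation Z = f * B / d to a disparity measurement. The focal length and baseline are assumed placeholder values, not Aryan's actual calibration.

    import numpy as np

    # Hypothetical calibration values for an Aryan-like stereo head.
    FOCAL_LENGTH_PX = 700.0   # focal length in pixels (assumed)
    BASELINE_M = 0.065        # distance between the two cameras in meters (assumed)

    def depth_from_disparity(disparity_px):
        """Standard pinhole stereo relation: Z = f * B / d.

        disparity_px: horizontal disparity between the left and right
        views, in pixels. Zero disparity (a point at infinity) maps to inf.
        """
        disparity = np.asarray(disparity_px, dtype=float)
        with np.errstate(divide="ignore"):
            return FOCAL_LENGTH_PX * BASELINE_M / disparity

    # Example: a facial feature matched with a 35-pixel disparity
    # lies roughly 1.3 m from the cameras.
    print(depth_from_disparity(35.0))  # ~1.3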

At present, Aryan's facial features cannot express some important emotions, such as happiness and sadness, because of the rigidity of Aryan's lips. Reproducing lip movements with satisfactory accuracy requires at least four motors. In the future, we plan to add flexible lips.

Another important capability missing from Aryan is learning. We did not incorporate learning in this phase of the project because of deadline constraints. In recent research on learning mechanisms for individual (as opposed to collective) social agents, imitative learning appears to be a promising mechanism, so we plan to incorporate some form of imitative learning.

So far, Aryan's brain acts purely reactively. Integrating it with deliberative capabilities for high-level tasks could yield more interesting behaviors such as non-verbal dialogue, turn-taking, decision-making, and reasoning. Combining it with a richer emotion model could also enable emotional decision-making, allowing acceptable sub-optimal solutions to be reached in real time.
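A rough sketch of such a hybrid control scheme is given below: a fast reactive layer runs every cycle, while a slower deliberative layer posts plans that override it when available. All names here are hypothetical placeholders, not Aryan's actual interfaces.

    import queue
    import threading
    import time

    # Hypothetical skeleton of a hybrid reactive/deliberative loop.
    plan_queue = queue.Queue()

    def deliberative_layer():
        """Slow layer: reasons about high-level goals and posts plans."""
        while True:
            time.sleep(1.5)                    # stand-in for expensive reasoning
            plan_queue.put("initiate turn-taking")

    def reactive_layer(percept):
        """Fast layer: maps the current percept directly to a behavior."""
        return "track face" if percept == "face detected" else "idle"

    threading.Thread(target=deliberative_layer, daemon=True).start()

    for _ in range(3):                         # a few control cycles
        action = reactive_layer("face detected")   # always runs, every cycle
        try:
            action = plan_queue.get_nowait()       # a deliberative plan overrides
        except queue.Empty:
            pass
        print("executing:", action)
        time.sleep(1.0)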

Currently, neck movements are not coordinated with the eyes; the head only performs fixed rotations. Achieving life-like behavior requires an efficient decomposition of the total gaze shift into coordinated head and eye movements.
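One simple decomposition scheme, sketched below under assumed joint limits, lets the eyes absorb as much of the gaze shift as their range allows and recruits the neck only for the remainder; the 30-degree eye range is an illustrative assumption, not a measured property of Aryan.

    EYE_RANGE_DEG = 30.0  # assumed oculomotor range of the eyes (+/- degrees)

    def decompose_gaze(gaze_shift_deg):
        """Split a desired gaze shift into eye and head contributions.

        The eyes absorb as much of the shift as their mechanical range
        allows; the neck rotates only to cover the remainder, which keeps
        small glances eye-only, as in human gaze behavior.
        """
        eye = max(-EYE_RANGE_DEG, min(EYE_RANGE_DEG, gaze_shift_deg))
        head = gaze_shift_deg - eye
        return eye, head

    # A 20-degree glance uses only the eyes; a 70-degree shift combines
    # 30 degrees of eye rotation with 40 degrees of neck rotation.
    print(decompose_gaze(20.0))   # (20.0, 0.0)
    print(decompose_gaze(70.0))   # (30.0, 40.0)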

Ultimately, we would like to add other sensorimotor capabilities such as vocalization, speech production, and simple natural language processing. We would also like to develop a body for Aryan, particularly hands for manipulating objects.

