We propose an intelligent interface for the mobile software agents we have developed. The interface has two roles: it visualizes the mobile software agents using augmented reality, and it lets a human user control them by gesture through a motion capture camera. Through the augmented reality view, the user can intuitively grasp the agents' activities. To capture proactive input from the user, we use a Kinect motion capture camera to recognize the user's intention. The Kinect is mounted on a mobile robot near the user; this robot acts as a mediator that recognizes the user's intention and conveys it to the mobile software agents controlling the multiple mobile robots. A mobile software agent searches for a target and, when it finds one, uses a mobile robot to seize it. When the user points at the target or at a mobile robot, the monitoring software captures the user's intention from the Kinect data and conveys the corresponding instruction to a mobile agent. That agent migrates from robot to robot until it finds the searching agent, and hands it the instruction specifying which robot it should move to. The agent migration is visualized as an image moving to the robot that was pointed at. This paper reports on this intelligent user interface, which mediates the interaction between the human user and the mobile agents, as a first step toward a complete intelligent human-computer interface. We demonstrate its usefulness through preliminary experiments.
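The migration-and-hand-off behavior described above can be sketched in a few lines. This is a minimal illustrative model, not the authors' implementation: the class names (`Robot`, `MessengerAgent`), the `deliver` method, and the instruction format are all assumptions made for exposition. A messenger agent hops from robot to robot until it reaches the one hosting the searching agent, then hands over the user's instruction.

```python
# Minimal sketch (assumed names, not the authors' code) of the
# messenger-agent pattern: an agent migrates across robots until it
# finds the searching agent, then hands over the user's instruction.

class Robot:
    def __init__(self, name, hosts_searching_agent=False):
        self.name = name
        self.hosts_searching_agent = hosts_searching_agent
        self.instruction = None  # filled in when the instruction is delivered

class MessengerAgent:
    def __init__(self, instruction):
        self.instruction = instruction  # e.g. derived from a pointing gesture

    def deliver(self, robots):
        """Visit robots in turn; hand the instruction to the one hosting
        the searching agent. Returns the names of the robots visited."""
        visited = []
        for robot in robots:
            visited.append(robot.name)  # migration step: agent now runs here
            if robot.hosts_searching_agent:
                robot.instruction = self.instruction  # hand-off, then stop
                break
        return visited

robots = [Robot("r1"), Robot("r2", hosts_searching_agent=True), Robot("r3")]
agent = MessengerAgent("move to the robot the user pointed at")
print(agent.deliver(robots))  # ['r1', 'r2'] — migration stops after hand-off
print(robots[1].instruction)
```

In the actual system the migration would carry the agent's code and state between hosts and the instruction would encode the Kinect-derived target; this sketch only shows the control flow of the search-and-hand-off loop.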