Yu, Chao, Multi-agent learning in complex environments, Doctor of Philosophy thesis, School of Computer Science and Software Engineering, University of Wollongong, 2013. https://ro.uow.edu.au/theses/3993
Multi-agent learning has been widely applied to real-world problems in industrial and commercial domains such as manufacturing and business process control, telecommunication systems, traffic and transportation management, and electronic commerce. In multi-agent learning, the concurrent distributed learning processes make the environment non-stationary for each individual learner. Learning efficient coordinated/cooperative behaviors in such a non-stationary environment is a difficult problem, especially when agents must also deal with incomplete information. This thesis investigates these challenging issues in a number of different environments so that agents can achieve efficient coordinated/cooperative behaviors in those environments.
Specifically, this thesis:

1. studies coordinated multi-agent learning in loosely coupled multi-agent systems. Two coordinated multi-agent learning approaches are proposed that enable agents to learn coordinated behaviors by exploiting sparse interactions and different degrees of independence among agents in loosely coupled multi-agent systems. Unlike most existing approaches, the proposed approaches require neither prior knowledge of the domain structure nor assumptions such as global observability of the environment. Experimental results show that agents using the proposed approaches can learn efficient coordinated behaviors in domains of different sizes;
2. studies multi-agent learning for the emergence of social norms in networked multi-agent systems. A collective multi-agent learning framework is proposed to study the impact of agents' local collective learning on the emergence of social norms across a range of agent heterogeneities and network topologies. The framework models the opinion-aggregation process in human decision making, which distinguishes it from existing sequential learning frameworks for norm emergence in multi-agent learning. Experimental results reveal significant insights into how norm emergence in networked multi-agent systems can be manipulated and controlled through agents' local collective learning behaviors; and
3. investigates the use of emotions in multi-agent learning to enhance cooperation in social dilemmas. A two-layered emotional multi-agent learning framework is proposed to endow agents with internal cognitive and emotional capabilities that drive them to learn reciprocal behaviors in social dilemmas. Experimental results reveal that different ways of appraising emotions and different network topologies significantly affect agent learning behaviors in the proposed framework, and that under certain circumstances full cooperation can be achieved among the agents.
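The sparse-interaction idea in point 1 can be sketched in a minimal form: an otherwise independent Q-learner that augments its state with another agent's state only in states it has flagged as requiring coordination. The class, its fields, and the flagging mechanism below are illustrative assumptions, not the algorithms proposed in the thesis.

```python
import random
from collections import defaultdict

class SparseInteractionLearner:
    """Illustrative independent Q-learner that conditions on the other
    agent's state only in designated coordination states (a hypothetical
    sketch of learning with sparse interactions, not the thesis method)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)       # (state, action) -> estimated value
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.coordination_states = set()  # states flagged as needing coordination

    def key(self, own_state, other_state):
        # Expand the state with the other agent only where interaction matters;
        # elsewhere the agent learns over its own local state alone.
        if own_state in self.coordination_states:
            return (own_state, other_state)
        return own_state

    def choose(self, own_state, other_state):
        s = self.key(own_state, other_state)
        if random.random() < self.epsilon:
            return random.choice(self.actions)          # explore
        return max(self.actions, key=lambda a: self.q[(s, a)])  # exploit

    def update(self, own_state, other_state, action, reward, next_own, next_other):
        s = self.key(own_state, other_state)
        s2 = self.key(next_own, next_other)
        best_next = max(self.q[(s2, a)] for a in self.actions)
        # Standard Q-learning update over the (possibly expanded) state
        self.q[(s, action)] += self.alpha * (
            reward + self.gamma * best_next - self.q[(s, action)]
        )
```

Because most states stay unexpanded, the learner keeps the small state space of an independent learner and pays the cost of joint-state learning only where coordination is actually needed.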
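The opinion-aggregation process in point 2 can likewise be sketched in a toy form: each round, every agent on a network adopts the majority action among its neighbours' current choices. This stand-in dynamic (and the `neighbors` dictionary representing the network) is a hypothetical simplification of collective learning, not the thesis framework itself.

```python
import random
from collections import Counter

def collective_norm_emergence(neighbors, actions=(0, 1), rounds=200, seed=0):
    """Toy sketch of norm emergence via local opinion aggregation:
    every agent repeatedly adopts the majority action among its
    neighbours, breaking ties by keeping its own current action.
    `neighbors` maps each agent to a list of its neighbours."""
    rng = random.Random(seed)
    choice = {agent: rng.choice(actions) for agent in neighbors}
    for _ in range(rounds):
        new_choice = {}
        for agent, nbrs in neighbors.items():
            votes = Counter(choice[n] for n in nbrs)
            top_count = votes.most_common(1)[0][1]
            ties = [act for act, c in votes.items() if c == top_count]
            # Keep the agent's own action on a tie, otherwise join the majority
            new_choice[agent] = choice[agent] if choice[agent] in ties else ties[0]
        choice = new_choice
    return choice
```

On a small fully connected network with an odd number of agents, this dynamic settles on a single shared action, i.e., a norm emerges; on sparser topologies local conventions can persist, which is the kind of topological effect the thesis examines.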
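For point 3, one simple way an emotional layer can reshape a social dilemma is to add appraisal terms on top of the material payoff. The guilt/anger appraisal rule below is a hypothetical illustration (the thesis framework's appraisal mechanisms may differ): with a sufficiently strong guilt term, defecting against a cooperator stops being the best response in a Prisoner's Dilemma.

```python
def emotional_payoff(own_move, other_move, material, guilt=2.0, anger=1.0):
    """Hypothetical emotional appraisal layered on a material payoff:
    guilt penalises defecting against a cooperator, anger penalises
    being exploited. `material` maps (own_move, other_move) to payoff;
    moves are "C" (cooperate) or "D" (defect)."""
    payoff = material[(own_move, other_move)]
    if own_move == "D" and other_move == "C":
        payoff -= guilt   # internal cost of exploiting a cooperator
    if own_move == "C" and other_move == "D":
        payoff -= anger   # frustration at being exploited
    return payoff

# Standard Prisoner's Dilemma material payoffs: T=5, R=3, P=1, S=0
material = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
```

With `guilt=3.0`, the appraised temptation payoff drops to 2, below the mutual-cooperation payoff of 3, so a learner maximizing the emotional payoff is steered toward reciprocal cooperation.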
In summary, this thesis studies multi-agent learning of coordination and cooperation in a variety of complex environments. Experimental results demonstrate the efficiency and effectiveness of the proposed multi-agent learning approaches and frameworks in those environments.