Agogino A., HolmesParker C. & Tumer K. 2012. Evolving large scale UAV communication system. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Philadelphia, PA, July.
Agogino A. & Tumer K. 2008. Analyzing and visualizing multi-agent rewards in dynamic and stochastic domains. Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS) 17(2), 320–338.
Barrett S., Stone P. & Kraus S. 2011. Empirical evaluation of ad hoc teamwork in the pursuit domain. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), May.
Bharathidasan A. & Ponduru V. 2003. Sensor networks – an overview. IEEE Potentials.
Challet D. & Johnson N. 2002. Optimal combination of imperfect objects. Physical Review Letters 89, 028701.
Devlin S. & Kudenko D. 2011. Theoretical considerations of potential-based reward shaping for multi-agent systems. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Farinelli A., Rogers A. & Jennings N. 2008. Maximising sensor network efficiency through agent-based coordination of sense/sleep schedules. In Workshop on Energy in Wireless Sensor Networks.
Grzes M. & Kudenko D. 2010. Online learning of shaping rewards in reinforcement learning. Neural Networks 23, 541–550.
Hayden S., Carrick C. & Yang Q. 1999. A catalog of agent coordination patterns. In Proceedings of the 3rd Annual Conference on Autonomous Agents.
HolmesParker C., Agogino A. & Tumer K. 2012. Evolving distributed resource sharing for CubeSat constellations. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Philadelphia, PA, July.
HolmesParker C., Agogino A. & Tumer K. 2013. Exploiting structure and utilizing agent-centric rewards to promote coordination in large multiagent systems (extended abstract). In Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Horling B. & Lesser V. 2005. A survey of multiagent organizational paradigms. Knowledge Engineering Review 19(4), 281–316.
Horling B., Mailler R. & Lesser V. 2004. A case study of organizational effects in a distributed sensor network. In Proceedings of the International Conference on Intelligent Agent Technology.
Howley E. & Duggan J. 2011. Investing in the commons: a study of openness and the emergence of cooperation. Advances in Complex Systems 14.
Knudson M. & Tumer K. 2010. Coevolution of heterogeneous multi-robot teams. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO).
Kok J. & Vlassis N. 2006. Collaborative multiagent reinforcement learning by payoff propagation. Journal of Machine Learning Research (JMLR) 7, 1789–1828.
Mehta N., Ray S., Tadepalli P. & Dietterich T. 2008. Automatic discovery and transfer of MAXQ hierarchies. In Proceedings of the 25th International Conference on Machine Learning (ICML).
Ng A., Harada D. & Russell S. 1999. Policy invariance under reward transformations: theory and application to reward shaping. In Proceedings of the 16th International Conference on Machine Learning (ICML).
Panait L. & Luke S. 2005. Cooperative multi-agent learning: the state of the art. Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS) 11(3), 387–434.
Rogers A., Farinelli A. & Jennings N. 2010. Self-organising sensors for wide area surveillance using the max-sum algorithm. In Self-Organizing Architectures, Lecture Notes in Computer Science 6090, 84–100. Springer.
Sutton R. & Barto A. 1998. Reinforcement Learning: An Introduction. MIT Press.
Tambe M., Bowring E., Jung H., Kaminka G., Maheswaran R., Marecki J., Modi P., Nair R., Okamoto S., Pearce J., Paruchuri P., Pynadath D., Scerri P., Schurr N. & Varakantham P. 2005. Conflicts in teamwork – hybrids to the rescue. In Proceedings of the 4th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Tham C. & Renaud J. 2005. Multi-agent systems on sensor networks: a distributed reinforcement learning approach. In Intelligent Sensors, Sensor Networks and Information Processing Conference (ISSNIP).
Tumer K. 2005. Designing agent utilities for coordinated, scalable, and robust multiagent systems. In Challenges in the Coordination of Large Scale Multiagent Systems, P. Scerri, R. Mailler & R. Vincent (eds). Springer, 173–188.
Vinyals M., Rodriguez-Aguilar J. & Cerquides J. 2010. A survey on sensor networks from a multiagent perspective. The Computer Journal.
Vrancx P., Verbeeck K. & Nowe A. 2008. Decentralized learning in Markov games. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 38(4), 976–981.
Williamson S., Gerding E. & Jennings N. 2009. Reward shaping for valuing communications during multi-agent coordination. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Wolpert D. H. & Tumer K. 2001. Optimal payoff functions for members of collectives. Advances in Complex Systems 4(2/3), 265–279.
Xu Y., Scerri P., Yu B., Okamoto S., Lewis M. & Sycara K. 2005. An integrated token-based algorithm for scalable coordination. In Proceedings of the 4th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).
Zhang C., Abdallah S. & Lesser V. 2009. Integrating organizational control into multi-agent learning. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).