Agogino A., HolmesParker C. & Tumer K. 2012. Evolving large scale UAV communication system. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Philadelphia, PA, July.

Agogino A. & Tumer K. 2008. Analyzing and visualizing multi-agent rewards in dynamic and stochastic domains. Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS) 17(2), 320–338.

Barrett S., Stone P. & Kraus S. 2011. Empirical evaluation of ad hoc teamwork in the pursuit domain. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), May.

Bharathidasan A. & Ponduru V. 2003. Sensor networks – an overview. IEEE Potentials.

Challet D. & Johnson N. 2002. Optimal combination of imperfect objects. Physical Review Letters 89, 028701.

Devlin S. & Kudenko D. 2011. Theoretical considerations of potential-based reward shaping for multi-agent systems. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Farinelli A., Rogers A. & Jennings N. 2008. Maximising sensor network efficiency through agent-based coordination of sense/sleep schedules. In Workshop on Energy in Wireless Sensor Networks.

Grzes M. & Kudenko D. 2010. Online learning of shaping rewards in reinforcement learning. Neural Networks 23, 541–550.

Hayden S., Carrick C. & Yang Q. 1999. A catalog of agent coordination patterns. In Proceedings of the 3rd Annual Conference on Autonomous Agents.

HolmesParker C., Agogino A. & Tumer K. 2012. Evolving distributed resource sharing for cubesat constellations. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO), Philadelphia, PA, July.

HolmesParker C., Agogino A. & Tumer K. 2013. Exploiting structure and utilizing agent-centric rewards to promote coordination in large multiagent systems (extended abstract). In Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Horling B. & Lesser V. 2005. A survey of multiagent organizational paradigms. Knowledge Engineering Review 19(4), 281–316.

Horling B., Mailler R. & Lesser V. 2004. A case study of organizational effects in a distributed sensor network. In Proceedings of the International Conference on Intelligent Agent Technology.

Howley E. & Duggan J. 2011. Investing in the commons: a study of openness and the emergence of cooperation. Advances in Complex Systems 14.

Knudson M. & Tumer K. 2010. Coevolution of heterogeneous multi-robot teams. In Genetic and Evolutionary Computation Conference (GECCO).

Kok J. & Vlassis N. 2006. Collaborative multiagent reinforcement learning by payoff propagation. Journal of Machine Learning Research (JMLR) 7, 1789–1828.

Mehta N., Ray S., Tadepalli P. & Dietterich T. 2008. Automatic discovery and transfer of MAXQ hierarchies. In Proceedings of the 25th International Conference on Machine Learning (ICML).

Ng A., Harada D. & Russell S. 1999. Policy invariance under reward transformations: theory and application to reward shaping. In Proceedings of the International Conference on Machine Learning (ICML).

Panait L. & Luke S. 2005. Cooperative multi-agent learning – the state of the art. Journal of Autonomous Agents and Multi-Agent Systems (JAAMAS) 11(3), 387–434.

Rogers A., Farinelli A. & Jennings N. 2010. Self-organising sensors for wide area surveillance using the max-sum algorithm. In Self-Organizing Architectures, Lecture Notes in Computer Science 6090, 84–100. Springer.

Sutton R. & Barto A. 1998. Reinforcement Learning: An Introduction. MIT Press.

Tambe M., Bowring E., Jung H., Kaminka G., Maheswaran R., Marecki J., Modi P., Nair R., Okamoto S., Pearce J., Paruchuri P., Pynadath D., Scerri P., Schurr N. & Varakantham P. 2005. Conflicts in teamwork – hybrids to the rescue. In Proceedings of the 4th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Tham C. & Renaud J. 2005. Multi-agent systems on sensor networks: a distributed reinforcement learning approach. In Intelligent Sensors, Sensor Networks and Information Processing Conference (ISSNIP).

Tumer K. 2005. Designing agent utilities for coordinated, scalable, and robust multiagent systems. In Challenges in the Coordination of Large Scale Multiagent Systems, P. Scerri, R. Mailler & R. Vincent (eds). Springer, 173–188.

Vinyals M., Rodriguez-Aguilar J. & Cerquides J. 2010. A survey on sensor networks from a multiagent perspective. The Computer Journal.

Vrancx P., Verbeeck K. & Nowe A. 2008. Decentralized learning in Markov games. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 38(4), 976–981.

Williamson S., Gerding E. & Jennings N. 2009. Reward shaping for valuing communications during multi-agent coordination. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Wolpert D. H. & Tumer K. 2001. Optimal payoff functions for members of collectives. Advances in Complex Systems 4(2/3), 265–279.

Xu Y., Scerri P., Yu B., Okamoto S., Lewis M. & Sycara K. 2005. An integrated token-based algorithm for scalable coordination. In Proceedings of the 4th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Zhang C., Abdallah S. & Lesser V. 2009. Integrating organizational control into multi-agent learning. In Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems (AAMAS).