Planning and Evaluating Multiagent Influences Under Reward Uncertainty (Extended Abstract)


Publication Info

Stefan Witwicki, Inn-Tung Chen, Edmund Durfee, and Satinder Singh. In Proceedings of the Eleventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS-2012), pages 1277-1278. Valencia, Spain. June 2012.


Abstract

Forming commitments about abstract influences that agents can exert on one another has shown promise in improving the tractability of multiagent coordination under uncertainty. We now extend this approach to domains with meta-level reward-model uncertainty. Intuitively, an agent may actually improve collective performance by forming a weaker commitment that allows more latitude to adapt its policy as it refines its reward model. To account for reward uncertainty as such, we introduce and contrast three new techniques.


Bibtex
@inproceedings{Witwicki:AAMAS2012b,
    author = {Stefan Witwicki and Inn-Tung Chen and Edmund Durfee and Satinder Singh},
    title = {Planning and Evaluating Multiagent Influences Under 
	    Reward Uncertainty (Extended Abstract)},
    booktitle = {Proceedings of the Eleventh International Conference on Autonomous 
		Agents and Multiagent Systems (AAMAS-2012)},
    month = {June},
    year = {2012},
    pages = {1277--1278},
    address = {Valencia, Spain}
}

Download Paper
[PDF]
