
Preference Learning in Automated Negotiation Using Gaussian Uncertainty Models

  • Haralambie Leahu
  • Michael Kaisers
  • Tim Baarslag
    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

    Abstract

    In this paper, we propose a general two-objective Markov Decision Process (MDP) modeling paradigm for automated negotiation with incomplete information, in which preference elicitation alternates with negotiation actions, with the objective of optimizing negotiation outcomes. The key ingredient in our MDP framework is a stochastic utility model governed by a Gaussian law, formalizing the agent's belief (uncertainty) over the user's preferences. Our belief model is fairly general and can be updated in real time as new data becomes available, which makes it a fundamental modeling tool.
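    The abstract describes a Gaussian belief over user preferences that is updated in real time as elicitation data arrives. As an illustrative sketch (not the paper's actual model), the following assumes utilities are linear in offer features with weights `w`, and maintains a conjugate Gaussian belief over `w` via a rank-one Kalman-style update; the class name, feature encoding, and noise parameters are hypothetical:

    ```python
    import numpy as np

    class GaussianPreferenceBelief:
        """Gaussian belief over utility weights w, where an elicited
        rating of an offer with features x is modeled as y = x @ w + noise."""

        def __init__(self, dim, prior_var=1.0, noise_var=0.1):
            self.mean = np.zeros(dim)           # belief mean over weights
            self.cov = prior_var * np.eye(dim)  # belief covariance (uncertainty)
            self.noise_var = noise_var          # observation noise variance

        def update(self, x, y):
            """Incorporate one noisy utility observation y for offer x
            (conjugate linear-Gaussian, i.e. rank-one Kalman update)."""
            x = np.asarray(x, dtype=float)
            s = x @ self.cov @ x + self.noise_var  # predictive variance of y
            k = self.cov @ x / s                   # gain vector
            self.mean = self.mean + k * (y - x @ self.mean)
            self.cov = self.cov - np.outer(k, x @ self.cov)

        def predict(self, x):
            """Predictive mean and variance of the user's utility for offer x,
            which a negotiation policy could use to trade off elicitation
            against bidding."""
            x = np.asarray(x, dtype=float)
            return x @ self.mean, x @ self.cov @ x + self.noise_var
    ```

    Each call to `update` is O(dim²), so the belief can plausibly be refreshed after every elicitation query within a negotiation round; the shrinking predictive variance is what would let an MDP policy decide when further elicitation is no longer worth its cost.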
    Original language: English
    Title of host publication: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems
    Place of publication: Richland, SC
    Publisher: International Foundation for Autonomous Agents and Multiagent Systems
    Pages: 2087-2089
    Number of pages: 3
    ISBN (Print): 978-1-4503-6309-9
    Publication status: Published - 2019

    Publication series

    Name: AAMAS '19
    Publisher: International Foundation for Autonomous Agents and Multiagent Systems

    Keywords

    • automated negotiation, Gaussian processes, preference elicitation
