Automated Story Generation

Humans use storytelling to entertain, share experiences, educate, and facilitate social bonding. An intelligent system that cannot generate a story is limited in its ability to interact with humans in naturalistic ways. Automated story generation, in particular, has been a grand challenge in artificial intelligence, requiring a system to construct a sequence of sentences that can be read and understood as a story. This research seeks fundamental advances in automated story generation and related fields such as machine reading, narrative understanding, and commonsense reasoning.

Representative Publications:

  • Symbolic planning for automated story generation.
    Mark O. Riedl and R. Michael Young
    Narrative Planning: Balancing Plot and Character
    Journal of Artificial Intelligence Research 39 (2010).
    arXiv Journal bibtex
  • Foundational work on neural story generation.
    Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl
    Event Representations for Automated Story Generation with Deep Neural Nets
    Proceedings of the 2018 Conference of the Association for the Advancement of Artificial Intelligence (2018).
    arXiv Conference bibtex
  • Goal-directed controllability of neural story generation systems (see the reward-shaping sketch after this list).
    Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, and Mark O. Riedl
    Controllable Neural Story Plot Generation via Reward Shaping
    Proceedings of the 2019 International Joint Conference on Artificial Intelligence (2019).
    arXiv Conference bibtex
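
The reward-shaping idea above can be made concrete with a small sketch. This is not the method from the paper, which operates over clustered event representations and a sequence-to-sequence model; it only illustrates how a story corpus can induce a reward that pulls generated events toward a goal verb. The function names and the toy corpus are invented for illustration.

    from collections import defaultdict

    def verb_distances(stories, goal_verb):
        """Average number of events between each verb and the goal verb,
        estimated from a corpus of per-story verb sequences."""
        totals, counts = defaultdict(float), defaultdict(int)
        for story in stories:
            if goal_verb not in story:
                continue
            goal_idx = story.index(goal_verb)
            for i, verb in enumerate(story[:goal_idx]):
                totals[verb] += goal_idx - i
                counts[verb] += 1
        return {v: totals[v] / counts[v] for v in totals}

    def shaped_reward(verb, distances, max_dist):
        """Larger reward for events whose verbs tend to occur near the goal."""
        if verb not in distances:
            return 0.0
        return max_dist - distances[verb]

    # Toy corpus of verb sequences; the goal is for the characters to marry.
    stories = [
        ["meet", "talk", "date", "propose", "marry"],
        ["meet", "argue", "reconcile", "propose", "marry"],
    ]
    dists = verb_distances(stories, goal_verb="marry")
    max_d = max(dists.values())
    print(shaped_reward("propose", dists, max_d))  # 3.0: close to the goal
    print(shaped_reward("meet", dists, max_d))     # 0.0: far from the goal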

Text Adventure Games

Natural language communication can be used to effect change in the real world. Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, provide a stepping stone toward real-world environments where agents must communicate to understand the state of the world and indirectly effect change in it. Text adventure games are also, by some metrics, harder than video games such as StarCraft; for example, the classic game Zork has never been beaten. We seek to develop new reinforcement learning agents that can reason about and solve language-based tasks involving long-term causal dependencies.
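
To make the setting concrete, here is a minimal sketch of the interaction loop a text adventure exposes to an agent. The game below is invented; real research environments (for example, the Jericho suite, which includes Zork) present a similar observation/action interface. Note the long-term causal dependency: taking the lamp is required before going north pays off.

    class ToyTextAdventure:
        """Invented mini-game with an RL-style observation/action loop."""

        def __init__(self):
            self.has_lamp = False

        def observe(self):
            if self.has_lamp:
                return "A lit cellar. A passage leads north."
            return "It is pitch black. You are likely to be eaten by a grue."

        def step(self, action):
            """Take a natural-language action; return (observation, reward, done)."""
            if action == "take lamp":
                self.has_lamp = True
                return self.observe(), 1.0, False
            if action == "go north" and self.has_lamp:
                return "You escape the cellar.", 10.0, True
            return self.observe(), 0.0, False

    env = ToyTextAdventure()
    for action in ["go north", "take lamp", "go north"]:
        obs, reward, done = env.step(action)
        print(f"{action!r} -> {obs} (reward={reward})")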

Representative Publications:

  • We introduce KG-DQN, a method for playing text-adventure games that uses knowledge graphs to handle partial observability and combinatorially large action spaces (a toy version of the graph-state idea follows this list).
    Prithviraj Ammanabrolu and Mark O. Riedl
    Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
    Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (2019).
    arXiv Conference bibtex
  • We improve on KG-DQN results with KG-A2C.
    Prithviraj Ammanabrolu and Matthew Hausknecht
    Graph Constrained Reinforcement Learning for Natural Language Action Spaces
    International Conference on Learning Representations (2020).
    OpenReview Conference bibtex
  • We show that large language models can be fine-tuned to generate knowledge graphs, improving sample efficiency. We further show that an agent that learns the structure of the game can set a new state of the art in Zork (specifically passing the Grue).
    Prithviraj Ammanabrolu, Ethan Tien, Matthew Hausknecht, and Mark O. Riedl
    How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds
    arXiv preprint arXiv:2006.07409 (2020).
    arXiv bibtex
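
The knowledge-graph idea running through these papers can be sketched in a few lines. The triple extraction below is a hand-written stand-in (the papers use OpenIE-style extraction or a fine-tuned language model) and all names are illustrative, but it shows the two roles the graph plays: a persistent belief state that accumulates across partial observations, and a filter that prunes a combinatorially large action space.

    def extract_triples(observation):
        """Toy rule-based extraction of (subject, relation, object) triples.
        Stand-in for the learned extraction used in the papers."""
        triples = set()
        if "lamp" in observation:
            triples.add(("cellar", "contains", "lamp"))
        if "grue" in observation:
            triples.add(("cellar", "contains", "grue"))
        return triples

    class GraphState:
        """Persistent belief state: the union of everything observed so far,
        which carries information across partially observable steps."""

        def __init__(self):
            self.triples = set()

        def update(self, observation):
            self.triples |= extract_triples(observation)

        def admissible_actions(self, templates):
            """Only instantiate action templates with known objects,
            pruning the combinatorially large action space."""
            objects = sorted({o for (_, _, o) in self.triples})
            return [t.format(obj) for t in templates for obj in objects]

    state = GraphState()
    state.update("You see a lamp. You hear a grue.")
    print(state.admissible_actions(["take {}", "examine {}"]))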

Explainable AI

AI systems are increasingly deployed in high-stakes settings that affect non-technical end users. Explanations can help users understand what an AI system is doing and the decisions it makes. However, we do not yet fully understand the human factors of explanations: how they create trust and expand the space of actions and remediations available to users. In this project we seek to understand how explanations affect users and how to design better explanation generation systems.

Representative Publications:

  • Introducing the concept of 'Rationale Generation' (a data-framing sketch follows this list).
    Brent Harrison, Upol Ehsan, and Mark O. Riedl
    Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations
    Proceedings of the 1st AAAI/ACM Conference on AI, Ethics, and Society (2018).
    arXiv Conference bibtex
  • Experiments on the human factors of rationale generation.
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark O. Riedl
    Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
    Proceedings of the 2019 ACM International Conference on Intelligent User Interfaces (2019).
    arXiv Conference bibtex
  • Explanation generation systems are part of larger socio-technical systems. We explore the effects of explanations on teams.
    Upol Ehsan, Q. Vera Liao, Michael J. Muller, Mark O. Riedl, and Justin D. Weisz
    Expanding Explainability: Towards Social Transparency in AI Systems
    Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021).
    arXiv Conference bibtex
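
The rationale-generation framing in the first paper can be sketched as a data problem: serialize each (state, action) pair into a source sequence, pair it with a human-written rationale as the target, and train any encoder-decoder on the resulting parallel corpus. The serialization and the toy pairs below are invented for illustration; the papers collected think-aloud rationales from people playing Frogger.

    def serialize(state, action):
        """Flatten a game state and chosen action into one source sequence."""
        cells = " ".join(f"{k}={v}" for k, v in sorted(state.items()))
        return f"{cells} action={action}"

    # Parallel corpus: source = serialized state-action, target = rationale.
    corpus = [
        (serialize({"frog": "lane2", "car": "lane2"}, "jump_up"),
         "I jumped forward because a car was about to hit me."),
        (serialize({"frog": "lane1", "car": "lane3"}, "wait"),
         "I waited because the next lane was still dangerous."),
    ]
    for source, target in corpus:
        print(source, "->", target)
    # Any encoder-decoder translation model can then be trained on these
    # pairs to generate rationales for unseen state-action inputs.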

Value Alignment

Value alignment is a property of an intelligent agent indicating that it can only pursue goals and activities that are beneficial to humans. How do we teach AI systems values? We introduce normative alignment: the idea that an agent should adhere to social and cultural norms. We present techniques for teaching AI systems sociocultural norms and for biasing agent behavior, whether that of a generative language model or a reinforcement learning agent, toward the agreed-upon norms of a particular society.
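
A minimal sketch of the idea, assuming a trained normative classifier is available: treat the classifier's score on a description of the agent's behavior as an auxiliary reward term. The stand-in classifier and function names below are invented for illustration; the papers listed next train a neural classifier from stories (e.g., Goofus & Gallant comic strips) to fill this role.

    def normative_score(text):
        """Stand-in for a trained classifier: probability that the
        described behavior is normative."""
        return 0.1 if "steal" in text else 0.9

    def total_reward(task_reward, action_description, weight=1.0):
        """Blend the task reward with a normative prior over behavior."""
        return task_reward + weight * (normative_score(action_description) - 0.5)

    print(total_reward(1.0, "You steal the medicine."))  # 0.6: discouraged
    print(total_reward(1.0, "You buy the medicine."))    # 1.4: encouraged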

Representative Publications:

  • We introduce a neural model that can classify textual descriptions of behavior as normative. The model achieves high zero-shot transfer across domains.
    Spencer Frazier, Md Sultan Al Nahian, Mark O. Riedl, and Brent Harrison
    Learning Norms from Stories: A Prior for Value Aligned Agents
    Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020).
    arXiv Conference bibtex
  • Using the above normative classifier as a reward signal, we apply reinforcement learning to reduce the amount of non-normative behavior descriptions generated by large pre-trained language models, making them safer.
    Xiangyu Peng, Siyan Li, Spencer Frazier, and Mark O. Riedl
    Reducing Non-Normative Text Generation from Language Models
    International Conference on Natural Language Generation (2020).
    arXiv Conference bibtex
  • We show how a normative classifier can be introduced as a source of reward for reinforcement learning agents, resulting in value-aligned agents that can learn altruistic behavior even while pursuing task rewards.
    Md Sultan Al Nahian, Spencer Frazier, Brent Harrison, and Mark O. Riedl
    Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior
    arXiv preprint arXiv:2104.09469 (2021).
    arXiv bibtex

Novelty Adaptation

Deep reinforcement learning systems have been demonstrated to be very effective at playing games, but also brittle to novelty. For example, when the rules of a game change (as under board game ‘house rules’), a pre-trained policy model may no longer suffice, requiring sample-inefficient trial-and-error learning to update it. In this work, we seek to develop algorithms that learn the “rules of the game”, detect when the rules change, and rapidly adapt to the novelty.
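
A minimal sketch of the detection step, assuming the "rules of the game" are captured by a learned transition model: flag novelty when an observed outcome contradicts what the model predicts. The tabular model below is a stand-in for illustration; the paper's agent additionally imagines how the changed rules might work and retrains on those imagined experiences.

    from collections import defaultdict

    class TransitionModel:
        """Tabular stand-in for learned 'rules of the game'."""

        def __init__(self):
            self.table = defaultdict(lambda: defaultdict(int))

        def update(self, state, action, next_state):
            self.table[(state, action)][next_state] += 1

        def predict(self, state, action):
            seen = self.table[(state, action)]
            return max(seen, key=seen.get) if seen else None

    def is_novel(model, state, action, observed_next):
        """Flag novelty when the observed outcome contradicts the learned rule."""
        predicted = model.predict(state, action)
        return predicted is not None and predicted != observed_next

    model = TransitionModel()
    model.update("pawn_e2", "advance_two", "pawn_e4")  # learned rule
    print(is_novel(model, "pawn_e2", "advance_two", "pawn_e4"))  # False
    print(is_novel(model, "pawn_e2", "advance_two", "pawn_e3"))  # True: rules changed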

Representative Publications:

  • We propose a new architecture for reinforcement learning agents that detects novelty in the "rules of the game", imagines how the new rules might work, and retrains on its imagined experiences.
    Xiangyu Peng, Jonathan C. Balloch, and Mark O. Riedl
    Detecting and Adapting to Novelty in Games
    Proceedings of the AAAI-21 Workshop on Reinforcement Learning in Games (2021).
    arXiv Workshop bibtex