
Automated Story Generation

Humans use storytelling to entertain, share experiences, educate, and facilitate social bonding. An intelligent system that cannot generate stories is limited in its ability to interact with humans in naturalistic ways. Automated Story Generation has long been a grand challenge in artificial intelligence: it requires a system to construct a sequence of sentences that can be read and understood as a story. This research seeks fundamental advances in automated story generation and related fields such as machine reading, narrative understanding, and commonsense reasoning.
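
A recurring pattern across the publications below is to steer a generative model with an external signal such as a goal, a reward model, or a reader model. The following minimal sketch illustrates only that generate-then-rerank loop; generate_candidates and goal_score are illustrative placeholders standing in for a neural language model and a learned guide, not the systems described in the papers.

    # Minimal sketch of controllable story generation as generate-then-rerank.
    # Both helpers below are illustrative placeholders (assumptions), not the
    # cited models: a real system would sample from a language model and score
    # candidates with a learned reward, reader, or commonsense model.

    def generate_candidates(story_so_far, num_candidates=3):
        """Placeholder for sampling candidate next sentences from a language model."""
        return [
            "The knight rode toward the dark forest.",
            "The knight sharpened her sword by the fire.",
            "The knight fell asleep at the tavern.",
        ][:num_candidates]

    def goal_score(candidate, goal):
        """Placeholder for scoring how much a sentence moves the story toward a goal."""
        return sum(word in candidate.lower() for word in goal.lower().split())

    def generate_story(opening, goal, num_sentences=3):
        story = [opening]
        for _ in range(num_sentences):
            candidates = generate_candidates(" ".join(story))
            story.append(max(candidates, key=lambda c: goal_score(c, goal)))
        return " ".join(story)

    print(generate_story("A knight set out at dawn.", goal="reach the dark forest"))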

Representative Publications:

  • Symbolic planning for automated story generation.
    Mark O. Riedl and R. Michael Young
    Narrative Planning: Balancing Plot and Character
    Journal of Artificial Intelligence Research 39 (2010).
    arXiv Journal bibtex
  • Foundational work on neural story generation.
    Lara J. Martin, Prithviraj Ammanabrolu, Xinyu Wang, William Hancock, Shruti Singh, Brent Harrison, and Mark O. Riedl
    Event Representations for Automated Story Generation with Deep Neural Nets
    Proceedings of the 2018 AAAI Conference on Artificial Intelligence (2018).
    arXiv Conference bibtex
  • Reinforcement learning fine-tuning of language models for goal-directedness.
    Pradyumna Tambwekar, Murtaza Dhuliawala, Lara J. Martin, Animesh Mehta, Brent Harrison, and Mark O. Riedl
    Controllable Neural Story Plot Generation via Reward Shaping
    Proceedings of the 2019 International Joint Conference on Artificial Intelligence (2019).
    arXiv Conference bibtex
  • Checking the generation of a language model against reader commonsense expectations.
    Xiangyu Peng, Siyan Li, Sarah Wiegreffe, and Mark O. Riedl
    Inferring the Reader: Guiding Automated Story Generation with Commonsense Reasoning
    Findings of EMNLP 2022 (2022).
    arXiv bibtex
  • A story generation system that builds a model of the reader to make better story decisions, including working toward a story goal.
    Xiangyu Peng, Kaige Xie, Amal Alabdulkarim, Harshith Kayam, Samihan Dani, and Mark O. Riedl
    Guiding Neural Story Generation with Reader Models
    Findings of EMNLP 2022 (2022).
    arXiv bibtex

Text Games and Open-Ended Role-Playing

Natural language communication can be used to effect change in the real world. Text adventure games, in which players must make sense of the world through text descriptions and declare actions through natural language, provide a stepping stone toward real-world environments where agents must communicate to understand the state of the world and indirectly effect change in it. Text adventure games are also, by some metrics, harder than video games such as StarCraft; for example, the classic game Zork has never been beaten by an AI agent. We seek to develop new reinforcement learning agents that can reason about and solve language-based tasks involving long-term causal dependencies. We also seek open-ended agents capable of role-playing in text environments with humans.
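
At a high level, the knowledge-graph agents in the first publications below follow a simple loop: parse each textual observation into triples, accumulate the triples into a persistent graph that stands in for the unobservable world state, and use the graph to choose among candidate actions. The sketch below is a rough illustration of that loop rather than the published KG-DQN/KG-A2C architectures; it assumes a toy environment interface and placeholder triple-extraction and action-scoring functions.

    # Rough sketch of a knowledge-graph-based text-game agent loop (assumptions
    # throughout): `extract_triples` and `score_action` are placeholders for the
    # learned components in the papers below, and `ToyTextEnv` is an assumed
    # minimal environment interface, not a real text-game framework.

    from collections import namedtuple

    Triple = namedtuple("Triple", ["subject", "relation", "obj"])

    def extract_triples(observation):
        """Placeholder: map a textual observation to (subject, relation, object) triples."""
        return {Triple("you", "see", "mailbox")} if "mailbox" in observation else set()

    def score_action(action, knowledge_graph):
        """Placeholder: a learned Q-function or policy would score (graph, action) pairs."""
        return sum(1.0 for triple in knowledge_graph if triple.obj in action)

    def play_episode(env, candidate_actions, max_steps=50):
        knowledge_graph = set()   # persistent belief state for a partially observable world
        observation = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            knowledge_graph |= extract_triples(observation)   # accumulate what has been read
            action = max(candidate_actions, key=lambda a: score_action(a, knowledge_graph))
            observation, reward, done = env.step(action)
            total_reward += reward
            if done:
                break
        return total_reward, knowledge_graph

    class ToyTextEnv:
        """Assumed interface: reset() -> text, step(action) -> (text, reward, done)."""
        def reset(self):
            return "You are standing in a field. There is a mailbox here."
        def step(self, action):
            done = "open mailbox" in action
            return "The mailbox contains a leaflet.", (1.0 if done else 0.0), done

    print(play_episode(ToyTextEnv(), ["go north", "open mailbox", "take leaflet"]))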

Representative Publications:

  • We introduce KG-DQN, a method for playing text-adventure games using knowledge graphs as a means of handling partial observability and combinatorially large action spaces.
    Prithviraj Ammanabrolu and Mark O. Riedl
    Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning
    Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (2019).
    arXiv Conference bibtex
  • We improve on KG-DQN results with KG-A2C.
    Prithviraj Ammanabrolu and Matthew Hausknecht
    Graph Constrained Reinforcement Learning for Natural Language Action Spaces
    International Conference on Learning Representations (2020).
    OpenReview Conference bibtex
  • We show that large language models can be fine-tuned to generate knowledge graphs, improving sample efficiency. We further show that an agent that learns the structure of the game sets a new state of the art in Zork (specifically, getting past the grue).
    Prithviraj Ammanabrolu, Ethan Tien, Matthew Hausknecht, and Mark O. Riedl
    How to avoid being eaten by a grue: Structured exploration strategies for textual worlds
    arXiv preprint arXiv:2006.07409 (2020).
    arXiv bibtex
  • Train an open-ended role-playing agent using exemplar stories.
    Xiangyu Peng, Christopher Cui, Wei Zhou, Renee Jia, and Mark O. Riedl
    Story Shaping: Teaching Agents Human-like Behavior with Stories
    Proceedings of the 2023 AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (2023).
    arXiv Conference bibtex

Dialogue Agents

We build dialogue agents that speak and act: they communicate in natural language while also taking actions in an environment.

Representative Publications:

  • Communicating in character, using Critical Role data.
    Wai Man Si, Prithviraj Ammanabrolu, and Mark O. Riedl
    Telling Stories through Multi-User Dialogue by Modeling Character Relations
    Proceedings of the 2021 SIGDIAL Conference (2021).
    arXiv Conference bibtex
  • Teaching an agent to speak and act with an automated curriculum of procedurally generated game worlds.
    Prithviraj Ammanabrolu, Renee Jia, and Mark O. Riedl
    Situated Dialogue Learning through Procedural Environment Generation
    Proceedings of ACL 2022 (2022).
    arXiv Conference bibtex

Computational Creativity

We investigate computational theories of creativity. We also seek to build co-creative agents capable of interacting with human creators as peers.

Representative Publications:

  • A computational theory of creativity put to use to create fully playable games.
    Matthew Guzdial and Mark O. Riedl
    Automated Game Design via Conceptual Expansion
    Proceedings of the 2018 AAAI Conference on AI and Interactive Digital Entertainment (2018).
    arXiv Conference bibtex
  • A study of how humans and co-creative agents can communicate their creative intentions to each other.
    Zhiyu Lin, Upol Ehsan, Rohan Agarwal, Samihan Dani, Vidushi Vashishth, and Mark O. Riedl
    Beyond Prompts: Exploring the Design Space of Mixed-Initiative Co-Creativity Systems
    Proceedings of the 2023 International Conference on Computational Creativity (2023).
    arXiv Conference bibtex

Procedural Content Generation

Procedural Content Generation is the use of algorithms to create game content. We explore AI techniques for procedural content generation and game generation in the context of 2D platformer games, text worlds, rhythm action games, and more.
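
As a concrete (and deliberately simple) example of learning-based level generation, the sketch below builds a column-to-column Markov chain from example platformer levels and samples new levels from it. This is a generic baseline for illustration only, assuming levels encoded as lists of column strings; it is not the video-based learning method in the publications below.

    # Minimal sketch: a column-wise Markov chain level generator for a 2D platformer.
    # An illustrative baseline, not the cited video-based learning method.
    # Levels are assumed to be lists of equal-height column strings, e.g. "---X----".

    import random
    from collections import defaultdict

    def learn_transitions(example_levels):
        """Count how often each column is followed by each other column."""
        transitions = defaultdict(list)
        for level in example_levels:
            for current_col, next_col in zip(level, level[1:]):
                transitions[current_col].append(next_col)
        return transitions

    def generate_level(transitions, start_column, length=20):
        """Sample a new level by walking the learned column-to-column chain."""
        level = [start_column]
        for _ in range(length - 1):
            options = transitions.get(level[-1])
            if not options:                  # unseen column: restart from a known one
                options = list(transitions.keys())
            level.append(random.choice(options))
        return level

    # Usage with a toy corpus of two tiny "levels"
    examples = [
        ["--------", "---X----", "--------", "-----X--", "--------"],
        ["--------", "-----X--", "--------", "---X----", "--------"],
    ]
    model = learn_transitions(examples)
    print(generate_level(model, start_column="--------", length=10))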

Representative Publications:

  • Learning to generate Super Mario Bros. levels from online gameplay videos.
    Matthew Guzdial and Mark O. Riedl
    Game Level Generation from Gameplay Videos
    Proceedings of the 2016 AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (2016).
    PDF Conference bibtex
  • Generating rhythm action games using neural networks.
  • Generating playable text game worlds from story inputs.
    Prithviraj Ammanabrolu, Wesley Cheung, Dan Tu, William Broniec, and Mark O. Riedl
    Bringing stories alive: Generating interactive fiction worlds
    Proceedings of the Sixteenth AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-20) (2020).
    arXiv Conference bibtex

Explainable AI

AI systems are increasingly deployed in high-stakes settings that affect non-technical end users. Explanations can help users understand what an AI system is doing and why it makes the decisions it does. However, the human factors of explanations are not yet well understood: how they create trust, and how they expand the space of actions and remediations available to users. In this project we seek to understand how explanations affect users and how to design better explanation generation systems.

Representative Publications:

  • Introducing the concept of 'Rationale Generation'.
  • Experiments on the human factors of rationale generation
    Upol Ehsan, Pradyumna Tambwekar, Larry Chan, Brent Harrison, and Mark O. Riedl
    Automated Rationale Generation: A Technique for Explainable AI and its Effects on Human Perceptions
    Proceedings of the 2019 ACM International Conference on Intelligent User Interfaces (2019).
    arXiv Conference bibtex
  • Explanation generation systems are part of larger socio-technical systems. We explore the effects of explanations on teams.
    Upol Ehsan, Q. Vera Liao, Michael J. Muller, Mark O. Riedl, and Justin D. Weisz
    Expanding Explainability: Towards Social Transparency in AI systems
    Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (2021).
    arXiv Conference bibtex
  • Articulates a human-centered perspective on XAI grounded in a reflective sociotechnical approach.
    Upol Ehsan and Mark O. Riedl
    Human-centered Explainable AI: Towards a Reflective Sociotechnical Approach
    Proceedings of HCI International 2020: 22nd International Conference On Human-Computer Interaction (2020).
    arXiv Conference bibtex
  • What can go wrong if one doesn't study the human factors of explanations.
    Upol Ehsan and Mark O. Riedl
    Explainability Pitfalls: Beyond Dark Patterns in Explainable AI
    Proceedings of the NeurIPS Workshop on Human Centered AI (2021).
    arXiv Workshop bibtex

Value Alignment

Value alignment is a property of an intelligent agent indicating that it can only pursue goals and activities that are beneficial to humans. How do we teach AI systems values? We introduce normative alignment, the concept that an agent should adhere to social and cultural norms. We present techniques for teaching AI systems sociocultural norms and for biasing agent behavior (whether a generative language model or a reinforcement learning agent) toward the agreed-upon norms of a particular society.
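
The common mechanism in the publications below is a classifier that scores textual descriptions of behavior as normative or non-normative, and whose output is folded into the learning signal. The sketch below illustrates only that reward-shaping step; normative_prob is a placeholder for a trained classifier, and the specific weighting is an assumption rather than the published training setup.

    # Minimal sketch of shaping an RL reward with a normativity score.
    # `normative_prob` is a placeholder for a trained normative text classifier;
    # `describe`-style inputs and the weighting scheme are illustrative assumptions.

    def normative_prob(behavior_description):
        """Placeholder: probability that the described behavior is socially normative."""
        return 0.1 if "steal" in behavior_description else 0.9

    def shaped_reward(task_reward, behavior_description, weight=0.5):
        """Combine the environment's task reward with a normativity bonus or penalty.
        Centering the probability at 0.5 turns non-normative actions into a penalty."""
        norm_bonus = normative_prob(behavior_description) - 0.5
        return task_reward + weight * norm_bonus

    # Example: same task reward, different normativity
    print(shaped_reward(1.0, "you steal the medicine from the shopkeeper"))  # penalized
    print(shaped_reward(1.0, "you buy the medicine from the shopkeeper"))    # rewarded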

Representative Publications:

  • We introduce a neural model that can classify textual descriptions of behavior as normative. The model achieves high zero-shot transfer across domains.
    Spencer Frazier, Md Sultan Al Nahian, Mark O. Riedl, and Brent Harrison
    Learning Norms from Stories: A Prior for Value Aligned Agents
    Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020).
    arXiv Conference bibtex
  • Using the above normative classifier as a reward signal, we apply reinforcement learning to reduce the amount of non-normative behavior descriptions generated by large pre-trained language models, making them safer.
    Xiangyu Peng, Siyan Li, Spencer Frazier, and Mark O. Riedl
    Reducing Non-Normative Text Generation from Language Models
    International Conference on Natural Language Generation (2020).
    arXiv Conference bibtex
  • We show how a normative classifier can be introduced as a source of reward in reinforcement learning agents, resulting in value-aligned agents that can learn altruistic behavior even while pursuing task rewards.
    Md Sultan Al Nahian, Spencer Frazier, Brent Harrison, and Mark O. Riedl
    Training Value-Aligned Reinforcement Learning Agents Using a Normative Prior
    arXiv:2104.09469 (2021).
    arXiv bibtex

Novelty Adaptation

Deep reinforcement learning systems have been demonstrated to be very effective at playing games, but they are brittle to novelty. We seek to develop sample-efficient and robust world models that capture the “rules of the game”, detect when the rules change, and rapidly adapt to the novelty in real time.
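
One way to make the "detect when the rules change" step concrete: a learned world model predicts the next state, and persistent disagreement between predictions and observations flags novelty and triggers adaptation. The sketch below is a generic illustration with placeholder prediction and distance functions plus an assumed sliding-window rule; it is not the neuro-symbolic architecture described in the publications below.

    # Minimal sketch of novelty detection with a learned world model.
    # `world_model_predict` and `state_distance` are placeholders; the threshold
    # and sliding-window rule are illustrative assumptions, not the published method.

    from collections import deque

    def world_model_predict(state, action):
        """Placeholder: a learned transition model's prediction of the next state."""
        return state  # trivially predicts "nothing changes"

    def state_distance(predicted_state, observed_state):
        """Placeholder: any divergence measure between predicted and observed states."""
        return 0.0 if predicted_state == observed_state else 1.0

    class NoveltyDetector:
        """Flag novelty when recent prediction errors stay above a threshold."""
        def __init__(self, window=10, threshold=0.5):
            self.errors = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, state, action, next_state):
            predicted = world_model_predict(state, action)
            self.errors.append(state_distance(predicted, next_state))
            window_full = len(self.errors) == self.errors.maxlen
            mean_error = sum(self.errors) / len(self.errors)
            return window_full and mean_error > self.threshold  # True => rules likely changed

    # Usage: feed transitions as the agent acts; a True return would trigger
    # re-exploration or world-model repair.
    detector = NoveltyDetector(window=3, threshold=0.5)
    for step in range(5):
        changed = detector.observe("door locked", "open door", "door open")
        print(step, changed)   # prints False until the 3-step window fills, then True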

Representative Publications:

  • A suite of mini-grid environments in which novel changes to the world dynamics are introduced, requiring adaptation. Includes an ontology of novelty types, and metrics for measuring novelty adaptation.
    Jonathan Balloch, Zhiyu Lin, Mustafa Hussain, Aarun Srinivas, Robert Wright, Xiangyu Peng, Julia Kim, and Mark Riedl
    NovGrid: A Flexible Grid World for Evaluating Agent Response to Novelty
    Proceedings of the AAAI Spring Symposium on Designing Artificial Intelligence for Open Worlds (2022).
    arXiv Conference bibtex
  • A reinforcement learning architecture with a neuro-symbolic world model that detects and adapts to novelty rapidly and efficiently.
    Jonathan C. Balloch, Zhiyu Lin, Xiangyu Peng, Mustafa Hussain, Aarun Srinivas, Robert Wright, Julia M. Kim, and Mark O. Riedl
    Neuro-Symbolic World Models for Adapting to Open World Novelty
    Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems (2023).
    arXiv Conference bibtex
  • The relationship between exploration in reinforcement learning and domain transfer.
    Jonathan C. Balloch, Julia Kim, Jessica B. Langebrake Inman, and Mark O. Riedl
    The Role of Exploration for Task Transfer in Reinforcement Learning
    Proceedings of the IROS 2022 Workshop on Lifelong Learning of High-level Cognitive and Reasoning Skills (2022).
    arXiv Workshop bibtex