Advanced Game AI Reading



All papers are available in T-Square.

Irrational Agents

Joseph Bates. 1994. The role of emotion in believable agents. Communications of the ACM, 37(7).

    There is a notion in the Arts of "believable character." It does not mean an honest or reliable character, but one that provides the illusion of life, thus permitting the audience’s suspension of disbelief.

A. Bryan Loyall. 1997. Believable Agents: Building Interactive Personalities. Ph.D. Thesis. Technical Report CMU-CS-97-123, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. May 1997.

Andrew Gordon and Mike van Lent. 2002. Virtual Humans as Participants vs. Virtual Humans as Actors. Discussion at the 2002 AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment.

    Should virtual humans be thought of as actors playing a role in a virtual play, or real participants living in a virtual world? Although the question is of philosophical interest, it also has implications in how the virtual human's knowledge, goals, communication and sensing are implemented. The question is also one aspect of the more general tension between realism and believability in virtual environments. In most cases, components of a virtual world (such as its physics) are made believable as a short cut when fully realistic components are more desirable but prohibitively expensive in development time and/or processing power. In the actor vs. participant case, however, an argument can be made that actors who behave believably, but have additional unrealistic knowledge (more knowledge than their characters would realistically know) and capabilities, are actually more useful than realistic participants.

Ian Horswill. 2007. Psychopathology, Narrative, and Cognitive Architecture (or: why AI characters should be just as screwed-up as we are). Proceedings of the AAAI Fall Symposium on Intelligent Narrative Technologies.

    Historically, AI research has understandably focused on those aspects of cognition that distinguish humans from other animals – in particular, our capacity for complex problem solving. However, with a few notable exceptions, narratives in popular media generally focus on those aspects of human experience that we share with other social animals: attachment, mating and child rearing, violence, group affiliation, and inter-group and inter-individual conflict. Moreover, the stories we tell often focus on the ways in which these processes break down. In this paper, I will argue that current agent architectures don’t offer particularly good models of these phenomena, and discuss specific phenomena that I think it would be illuminating to understand at a computational level.

Behavior Planning

Bruce Blumberg and Tinsley Galyean. 1995. Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments. Proceedings of SIGGRAPH 95.

    There have been several recent efforts to build behavior-based autonomous creatures. While competent autonomous action is highly desirable, there is an important need to integrate autonomy with “directability”. In this paper we discuss the problem of building autonomous animated creatures for interactive virtual environments which are also capable of being directed at multiple levels. We present an approach to control which allows an external entity to “direct” an autonomous creature at the motivational level, the task level, and the direct motor level. We also detail a layered architecture and a general behavioral model for perception and action-selection which incorporates explicit support for multi-level direction. These ideas have been implemented and used to develop several autonomous animated creatures.
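The layering described in this abstract can be pictured with a toy arbitration rule, in which more direct external commands pre-empt more autonomous behavior. This is a hedged sketch, not the paper's architecture; the drive names and command strings below are invented for illustration.

```python
# Toy arbitration for multi-level direction: a direct motor command
# pre-empts a task command, which pre-empts autonomous motivation-driven
# behavior. The level names follow the paper; everything else (drive
# names, command strings) is invented.

def select_action(motivations, task_cmd=None, motor_cmd=None):
    """Pick an action string given current drives and optional external
    direction at the task or motor level."""
    if motor_cmd is not None:          # most direct level wins outright
        return motor_cmd
    if task_cmd is not None:           # task-level direction
        return f"do:{task_cmd}"
    strongest = max(motivations, key=motivations.get)   # autonomous choice
    return f"pursue:{strongest}"

drives = {"hunger": 0.7, "curiosity": 0.4}
print(select_action(drives))                        # autonomous behavior
print(select_action(drives, task_cmd="fetch"))      # directed at task level
print(select_action(drives, motor_cmd="turn-left")) # directed at motor level
```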

Ken Perlin, Athomas Goldberg. 1996. Improv: a system for scripting interactive actors in virtual worlds. Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '96).

    Improv is a system for the creation of real-time behavior-based animated actors. There have been several recent efforts to build network distributed autonomous agents. But in general these efforts do not focus on the author's view. To create rich interactive worlds inhabited by believable animated actors, authors need the proper tools. Improv provides tools to create actors that respond to users and to each other in real-time, with personalities and moods consistent with the author's goals and intentions. Improv consists of two subsystems. The first subsystem is an Animation Engine that uses procedural techniques to enable authors to create layered, continuous, non-repetitive motions and smooth transitions between them. The second subsystem is a Behavior Engine that enables authors to create sophisticated rules governing how actors communicate, change, and make decisions. The combined system provides an integrated set of tools for authoring the "minds" and "bodies" of interactive actors. The system uses an English-style scripting language so that creative experts who are not primarily programmers can create powerful interactive applications.

Patrick Doyle and Barbara Hayes-Roth. 1998. Agents in Annotated Worlds. Proceedings of the Second International Conference on Autonomous Agents.

    Virtual worlds offer great potential as environments for education, entertainment, and collaborative work. Agents that function effectively in heterogeneous virtual spaces must have the ability to acquire new behaviors and useful semantic information from those contexts. The human-computer interaction literature discusses how to construct spaces and objects that provide "knowledge in the world" that aids human beings to perform these tasks. In this paper, we describe how to build comparable annotated environments containing explanations of the purpose and uses of spaces and activities that allow agents quickly to become intelligent actors in those spaces. Examples are provided from our application domain, believable agents acting as inhabitants and guides in a children’s exploratory world.

John Laird. 2001. It Knows What You’re Going To Do: Adding Anticipation to a Quakebot. Proceedings of the fifth international conference on Autonomous agents.

    The complexity of AI characters in computer games is continually improving; however, they still fall short of human players. In this paper we describe an AI bot for the game Quake II that tries to incorporate some of the missing capabilities. This bot is distinguished by its ability to build its own map as it explores a level, use a wide variety of tactics based on its internal map, and in some cases, anticipate its opponent's actions. The bot was developed in the Soar architecture and uses dynamical hierarchical task decomposition to organize its knowledge and actions. It also uses internal prediction based on its own tactics to anticipate its opponent's actions. This paper describes the implementation, its strengths and weaknesses, and discusses future research.

Charles Rich, Candace L. Sidner, and Neal Lesh. 2001. COLLAGEN: Applying Collaborative Discourse Theory to Human-Computer Interaction. AI Magazine Volume 22 Number 4.

    We describe an approach to intelligent user interfaces, based on the idea of making the computer a collaborator, and an application-independent technology for implementing such interfaces.

Timothy Bickmore, Daniel Schulman. 2009. A Virtual Laboratory for Studying Long-term Relationships between Humans and Virtual Agents. Proc. of 8th Int. Conf. on Autonomous Agents and Multiagent Systems.

    Longitudinal studies of human-virtual agent interaction are expensive and time consuming to conduct. We present a new concept and tool for conducting such studies—the virtual laboratory—in which a standing group of study participants interacts periodically with a computer agent that can be remotely manipulated to effect different study conditions, with outcome measures also collected remotely. This architecture allows new experiments to be dynamically defined and immediately implemented in the continuously-running system without delays due to recruitment and system reconfiguration. The use of this tool in the study of a virtual agent that plays the role of an exercise counselor for older adults is described, along with the results of an initial experiment into the effects of conversational variability on user engagement and exercise behavior.


Clark Elliott. 1993. Using the Affective Reasoner to Support Social Simulations. Proceedings of the 13th International Joint Conference on Artificial Intelligence.

    This paper is in two parts. In the first part, the outline of an emotion reasoning architecture, embodied in a simulation program called the Affective Reasoner, is presented, and a rudimentary personality representation for simulated agents is introduced. In the second part, an exercise is reviewed in which the Affective Reasoner is given the task of representing agents with different personality types in such a way as to allow the user to engage in a simulated interaction with them. Representational issues pertaining to the unique appraisal and behavioral styles of the different personality types are addressed. Conclusions are drawn about the usefulness of the Affective Reasoner in such a paradigm.

Jonathan Gratch and Stacy Marsella. 2004. A Domain-independent framework for modeling emotion. Journal of Cognitive Systems Research, Volume 5, Issue 4.

    In this article, we show how psychological theories of emotion shed light on the interaction between emotion and cognition, and thus can inform the design of human-like autonomous agents that must convey these core aspects of human behavior. We lay out a general computational framework of appraisal and coping as a central organizing principle for such systems. We then discuss a detailed domain-independent model based on this framework, illustrating how it has been applied to the problem of generating behavior for a significant social training application. The model is useful not only for deriving emotional state, but also for informing a number of the behaviors that must be modeled by virtual humans such as facial expressions, dialogue management, planning, reacting, and social understanding. Thus, the work is of potential interest to models of strategic decision-making, action selection, facial animation, and social intelligence.

Ana Paiva, Joao Dias, Daniel Sobral, Ruth Aylett, Polly Sobreperez, Sarah Woods, Carsten Zoll. 2004. Caring for Agents and Agents that Care: Building Empathic Relations with Synthetic Agents. Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems.

    When building agents and synthetic characters, and in order to achieve believability, we must consider the emotional relations established between users and characters, that is, we must consider the issue of "empathy". Defined in broad terms as "An observer reacting emotionally because he perceives that another is experiencing or about to experience an emotion", empathy is an important element to consider in the creation of relations between humans and agents. In this paper we will focus on the role of empathy in the construction of synthetic characters, providing some requirements for such construction and illustrating the presented concepts with a specific system called FearNot!. FearNot! was developed to address the difficult and often devastating problem of bullying in schools. By using role playing and empathic synthetic characters in a 3D environment, FearNot! allows children from 8 to 12 to experience a virtual scenario where they can witness (in a third-person perspective) bullying situations. To build empathy into FearNot! we have considered the following components: the agents' architecture; the characters' embodiment and emotional expression; proximity with the user and emotionally charged situations. We will describe how these were implemented in FearNot! and report on the preliminary results we have with it.

Patrick Gebhard. 2005. ALMA – A Layered Model of Affect. Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems.

    In this paper we introduce ALMA – A Layered Model of Affect. It integrates three major affective characteristics: emotions, moods and personality that cover short, medium, and long term affect. The use of this model consists of two phases: In the preparation phase appraisal rules and personality profiles for characters must be specified with the help of AffectML – our XML based affect modeling language. In the runtime phase, the specified appraisal rules are used to compute real-time emotions and moods as results of a subjective appraisal of relevant input. The computed affective characteristics are represented in AffectML and can be processed by subsequent modules that control the cognitive processes and physical behavior of embodied conversational characters. ALMA is part of the VirtualHuman project which develops interactive virtual characters that serve as dialog partners with human-like conversational skills. ALMA provides our virtual humans with a personality profile and with real-time emotions and moods. These are used by the multimodal behavior generation module to enrich the lifelike and believable qualities.

Stacy Marsella, Jonathan Gratch and Paolo Petta. 2010. Computational Models of Emotion. In Scherer, K.R., Bänziger, T., & Roesch, E. (Eds.), A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford: Oxford University Press.

    Recent years have seen a significant expansion in research on computational models of human emotional processes, driven both by their potential for basic research on emotion and cognition as well as their promise for an ever increasing range of applications. This has led to a truly interdisciplinary, mutually beneficial partnership between emotion research in psychology and computational science, of which this volume is an exemplar. To understand this partnership and its potential for transforming existing practices in emotion research across disciplines and for disclosing important novel areas of research, we explore in this chapter the history of work in computational models of emotion including the various uses to which they have been put, the theoretical traditions that have shaped their development, and how these uses and traditions are reflected in their underlying architectures.


Paolo Rizzo, Manuela Veloso, Maria Miceli, Amedeo Cesta. Goal-Based Personalities and Social Behaviors in Believable Agents. Applied Artificial Intelligence, 13, 1999.

    Agents are considered "believable" when viewed by an audience as endowed with behaviors, attitudes, and emotions typical of different personalities. Our work is aimed at realizing believable agents that perform helping behaviors influenced by their personalities; the latter are represented as different clusters of prioritized goals and preferences over plans for achieving goals. The article describes how such a model of personality is implemented in planning with the PRODIGY system and in execution with the RAP system. Both systems are integrated in a plan-based architecture where behaviors characteristic of different "helping personality types" are automatically designed and executed in a virtual world. The article also shows examples of the kinds of plans produced by PRODIGY for different personalities and contexts, and how such plans are executed by RAP when a helping character interacts with a user in a virtual world.

Mike Poznanski and Paul Thagard. Changing personalities: towards realistic virtual characters. Journal of Experimental and Theoretical Artificial Intelligence, 17(3), 2005.

    Computer modelling of personality and behaviour is becoming increasingly important in many fields of computer science and psychology. Personality and emotion-driven Believable Agents are needed in areas like human–machine interfaces, electronic advertising and, most notably, electronic entertainment. Computer models of personality can help explain personality by illustrating its underlying structure and dynamics. This work presents a neural network model of personality and personality change. The goals are to help understand personality and create more realistic and believable characters for interactive video games. The model is based largely on trait theories of personality. Behaviour in the model results from the interaction of three components: (1) personality-based predispositions for behaviour, (2) moods/emotions and (3) environmental situations. Personality develops gradually over time depending on the situations encountered. Modelling personality change produces interesting and believable virtual characters whose behaviours change in psychologically plausible ways.

Social Simulation

David Pynadath and Stacy Marsella. 2005. PsychSim: Modeling Theory of Mind with Decision-Theoretic Agents. Proceedings of the 19th international joint conference on Artificial intelligence.

    Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim's underlying architecture and describe its application to a school violence scenario for illustration.

Josh McCoy, Mike Treanor, Ben Samuel, Brandon Tearse, Michael Mateas, and Noah Wardrip-Fruin. 2010. Comme il Faut 2: A fully realized model for socially-oriented gameplay. Proceedings of the 3rd Workshop on Intelligent Narrative Technologies.

    Social games—common patterns of character interactions that modify the social environment of the story world—provide a useful abstraction when authoring a story composed of interactive characters, making it possible to create games with deep possibility spaces that are about social interaction (which would be intractable if hand-authoring all the options). In this paper, we detail the workings of a major new version of our social artificial intelligence system, Comme il Faut, that enables social game play in interactive media experiences. The workings of Comme il Faut 2 are shown, with running examples, from both knowledge representation and process perspectives. Finally, the paper concludes with a plan for evaluating and demonstrating Comme il Faut 2 through an implementation of an interactive media experience that consists of a playable social space.


Michael Mateas, Andrew Stern. Natural Language Understanding in Façade: Surface-text Processing. In Proceedings of the Conference on Technologies for Interactive Digital Storytelling and Entertainment (2004)

    Façade is a real-time, first-person dramatic world in which the player, visiting the married couple Grace and Trip at their apartment, quickly becomes entangled in the high-conflict dissolution of their marriage. The Façade interactive drama integrates real-time, autonomous believable agents, drama management for coordinating plot-level interactivity, and broad, shallow support for natural language understanding and discourse management. In previous papers, we have described the motivation for Façade’s interaction design and architecture, described ABL, our believable agent language, and presented overviews of the entire architecture. In this paper we focus on Façade’s natural language processing (NLP) system, specifically the understanding (NLU) portion that extracts discourse acts from player-typed surface text.

Anton Leuski and David Traum. 2010. NPCEditor: A Tool for Building Question-Answering Characters. Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC).

    NPCEditor is a system for building and deploying virtual characters capable of engaging a user in spoken dialog on a limited domain. The dialogue may take any form as long as the character responses can be specified a priori. For example, NPCEditor has been used for constructing question answering characters where a user asks questions and the character responds, but other scenarios are possible. At the core of the system is a state of the art statistical language classification technology for mapping from user's text input to system responses. NPCEditor combines the classifier with a database that stores the character information and relevant language data, a server that allows the character designer to deploy the completed characters, and a user-friendly editor that helps the designer to accomplish both character design and deployment tasks. In the paper we define the overall system architecture, describe individual NPCEditor components, and guide the reader through the steps of building a virtual character.
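The core mapping from user text to character response can be pictured as nearest-neighbor retrieval over linked question-answer pairs. NPCEditor's real classifier is a trained statistical language model; the cosine-overlap toy and the sample question-answer pairs below are illustrative assumptions.

```python
# Toy response retrieval by lexical overlap: score each authored question
# against the user's text and return the linked response of the best match.

from collections import Counter
from math import sqrt

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_response(user_text, qa_pairs):
    query = Counter(user_text.lower().split())
    scored = [(cosine(query, Counter(q.lower().split())), r)
              for q, r in qa_pairs]
    return max(scored)[1]

pairs = [("what is your name", "I'm the guide."),
         ("where are you from", "I was built in a lab.")]
print(pick_response("tell me your name", pairs))
```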

François Mairesse and Marilyn Walker. 2011. Controlling User Perceptions of Linguistic Style: Trainable Generation of Personality Traits. Computational Linguistics, 37(3).

    Recent work in natural language generation has begun to take linguistic variation into account, developing algorithms that are capable of modifying the system’s linguistic style based either on the user’s linguistic style or other factors, such as personality or politeness. While stylistic control has traditionally relied on handcrafted rules, statistical methods are likely to be needed for generation systems to scale to the production of the large range of variation observed in human dialogues. Previous work on statistical natural language generation (SNLG) has shown that the grammaticality and naturalness of generated utterances can be optimized from data; however, these data-driven methods have not been shown to produce stylistic variation that is perceived by humans in the way that the system intended. This paper describes PERSONAGE, a highly parameterizable language generator whose parameters are based on psychological findings about the linguistic reflexes of personality. We present a novel SNLG method which uses parameter estimation models trained on personality-annotated data to predict the generation decisions required to convey any combination of scalar values along the five main dimensions of personality. A human evaluation shows that parameter estimation models produce recognizable stylistic variation along multiple dimensions, on a continuous scale, and without the computational cost incurred by overgeneration techniques.

Marilyn A. Walker, Ricky Grant, Jennifer Sawyer, Grace I. Lin, Noah Wardrip-Fruin, and Michael Buell. Perceived or Not Perceived: Film Character Models for Expressive NLG. Proceedings of the Fourth Joint Conference on Interactive Digital Storytelling, 2011

    This paper presents a method for learning models of character linguistic style from a corpus of film dialogues and tests the method in a perceptual experiment. We apply our method in the context of SpyFeet, a prototype role playing game. In previous work, we used the PERSONAGE engine to produce restaurant recommendations that varied according to the speaker’s personality. Here we show for the first time that: (1) our expressive generation engine can operate on content from the story structures of an RPG; (2) PERSONAGE parameter models can be learned from film dialogue; (3) PERSONAGE rule-based models for extraversion and neuroticism are perceived as intended in a new domain (SpyFeet character utterances); and (4) the parameter models learned from film dialogue are generally perceived as being similar to the character that the model is based on. This is the first step of our long term goal to create off-the-shelf tools to support authors in the creation of interesting dramatic characters and dialogue partners, for a broad range of types of interactive stories and role playing games.

Learning from Humans

Monica Nicolescu and Maja Mataric. 2003. Natural Methods for Robot Task Learning: Instructive Demonstrations, Generalization and Practice. Proceedings of the Second International Joint Conference on Autonomous Agents and Multiagent Systems.

    Among humans, teaching various tasks is a complex process which relies on multiple means for interaction and learning, both on the part of the teacher and of the learner. Used together, these modalities lead to effective teaching and learning approaches, respectively. In the robotics domain, task teaching has been mostly addressed by using only one or very few of these interactions. In this paper we present an approach for teaching robots that relies on the key features and the general approach people use when teaching each other: first give a demonstration, then allow the learner to refine the acquired capabilities by practicing under the teacher's supervision, involving a small number of trials. Depending on the quality of the learned task, the teacher may either demonstrate it again or provide specific feedback during the learner's practice trial for further refinement. Also, as people do during demonstrations, the teacher can provide simple instructions and informative cues, increasing the performance of learning. Thus, instructive demonstrations, generalization over multiple demonstrations and practice trials are essential features for a successful human-robot teaching approach. We implemented a system that enables all these capabilities and validated these concepts with a Pioneer 2DX mobile robot learning tasks from multiple demonstrations and teacher feedback.

Jeff Orkin and Deb Roy. 2007. The Restaurant Game: Learning Social Behavior and Language from Thousands of Players Online. Journal of Game Development, 3(1).

    We envision a future in which conversational virtual agents collaborate with humans in games and training simulations. A representation of common ground for everyday scenarios is essential for these agents if they are to be effective collaborators and communicators. Effective collaborators can infer a partner’s goals and predict future actions. Effective communicators can infer the meaning of utterances based on semantic context. This article introduces a computational model of common ground called a Plan Network, a statistical model that encodes context-sensitive expected patterns of behavior and language, with dependencies on social roles and object affordances. We describe a methodology for unsupervised learning of a Plan Network using a multiplayer video game, visualization of this network, and evaluation of the learned model with respect to human judgment of typical behavior. Specifically, we describe learning the Restaurant Plan Network from data collected from over 5,000 gameplay sessions of a minimal investment multiplayer online (MIMO) role-playing game called The Restaurant Game. Our results demonstrate a kind of social common sense for virtual agents, and have implications for automatic authoring of content in the future.
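A much-reduced stand-in for a Plan Network is a bigram model over observed action sequences: count which events follow which in the game logs, then read off typical behavior. The miniature restaurant logs below are invented for illustration; the paper's model also conditions on roles and object affordances.

```python
# Bigram stand-in for a Plan Network: tally event-to-event transitions
# from game logs, then query the most typical continuation.

from collections import Counter, defaultdict

def learn_transitions(logs):
    counts = defaultdict(Counter)
    for log in logs:
        for prev, nxt in zip(log, log[1:]):
            counts[prev][nxt] += 1
    return counts

def most_typical_next(counts, action):
    return counts[action].most_common(1)[0][0]

logs = [["enter", "sit", "order", "eat", "pay", "leave"],
        ["enter", "sit", "order", "eat", "leave"],
        ["enter", "order", "sit", "eat", "pay", "leave"]]
model = learn_transitions(logs)
print(most_typical_next(model, "eat"))   # the typical event after eating
```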

Bulent Tastan and Gita Sukthankar. 2011. Learning Policies for First Person Shooter Games Using Inverse Reinforcement Learning. Proceedings of the Seventh AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.

    The creation of effective autonomous agents (bots) for combat scenarios has long been a goal of the gaming industry. However, a secondary consideration is whether the autonomous bots behave like human players; this is especially important for simulation/training applications which aim to instruct participants in real-world tasks. Bots often compensate for a lack of combat acumen with advantages such as accurate targeting, predefined navigational networks, and perfect world knowledge, which makes them challenging but often predictable opponents. In this paper, we examine the problem of teaching a bot to play like a human in first-person shooter game combat scenarios. Our bot learns attack, exploration and targeting policies from data collected from expert human player demonstrations in Unreal Tournament. We hypothesize that one key difference between human players and autonomous bots lies in the relative valuation of game states. To capture the internal model used by expert human players to evaluate the benefits of different actions, we use inverse reinforcement learning to learn rewards for different game states. We report the results of a human-subjects study evaluating the performance of bot policies learned from human demonstration against a set of standard bot policies. Our study reveals that human players found our bots to be significantly more human-like than the standard bots during play. Our technique represents a promising stepping-stone toward addressing challenges such as the Bot Turing Test (the CIG Bot 2K Competition).

Boyang Li, Stephen Lee-Urban, Darren Scott Appling, and Mark O. Riedl. 2012. Automatically Learning to Tell Stories about Social Situations from the Crowd. Proceedings of the LREC 2012 Workshop on Computational Models of Narrative.

    Narrative intelligence is the use of narrative to make sense of the world and to communicate with other people. The generation of stories involving social and cultural situations (eating at a restaurant, going on a date, etc.) requires an extensive amount of experiential knowledge. While this knowledge can be encoded in the form of scripts, schemas, or frames, the manual authoring of these knowledge structures presents a significant bottleneck in the creation of systems demonstrating narrative intelligence. In this paper we describe a technique for automatically learning robust, script-like knowledge from crowdsourced narratives. Crowdsourcing, the use of anonymous human workers, provides an opportunity for rapidly acquiring a corpus of highly specialized narratives about sociocultural situations. We describe a three-stage approach to script acquisition and learning. First, we query human workers to write natural language narrative examples of a given situation. Second, we learn the set of possible events that can occur in a situation by finding semantic similarities between the narrative examples. Third, we learn the relevance of any event to the situation and extract a probable temporal ordering between events. We describe how these scripts, which we call plot graphs, can be utilized to generate believable stories about social situations.
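The third stage above, extracting a probable temporal ordering between events, can be pictured as majority-vote precedence counting over the example narratives. A toy reduction of the paper's plot-graph learning; the event names and threshold are invented.

```python
# Majority-vote precedence counting: keep an edge a -> b when a comes
# before b in a clear majority of the example narratives.

from itertools import combinations

def learn_order(narratives, threshold=0.8):
    events = sorted({e for n in narratives for e in n})
    edges = []
    for a, b in combinations(events, 2):
        both = [n for n in narratives if a in n and b in n]
        if not both:
            continue
        a_first = sum(n.index(a) < n.index(b) for n in both) / len(both)
        if a_first >= threshold:
            edges.append((a, b))
        elif 1 - a_first >= threshold:
            edges.append((b, a))
    return edges

stories = [["greet", "order", "eat", "pay"],
           ["greet", "order", "pay", "eat"],
           ["greet", "eat", "order", "pay"]]
print(learn_order(stories))   # only orderings the crowd agrees on survive
```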

Jeff Orkin and Deb Roy. 2012. Understanding Speech in Interactive Narratives with Crowdsourced Data. Proceedings of the 8th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.

    Speech recognition failures and limited vocabulary coverage pose challenges for speech interaction with characters in games. We describe an end-to-end system for automating characters from a large corpus of recorded human game logs, and demonstrate that inferring utterance meaning through a combination of plan recognition and surface text similarity compensates for recognition and understanding failures significantly better than relying on surface similarity alone.

Improvisational Agents

Barbara Hayes-Roth, Lee Brownston, and Erik Sincoff. 1995. Directed Improvisation by Computer Characters. Technical Report KSL-95-04. Stanford University.

    We present a directed improvisation paradigm, in which computer characters improvise a joint course of behavior that follows users' directions, but also engages and entertains users with the novelty, life-like qualities, and performance properties of their improvisations. We present requirements for improvisational characters that differ from the usual requirements for conventional computer agents and present an architecture that is designed to meet the new requirements. Two implemented characters exploit some of these architectural features to meet simple versions of the requirements. Finally, we illustrate the utility of improvisational characters for a variety of applications related to the arts and entertainment, including a suite of interaction modes in our testbed environment, a Virtual Theater for Children.

Brian Magerko, Peter Dohogne, and Chris DeLeon. 2011. Employing Fuzzy Concepts for Digital Improvisational Theatre. Proceedings of the 7th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment.

    This paper describes the creation of a digital improvisational theatre game, called Party Quirks, that allows a human user to improvise a scene with synthetic actors according to the rules of the real-world Party Quirks improv game. The AI actor behaviors are based on our study of communication strategies between real-life actors on stage and the fuzzy concepts that they employ to define and portray characters. This paper describes the underlying fuzzy concepts used to enable reasoning in ambiguous environments, like improv theatre. It also details the development of content for the system, which involved the creation of a system for animation authoring, design for efficient data reuse, and a work flow centered on Google Docs enabling parallel data entry and rapid iteration.
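The fuzzy-concept idea can be pictured with triangular membership functions over a character-attribute scale: an actor's guess matches a portrayed quirk to a degree rather than exactly. The sets and numbers below are invented for illustration, not Party Quirks' actual data.

```python
# Triangular fuzzy membership over a character-attribute scale, so that
# a quirk like "paranoid" is held (and matched) to a degree.

def triangular(x, lo, peak, hi):
    """Degree of membership of x in a triangular fuzzy set on [lo, hi]."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Two quirks as fuzzy sets over an anxiety scale in [0, 1].
slightly_nervous = lambda x: triangular(x, 0.1, 0.3, 0.5)
paranoid = lambda x: triangular(x, 0.6, 0.9, 1.0)

anxiety = 0.85
print(slightly_nervous(anxiety), paranoid(anxiety))  # "paranoid" fits far better
```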

Brian Magerko and Brian O’Neill. 2012. Formal Models of Western Films for Interactive Narrative Technologies. Proceedings of the LREC 2012 Workshop on Computational Models of Narrative.

    Interactive narrative technologies have typically addressed the authoring bottleneck problem by focusing on authoring a tractable story space (i.e. the space of possible experiences for a user) coupled with an AI technology for mediating the user’s journey through this space. This article describes an alternative, potentially more general and expressive approach to interactive narrative that focuses on the procedural representation of story construction between an AI agent and a human interactor. This notion of procedural interaction relies on shared background knowledge between all actors involved; therefore, we have developed a body of background knowledge for improvising Western-style stories that includes the authoring of scripts (i.e. prototypical joint activities in Westerns). This article describes our methodology for the design and development of these scripts, the formal representation used for encoding them in our interactive narrative technology, and the lessons learned from this experience in regards to building a story corpus for interactive narrative research.

Cultural Models

Alexander J. Quinn and Benjamin B. Bederson. 2011. Human Computation: A Survey and Taxonomy of a Growing Field. Proceedings of CHI 2011.

    The rapid growth of human computation within research and industry has produced many novel ideas aimed at organizing web users to do great things. However, the growth is not adequately supported by a framework with which to understand each new system in the context of the old. We classify human computation systems to help identify parallels between different systems and reveal “holes” in the existing work as opportunities for new research. Since human computation is often confused with “crowdsourcing” and other terms, we explore the position of human computation with respect to these related topics.