It has been remarked that I publish a lot of papers on a number of topics. It might not be clear how they all fit together. What is it that I work on, really? Of course, I have no problem seeing how the various strands of my research inform each other and contribute towards a small set of common goals. But that might not be so easy for you to see.
So here's an incomplete selection of themes from my recent research, together with links to some relevant papers. For simplicity, and for the sake of the length of this text vs your attention span, I will limit myself to work that I've published in the last two years. Note that though I've written code for some of the research systems myself, contributed to the conceptual development of all of these papers, and written parts of almost all of them, most of the work below was done by my various students and other collaborators. That's why I'll use the academic "we" in the examples below, referring to my various groups of collaborators. I'm of course very happy that so many talented and hard-working people feel like attaching my name to the author lists of their papers, for one reason or another.
Generally, my research has the dual aim of using AI to enable better (or more interesting) games, and using games to enable better AI. This means coming up with better game-playing algorithms, algorithms that can play games in a human-like manner, methods for generating complete games or parts of games, ways of studying games and players, and ways of using games to test AI algorithms. This all comes together: for example, you cannot design a game without being able to play it and knowing how humans play it, and you can't advance your game-playing AI without having suitable games and levels to try it out on. Ultimately, we're aiming towards general AI that can not only play any game but also design any game, including the game that you seem to want to play right now. That's very general, so let's be more specific.
Procedural content generation
PCG, creating game content with algorithms, is sort of hot right now, for several reasons: if computers create levels, items, quests and so on, we don't have to do it ourselves, so games could be endless, adapt to the player, be cheaper to produce, etc. Also, creating a content generator is about defining and understanding a particular aesthetic. My various collaborators and I have been working on PCG for quite some time now (actually, since before it was hot), in particular exploring how to best use evolutionary algorithms for generating various forms of game content. We call this search-based PCG. Some examples of recent work include basing Super Mario level generation on design patterns, evolving maps to advantage particular game-playing algorithms, and multiobjective and multimodal methods for strategy game map generation. We've also introduced new algorithms like constrained novelty search and repurposed older methods such as n-grams for level generation. Understanding the output of these generative methods is very important, and for that reason we have developed ways of characterizing level generators, and generic metrics for level design. A major effort was to edit and write the first textbook on procedural content generation in games; earlier we wrote about goals and challenges for PCG research.
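To make "search-based PCG" concrete, here is a minimal sketch of the general pattern: a content representation, a mutation operator, and an evaluation function, wired into a simple evolutionary loop. The tile set, the (1+1) evolution strategy and the fitness function are illustrative assumptions for this sketch, not the setup of any of the papers mentioned above.

```python
import random

WIDTH, HEIGHT = 40, 10
TILES = [" ", "#", "?"]  # empty, solid block, item block (a made-up tile set)

def random_level():
    return [[random.choice(TILES) for _ in range(WIDTH)] for _ in range(HEIGHT)]

def mutate(level, rate=0.02):
    # Copy the level and randomly re-roll a small fraction of tiles.
    child = [row[:] for row in level]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            if random.random() < rate:
                child[y][x] = random.choice(TILES)
    return child

def fitness(level):
    # Stand-in evaluation: prefer a target density of solid blocks and
    # penalise item blocks in the top rows. Real generators use
    # gameplay-based, pattern-based or learned evaluation functions.
    solids = sum(row.count("#") for row in level)
    density = solids / (WIDTH * HEIGHT)
    floating_items = sum(row.count("?") for row in level[:2])
    return -abs(density - 0.25) - 0.05 * floating_items

def evolve(generations=2000):
    # A (1+1) evolution strategy: keep one level, accept non-worsening mutations.
    best = random_level()
    best_fit = fitness(best)
    for _ in range(generations):
        child = mutate(best)
        child_fit = fitness(child)
        if child_fit >= best_fit:
            best, best_fit = child, child_fit
    return best, best_fit

if __name__ == "__main__":
    level, fit = evolve()
    print("fitness:", round(fit, 4))
    print("\n".join("".join(row) for row in level))
```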
Mixed-initiative and AI-assisted game design
While having the computer create levels, items, quests etc all by itself is great, there's also room for collaboration with humans. Computers are good at some things and humans at others, and human designers might want to interfere at various points in the content creation process. Mixed-initiative PCG systems implement a dialogue between a designer and computer, where PCG processes assist the human designer while retaining the designer's freedom. Sentient Sketchbook is one such system, where humans design strategy game maps while algorithms critique the designs and offer suggestions; it also learns the designer's style. Another of our systems is Ropossum, which can generate complete levels for the physics puzzler Cut the Rope and also assist designers. It uses a combination of grammatical evolution, reasoning and tree search, but we have recently experimented with using object path projections for playability testing and with creating levels based on evolved (fictive) playtraces.
Data games
There's more digital data available than ever before, including large amounts of open data that anyone can access. This includes geographical, economic and demographic data, among other forms. Data games are games that are built on such data, in the sense that the game's content is generated from open data. We're trying to create ways to improve the generation of meaningful game content by seeding PCG with real-world data, but also to make data exploration more playful. Our work involves data-based content generation for existing games such as Open Data Monopoly and Open Data Civilization, using game mechanics for data exploration and visualization such as in Open Trumps and Bar Chart Ball, and data-based procedural generation of complete games as in Data Adventures.
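As a toy illustration of what "seeding PCG with real-world data" can mean, the sketch below maps a few made-up records, shaped like open country statistics, onto game-facing parameters. Both the data and the mapping are illustrative assumptions, not how Open Data Monopoly, Open Data Civilization or Data Adventures actually work.

```python
# Hypothetical open-data records: (population in millions, GDP per capita, area in 1000 km^2).
# The countries and the numbers are made up for illustration.
OPEN_DATA = {
    "Ruritania": (8.2, 14000, 120.0),
    "Freedonia": (1.1, 32000, 45.0),
    "Grand Fenwick": (0.02, 21000, 0.04),
}

def region_to_game_region(name, population, gdp_per_capita, area):
    # Map data dimensions onto game parameters with arbitrary scaling:
    # more people -> more cities, richer -> more resources, larger -> bigger map.
    return {
        "name": name,
        "city_count": max(1, round(population)),
        "resource_richness": min(10, gdp_per_capita // 4000),
        "map_tiles": max(4, int(area ** 0.5) * 2),
    }

if __name__ == "__main__":
    for name, stats in OPEN_DATA.items():
        print(region_to_game_region(name, *stats))
```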
Generating complete games
The logical endpoint of PCG is generating the whole game: not just the levels and textures, but all the rules and mechanics and everything else that's not the game engine. Even what the game is about and what the goal is. If we are to understand game design properly, we need to build systems that can generate good games; and if we want to test general AI properly, we need an infinite supply of new games. We have argued that the game types that would be most realistic to try to generate are classical board games and simple 2D arcade games; this is also what we have attempted to generate earlier. Recently, we have invented ways of representing and generating card games, by searching through a space of card games that includes well-known games such as Texas Hold'em. We have also designed a Video Game Description Language which can be used to define simple arcade games, and invented ways of automatically evaluating the quality of such games and generating new games. It is also interesting to see how different games can be generated by simply varying the parameters of a simple existing game: in our work on generating unique Flappy Bird variants we found that plenty of playable yet different games can emerge.
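To illustrate the parameter-variation idea, here is a hedged sketch: sample parameter vectors for a crude Flappy Bird-like simulation and keep the ones a simple heuristic agent can survive. The physics model, the agent and the playability criterion are all assumptions made for this sketch, not the evaluation used in the actual study.

```python
import random

def simulate(params, pipes_to_pass=20, max_steps=5000):
    # Crude physics: y runs from 0.0 (ceiling) to 1.0 (floor), x scrolls right.
    gravity, flap, gap, pipe_spacing, speed = params
    y, vy, x = 0.5, 0.0, 0.0
    next_pipe_x, gap_center = pipe_spacing, random.uniform(0.3, 0.7)
    passed = 0
    for _ in range(max_steps):
        if y > gap_center:      # heuristic agent: flap whenever below the gap centre
            vy = -flap
        vy += gravity
        y += vy
        x += speed
        if y < 0.0 or y > 1.0:  # hit the ceiling or the floor
            return passed
        if x >= next_pipe_x:    # reached a pipe: collide unless inside the gap
            if abs(y - gap_center) > gap / 2:
                return passed
            passed += 1
            if passed >= pipes_to_pass:
                return passed
            next_pipe_x += pipe_spacing
            gap_center = random.uniform(0.3, 0.7)
    return passed

def random_params():
    return (random.uniform(0.001, 0.01),   # gravity per step
            random.uniform(0.01, 0.08),    # flap impulse
            random.uniform(0.1, 0.4),      # vertical gap size
            random.uniform(0.5, 2.0),      # horizontal distance between pipes
            random.uniform(0.01, 0.05))    # scroll speed per step

if __name__ == "__main__":
    samples = [random_params() for _ in range(500)]
    playable = [p for p in samples if simulate(p) >= 20]
    print(f"{len(playable)} of {len(samples)} sampled variants were playable for the test agent")
```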
MCTS for video games
In order to be able to generate games you need to test them, and to test them automatically you need AI that can play the games. Being able to play games is of course also important for other reasons, such as providing good opponents and collaborators for the player. Monte Carlo Tree Search is a statistical tree search algorithm that has been very successful in playing board games such as Go. We are trying to figure out how to adapt this algorithm to video games, which have very different requirements from board games: for example, continuous space and time, as well as no guarantee that random actions will lead to a terminal state. In the course of this, we have developed a number of MCTS modifications and MCTS-inspired algorithms for e.g. Super Mario Bros, car racing and general video game playing; the further success of MCTS variants can be seen in the first General Video Game Playing Competition, where the objective is not just to play one game but a whole set of different games.
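For readers who haven't met MCTS before, here is a minimal sketch of the algorithm with one video-game-style adaptation of the kind hinted at above: rollouts are cut off after a fixed depth and scored with a heuristic state evaluation instead of waiting for a terminal state. The toy single-player "walk to the goal" game and the evaluation function are made up for illustration.

```python
import math
import random

ACTIONS = (-1, 0, 1)          # move left, stay, move right

def step(state, action):
    pos, goal, t = state
    return (max(0, min(20, pos + action)), goal, t - 1)

def is_terminal(state):
    pos, goal, t = state
    return pos == goal or t == 0

def evaluate(state):
    pos, goal, _ = state      # heuristic: closer to the goal is better
    return 1.0 - abs(pos - goal) / 20.0

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}    # action -> Node
        self.visits, self.value = 0, 0.0

    def ucb1(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_state, iterations=500, rollout_depth=10):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == len(ACTIONS):
            node = max(node.children.values(), key=Node.ucb1)
        # Expansion: add one untried action, unless the state is terminal.
        if not is_terminal(node.state):
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(step(node.state, a), parent=node)
            node = node.children[a]
        # Simulation: depth-limited random rollout, then heuristic evaluation.
        state = node.state
        for _ in range(rollout_depth):
            if is_terminal(state):
                break
            state = step(state, random.choice(ACTIONS))
        reward = evaluate(state)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

if __name__ == "__main__":
    state = (2, 15, 40)       # (position, goal, steps left)
    while not is_terminal(state):
        state = step(state, mcts(state))
    print("finished at position", state[0], "with", state[2], "steps left")
```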
Behavior imitation and procedural personas
Playing a game well is not all there is to playing a game; there's also the issue of playing style to consider. We've seen numerous cases where the best-playing AI plays the game in a decidedly non-humanlike manner. If we want to test a game or a level, or provide interesting NPCs in a game, we need to create AI that can play that game in the style of a particular human, or maybe of humans in general. One approach is to train neural networks on gameplay traces, as we've tested with Super Mario Bros. A more involved approach is to model the player as a "procedural persona", assuming bounded rationality and a set of objectives. This conceptual framework has been combined with Q-learning and neuroevolution to play a roguelike dungeon crawler game in various styles. These procedural personas have also been used to evaluate levels in level generation algorithms. We also organized a Turing Test competition for Super Mario Bros-playing agents, where the objective was to fool judges into believing your bot was a human.
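To give a flavour of the procedural persona idea, the sketch below treats a persona as a weighting over utility dimensions (reaching the exit, collecting treasure, touching monsters) and trains a tabular Q-learning policy for each weighting in a tiny made-up dungeon. The dungeon, the utility dimensions and the weights are illustrative assumptions, not the setup used in the papers above.

```python
import random

# A 5x5 dungeon: S = start, T = treasure, M = monster, E = exit (made up for this sketch).
GRID = [
    "S..T.",
    ".M...",
    "...M.",
    ".T...",
    "....E",
]
SIZE = 5
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

def persona_reward(cell, weights):
    # Utility dimensions: (reached exit, picked up treasure, touched monster).
    events = {"E": (1, 0, 0), "T": (0, 1, 0), "M": (0, 0, 1)}.get(cell, (0, 0, 0))
    return sum(w * e for w, e in zip(weights, events))

def move(pos, action):
    r, c = pos[0] + action[0], pos[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE):
        return pos                              # bumped into a wall
    return (r, c)

def train(weights, episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    q = {}                                      # (position, collected treasures) -> action values
    for _ in range(episodes):
        pos, collected = (0, 0), frozenset()
        for _ in range(50):
            state = (pos, collected)
            q.setdefault(state, [0.0] * len(ACTIONS))
            a = (random.randrange(len(ACTIONS)) if random.random() < eps
                 else max(range(len(ACTIONS)), key=lambda i: q[state][i]))
            new_pos = move(pos, ACTIONS[a])
            cell = GRID[new_pos[0]][new_pos[1]]
            if cell == "T" and new_pos in collected:
                cell = "."                      # this treasure was already taken
            reward = persona_reward(cell, weights) - 0.01   # small step cost
            new_collected = collected | {new_pos} if cell == "T" else collected
            next_state = (new_pos, new_collected)
            q.setdefault(next_state, [0.0] * len(ACTIONS))
            q[state][a] += alpha * (reward + gamma * max(q[next_state]) - q[state][a])
            pos, collected = new_pos, new_collected
            if cell == "E":
                break
    return q

def greedy_rollout(q, max_steps=50):
    pos, collected, treasures = (0, 0), frozenset(), 0
    for step in range(max_steps):
        values = q.get((pos, collected), [0.0] * len(ACTIONS))
        pos = move(pos, ACTIONS[max(range(len(ACTIONS)), key=lambda i: values[i])])
        cell = GRID[pos[0]][pos[1]]
        if cell == "T" and pos not in collected:
            treasures += 1
            collected = collected | {pos}
        if cell == "E":
            return step + 1, treasures
    return max_steps, treasures

if __name__ == "__main__":
    personas = {
        "speedrunner": (1.0, 0.0, -0.1),        # only cares about reaching the exit
        "treasure hunter": (0.3, 1.0, -0.5),    # loves treasure, dislikes monsters
    }
    for name, weights in personas.items():
        steps, treasures = greedy_rollout(train(weights))
        print(f"{name}: exit after {steps} steps, {treasures} treasure(s) collected")
```

In this sketch, different weightings yield different greedy policies; the persona is defined by its objectives rather than by a different learning algorithm.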
Game data mining and player modeling
The vast amount of data generated by modern games can be used to understand both games and their players. We call attempts to make sense of this data game data mining. Our efforts include crowd-sourcing platform game level preferences, and using this to study which micro-structures in the game levels best predict certain player responses. We have also found out that we can predict people's life motives from how they play Minecraft, and detect sexual predators from game chat logs. Another question that we have investigated with data mining is how game mechanics create meaning. Of course, much of the behavior imitation work above could be seen as game data mining as well. You could perhaps take this further if you allow the game to select what the player plays so as to learn more about the player.
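As a small, self-contained illustration of the kind of analysis involved, the sketch below checks which of a few hypothetical level features correlates most strongly with a binary "preferred" label, using a hand-rolled Pearson correlation. The features and the numbers are made up; they are not data or results from the studies mentioned above.

```python
import math

# Made-up example rows: level micro-structure counts and a binary preference label.
LEVELS = [
    {"gaps": 3, "enemies": 5, "powerups": 1, "preferred": 1},
    {"gaps": 1, "enemies": 2, "powerups": 0, "preferred": 0},
    {"gaps": 4, "enemies": 6, "powerups": 2, "preferred": 1},
    {"gaps": 0, "enemies": 1, "powerups": 1, "preferred": 0},
    {"gaps": 2, "enemies": 7, "powerups": 0, "preferred": 1},
    {"gaps": 1, "enemies": 3, "powerups": 2, "preferred": 0},
]

def pearson(xs, ys):
    # Pearson correlation; with a binary label this is the point-biserial correlation.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

if __name__ == "__main__":
    labels = [row["preferred"] for row in LEVELS]
    for feature in ("gaps", "enemies", "powerups"):
        values = [row[feature] for row in LEVELS]
        print(f"{feature}: correlation with preference = {pearson(values, labels):.2f}")
```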
Music and games
There are many similarities between music and games. For example, games are often experienced as quasi-linear structures with variations in intensity and "mood"; music is also often used to accompany and heighten the emotional expression of a game. We have worked on affect-expressing music generation for games and on bidirectional communication between games and music, so that playing becomes composing and composing becomes level designing. Given all the work that has been done in music generation, it seems reasonable that some of the methods and representations used there can be used for game content generation; here, we have investigated using functional scaffolding for level generation.
Surveying and organizing the research field
As I've been involved in the field of artificial and computational intelligence in games since it was just a baby (or at least just a symposium), I've had a chance to get some perspective on what we are doing, why and perhaps where we should be going. Our Panorama of CI/AI in games is an attempt to give a high-level overview of all the different research directions within the field and how they can inform and benefit each other. In some ways, it is a much longer (oh no!) version of this blog post with more pretentious language (really?) and also talks about other people's work. You should read it. We have also written surveys about computational creativity and games and neuroevolution in games. On top of that we recently organized a Dagstuhl seminar on the future of the field.
"Stuff"
There are so many interesting things to do; ars longa, vita brevis. So when a nice idea comes by, it's always a good idea to try to implement it, run some experiments and turn it into a paper, even though it might not fit perfectly into the current research direction that you've told yourself you're pursuing. Some of my "other" recent papers which I still consider very interesting deal with community structure detection in complex networks and geometric differential evolution. Of particular note is our DeLeNoX system for computational creativity, which I think is really cool and should be used for... something.
Finally, just a note that we are not done yet. We don't have AI that can play any game, or that can design a large variety of good, novel and meaningful games, so the job is not done. And when the job is done, that very likely means we'll have solved general AI and there will be no more jobs to do at all, for anyone. But until then: more work to do, come join us.