Welcome to the ICML 2017 workshop: Video Games and Machine Learning (room C4.6)

Good benchmarks are necessary for developing artificial intelligence. Recently, there has been a growing movement to use video games as machine learning benchmarks [1,2,3,4], as well as growing interest from the video-game community in applications of machine learning. While games have long been used for AI research, only recently have modern machine learning methods been applied to video games.

This workshop focuses on complex games that pose interesting and hard challenges for machine learning. Going beyond the simple toy problems of the past, and beyond games that can easily be solved with search, we focus on games where learning is likely to be necessary to play well. This includes strategy games such as StarCraft [5,6,7], open-world games such as Minecraft [8,9,10], first-person shooters such as Doom [11,12], as well as hard and unsolved 2D games such as Ms. Pac-Man and Montezuma's Revenge [13,14,15]. While most of the challenges lie in game playing, there are also interesting machine learning challenges in modeling and content generation [16]. This workshop aims to bring together all researchers at ICML who want to use video games as a benchmark.

Invited Speakers

Call for Papers

We invite contributions on any topic that brings together video games and machine learning. We welcome contributions of up to four (4) pages of body text, with unlimited pages for references and appendices. Reviewers may base their decisions on the four pages of body text alone. Submissions should be anonymized for blind review.

All accepted papers will be presented as posters during dedicated poster sessions; oral slots are reserved for submissions with (in order of priority): interactive demos, demos, and videos.

Organization

Workshop on the ICML website
Date: August 10th, 2017
Organisers:

Schedule

The workshop starts at 9:00 AM on August 10th in room C4.6.

9:00 AM introduction
9:20 AM Yuandong Tian - AI in Games: Achievements and Challenges
10:00 AM spotlight 1: Jakob Foerster, Gregory Farquhar, Triantafyllos Afouras, Nantas Nardelli and Shimon Whiteson - Counterfactual Multi-Agent Policy Gradients
10:10 AM spotlight 2: Simón Algorta and Özgür Şimşek - The Game of Tetris in Machine Learning
10:20 AM Marc Bellemare - Modern Reinforcement Learning and the Atari 2600
11:00 AM coffee break and posters
11:30 AM Magnus Nordin - Machine Learning for Game Development
12:10 PM spotlight 3: Harm van Seijen, Mehdi Fatemi, Joshua Romoff and Romain Laroche - Achieving Above-Human Performance on Ms. Pac-Man by Reward Decomposition
12:20 PM spotlight 4: Christopher Beckham and Christopher Pal - A step towards procedural terrain generation with GANs
12:30 PM lunch break
1:30 PM Katja Hofmann - Challenges in Collaborative Game AI
2:10 PM Jacob Repp - Machine Learning with StarCraft II
2:50 PM coffee break and posters
3:30 PM Max Jaderberg - Reinforcement Learning in 3D Game Environments
4:10 PM Honglak Lee - Deep Reinforcement Learning with Minecraft
4:50 PM break and panel set-up
5:00 PM panel with all invited speakers
6:00 PM end

References:
[1] Greg Brockman, Catherine Olsson, Alex Ray, et al., "OpenAI Universe" (2016).
[2] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, Julian Schrittwieser, Keith Anderson, Sarah York, Max Cant, Adam Cain, Adrian Bolton, Stephen Gaffney, Helen King, Demis Hassabis, Shane Legg, Stig Petersen, "DeepMind Lab", arXiv:1612.03801 (2016).
[3] Gabriel Synnaeve, Nantas Nardelli, Alex Auvolat, Soumith Chintala, Timothée Lacroix, Zeming Lin, Florian Richoux, Nicolas Usunier, "TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games", arXiv:1611.00625 (2016), Github.
[4] Why video games are essential for inventing artificial intelligence?
[5] Santiago Ontanon, Gabriel Synnaeve, Alberto Uriarte, Florian Richoux, David Churchill, Mike Preuss, "A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft", IEEE Transactions on Computational Intelligence and AI in Games 5.4 (2013): 293-311.
[6] StarCraft AI Competition @ AIIDE 2016
[7] Nicolas Usunier, Gabriel Synnaeve, Zeming Lin, Soumith Chintala, "Episodic Exploration for Deep Deterministic Policies: An Application to StarCraft Micromanagement Tasks", ICLR (2017).
[8] Junhyuk Oh, Valliappa Chockalingam, Satinder Singh, and Honglak Lee, "Control of Memory, Active Perception, and Action in Minecraft", ICML (2016).
[9] Chen Tessler, Shahar Givony, Tom Zahavy, Daniel J. Mankowitz, Shie Mannor, "A Deep Hierarchical Approach to Lifelong Learning in Minecraft", arXiv preprint arXiv:1604.07255 (2016).
[10] Matthew Johnson, Katja Hofmann, Tim Hutton, David Bignell, "The Malmo Platform for Artificial Intelligence Experimentation", IJCAI (2016).
[11] Visual Doom AI Competition @ CIG 2016
[12] Volodymyr Mnih, Adrià Puigdomènech Badia, Mehdi Mirza, Alex Graves, Tim Harley, Timothy P. Lillicrap, David Silver, Koray Kavukcuoglu, "Asynchronous Methods for Deep Reinforcement Learning", arXiv preprint arXiv:1602.01783 (2016).
[13] Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, Joshua B. Tenenbaum, "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation", arXiv preprint arXiv:1604.06057 (2016).
[14] Marc G. Bellemare, Sriram Srinivasan, Georg Ostrovski, Tom Schaul, David Saxton, Remi Munos, "Unifying Count-Based Exploration and Intrinsic Motivation", arXiv preprint arXiv:1606.01868 (2016).
[15] Diego Perez-Liebana, Spyridon Samothrakis, Julian Togelius, Tom Schaul, Simon Lucas, "General Video Game AI: Competition, Challenges and Opportunities", AAAI (2016).
[16] Julian Togelius, Georgios N. Yannakakis, Kenneth O. Stanley and Cameron Browne, "Search-based Procedural Content Generation: a Taxonomy and Survey", IEEE TCIAIG (2011).