Companion players for immersive computer games: How to learn behavioural responses from visual effects
Royal Holloway, University of London
National Productivity Investment Fund (NPIF)
The proposed research will explore how AI technological innovation can be used to develop adaptive companions for immersive computer games. Given the visual aspects of a game, such as characters, objects, surfaces and textures, companions will be developed automatically using reinforcement learning and other AI and machine/deep learning techniques to tackle hard challenges in gaming. The outcomes of this exploration will be fed back into the design process in the form of visual interactions and learned behavioural responses, improving the overall immersive experience for the user(s).
In particular, the project will address how a computer program can learn from people, both by intelligent imitation and by learning intelligent responses or companion actions. A promising recent line of research is the discovery of a mathematical correspondence between Generative Adversarial Networks (GANs) – a recently developed approach to learning to generate new examples of data similar to those in an existing corpus – and Inverse Reinforcement Learning (IRL), a technique for learning to imitate the behaviour of a ‘teacher’ by inferring the teacher’s goals and costs from its behaviour. This line of work will allow artists and programmers to find new ways for agents to interact with the environment/world they are creating, and even to automate some of the character animation that currently takes a long time to develop.
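The GAN–IRL correspondence described above can be illustrated with a minimal sketch: a GAN-style discriminator is trained to tell a teacher's (state, action) pairs apart from an agent's, and its log-odds then serve as a learned reward, as in adversarial imitation learning. Everything here is a toy assumption for illustration – the one-dimensional state, the feature map `features`, and the random "agent" are not part of the proposal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy teacher: in positive states it takes action 1, otherwise action 0.
def expert_policy(s):
    return 1 if s > 0 else 0

# Hypothetical feature map for a (state, action) pair.
def features(s, a):
    return np.array([1.0, s, float(a), s * a])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(s, a) = sigmoid(w . phi(s, a)): the estimated probability
# that the pair came from the teacher. Trained exactly like a GAN
# discriminator, with teacher pairs labelled 1 and agent pairs labelled 0.
w = np.zeros(4)
lr = 0.1
for step in range(2000):
    s = rng.normal()
    samples = (
        (expert_policy(s), 1.0),        # teacher sample
        (int(rng.integers(0, 2)), 0.0), # random-agent sample
    )
    for a, label in samples:
        phi = features(s, a)
        p = sigmoid(w @ phi)
        w += lr * (label - p) * phi  # gradient ascent on the log-likelihood

# The IRL view of the same object: the discriminator's log-odds act as a
# learned reward that an RL companion could then be trained against.
def learned_reward(s, a):
    return w @ features(s, a)
```

After training, `learned_reward` should score the teacher's choice above the alternative in each state (e.g. `learned_reward(1.0, 1) > learned_reward(1.0, 0)`), which is the sense in which the adversarial game recovers the teacher's goals from behaviour alone.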