
Harnessing Multi-Agent Systems for Task Execution

Multi-agent systems expand the overall capabilities of generative AI applications. By enabling multiple generative AI agents to work together, perform separate tasks, and leverage different tools, these systems make the identification and execution of tasks more robust and automatic. By distributing complex tasks among specialized agents, multi-agent systems address the coordination and communication challenges that arise when working with multiple models. Learn how AWS used open source tools and native AWS AI services to build an “Idea to Game Code” multi-agent system. You’ll see a demo of the system hosted on AWS and dive into how agents work, how they work together, and tips for getting started building your own agents.
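To make the pattern above concrete, here is a minimal Python sketch of a coordinator routing sub-tasks to specialized agents. It is not the AWS “Idea to Game Code” system: the `call_llm` placeholder, the agent roles, and the prompts are illustrative assumptions only.

```python
# Minimal multi-agent sketch: a coordinator routes sub-tasks to specialized agents.
# `call_llm` is a hypothetical placeholder for whatever model API you use;
# the roles and prompts are illustrative only, not the AWS demo system.

from dataclasses import dataclass

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; swap in your provider's SDK here."""
    raise NotImplementedError

@dataclass
class Agent:
    name: str
    system_prompt: str

    def run(self, task: str) -> str:
        # Each agent prepends its own role instructions to the shared task.
        return call_llm(f"{self.system_prompt}\n\nTask: {task}")

designer = Agent("designer", "You turn a one-line game idea into a concise design brief.")
coder = Agent("coder", "You turn a design brief into a code skeleton with TODOs.")

def idea_to_code(idea: str) -> str:
    # Coordinator: run the agents in sequence, passing one agent's output to the
    # next. Real systems add tool use, review loops, and error handling.
    brief = designer.run(idea)
    return coder.run(brief)
```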

Takeaways:

  • An understanding of how generative AI agents work and how they can work together to form a multi-agent system.
  • Tips on how to get started building your own agents.
  • Considerations when building agentic systems.

The AI Settlement Generation Challenge in Minecraft

Description: What are the challenges in writing an AI that can generate an interesting settlement adapted to an unseen Minecraft map? What AI techniques work well here? These are just some of the questions the GDMC AI settlement generation challenge set out to answer when it was founded 7 years ago. Adaptive and holistic procedural content generation in games still has many open challenges, and in this talk we will take a look at selected ideas and AI techniques used to tackle them. I will show a range of colorful and amazing settlement generators and discuss what worked and what did not. If you feel after the talk that you could do better, I have great news for you: we will be back in 2025 for round 8.

Takeaways:

  • Learning about the GDMC competition – a fun AI programming competition.
  • The use of AI competitions to drive research and generate new findings.
  • A comparison of different AI approaches to the same problem.
  • A better understanding of the problems of adaptive PCG.

AgentMerge: Enhancing Battlefield Automated Issue Management with LLMs

Description: The Battlefield QV department manages an automated issue workflow that handles reports coming from diverse data sources and entities, such as error APIs and automation systems. An important part of this workflow is the interaction with the issue tracker Jira, where tickets are created automatically using data retrieved from the reports. The process is not fully automated: some parts still rely on hardcoded rules that may change over time, or on manual intervention that becomes time consuming when ticket volume peaks. The talk explores the potential of Large Language Models (LLMs) in automated issue management within the Battlefield franchise, with the goal of identifying duplicate issues and replacing the inefficient hardcoded rules used previously. We will demonstrate how the same LLM-based solution can be reused across all projects that use the same version of Frostbite (our engine). Furthermore, the talk discusses the challenges of integrating a research project into an established game development workflow, and the best practices for overcoming them.
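The talk does not spell out its implementation; one common way to frame duplicate-issue detection with language models (not necessarily the AgentMerge approach) is to embed ticket summaries and compare them by cosine similarity. The sketch below assumes the sentence-transformers library, an off-the-shelf embedding model, and an arbitrary similarity threshold.

```python
# One plausible framing of duplicate-issue detection (not necessarily AgentMerge's):
# embed ticket summaries with a language model and flag near-duplicates by cosine
# similarity. The model name and threshold below are illustrative assumptions.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

def find_duplicates(new_summary: str, open_summaries: list[str], threshold: float = 0.85):
    """Return (index, score) pairs of open tickets that look like duplicates."""
    new_emb = model.encode(new_summary, convert_to_tensor=True)
    open_embs = model.encode(open_summaries, convert_to_tensor=True)
    scores = util.cos_sim(new_emb, open_embs)[0]   # similarity to each open ticket
    return [(i, float(s)) for i, s in enumerate(scores) if float(s) >= threshold]

# Example: candidates = find_duplicates("Crash on loading map X", existing_ticket_titles)
```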

Takeaways:

  • How to leverage the potential of LLMs for specific use cases in game development, particularly in QA.
  • Insights into real-world QA improvements achieved through the use of LLMs compared to traditional approaches, along with the challenges in measuring these improvements.
  • An understanding of the challenges and opportunities associated with using machine learning in game production, and how to effectively combine research and development efforts.

RL Agent Training is Property Based Testing

Description: Training RL agents in games requires collecting lots of data from a wide range of game states and trajectories. As games grow in complexity, it is easy for unintended functionality to affect the distribution of data an RL agent is trained on. The agent’s behaviour may change as a result, which is ultimately a signal that some property of the game does not match intentions. This matches the criteria for a property-based test and is an inspiration for future game testing mechanisms.
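For readers new to the idea, here is a minimal property-based test written with the hypothesis library; the game rule (`apply_damage`) is a deliberately simplified, hypothetical example rather than anything from the talk. The connection to RL is that training rollouts effectively generate a large, varied set of game states against which such a property can be checked.

```python
# A minimal property-based test with the `hypothesis` library. The game logic
# (`apply_damage`) is a simplified, hypothetical example, not from the talk:
# the property "health never goes below zero" must hold for *any* generated input,
# much as an RL agent's rollouts probe many game states during training.

from hypothesis import given, strategies as st

def apply_damage(health: int, damage: int) -> int:
    """Toy game rule: damage reduces health, clamped at zero."""
    return max(0, health - damage)

@given(health=st.integers(min_value=0, max_value=1000),
       damage=st.integers(min_value=0, max_value=1000))
def test_health_never_negative(health: int, damage: int):
    # The property under test: no combination of inputs yields negative health.
    assert apply_damage(health, damage) >= 0
```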

Takeaways:

  • An intuition for property-based testing.
  • A concrete example of how RL training helped identify a bug.
  • Inspiration for how RL training can be further incorporated into property-based tests.

Empowering Game Designers with Automatic Playtesting

Description: The complexity of modern tabletop games has been steadily increasing since the mid-1990s. This results in more time spent by designers developing (2-3 years on average from idea to commercialisation) and playtesting (6-24 months) a game, raising the barrier to market entry for independent designers or small companies that do not have enough resources at their disposal. The effect is also felt by players, who find it harder to play such games due to the steep learning curve. This talk will explore how Tabletop R&D, a spin-out company from Queen Mary University of London, aims to address these issues and democratize the tabletop games market by providing game designers with automatic playtesting tools. Using the latest Game AI technology and digital twins of tabletop games, we speed up development times, reduce costs, and increase the efficiency of an otherwise traditionally lengthy analogue process.
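As a rough illustration of the idea (Tabletop R&D’s actual toolchain is not shown here), the toy sketch below simulates many games of a made-up dice race with simple agents and aggregates two common gameplay metrics: game length and first-player win rate. All names and the game itself are illustrative.

```python
# Toy sketch of automatic playtesting: simulate many games with simple AI agents
# and aggregate metrics (game length, win balance). The "dice race" game and all
# names are illustrative; real tools use digital twins of actual tabletop games.

import random
from statistics import mean

def play_dice_race(goal: int = 20) -> tuple[int, int]:
    """Two random agents race to `goal`; returns (winner, number_of_turns)."""
    positions, turns = [0, 0], 0
    while True:
        player = turns % 2
        positions[player] += random.randint(1, 6)   # the "agent" just rolls
        turns += 1
        if positions[player] >= goal:
            return player, turns

def playtest(n_games: int = 10_000) -> dict:
    results = [play_dice_race() for _ in range(n_games)]
    wins = [w for w, _ in results]
    lengths = [t for _, t in results]
    return {
        "first_player_win_rate": wins.count(0) / n_games,  # balance metric
        "avg_game_length_turns": mean(lengths),            # pacing metric
    }

print(playtest())
```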

Takeaways:

  • How to use automatic playtesting with AI agents.
  • Learn about a diverse set of metrics for evaluating the gameplay experience.
  • The value of exploring the design space of your game.

Analytic Geometry Is Your Friend

Computing the position of objects in a game is one of the most frequent tasks to be carried out: where should a movement stop (to avoid a collision)? Where should an agent hide, take cover, or start a jump? Where do two zones intersect (which objects will be affected by the zone of effect of my action)?

The goal of this presentation is to suggest that solving the analytic geometry equations arising from the various game situations is preferable to using the environment projection queries provided by your favourite game engine: first write the equations to which the coordinates of the game objects must conform, then solve these equations to obtain the numerical values of those coordinates; and don’t forget to get help from math software. These two steps will be illustrated in the case of projecting a grid onto a navigation surface in Unreal Engine, with the help of Mathematica.
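As a small stand-in for that two-step workflow (the talk itself uses Mathematica and Unreal Engine), the SymPy sketch below finds where a vertical line through a grid point meets a sloped ground plane by solving the plane equation symbolically once and reusing the closed form, instead of sampling and filtering candidate points. The plane coefficients are illustrative.

```python
# Small stand-in for the "write the equations, then solve them" workflow
# (the talk itself uses Mathematica and Unreal Engine). We find where a vertical
# line through a grid point (gx, gy) meets the plane a*x + b*y + c*z + d = 0,
# solving symbolically once and reusing the closed form for every grid cell.

import sympy as sp

gx, gy, a, b, c, d, z = sp.symbols("gx gy a b c d z")

# Equation the projected point must satisfy: it lies on the ground plane.
plane_eq = sp.Eq(a * gx + b * gy + c * z + d, 0)

# Solve once for z: the exact height of the surface under (gx, gy).
z_exact = sp.solve(plane_eq, z)[0]        # -> (-a*gx - b*gy - d)/c

# Reuse the closed form numerically for a whole grid (illustrative plane values).
height = sp.lambdify((gx, gy), z_exact.subs({a: 0.1, b: 0.0, c: 1.0, d: -2.0}))
grid_heights = [[height(x, y) for x in range(5)] for y in range(5)]
```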

Takeaways: This presentation is intended for AI game developers facing one of the following three situations:

  • Too many computing resources are spent generating sample locations and then filtering them
    • Write and solve the equations modelling the geometric problem
  • Too many erroneous or approximate locations require post-processing to be usable
    • Compute values from the exact solutions of your geometric problems
  • Bulletproof tests of locations are needed
    • Compare locations with the exact solutions of your geometric equations