Abstract
In this thesis, we have explored and advanced automated testing techniques for modern 3D computer games, focusing on overcoming the limitations of traditional testing methods. Testing in the game industry is a critical yet costly and labor-intensive process, made more complex by the dynamic, interactive, and experiential nature of games. These challenges demand more than conventional testing approaches can provide.
To address this, we explored agent-based artificial intelligence using the iv4XR framework. This framework supports the creation of autonomous agents that perceive, reason, and act within game environments in a goal-driven and adaptive manner. These agents continuously monitor environmental changes, maintain internal representations (beliefs) of game states, define testing objectives (desires), and select actions (intentions) to achieve those objectives. Operating through perception-action cycles, they dynamically respond to evolving scenarios, enabling more accurate, flexible, and player-like testing.
By simulating intelligent and context-aware gameplay, these agents can navigate and assess game environments with a high level of adaptability. This significantly improves the robustness of automated tests, allowing them to handle frequent and unpredictable changes common in game development. Our extensive experiments, including mutation testing, demonstrated that these adaptive agents outperform traditional automated approaches in robustness and fault detection (Chapter 4).
To enhance practicality, we combined this agent-based approach with established testing methodologies such as model-based and scenario-based testing. We proposed an online method that constructs behavioral models of the game dynamically, removing the need for manually defined models: agents autonomously build and refine these models while interacting with the game under test, and by leveraging this real-time learning they can solve complex testing tasks efficiently. This approach was validated across diverse game scenarios, confirming its scalability and applicability (Chapter 5).
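As a rough illustration of the idea (not the algorithm from Chapter 5), the sketch below records every observed state transition into a simple model while the agent explores; the class and method names are hypothetical.

```java
// Illustrative sketch of on-the-fly model construction: while testing, the agent
// records each observed (state, action, nextState) transition into a growing model,
// which can then steer exploration toward untried actions.
// This is a simplified illustration of the idea, not the thesis's algorithm.
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class OnlineBehaviorModel {
    // state -> action -> set of successor states observed so far
    private final Map<String, Map<String, Set<String>>> transitions = new HashMap<>();

    /** Record one observed transition of the game under test. */
    void record(String state, String action, String nextState) {
        transitions.computeIfAbsent(state, s -> new HashMap<>())
                   .computeIfAbsent(action, a -> new HashSet<>())
                   .add(nextState);
    }

    /** Has this action ever been tried in this state? Used to steer exploration. */
    boolean isUnexplored(String state, String action) {
        return !transitions.getOrDefault(state, Map.of()).containsKey(action);
    }

    /** All successor states seen so far for a given state and action. */
    Set<String> successors(String state, String action) {
        return transitions.getOrDefault(state, Map.of())
                          .getOrDefault(action, Set.of());
    }
}
```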
We also investigated specification-based game testing by developing a method that uses formal specifications written in Linear Temporal Logic (LTL) to automatically generate test cases. LTL is naturally suited for expressing expected behaviors over time and offers a more expressive alternative to pre/post-conditions or manual scripting. While existing techniques for generating test sequences from LTL do not scale to games, largely because the models they rely on are not available, our approach overcomes this limitation. Experiments show that it can effectively produce test cases for various LTL-defined requirements (Chapter 6).
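For illustration (the predicate names here are invented, not drawn from the thesis), a requirement such as "whenever the player drinks a health potion, their health eventually increases" can be expressed in LTL as

$$\square\,\big(\mathit{drinkHealthPotion} \rightarrow \lozenge\,\mathit{healthIncreased}\big)$$

where $\square$ ("always") and $\lozenge$ ("eventually") are the standard temporal operators; a generated test case is then a concrete play-through on which the game can be checked against this property.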
Finally, we addressed the complexities of multiplayer and cooperative games by introducing a cooperative multi-agent testing approach. In this setup, multiple agents work together, share information, and coordinate their actions to achieve broader and more efficient test coverage than is possible with single-agent systems. Our experiments demonstrated clear benefits, showing enhanced testing effectiveness, scalability, and fault detection through coordinated agent behavior (Chapter 7).
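As a simplified illustration of the coordination idea (the shared-blackboard design and all names here are assumptions for illustration, not the implementation from Chapter 7), cooperating agents might divide test targets through a shared, thread-safe record of claimed work:

```java
// Illustrative sketch: cooperating test agents coordinate via a shared "blackboard"
// so that each test target (e.g. a room, button, or quest) is covered by exactly one agent.
// Names and design are illustrative assumptions, not the implementation from Chapter 7.
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

class SharedBlackboard {
    private final Set<String> claimedTargets = ConcurrentHashMap.newKeySet();

    /** Atomically claim a target; returns false if another agent already took it. */
    boolean claim(String target) {
        return claimedTargets.add(target);
    }
}

class CooperativeTester implements Runnable {
    private final String name;
    private final List<String> targets;     // test targets this agent can reach
    private final SharedBlackboard board;   // shared among all agents

    CooperativeTester(String name, List<String> targets, SharedBlackboard board) {
        this.name = name;
        this.targets = targets;
        this.board = board;
    }

    @Override
    public void run() {
        for (String target : targets) {
            if (board.claim(target)) {
                // In a real setup this would drive the agent's testing goals;
                // here we only report which agent covers which target.
                System.out.println(name + " tests " + target);
            }
        }
    }
}
```

Running two such agents on overlapping target lists splits the coverage between them instead of duplicating it, which is the basic benefit the multi-agent setup exploits.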
Collectively, the research presented in this thesis significantly advances the state of the art in automated game testing. By improving robustness, adaptability, and efficiency, our work provides a solid foundation for future research and offers practical value for both software testing and game development communities.
Original language | English
---|---
Qualification | Doctor of Philosophy
Awarding Institution |
Supervisors/Advisors |
Award date | 4 Jun 2025
Publisher |
DOIs |
Publication status | Published - 4 Jun 2025
Keywords
- Automated testing of computer games
- robust automated testing
- automated game testing
- model-based game testing
- agent-based testing
- agent-based game testing
- scenario-based game testing
- multi-agent testing