How Metric Validation can help you fine-tune your game

Over the past year, Unity Game Simulation has enabled developers to balance their games during development by running multiple playthroughs in parallel in the cloud. Today we are excited to share the next step in automating aspects of the balancing workflow by releasing Metric Validation, a precursor to our upcoming Optimization feature. In this blog post, we will review the balancing framework within Unity Game Simulation, introduce the new Metric Validation feature, and share a case study with our partner Furyion where our upcoming Optimization feature enabled them to balance their game 10x faster.

Try Game Simulation for free today!

When launching a game, one of the best ways to delight your players is to include them in the design process, through soft launches, mass playtesting, and early betas. 

These techniques work, but as games become more complex over time, asking players to iteratively validate all the different paths and strategies within a game can become tedious, costly, or just impossible. The Unity Game Simulation team believes that a solid framework for measuring your game’s balance with automated playthroughs can drastically reduce the player data required from playtesting. 

A framework for game balance: three key elements

The framework we’ve been developing focuses on three key elements of a game – metrics, test cases, and parameters. Metrics are the results of a playthrough that are important to the balance or health of the game. Test cases are the different scenarios that you’d like to measure within a game. Finally, parameters are the configurations that can change your game, thus directly impacting your metrics.

Let’s consider balancing a simple racing game with many different vehicles. Our goal when balancing the game is to provide fun and distinct experiences without letting a player’s vehicle choice confer too much of an advantage. We can measure this advantage by having similarly skilled bots or players race each other. The difference in completion time becomes a metric for evaluating a vehicle’s advantage. With this metric, we want to evaluate all of our different vehicles. To do this, we can set up a series of test cases where every combination of two vehicles competes against each other. When our test cases don’t meet expectations, we need a way to change the performance of those vehicles. By creating parameters that affect each vehicle’s speed, acceleration, or special power-ups, we can alter the performance of the vehicles until we get a better result. Taken together, these three elements let us evaluate and change our game’s balance to meet design goals.
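To make those three elements concrete, here is a minimal sketch in Python of how the racing example might be laid out. This is purely illustrative: the vehicle names, attribute values, and helper function are invented for this post and are not part of the Unity Game Simulation SDK.

```python
from itertools import combinations

# Parameters: per-vehicle configurations that change how the game plays.
# (Vehicle names and values are invented for illustration.)
vehicle_parameters = {
    "speedster": {"top_speed": 220, "acceleration": 8.5, "power_up": "boost"},
    "tank":      {"top_speed": 180, "acceleration": 6.0, "power_up": "shield"},
    "drifter":   {"top_speed": 200, "acceleration": 7.5, "power_up": "oil_slick"},
}

# Test cases: every combination of two vehicles racing each other.
test_cases = list(combinations(vehicle_parameters, 2))
# -> [('speedster', 'tank'), ('speedster', 'drifter'), ('tank', 'drifter')]

# Metric: the difference in completion time between the two vehicles,
# as reported by a playthrough with similarly skilled bots.
def completion_time_gap(time_a: float, time_b: float) -> float:
    return abs(time_a - time_b)
```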

Unity Game Simulation empowers these kinds of experiments by letting you list all of your test cases and parameters in a single experiment. A grid search is then performed across every parameter combination for every test case, and an automated playthrough is run for each resulting configuration. Finally, the test case and parameters are returned alongside each metric, so every result can be connected with the configuration that produced it.
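As a rough illustration of that grid search (a sketch of the idea, not the service’s actual implementation), expanding every parameter combination for every test case could look like the following; `run_playthrough` is a hypothetical stand-in for whatever launches an automated playthrough in the cloud.

```python
from itertools import combinations, product

# Hypothetical test cases and parameter ranges (names and values invented).
vehicles = ["speedster", "tank", "drifter"]
test_cases = list(combinations(vehicles, 2))   # every pairing of two vehicles

parameter_grid = {
    "top_speed": [180, 200, 220],
    "acceleration": [6.0, 7.5, 8.5],
}

def run_playthrough(test_case, config):
    """Placeholder for running one automated playthrough and returning its
    metric (here, the completion-time gap in seconds)."""
    return 0.0  # stand-in value

results = []
keys = list(parameter_grid)
for test_case in test_cases:
    for values in product(*(parameter_grid[k] for k in keys)):
        config = dict(zip(keys, values))
        metric = run_playthrough(test_case, config)
        # The test case and parameters travel with the metric, so every
        # result can be traced back to the configuration that produced it.
        results.append({"test_case": test_case,
                        "parameters": config,
                        "metric": metric})
```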

Taking action with your metrics

The design goal above can be reduced to a simple expression: the difference in completion time between any pair of vehicles should be between 0 and 20 seconds. Our new feature, Metric Validation, enables you to define these expressions and automatically evaluate them. The metric generated from an automated playthrough is scored with a boolean system: it evaluates to 1 if the metric satisfies the expression and 0 if it does not.
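To illustrate how such a pass/fail score could be computed, here is a small sketch. The function name and default thresholds simply mirror the 0-to-20-second design goal above; they are illustrative and not the feature’s actual API.

```python
def validate_completion_time_gap(gap_seconds: float,
                                 lower: float = 0.0,
                                 upper: float = 20.0) -> int:
    """Score a playthrough's metric with a boolean system:
    1 if the gap falls inside the acceptable range, 0 otherwise."""
    return 1 if lower <= gap_seconds <= upper else 0

# Example: a 12-second gap passes, a 35-second gap fails.
assert validate_completion_time_gap(12.0) == 1
assert validate_completion_time_gap(35.0) == 0
```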
