Fixing Time.deltaTime in Unity 2020.2 for smoother gameplay: What did it take?

https://blogs.unity3d.com/wp-content/uploads/2020/09/2020.1-jitter.mp4

Unity 2020.2 beta introduces a fix for an issue that afflicts many development platforms: inconsistent Time.deltaTime values, which lead to jerky, stuttering movement. Read this blog post to understand what was going on and how the upcoming version of Unity helps you achieve smoother gameplay.

Since the dawn of gaming, achieving framerate-independent movement in video games has meant taking the frame delta time into account:

void Update()
{
    transform.position += m_Velocity * Time.deltaTime;
}

This achieves the desired effect of the object moving at a constant average velocity, regardless of the frame rate the game runs at. In theory, it should also move the object at a steady pace if your frame rate is rock solid. In practice, the picture is quite different. If you looked at the actual reported Time.deltaTime values, you might have seen something like this:

6.854 ms
7.423 ms
6.691 ms
6.707 ms
7.045 ms
7.346 ms
6.513 ms

This is an issue that affects many game engines, including Unity – and we’re thankful to our users for bringing it to our attention. Happily, Unity 2020.2 beta begins to address it.

So why does this happen? Why, when the frame rate is locked to a constant 144 fps, isn’t Time.deltaTime equal to 1⁄144 seconds (~6.94 ms) every time? In this blog post, I’ll take you on the journey of investigating and ultimately fixing this phenomenon.
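As a quick sanity check (a sketch in C, not from the original post): the seven deltas listed above do average out to the ideal 1⁄144-second frame time, which is why the average velocity still comes out right even though individual frames jitter.

```c
#include <stddef.h>

/* Mean of a set of Time.deltaTime samples, in milliseconds. */
double mean_ms(const double *samples, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += samples[i];
    return sum / (double)n;
}
```

Feeding in the seven samples above gives ~6.940 ms, within a hundredth of a millisecond of 1000⁄144 ≈ 6.944 ms.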

What is delta time and why is it important?

In layman’s terms, delta time is the amount of time your last frame took to complete. It sounds simple, but it’s not as intuitive as you might think. In most game development books you’ll find this canonical definition of a game loop:

while (true)
{
    ProcessInput();
    Update();
    Render();
}

With a game loop like this, it’s easy to calculate delta time:

var time = GetTime();
while (true)
{
    var lastTime = time;
    time = GetTime();
    var deltaTime = time - lastTime;
    ProcessInput();
    Update(deltaTime);
    Render(deltaTime);
}

While this model is simple and easy to understand, it’s highly inadequate for modern game engines. To achieve high performance, engines nowadays use a technique called “pipelining,” which allows an engine to work on more than one frame at any given time.

Compare this:

[Image: a sequential game loop, with each frame’s stages running back to back]

To this:

[Image: a pipelined game loop, with stages of consecutive frames running in parallel]

In both of these cases, individual parts of the game loop take the same amount of time, but the second case executes them in parallel, which allows it to push out more than twice as many frames in the same amount of time. Pipelining the engine changes the frame time from being equal to the sum of all pipeline stages to being equal to the longest one.
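The arithmetic in that last sentence can be sketched directly (in C, with made-up stage durations): a sequential loop’s frame time is the sum of its stages, while a pipelined loop’s steady-state frame time is just its longest stage.

```c
#include <stddef.h>

/* Frame time when stages run back to back: the sum of all stages. */
double sequential_frame_ms(const double *stages, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += stages[i];
    return sum;
}

/* Steady-state frame time when stages overlap across frames:
   throughput is limited by the slowest stage alone. */
double pipelined_frame_ms(const double *stages, size_t n)
{
    double max = 0.0;
    for (size_t i = 0; i < n; i++)
        if (stages[i] > max)
            max = stages[i];
    return max;
}
```

With hypothetical stages of 4 ms, 3 ms and 3 ms, the sequential loop needs 10 ms per frame while the pipeline pushes out a frame every 4 ms — more than twice the frame rate, matching the claim above.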

However, even that is a simplification of what actually happens every frame in the engine:

Each pipeline stage takes a different amount of time every frame. Perhaps this frame has more objects on the screen than the last, which would make rendering take longer. Or perhaps the player rolled their face on the keyboard, which made input processing take longer. Since different pipeline stages take different amounts of time, we need to artificially halt the faster ones so they don’t get ahead too much. Most commonly, this is implemented by waiting until some previous frame is flipped to the front buffer (also known as the screen buffer). If VSync is enabled, this additionally synchronizes to the start of the display’s VBLANK period. I’ll touch more on this later.

With that knowledge in mind, let’s take a look at a…

