When we covered Microsoft's plans for a new GPU analysis tool in the Fall Creators Update last week, we had some questions about how the tool would function and what information it would present. We've since spoken to a Microsoft spokesperson familiar with the situation who was willing to share some additional details on how the tool will work and what it can do.
First, last week we asked whether the tool would report the total memory requested (which is what utilities like GPU-Z show) or the amount of memory actually in use. Here's what Microsoft had to say:
The memory information displayed comes directly from the GPU video memory manager (VidMm) and represents the amount of memory currently in use (not the amount requested). Because these are exposed from VidMm, this information is accurate for any application using graphics memory, including DX9, 11, 12, OpenGL, CUDA, etc. apps.

Under the performance tab you'll find both dedicated memory usage as well as shared memory usage. Dedicated memory represents memory that is exclusively reserved for use by the GPU and is managed by VidMm. On discrete GPUs this is your VRAM. On integrated GPUs, this is the amount of system memory that is reserved for graphics. (Note that most integrated GPUs typically use shared memory because it is more efficient.) Shared memory represents system memory that can be used by the GPU. Shared memory can be used by the CPU when needed or as "video memory" for the GPU when needed.

If you look under the details tab, there is a breakdown of GPU memory by process. This number represents the total amount of memory used by that process. The sum of the memory used by all processes may be higher than the overall GPU memory because graphics memory can be shared across processes.
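For context, developers can already poll a similar dedicated-versus-shared split programmatically through DXGI, which reports per-adapter "local" (dedicated) and "non-local" (shared) memory segments. The sketch below is purely illustrative and is not how Task Manager gathers its numbers; note that QueryVideoMemoryInfo reports the calling process's own usage and OS-assigned budget, not a system-wide total.

```cpp
// Illustrative only: query local (dedicated) and non-local (shared) video
// memory usage for the calling process via DXGI.
// Build with: cl /EHsc gpumem.cpp dxgi.lib   (requires Windows 10 / dxgi1_4.h)
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory)))) return 1;

    ComPtr<IDXGIAdapter1> adapter1;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter1) != DXGI_ERROR_NOT_FOUND; ++i) {
        ComPtr<IDXGIAdapter3> adapter3;
        if (SUCCEEDED(adapter1.As(&adapter3))) {
            DXGI_QUERY_VIDEO_MEMORY_INFO local{}, nonLocal{};
            // Local segment = dedicated VRAM (or reserved system memory on iGPUs);
            // non-local segment = shared system memory usable by the GPU.
            adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
            adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

            DXGI_ADAPTER_DESC1 desc{};
            adapter1->GetDesc1(&desc);
            wprintf(L"%s\n  dedicated in use by this process: %llu MB (budget %llu MB)\n"
                    L"  shared in use by this process: %llu MB (budget %llu MB)\n",
                    desc.Description,
                    local.CurrentUsage >> 20, local.Budget >> 20,
                    nonLocal.CurrentUsage >> 20, nonLocal.Budget >> 20);
        }
        adapter1.Reset();
    }
    return 0;
}
```

Task Manager's figures come from VidMm itself and cover every process, which is exactly what per-process APIs like the one above can't show you on their own.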
The other question we asked was whether it'll be possible to set affinities or application priorities on GPU workloads. Windows, generally speaking, is far better than it once was at sharing CPU and GPU resources across multiple workloads simultaneously. But GPUs still tend to lag CPUs in this regard, especially since GPUs are architected so differently from CPUs.
Intriguingly, Microsoft isn't necessarily opposed to adding this capability at a later date. The GPU kernel already prioritizes in-focus applications automatically, giving them a larger share of resources, and the entire GPU monitoring feature was added in response to consumer requests and feedback. If there's similarly strong demand for the ability to set application priority, the company may add it in the future.
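There's no end-user knob for this today. The closest existing hook is developer-facing: a Direct3D application can hint at the relative priority of its own GPU submissions through IDXGIDevice::SetGPUThreadPriority. A minimal sketch, purely for illustration and not the per-process control discussed above:

```cpp
// Illustrative only: an app hinting the priority of its own GPU work.
// Build with: cl /EHsc gpuprio.cpp d3d11.lib
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D11Device> device;
    // Create a default hardware device on the primary adapter.
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, nullptr)))
        return 1;

    ComPtr<IDXGIDevice> dxgiDevice;
    if (FAILED(device.As(&dxgiDevice))) return 1;

    // Valid values run from -7 (lowest) to +7 (highest); 0 is the default.
    // The hint applies only to this device's own submissions.
    HRESULT hr = dxgiDevice->SetGPUThreadPriority(7);
    printf(SUCCEEDED(hr) ? "GPU thread priority hint accepted.\n"
                         : "GPU thread priority hint rejected.\n");
    return 0;
}
```

A Task Manager-style control would have to work from the outside, per process, which is a different (and harder) problem than an app adjusting itself.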
Small changes like this may not seem like major upgrades, but they're particularly useful for monitoring how the operating system is handling workloads. The current Task Manager already tells me a great deal in its Performance tab, including explicit CPU identification, thread distribution, overall clock speed, and how many processes and threads are running at any given moment. I can even see system uptime.
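Part of why Task Manager can surface all of that CPU-side data so readily is that it has long been a single Win32 call away. A minimal sketch using standard APIs (nothing specific to the new GPU tool):

```cpp
// Minimal sketch: a few of the CPU-side facts Task Manager shows,
// pulled via standard Win32 calls.  Build with: cl /EHsc cpuinfo.cpp
#include <windows.h>
#include <cstdio>

int main() {
    SYSTEM_INFO si{};
    GetNativeSystemInfo(&si);               // architecture and logical CPU count

    ULONGLONG uptimeMs = GetTickCount64();  // milliseconds since boot

    printf("Logical processors: %lu\n", si.dwNumberOfProcessors);
    printf("System uptime: %llu minutes\n", uptimeMs / 60000);
    return 0;
}
```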
For decades, the GPU wasn't afforded the same treatment, because its operation was often invisible to the end user. Even when GPUs began to add capabilities like hardware video decode for Blu-ray, roughly 10 years ago, we often discussed those options in terms of decreased CPU utilization and system power consumption. Both metrics are relevant, of course. But they also speak to a data-gathering gap between what sort of information could be polled from the GPU versus the CPU.
This new Task Manager addition won't bring GPU data-gathering options entirely up to parity with CPUs. But it seems like a worthy addition that recognizes that GPUs are increasingly important in their own right. And if the feature proves popular, Microsoft is open to updating it in the future.