PC Labs’ cutting-edge testing of graphics cards from AMD, Intel, and Nvidia encompasses leading AAA games, industry-standard synthetic tests, AI benchmarks, and content creation apps. Here’s what we use and how we do it.
Graphics cards (often called GPUs, or “graphics processing units”) are components that PC gamers and hard-core PC users know very well. In recent years, however, GPUs have come to greater prominence for their aptitude at mining cryptocurrencies and processing AI tasks. Whether you’re still chasing that crypto high, working on a complicated graphical render for work, or (more likely) looking to maximize your frames per second (fps) in the latest first-person shooter, you’ll need a graphics card.
But which one will work best for your intended purposes, and within your budget? That’s what we aim to find out at PC Labs with our slate of in-depth testing.
We test all graphics cards to gauge their performance, thermal traits, and power consumption across several key use cases—not just gaming. To keep pace with constant performance leaps, we upgrade our benchmarking regimen regularly—roughly, every two years—which generally follows new generational GPU releases.

Our 2025 Graphics Card Test Bed
We run all the graphics cards we test on the same PC Labs test-bench system to ensure consistent results from card to card. Inside this desktop PC is a stock AMD Ryzen 9 9950X processor seated on a Gigabyte X870E Aorus Master motherboard with a 360mm Cooler Master AIO for cooling. We use 32GB of Crucial memory set at 5,600MHz, and 1TB of storage as a boot drive via a Corsair MP600 Pro NVMe 4.0 SSD. (A second NVMe 4.0 SSD hosts our test software and test PC games.) A Corsair HX1500i 1,500-watt 80 Plus Platinum power supply covers our power needs.
Using this testbed, we run a wide range of tests spread across four categories:
Synthetic graphics tests (programs that simulate graphical stresses and game processes)
Content creation applications
AI benchmarks
Real-world games
With most synthetic, content-creation, and AI tests, we record program-specific scores that are mainly meaningful compared with one another. With games, we generally employ their built-in benchmark modes at a variety of prearranged settings, recording scores in frames per second (fps).
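Those fps scores ultimately derive from per-frame render times logged during each benchmark run. As a rough illustration (the function names and frame-time values below are our own hypothetical examples, not the output of any specific benchmark), the conversion from frame times to the headline numbers looks like this:

```python
# Sketch: deriving an average fps score and a "1% low" figure from
# per-frame render times, the raw data behind most benchmark results.
# The frame times below are invented illustrative values in milliseconds.

def average_fps(frame_times_ms):
    """Average fps = total frames rendered / total seconds elapsed."""
    total_seconds = sum(frame_times_ms) / 1000.0
    return len(frame_times_ms) / total_seconds

def one_percent_low_fps(frame_times_ms):
    """fps implied by the slowest 1% of frames (a common smoothness metric)."""
    worst_first = sorted(frame_times_ms, reverse=True)
    count = max(1, len(worst_first) // 100)
    slowest = worst_first[:count]
    avg_slow_ms = sum(slowest) / len(slowest)
    return 1000.0 / avg_slow_ms

# Mostly ~60fps frames (16.7ms each) with two long stutter frames:
frames = [16.7] * 98 + [33.3, 50.0]
print(round(average_fps(frames), 1))          # 58.1
print(round(one_percent_low_fps(frames), 1))  # 20.0
```

The gap between the two numbers is the point: a card can post a healthy average while occasional long frames make gameplay feel choppy, which is why many reviewers report both.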
We also record the peak GPU operating temperatures and peak power consumption readings that we observe during a subset of our content creation and gaming tests. Before we start any tests, we update to the most recent version of Windows 11 Pro and install the latest graphics drivers for the card at hand.

Synthetic Graphics Benchmarks

UL 3DMark Suite
Our synthetic benchmarks consist of several tests within 3DMark, a sprawling test suite that the developers at Underwriters Laboratories (UL) have affectionately branded “The Gamer’s Benchmark.” Although the suite has many subtests we could run, we focus on six that measure different aspects of graphical performance.
The first of these tests is 3DMark Port Royal, a simulation designed to benchmark the ray-tracing capabilities of different GPUs. Port Royal requires a minimum of 6GB of VRAM to run effectively and drivers that support DirectX 12. We also employ 3DMark Solar Bay, another ray-tracing benchmark for less demanding systems. (Solar Bay is also used to benchmark phones and tablets.)
Next, we run 3DMark Steel Nomad to focus specifically on rasterization performance. Rasterization is the process of converting 3D models into the 2D pixel representation you see on screen. Steel Nomad is designed for high-end computers. (A Steel Nomad “Light” version is also available for less demanding systems; we don’t use it for our GPU-specific benchmarks, though we do use it for testing full systems.)
Following that test, we run 3DMark Time Spy Extreme, a demanding DirectX 12 benchmark we’ve retained from our pre-2025 testing regimen. Time Spy Extreme runs its graphics test workload at 3,840 by 2,160 pixels. While it doesn’t require a 4K monitor to run (the test runs at the target resolution offscreen), it does require the tested graphics card to have at least 4GB of VRAM.
Next up is 3DMark Speed Way, a more recent ray-tracing benchmark for high-end systems. Like Port Royal, it requires a graphics card that supports DirectX 12 Ultimate and has 6GB of VRAM. And last, we have 3DMark’s Screen Optimization test. This benchmark tests the resolution upscaling technique unique to each make of graphics card: FidelityFX Super Resolution (FSR) for AMD cards, Deep Learning Super Sampling (DLSS) for Nvidia cards, and Xe Super Sampling (XeSS) for Intel Arc cards. Each vendor-specific version of the test runs the same scene twice, once with screen optimization turned off and once with it on.
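All three of these upscalers rest on the same resolution arithmetic: render fewer pixels, then scale the result up to the display's native resolution. A minimal sketch of that relationship follows; the function name is ours, and the two-thirds-per-axis scale factor is purely illustrative, not a published preset from any of the vendors:

```python
# Sketch: the render-resolution math behind GPU upscaling.
# A per-axis scale of 2/3 is used purely for illustration; real
# FSR/DLSS/XeSS quality presets define their own scale factors.

def internal_render_resolution(output_w, output_h, per_axis_scale):
    """Resolution the GPU actually renders before the upscaler fills in the rest."""
    return round(output_w * per_axis_scale), round(output_h * per_axis_scale)

# Targeting 4K (3840x2160) output at two-thirds render resolution per axis:
w, h = internal_render_resolution(3840, 2160, 2 / 3)
print(w, h)  # 2560 1440 -- the card only renders a native-1440p workload
```

Because pixel count scales with the product of both axes, a two-thirds-per-axis factor cuts the rendered pixel load to roughly 44% of native, which is where the performance headroom comes from.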
Upscaling is a rendering technique in which the GPU renders graphics at a lower resolution, like 1080p, then boosts the output to a higher resolution, like 1440p or 4K. This technique helps cards deliver playable frame rates at output resolutions that would otherwise be difficult or impossible for the graphics card in question to handle well.

Unigine Superposition
The Unigine Superposition benchmark uses Unigine’s own graphics engine to simulate a multipart scene in a chaotic classroom setting that experiences an explosion of mysterious scientific phenomena.
Our previous testing regimen ran this benchmark at the 1080p Extreme, 4K Optimized, and 8K Optimized presets. For 2025 and beyond, we’ve narrowed that to the 4K Optimized preset only, running it once with the Graphics API setting on DirectX and again with OpenGL.

AI Benchmark Testing
With graphics cards increasingly seen as some of the best-suited engines for certain AI tasks, apart from dedicated neural processing units (NPUs), we’ve incorporated a new test for AI text generation, and are working to implement more tests as the tools for AI benchmarking expand. The field of AI benchmarking is fast-evolving and complex, and our current test examines only one aspect of AI processing.

UL Procyon Text Generation Benchmark
UL Procyon is a suite of benchmarks that includes modules that measure AI performance.
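Text-generation benchmarks of this kind generally boil down to throughput and latency figures. The sketch below shows how a tokens-per-second number can be derived from raw timings; the function names and values are our own illustrative assumptions, not Procyon's actual methodology or output:

```python
# Sketch: the two headline metrics a text-generation benchmark
# typically reports. All timing values here are invented for illustration.

def tokens_per_second(token_count, elapsed_seconds):
    """Raw generation throughput: tokens emitted per wall-clock second."""
    return token_count / elapsed_seconds

def time_to_first_token(request_time, first_token_time):
    """Responsiveness: delay between submitting a prompt and the first token."""
    return first_token_time - request_time

# Example run: 256 tokens generated over 8.0 seconds, with the first
# token arriving 0.4 seconds after the prompt was submitted.
print(tokens_per_second(256, 8.0))       # 32.0 tokens/s
print(time_to_first_token(0.0, 0.4))     # 0.4 s
```

Throughput reflects sustained GPU compute and memory bandwidth, while time to first token captures how quickly the model starts responding, so a benchmark that reports both gives a fuller picture than either number alone.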