Dive into a recent report on online retail performance, including how 100-millisecond delays lower conversion rates and how much two-second delays hurt bounce rates.
I’m extremely excited to announce the release of our very first State of Online Retail Performance report. This report is a semi-annual analysis of performance metrics from three intersecting perspectives: IT, business, and user experience.
It’s always a thrill to release new research into the wild, and I’m extra thrilled about this particular project.
As our first piece of new research released under the Akamai umbrella, it’s fitting that this project is also the biggest of its kind in the performance industry. We gathered one month’s worth of beacon data from leading retail sites, comprising customers who gave permission for their data to be anonymized, aggregated, and used in this type of research. This study represents a whopping 27.7 billion beacons’ worth of user data, which equates to more than 10 billion user visits.
When we started this project, we had a number of questions we wanted to ask our data. There are far too many findings to share in one blog post, but today I want to talk about a few of the things we discovered.
When I talk to marketers and site owners and ask them about their site’s conversion rate, everyone knows what their average or median rate is. But very few have broken out conversion rates by load time. If they did, they’d get a much more nuanced understanding of their site’s user experience. As the graph below shows, optimal page performance correlates to significantly higher conversion rates. (More on this in point 2, below.)
In other words, if you’re chugging along, assuming that your average desktop conversion rate of 4.1% is just fine because it falls within industry norms, you’re missing out on opportunities to optimize web performance and increase conversions.
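Breaking conversion rates out by load time is straightforward once you have per-visit beacon data. Here’s a minimal sketch of the idea; the field names (`load_time_ms`, `converted`), the sample records, and the one-second buckets are all illustrative assumptions, not the report’s actual schema:

```python
from collections import defaultdict

# Hypothetical beacon records: one per visit, with a load time and
# whether that visit converted. Real beacon data would have many fields.
beacons = [
    {"load_time_ms": 1800, "converted": True},
    {"load_time_ms": 1900, "converted": False},
    {"load_time_ms": 2700, "converted": True},
    {"load_time_ms": 2800, "converted": False},
    {"load_time_ms": 5200, "converted": False},
]

def bucket(ms):
    """Group load times into one-second buckets, e.g. '2-3s'."""
    lo = ms // 1000
    return f"{lo}-{lo + 1}s"

totals = defaultdict(lambda: [0, 0])  # bucket -> [visits, conversions]
for b in beacons:
    t = totals[bucket(b["load_time_ms"])]
    t[0] += 1
    t[1] += b["converted"]

# Conversion rate (%) per load-time bucket, rather than one site-wide average.
conversion_by_bucket = {
    k: round(100 * conv / visits, 1) for k, (visits, conv) in totals.items()
}
```

Plotting `conversion_by_bucket` instead of a single average is what produces curves like the one in the graph above, where the rate peaks at an optimal load time and falls off on either side.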
This is huge. It’s the first time I’ve seen sub-two-second load times appear in this kind of research.
This is enormously significant for at least two reasons:
Even tenths of a second count. As discussed in finding 2, desktop pages that loaded in 2.7 seconds experienced a peak conversion rate of 12.8%. Pages that loaded 100 milliseconds slower — in other words, in 2.8 seconds — experienced a 2.4% decrease in conversion rate. Smartphones and tablets were affected more, with 7.1% and 3.8% decreases in conversion rates, respectively.
These impacts were felt even more with pages that were one and two seconds slower. Desktop pages that experienced a two-second delay — loading in 3.8 seconds instead of the optimal 1.8 seconds — had conversion rates that were almost 37% lower.
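To make those relative numbers concrete, here is the arithmetic behind the desktop figures above. The 12.8% peak and the quoted percentage decreases come from the findings; the helper function itself is just an illustration:

```python
peak_desktop_conversion = 12.8  # % conversion at the optimal load time

def after_decrease(rate, pct_drop):
    """Apply a relative percentage decrease to a conversion rate."""
    return rate * (1 - pct_drop / 100)

# 100 ms slower: a 2.4% relative decrease
rate_2_8s = after_decrease(peak_desktop_conversion, 2.4)  # ~12.5%

# two seconds slower: an almost 37% relative decrease
rate_3_8s = after_decrease(peak_desktop_conversion, 37)   # ~8.1%
```

Note that these decreases are relative, not absolute: a 2.4% decrease takes the rate from 12.8% to roughly 12.5%, not to 10.4%.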
Unlike our conversion findings, 100-millisecond delays didn’t have a significant effect on bounce rates, but at one and two seconds, the impact of delays was much more noticeable.
In other words, you might think that four- or five-second load times are pretty fast. But if those load times correlate with a 103% increase in bounce rate, that hurts — a lot.
Start render time — defined as the moment when content begins to render in the browser — is a solid metric for measuring user-perceived performance. We found that the optimal start render time for desktop users was under one second. Mobile and tablet user expectations were not far behind.
Faster pages compel shoppers to spend more time on retail sites, visit more pages, and add more items to their carts. When examined alongside metrics like bounce rate and conversions, session length (defined as the number of pages visited in a single visit) is a strong indicator of user engagement and satisfaction.
Similar to the effects of slowdowns on bounce rate, we found that 100 milliseconds had little to no impact on session length. But the results became more pronounced at the one- and two-second points. Sessions with median page loads that were one second slower than optimal speeds were up to 25% shorter. A two-second delay correlated with a 51% decrease in session length for mobile users, a 47% decrease for desktop users, and an almost 38% decrease for tablet users.
Some interesting patterns emerged from these findings.
We can only guess why people behave differently according to their device type.
We found that almost half (47%) of the retail traffic came from mobile devices, but only 22% of conversions happened on mobile. Clearly, mobile is an important part of the entire transaction process, even if people aren’t converting on their phones. The greater risk is losing these mobile shoppers who are sensitive to slowness and more likely to bounce. And perhaps because desktop users tend to convert more overall (accounting for 68% of all conversions in our study), they’re more sensitive to speed as it relates to conversions.
And maybe tablet users have learned to be patient because, according to other research we’ve done, many tend to use older – and therefore less performant – tablets. (Speaking purely anecdotally, this theory resonates with me. I use a newer laptop, desktop, and phone, but my iPad is almost five years old. And it is circa-1995 slooooow.)
As I said, these behavior patterns are ripe for speculation. I definitely welcome your theories.
As I mentioned at the top of this post, these are only a few of the findings that are available in the report. I strongly encourage you to get the report and learn the rest.