PageSpeed Insights
About PageSpeed Insights
PageSpeed Insights (PSI) reports on the performance of a page on both mobile and desktop devices, and provides suggestions on how that page may be improved.
PSI provides both lab and field data about a page. Lab data is useful for debugging performance issues, as it is collected in a controlled environment. However, it may not capture real-world bottlenecks. Field data is useful for capturing true, real-world user experience, but has a more limited set of metrics. See How To Think About Speed Tools for more information on the two types of data.
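Both kinds of data can be retrieved programmatically through the PSI v5 REST API. The following is a minimal sketch (the example URL is a placeholder, and an API key, omitted here, is recommended for regular use):

```python
import json
import urllib.parse
import urllib.request

# PSI v5 REST endpoint.
PSI_API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

def fetch_report(url, strategy="mobile"):
    """Fetch a PSI report for `url`; strategy is "mobile" or "desktop"."""
    query = urllib.parse.urlencode({"url": url, "strategy": strategy})
    with urllib.request.urlopen(f"{PSI_API}?{query}") as resp:
        return json.load(resp)

report = fetch_report("https://example.com/")
field = report.get("loadingExperience")  # real-world CrUX field data
lab = report.get("lighthouseResult")     # lab data from a Lighthouse run
```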
Performance score
At the top of the report, PSI provides a score which summarizes the page's performance. This score is determined by running Lighthouse to collect and analyze lab data about the page. A score of 90 or above is considered fast, 50 to 89 is considered moderate, and below 50 is considered slow.
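Expressed as code, the bucketing looks like this (a sketch; note that in the raw API response Lighthouse reports the category score as a 0–1 fraction, so it is scaled to 0–100 first, continuing from the sketch above):

```python
def score_label(score):
    """Classify a 0-100 performance score into the buckets above."""
    if score >= 90:
        return "fast"
    if score >= 50:
        return "moderate"
    return "slow"

# The Lighthouse category score is a 0-1 fraction in the JSON response.
performance = lab["categories"]["performance"]["score"] * 100
print(score_label(performance))
```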
Real-World Field Data
When PSI is given a URL, it will look it up in the Chrome User Experience Report (CrUX) dataset. If available, PSI reports the First Contentful Paint (FCP) and the First Input Delay (FID) metric data for the origin and potentially the specific page URL.
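In the API response, page-level field data appears under `loadingExperience` and origin-level data under `originLoadingExperience`. A sketch, continuing from the `report` object above (the metric keys shown are the v5 names):

```python
# Page-level field data; origin-level data has the same shape under
# report["originLoadingExperience"].
metrics = report["loadingExperience"]["metrics"]

fcp = metrics["FIRST_CONTENTFUL_PAINT_MS"]
fid = metrics["FIRST_INPUT_DELAY_MS"]

# Each metric carries a selected percentile value (ms) and a category.
print("FCP:", fcp["percentile"], fcp["category"])
print("FID:", fid["percentile"], fid["category"])
```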
Classifying Fast, Moderate, Slow
PSI also classifies field data into 3 buckets, describing experiences deemed fast, moderate, or slow. These thresholds are based on our analysis of the CrUX dataset; for FCP, for example, values under 1,000ms are classified as fast and values over 2,500ms as slow, with everything in between classified as moderate.
Generally speaking, fast pages are roughly in the top 10%, moderate pages are in the next 40%, and slow pages are in the bottom 50%. The numbers have been rounded for readability. These thresholds apply to both mobile and desktop and have been set based on human perceptual abilities.
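As an illustration, the FCP bucketing implied by the distribution example in the next section can be written as a small helper (FID uses its own, much lower boundaries, not shown here):

```python
def classify_fcp(fcp_ms):
    """Bucket a single FCP observation at the 1,000ms / 2,500ms boundaries."""
    if fcp_ms < 1000:
        return "fast"
    if fcp_ms <= 2500:
        return "moderate"
    return "slow"

assert classify_fcp(800) == "fast"
assert classify_fcp(1800) == "moderate"
assert classify_fcp(3000) == "slow"
```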
Distribution and selected value of FCP and FID
PSI presents a distribution of these metrics so that developers can understand the range of FCP and FID values for that page or origin. This distribution is also split into three categories: Fast, Moderate, and Slow, denoted with green, orange, and red bars. For example, seeing 14% within FCP's orange bar indicates that 14% of all observed FCP values fall between 1,000ms and 2,500ms. This data represents an aggregate view of all page loads over the previous 30 days.
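The API exposes this split as a `distributions` array on each metric, one entry per bucket. A sketch, again using the `metrics` dict from the earlier example:

```python
# Each bucket reports its boundaries (ms) and the fraction of page
# loads that fell inside it; the proportions sum to roughly 1.0.
for bucket in metrics["FIRST_CONTENTFUL_PAINT_MS"]["distributions"]:
    lower = bucket.get("min", 0)
    upper = bucket.get("max", "inf")  # the slow bucket has no upper bound
    print(f"{lower}-{upper} ms: {bucket['proportion']:.0%}")
```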
Above the distribution bars, PSI reports the 75th percentile First Contentful Paint and the 95th percentile First Input Delay, presented in seconds and milliseconds respectively. These percentiles are selected so that developers can understand the most frustrating user experiences on their site. These field metric values are classified as fast/moderate/slow by applying the same thresholds shown above.
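To make the percentile idea concrete, here is a toy nearest-rank computation over hypothetical raw samples (illustrative only; CrUX computes percentiles over its own aggregated data):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples at or below it."""
    ordered = sorted(samples)
    rank = math.ceil(len(ordered) * p / 100)
    return ordered[max(rank - 1, 0)]

fcp_ms = [900, 1100, 1300, 2400, 4000]  # hypothetical FCP samples
print(percentile(fcp_ms, 75))  # 2400: at least 75% of loads painted by then
```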
Field data summary label
An overall label is calculated from the field metric values, as sketched in the example after this list:
- Fast: If both FCP and FID are Fast.
- Slow: If either FCP or FID is Slow.
- Moderate: All other cases.
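A direct translation of these rules (a sketch; the inputs are the per-metric fast/moderate/slow classifications described above):

```python
def overall_label(fcp_label, fid_label):
    """Combine per-metric labels ("fast", "moderate", "slow") into the summary label."""
    if fcp_label == "fast" and fid_label == "fast":
        return "Fast"
    if "slow" in (fcp_label, fid_label):
        return "Slow"
    return "Moderate"

assert overall_label("fast", "fast") == "Fast"
assert overall_label("fast", "slow") == "Slow"
assert overall_label("moderate", "fast") == "Moderate"
```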
Differences between Field Data in PSI and CrUX
The field data in PSI differs from the Chrome User Experience Report dataset on BigQuery in that PSI's data is updated daily and covers the trailing 30-day period, while the dataset on BigQuery is updated only monthly.
Lab data
PSI uses Lighthouse to analyze the given URL, generating a performance score that summarizes the page's performance across a number of metrics, including: First Contentful Paint, First Meaningful Paint, Speed Index, First CPU Idle, Time to Interactive, and Estimated Input Latency.
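Each of these appears as an audit in the raw `lighthouseResult`. A sketch that prints them, assuming the v5-era audit ids (treat the exact ids as assumptions):

```python
# Lighthouse audit ids for the metrics listed above (v5-era names).
METRIC_AUDITS = [
    "first-contentful-paint",
    "first-meaningful-paint",
    "speed-index",
    "first-cpu-idle",
    "interactive",             # Time to Interactive
    "estimated-input-latency",
]

for audit_id in METRIC_AUDITS:
    audit = lab["audits"][audit_id]
    print(audit["title"], audit["displayValue"], audit["score"])
```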
Each metric is scored and labeled with an icon:
- Fast is indicated with a green check mark
- Moderate is indicated with an orange informational circle
- Slow is indicated with a red warning triangle
Audits
Lighthouse separates its audits into three sections:
- Opportunities provide suggestions for how to improve the page’s performance metrics. Each suggestion in this section estimates how much faster the page will load if the improvement is implemented (see the sketch after this list).
- Diagnostics provide additional information about how a page adheres to best practices for web development.
- Passed Audits lists the audits the page has passed.
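One way to pull the Opportunities and their estimated savings out of the raw response (a sketch; the `details` field names follow v5-era Lighthouse JSON and should be treated as assumptions):

```python
# Opportunity audits carry an estimated time saving in milliseconds.
for audit in lab["audits"].values():
    details = audit.get("details") or {}
    if details.get("type") != "opportunity":
        continue
    score = audit.get("score")
    if score is not None and score < 1:  # skip opportunities already passed
        savings = details.get("overallSavingsMs", 0)
        print(f"{audit['title']}: ~{savings:.0f} ms potential saving")
```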
Frequently asked questions (FAQs)
What device and network conditions does Lighthouse use to simulate a page load?
Currently, Lighthouse simulates a page load on a mid-tier device (Moto G4) on a mobile network.
Why do the field data and lab data contradict each other? The Field data says the URL is slow, but the Lab data says the URL is fast!
The field data is a historical report about how a particular URL has performed, and represents anonymized performance data from users in the real-world on a variety of devices and network conditions. The lab data is based on a simulated load of a page on a single device and fixed set of network conditions. As a result, the values may differ.
Why is the 75th percentile chosen for FCP and the 95th percentile for FID?
Our goal is to make sure that pages work well for the majority of users. By focusing on the 75th and 95th percentile values for our metrics, we ensure that pages meet a minimum standard of performance under the most difficult device and network conditions.
Why does the FCP in v4 and v5 have different values?
As of November 4th, 2019, FCP in v5 reports the 75th percentile; previously it reported the 90th percentile. In v4, FCP reports the median (50th percentile).
What is a good score for the lab data?
Any green score (90+) is considered good.
Why does the performance score change from run to run? I didn’t change anything on my page!
Variability in performance measurement is introduced via a number of channels with different levels of impact. Several common sources of metric variability are local network availability, client hardware availability, and client resource contention.
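A common mitigation, not specific to PSI, is to run the analysis several times and compare medians rather than single runs. A sketch, reusing the `fetch_report` helper from the first example:

```python
import statistics

# Re-run the report a few times; the median score is less sensitive
# to one-off network or CPU noise than any single run.
scores = [
    fetch_report("https://example.com/")["lighthouseResult"]
        ["categories"]["performance"]["score"] * 100
    for _ in range(5)
]
print("median performance score:", statistics.median(scores))
```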
Why is the real-world Chrome User Experience Report speed data not available for a URL?
Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that a URL be public (crawlable and indexable) and have a sufficient number of distinct samples to provide a representative, anonymized view of the URL's performance.
Why is the real-world Chrome User Experience Report speed data not available for an origin?
Chrome User Experience Report aggregates real-world speed data from opted-in users and requires that an origin's root page be public (crawlable and indexable) and have a sufficient number of distinct samples to provide a representative, anonymized view of the origin’s performance across all URLs visited on that origin.
More questions?
If you've got a question about using PageSpeed Insights that is specific and answerable, ask your question in English on Stack Overflow.
If you have general feedback or questions about PageSpeed Insights, or you want to start a general discussion, start a thread in the mailing list.