Browse deep-dive articles that explain how to capture, read, and troubleshoot HTTP Archive files—plus a guided tour of the HAR Analyzer dashboard.
Start with foundational articles that explain what HAR files are and how to work with them.
Learn the basics
1. Intro / What is HAR?
A HAR file (short for HTTP Archive) is a log of everything your browser does on the network while loading a web page. Think of it as the “black box recorder” for your browsing session — every request to a server, every response, every delay is written down inside.
It exists because modern websites are complex. A single page might trigger hundreds of requests: HTML, JavaScript, CSS, fonts, images, API calls, ads, tracking pixels. If something feels slow or broken, a HAR file shows exactly where the problem lies — whether it’s a sluggish server, a redirect loop, or a script that blocks everything else.
Technically, a HAR is just a big JSON file. Open it in a text editor and you’ll see structured data with entries like url, status, timings, and more. For a human, it looks overwhelming — lines upon lines of requests — but for tools it’s gold: a complete timeline of how your page loaded.
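If you're curious what that JSON actually looks like, here is a heavily trimmed sketch of the shape, written as a Python dict. The field names (log, entries, request, response, timings) come from the HAR 1.2 spec; the URL and the numbers are just placeholders.

```python
# A heavily trimmed sketch of the HAR 1.2 structure, as a Python dict.
# Real files contain many more fields per entry (headers, cookies, cache info, ...).
minimal_har = {
    "log": {
        "version": "1.2",
        "entries": [
            {
                "startedDateTime": "2024-01-01T12:00:00.000Z",
                "time": 142.5,  # total duration of this request, in milliseconds
                "request": {"method": "GET", "url": "https://example.com/app.js"},
                "response": {"status": 200, "content": {"mimeType": "text/javascript"}},
                "timings": {"dns": 12, "connect": 30, "ssl": 25, "wait": 50, "receive": 8},
            }
        ],
    }
}
```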
2. When to use HAR files
HAR files are most useful when something about a web page feels off — slow loading, broken images, failed logins, or just a mysterious spinner that never ends. They act like a detective’s notebook, showing you exactly what the browser tried to do and how long each step took.
You might use a HAR file when:
Diagnosing performance issues – find out if a slow page is caused by the server, too many redirects, or heavy JavaScript files.
Debugging network errors – see which requests failed, timed out, or returned errors like 404 or 500.
Analyzing third-party impact – spot ad scripts, analytics trackers, or embedded widgets that delay the rest of the page.
Checking API calls – capture the exact requests your app makes, including headers, payloads, and responses.
Working with support teams – many SaaS providers ask for a HAR file when you report a bug. It gives them a reproducible view of what your browser experienced.
3. How to generate a HAR file
Creating a HAR file is built right into your browser. It only takes a minute, and you don’t need any special tools.
Chrome / Edge / Brave (Chromium browsers)
Open the page you want to capture.
Press F12 (or right-click → Inspect) to open DevTools.
Go to the Network tab.
Check that the red ● record button is active (click it if not).
Refresh the page to capture everything from the start.
Let the page fully load.
Right-click anywhere in the network request list → Save all as HAR with content.
Firefox
Open the page.
Press F12 to open DevTools.
Switch to the Network tab.
Recording starts automatically while the Network tab is open; make sure it hasn't been paused.
Refresh the page and wait until it loads completely.
Right-click in the list → Save all as HAR.
⚠️ Tips & common mistakes
Always start capturing before refreshing the page.
Don’t switch tabs mid-capture; browsers may pause recording.
HAR files can contain sensitive data like cookies or request payloads. Share them only with trusted people or tools.
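If you need to share a capture, a small script can do the scrubbing for you. The sketch below (plain Python, standard library only) removes cookies and a few commonly sensitive headers; the file names are placeholders, and request payloads and response bodies may still contain secrets, so review those separately.

```python
import json

# Headers that commonly carry credentials; extend this list for your own setup.
SENSITIVE_HEADERS = {"cookie", "set-cookie", "authorization", "x-api-key"}

def sanitize_har(in_path: str, out_path: str) -> None:
    """Strip cookies and sensitive headers from a HAR file before sharing it."""
    with open(in_path, encoding="utf-8") as f:
        har = json.load(f)

    for entry in har["log"]["entries"]:
        for part in ("request", "response"):
            section = entry.get(part, {})
            section["cookies"] = []  # drop captured cookies entirely
            section["headers"] = [
                h for h in section.get("headers", [])
                if h.get("name", "").lower() not in SENSITIVE_HEADERS
            ]
            # Note: postData and response bodies are NOT scrubbed here.

    with open(out_path, "w", encoding="utf-8") as f:
        json.dump(har, f, indent=2)

# sanitize_har("capture.har", "capture.sanitized.har")  # placeholder file names
```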
4. How to read a HAR file manually
Open a HAR file in a text editor and you’ll see a huge JSON object. Each entry represents a single request your browser made. The most important fields are:
url – the address requested.
status – HTTP status code (200 OK, 404 Not Found, etc.).
timings – how long each phase took: DNS lookup, connection, waiting for server, content download.
request / response – headers, payloads, and body details.
Many tools also show a waterfall chart: a timeline of all requests stacked vertically. Each bar shows how long a file took to load, making it easier to see bottlenecks at a glance.
Without a viewer, HAR files look messy. With one, they reveal the exact moment where performance goes sideways.
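If you'd rather not scroll through raw JSON, a few lines of Python can pull out exactly those fields. This is a minimal sketch; example.har is a placeholder for your own capture.

```python
import json

# Load a HAR and print the fields you'd otherwise hunt for in a text editor.
with open("example.har", encoding="utf-8") as f:  # placeholder path
    entries = json.load(f)["log"]["entries"]

for entry in entries:
    url = entry["request"]["url"]
    status = entry["response"]["status"]
    total_ms = entry["time"]                    # whole request, in milliseconds
    wait_ms = entry["timings"].get("wait", -1)  # time spent waiting for the server
    print(f"{status}  {total_ms:7.1f} ms  (wait {wait_ms} ms)  {url}")
```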
5. Common issues visible in HAR files
Here are patterns you’ll often uncover in HAR analysis:
Slow server response (high TTFB) – server takes too long to reply.
Large media files – images or videos not optimized.
Redirect chains – too many hops before reaching the final page.
Blocking JavaScript – scripts that delay rendering of the page.
Failed requests – 404s, 500s, timeouts.
Spotting these in HAR files helps developers fix what’s slowing things down.
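As a rough illustration, the patterns above can be flagged automatically. The sketch below uses illustrative thresholds (a 1-second TTFB cutoff) and a placeholder file name; tune both to your own situation.

```python
import json

def flag_issues(har_path: str, slow_ms: float = 1000.0) -> None:
    """Print entries matching the common problem patterns above (illustrative thresholds)."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    for e in entries:
        url = e["request"]["url"]
        status = e["response"]["status"]
        wait = e["timings"].get("wait", 0)

        if status >= 400 or status == 0:   # 0 usually means blocked or aborted
            print(f"FAILED    {status}  {url}")
        elif e["response"].get("redirectURL"):
            print(f"REDIRECT  {status}  {url} -> {e['response']['redirectURL']}")
        elif wait > slow_ms:
            print(f"SLOW TTFB {wait:.0f} ms  {url}")

# flag_issues("example.har")  # placeholder path
```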
6. Tools for HAR analysis
You can analyze HAR files in several ways:
Browser DevTools – Chrome and Firefox can re-open HAR files in the Network tab.
HAR Viewer / command-line tools – open-source utilities to parse and visualize HAR.
Online tools – like har-analyzer.dev, which gives you instant charts, breakdowns, and filters without touching raw JSON.
7. Step-by-step example
Let’s say a user complains: “The checkout page is super slow.”
Generate a HAR while loading the checkout page.
Open it in an analyzer.
You notice one request to /api/cart took 4.2 seconds while everything else was fast.
That points to the backend cart service as the bottleneck.
HAR files make invisible delays visible. Instead of guessing, you get hard evidence.
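For the curious, the same hunt takes only a few lines of Python; checkout.har is a placeholder for the capture from step 1.

```python
import json

# Find the single slowest request -- the /api/cart hunt from the example above.
with open("checkout.har", encoding="utf-8") as f:  # placeholder path
    entries = json.load(f)["log"]["entries"]

slowest = max(entries, key=lambda e: e["time"])
print(f"{slowest['time']:.0f} ms  {slowest['request']['url']}")
print("wait (TTFB):", slowest["timings"].get("wait"), "ms")
```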
8. FAQ
Do HAR files contain passwords? – Sometimes they may contain tokens or form data. Always sanitize before sharing.
How large can a HAR be? – Depends on the session; big pages with many requests can create 10MB+ HAR files.
Is it safe to share? – Only with trusted developers or support teams, since it may contain private data.
HAR Analyzer Dashboard
Once you upload a HAR file to har-analyzer.dev you'll see a dashboard designed to surface the most important performance insights. Start with the General Metrics panel to get a quick read on the health of your page, then dig deeper into individual requests and timelines.
Content & Resource Breakdown
Every page is a cocktail of different resource types: scripts, styles, images, fonts, APIs, and sometimes even video or audio. By breaking requests down by type and by weight, you can see what's dominating your load time and page size.
Scripts / JavaScript: your own application code plus frameworks and third-party libraries.
Images: PNG, JPEG, WebP, SVG. Usually the largest number of requests.
CSS / Stylesheets: Define look and layout.
Fonts: Webfonts can be heavy (especially if multiple weights and character sets are loaded).
XHR / API calls: Dynamic data requests after page load.
Media: Video/audio. Sometimes huge, sometimes streamed.
Other: Things that don't neatly fit, like favicon or manifest files.
Why it matters:
A page may look fine on the surface, but if 70% of requests are tiny JS scripts, you've got a bundling problem.
Fonts and CSS may not be numerous, but even one missing file can block rendering.
Transfer Size by Type
This shows the total data weight (in KB/MB) of each category.
Images often dominate size, even if they aren't the most numerous.
JavaScript can balloon if frameworks and third-party libraries aren't optimized.
Fonts are surprisingly heavy — a single typeface with multiple weights can hit 1–2 MB.
Media (video, audio) will dwarf everything else if not lazy-loaded or streamed.
Why it matters:
Page weight directly impacts mobile performance and data usage.
Even if you only have 10 font requests, if they weigh 4 MB, they matter more than 200 tiny JS requests.
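If you want to reproduce this breakdown outside the dashboard, here is one way to approximate it in Python. It buckets entries by MIME type, which is a simplification, and it prefers the non-standard _transferSize field that Chrome-exported HARs include, falling back to bodySize otherwise; the file name is a placeholder.

```python
import json
from collections import Counter

def breakdown_by_type(har_path: str) -> None:
    """Count requests and sum transfer size per broad resource type."""
    counts, sizes = Counter(), Counter()
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    for e in entries:
        mime = e["response"].get("content", {}).get("mimeType", "")
        if "javascript" in mime:
            kind = "script"
        elif "css" in mime:
            kind = "css"
        elif mime.startswith("image/"):
            kind = "image"
        elif "font" in mime:
            kind = "font"
        elif mime.startswith(("video/", "audio/")):
            kind = "media"
        elif "json" in mime or "xml" in mime:
            kind = "xhr/api"
        else:
            kind = "other"

        counts[kind] += 1
        # Chrome DevTools adds the non-standard _transferSize; fall back to bodySize.
        size = e["response"].get("_transferSize") or max(e["response"].get("bodySize", 0), 0)
        sizes[kind] += size

    for kind, n in counts.most_common():
        print(f"{kind:8s}  {n:4d} requests  {sizes[kind] / 1024:8.1f} KB")

# breakdown_by_type("example.har")  # placeholder path
```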
⚡ Practical Insights
JS-heavy page → Sluggish on mobile CPUs, more parsing time, worse interactivity.
Image-heavy page → Longer loads, especially if not optimized for the viewport.
Font-heavy page → Flash of invisible text (FOIT) or layout shift if not handled properly.
API-heavy page → Can feel “fast at first,” but interactions are delayed while data arrives.
Optimization strategies:
Consolidate and minify JavaScript, defer non-critical scripts.
Compress and lazy-load images.
Subset fonts (only load characters and weights you actually need).
Stream media instead of loading it up front.
Example from your HAR
If your Content Breakdown shows:
Scripts = 40% of requests but 30% of weight → many small files, possible bundling issue.
Images = 25% of requests but 50% of weight → optimize or lazy-load.
Fonts = 5% of requests but 15% of weight → too many weights/styles.
👉 That tells you exactly where to start optimizing.
✨ Takeaway: Think of the resource breakdown as your page's nutrition label. Is it mostly sugar (scripts), carbs (images), or fat (fonts)? The healthiest sites have a balanced diet — and avoid stuffing themselves with empty calories from third-party ads and trackers.
Response Time Breakdown
Not all requests are created equal. Some are lightning-fast, others drag on, and a handful may crawl like snails. Averages can hide those extremes, so breaking requests into fast, moderate, and slow buckets gives you a clearer picture of a page's performance profile.
Fast (<500 ms)
Requests that completed in under half a second.
These are usually:
Cached assets served from a CDN or the browser cache.
Small files such as icons, CSS snippets, or JSON.
Well-optimized servers with quick response times.
Why it matters: A high percentage of fast requests means your infrastructure is solid and caching is working.
Rule of thumb: Healthy sites often have 70–90% of requests in this range.
Moderate (500–1000 ms)
Requests that take about half a second to a full second.
Often includes:
Mid-sized files such as images or fonts.
API calls that involve some backend logic.
Resources coming from slightly distant CDNs.
Why it matters: This is the gray zone. Users may not notice, but if too many requests land here, the page can feel sluggish.
Slow (>1000 ms)
Anything taking more than a second.
Common culprits:
Third-party ads and trackers.
API endpoints under heavy load.
Large media files such as images or video snippets.
Misconfigured caching that forces recalculation.
Why it matters: Even a handful of slow requests can block the critical path. If a CSS file takes two seconds, the page layout may not render until it's done.
Rule of thumb: Slow requests should ideally make up less than 5% of total requests.
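Here is a minimal sketch of how these buckets can be computed from a HAR, using the same 500 ms and 1000 ms cutoffs described above; the file name is a placeholder.

```python
import json

def response_time_buckets(har_path: str) -> None:
    """Bucket requests into the fast / moderate / slow ranges described above."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    buckets = {"fast (<500 ms)": 0, "moderate (500-1000 ms)": 0, "slow (>1000 ms)": 0}
    for e in entries:
        t = e["time"]
        if t < 500:
            buckets["fast (<500 ms)"] += 1
        elif t <= 1000:
            buckets["moderate (500-1000 ms)"] += 1
        else:
            buckets["slow (>1000 ms)"] += 1

    total = max(len(entries), 1)
    for name, n in buckets.items():
        print(f"{name:24s} {n:4d}  ({100 * n / total:.0f}%)")

# response_time_buckets("example.har")  # placeholder path
```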
How to Use This Insight
High percentage of fast requests → good optimization, but look for outliers.
Large cluster in the moderate zone → fine-tune CDNs, optimize image sizes, and review API endpoints.
Too many slow requests → determine if they're critical; if not, defer or lazy-load them.
Example from your HAR
Fast (<500 ms): ~86% → Most requests are quick, which is a strong baseline.
Moderate (500–1000 ms): ~12% → A chunk of requests need some optimization.
Slow (>1000 ms): ~1% → A few very slow calls, like a 2.4s ad request.
This suggests the infrastructure is mostly fine, but third-party services may be introducing unnecessary drag.
✨ Takeaway
Response time breakdown is like a race report:
Most of your runners (requests) should be sprinters.
A few joggers are okay.
Too many limping along will slow the whole team, so track them down and optimize.
Connection Reuse Insights
When a page loads, the browser opens network connections (sockets) to servers. Each new connection comes with overhead: DNS lookup, TCP handshake, TLS handshake. That's tens to hundreds of milliseconds before even one byte of content arrives. If the browser can reuse an already open connection for many requests, it saves a lot of time.
Total Connections
The total number of TCP/TLS connections opened during the page load.
Each new connection = extra delay and resource usage.
Too many connections can mean the page is spreading requests across too many domains, or it isn't reusing keep-alive connections.
Reused Connections
The number of connections that served multiple requests.
If one socket handles 50 requests, that's very efficient.
If most sockets are only used once, that's inefficient and slows things down.
Reuse Ratio
The percentage of connections that were reused.
100% reuse ratio = ideal (most requests flow through a few stable connections, usually thanks to HTTP/2 or HTTP/3).
0% reuse ratio = bad (every request opens a new connection, wasting time).
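You can estimate these numbers yourself from the optional per-entry connection id defined in HAR 1.2. Not every browser records it, so the sketch below simply reports when the data is missing; the file name is a placeholder.

```python
import json
from collections import Counter

def connection_reuse(har_path: str) -> None:
    """Estimate connection reuse from the optional per-entry 'connection' id (HAR 1.2)."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    # How many requests flowed through each connection id.
    per_connection = Counter(e["connection"] for e in entries if "connection" in e)
    if not per_connection:
        print("No connection ids recorded in this capture.")
        return

    total = len(per_connection)
    reused = sum(1 for n in per_connection.values() if n > 1)
    print(f"Total connections:  {total}")
    print(f"Reused connections: {reused}")
    print(f"Reuse ratio:        {100 * reused / total:.0f}%")

# connection_reuse("example.har")  # placeholder path
```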
Example from your HAR:
Total connections: 1
Reused connections: 1
Reuse ratio: 100%
That's extremely efficient — the browser ran nearly all requests through a single socket (most likely HTTP/2 multiplexing).
Top Reused Sockets
Shows which host used the same connection for the most requests.
Example: www.idnes.cz handled 590 requests over one connection.
That means the browser opened one highway to the server and kept it busy instead of building dozens of small roads.
⚡ How to Interpret
Many new connections, low reuse:
Requests are spread across too many domains.
Possibly missing keep-alive or still using old HTTP/1.1.
Few connections, high reuse:
Great optimization, likely HTTP/2 or HTTP/3 in use.
How to Improve Connection Reuse
Consolidate assets under fewer domains.
Make sure keep-alive is enabled on servers.
Prefer HTTP/2 or HTTP/3 so multiple requests can travel over a single connection (multiplexing).
Takeaway: Connection reuse is like highway traffic:
If you build one wide, fast highway (reused connection), lots of cars flow smoothly.
If you force every car to build its own little road (new connection every time), traffic crawls.
General Metrics
These are the high-level numbers that give you a “health check” of the page. Think of them as the vital signs of your web performance — heartbeat, blood pressure, body temperature. Each one tells you something slightly different about how the page behaved during loading.
Errors & Warnings
Every HAR captures not only the successful requests but also the ones that failed or behaved inefficiently. Treat this section as your checklist of “what needs fixing.”
Redirect Chains
What it is: The browser requests a URL, gets a 301/302 response (“go somewhere else”), and then does it again… and again.
Why it matters:
Each redirect hop = extra DNS + TCP + TLS + request + response.
Users wait longer before seeing the final content.
Search engines dislike long chains — they can harm SEO.
Example:
aktualne.cz → www.aktualne.cz → https://www.aktualne.cz = two unnecessary detours.
Tip:
Keep redirects to a single hop at most.
Serve the canonical URL from the start.
Failed Requests (4xx/5xx)
4xx errors:
403 Forbidden = the server refused access (bad cookies or blocked resources).
404 Not Found = the page references something that no longer exists.
405 Method Not Allowed = the client sent an unsupported request type.
5xx errors:
500+ = server-side problems (application crash, bad configuration).
Why it matters:
Every failed request means part of the page is broken or loading without purpose.
Too many 4xx/5xx responses = an unreliable site that drives users away.
Example from the HAR:
403 errors on www.aktualne.cz/auth → the site is calling a protected API without proper authorization.
Large Transfers
What it is: Individual files larger than 1 MB.
Why it matters:
Large files slow down mobile users.
They can trigger timeouts or waste data allowances.
Example: The sample HAR shows no files above 1 MB — great news. On other pages you might spot a massive PNG that could easily be compressed.
Tip:
Optimize images (compression, modern formats).
Split large JS bundles.
Long Durations
What it is: Requests that took longer than 2 seconds.
Why it matters:
They delay rendering, especially if the resource is critical (CSS, JS).
They are often API calls or sluggish ad networks.
Example from the HAR:
ib.adnxs.com/prebid/setuid took 2.43 s — extremely slow for such a small request.
Tip:
If it’s a third party, load it asynchronously so it doesn’t block content.
If it’s your backend, focus on API optimization (caching, DB indexes).
⚡ How to look at this in practice
Redirect chains = unnecessary detours.
Failed requests = potholes in the road.
Large transfers = heavy backpacks that slow the whole team.
Long durations = marathon runners who never cross the finish line.
👉 Each warning is an opportunity — fix them and every user will notice the speed-up.
✨ Takeaway: Think of this section as an engine diagnostic — it highlights errors, overloads, and sluggish components. If you want quick performance wins, start here.
Timing Analysis
Every single request in a HAR goes through several phases. Taken together they form the “life cycle” of a network call. Looking at the averages (and outliers) across all requests reveals exactly where the page is losing time.
DNS Lookup
What it is: Translating a domain (for example, example.com) into an IP address.
How it works: The browser asks DNS servers, “Where does example.com live?” and waits for the answer.
Why it matters:
Slow DNS lookups delay the very first step of every connection.
Typical values: 10–50 ms.
If a domain spikes into the hundreds of milliseconds, it hints at a bad DNS provider or missing DNS cache.
Tip:
Use a CDN with global DNS (Cloudflare, Akamai).
Warm up lookups ahead of time with dns-prefetch (or entire connections with preconnect).
Connect (TCP Handshake)
What it is: Opening the network connection (a TCP socket) between the browser and the server.
Why it matters:
Every new socket carries overhead: the TCP 3-way handshake.
Values around 50–150 ms are normal, but mobile networks can be higher.
Opening many new connections frequently can signal poor connection reuse (see the connection insights section).
Tip:
Reduce the number of distinct domains → fewer new connections.
HTTP/2 and HTTP/3 let many requests share a single connection.
SSL / TLS Handshake
What it is: Establishing the secure HTTPS connection.
Why it matters:
Encryption is mandatory (for security and SEO), but the handshake costs time (usually 50–200 ms).
Long handshakes can point to certificate issues or missing optimizations (for example TLS 1.2 instead of TLS 1.3).
Tip:
Enable TLS 1.3 (faster and more secure).
Use OCSP stapling so certificate checks don’t add delay.
Wait (TTFB – Time to First Byte)
What it is: The time between sending the request and the moment the server delivers the first byte of the response.
Why it matters:
It’s essentially a measure of backend speed.
A long TTFB = slow server, database, or backend logic.
Typical values: <100 ms (fast API), 200–500 ms (typical sites), >1 s (slow server).
Tip:
Optimize database queries.
Use caching (CDN, reverse proxy).
Monitor TTFB from different regions — it can vary dramatically.
Receive (Content Download)
What it is: The actual transfer of data from the server to the browser.
Why it matters:
It depends on file size and network speed.
Usually tiny (<20 ms for small files), but large images or video can take seconds.
Tip:
Compress files (gzip, Brotli).
Optimize images and use modern formats (WebP, AVIF).
Stream video (HLS, DASH).
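To see where your own page spends its time, you can average each phase across all entries. The sketch below skips values of -1, which the HAR format uses for phases that did not apply (for example, no DNS lookup on a reused connection); the file name is a placeholder.

```python
import json

PHASES = ("dns", "connect", "ssl", "wait", "receive")

def average_timings(har_path: str) -> None:
    """Average each timing phase across all requests, skipping -1 (phase not applicable)."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    for phase in PHASES:
        values = [
            e["timings"][phase]
            for e in entries
            if e["timings"].get(phase, -1) >= 0
        ]
        avg = sum(values) / len(values) if values else 0
        print(f"{phase:8s} avg {avg:6.1f} ms over {len(values)} requests")

# average_timings("example.har")  # placeholder path
```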
⚡ How to read it all together
When you see an averages chart (DNS 22 ms, Connect 133 ms, SSL 78 ms, Wait 44 ms, Receive 16 ms):
DNS and Connect are relatively expensive → many new connections were opened.
Wait is low (44 ms) → servers responded quickly.
Receive is almost negligible → most files were small or well compressed.
👉 Overall the bottleneck isn’t the server but the network plumbing (too many connections, too many domains).
✨ Takeaway: Every request is a relay race. If any runner (DNS, TCP, TLS, server, download) passes the baton slowly, the whole race drags on. Timing Analysis shows which runner is lagging.
Third-party vs First-party
When a page loads, not all resources are equal. Some come directly from your servers (first-party), others are pulled from external providers (third-party). Splitting traffic this way gives you a clear picture of ownership and dependency.
Requests Split
This metric shows the number of requests made to first-party vs. third-party domains.
First-party requests:
Files hosted on your own domain (e.g. www.mysite.com or your CDN at cdn.mysite.com).
These are the assets you control: HTML, CSS, images, core scripts, APIs.
You can optimize, cache, compress, or reorganize them however you want.
Third-party requests:
Files loaded from domains you don’t control (e.g. Google Ads, Facebook, analytics trackers, fonts from fonts.gstatic.com, videos from YouTube).
Each one is a dependency: if they’re slow, your page is slow; if they’re down, parts of your site break.
Browsers also apply extra security and isolation layers to third-party requests, which can add latency.
Why it matters:
A page that’s 90% third-party requests is like a house built on rented land — you don’t control most of what happens.
Every third-party script increases the chance of performance bottlenecks, data leaks, or even security vulnerabilities.
Example:
In your HAR sample, there were 618 requests, but only 7 were first-party. That means 99% of requests came from third-parties. That’s an extreme imbalance — it suggests the page is highly dependent on ad networks, trackers, or widgets.
Transfer Size Split
This metric shows how much data weight comes from each side.
Sometimes you may have only a handful of third-party requests, but they are huge (like a 2 MB ad script or embedded video).
In other cases, hundreds of small tracker requests might only add up to a few kilobytes.
Why it matters:
Data weight directly impacts load time on slower networks, mobile data usage, and even battery drain on devices.
If third-party domains dominate page size, they effectively “own” your user experience — their slow downloads stall your entire page.
Example:
In your sample, first-party resources weighed just 200 KB, while third-parties totaled 19.5 MB. That means your actual content was only 1% of the payload — everything else was external baggage.
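A rough way to compute this split yourself: classify each request by hostname suffix. The sketch below is deliberately simple (it won't handle multi-part public suffixes or CDNs you own on other domains), and both the file name and the mysite.com suffix are placeholders.

```python
import json
from urllib.parse import urlparse

def party_split(har_path: str, first_party_suffix: str) -> None:
    """Split request count and transfer size into first- vs third-party buckets."""
    counts = {"first-party": 0, "third-party": 0}
    weight = {"first-party": 0, "third-party": 0}

    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    for e in entries:
        host = urlparse(e["request"]["url"]).hostname or ""
        party = "first-party" if host.endswith(first_party_suffix) else "third-party"
        counts[party] += 1
        # Prefer Chrome's non-standard _transferSize, fall back to bodySize.
        weight[party] += e["response"].get("_transferSize") or max(e["response"].get("bodySize", 0), 0)

    for party in counts:
        print(f"{party:12s} {counts[party]:4d} requests  {weight[party] / 1024:8.1f} KB")

# party_split("example.har", "mysite.com")  # placeholder path and domain
```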
⚖️ How to use this insight
Audit third-parties regularly: Do you really need three analytics trackers, five ad networks, and two heatmaps?
Set performance budgets: For example, no more than 20% of page weight can be third-party.
Prioritize user experience: Third-parties might generate revenue (ads) or insights (analytics), but if they slow your site so much that users bounce, they’re hurting more than helping.
Sandbox non-critical third-parties: Load them after the main content or only when the user interacts (e.g. don’t load chat widgets until the user clicks “help”).
Takeaway: First-party = control, third-party = risk. The healthiest pages have a strong core of first-party resources, with carefully chosen third-party services. If the balance tips too far toward third-parties, you’re no longer in charge of your site’s performance.
Charts & Summaries
Tables and raw numbers are great, but sometimes it’s better to see how the page behaves. These charts help you spot patterns that would otherwise get lost in the details.
Response Time Histogram
This chart groups every request by how fast it finished. The X axis = speed (e.g. <100 ms, 100–300 ms, 300–1000 ms, etc.), the Y axis = number of requests.
Why it’s useful:
Shows whether most requests are fast or whether you have a “long tail” of slow ones (tail latency).
Even if 90% of requests are quick, a few slow ones can block rendering or core functionality.
Typical pattern: most requests under 300 ms, a handful over 2 seconds → those outliers deserve investigation.
Tip: A neat mound near <100 ms = you’re in good shape. A long tail to the right = problematic resources.
Resource Type Mix
The pie chart shows the breakdown of requests by type:
JavaScript (JS)
Images
CSS / Stylesheets
Fonts
XHR / API calls
Media (video, audio)
Other
Why it’s useful:
Quickly reveals what makes your site “heavy.”
You might have hundreds of tiny JS files (poor bundling) or a few giant images (poor compression).
If JS dominates, the page likely relies on client-side rendering → users with slower CPUs (mobile devices) will feel it.
Practical tip:
<30% JS → healthy
>50% JS → beware of “JS bloat”
Images shouldn’t exceed ~50% of total weight for most sites (except galleries/photo-heavy apps).
Domain Weight (KB)
The bar chart shows how many kilobytes (or megabytes) each domain contributed.
Why it’s useful:
Makes it obvious whether your site is mostly about your content or just relaying third-party data.
One or two providers can inflate total page size dramatically.
Example:
Sample HAR: securepubads.g.doubleclick.net = 3.5 MB. That means the ad service alone is larger than the entire content of most blogs.
How to use it:
Watch whether your top three domains are third parties. If so, you’re losing performance control.
Timeline Density
The timeline shows when and how requests appeared during the load.
Why it’s useful:
A short, intense burst at the start → the browser is making parallel requests, the page is well optimized.
A long tail with requests still firing tens of seconds later → you’re stretching the experience (usually ads, tracking pixels, deferred JS).
Great for checking whether lazy loading works — images outside the viewport should start only when the user scrolls.
Practical tip:
Aim to load most critical requests within the first few seconds.
If you see a constant trickle of requests, a script may be polling an API endlessly (draining users and servers).
✨ Takeaway: Charts aren’t just pretty pictures. Each one offers a different perspective on site health:
Histogram = speed of individual requests
Resource mix = what the site is made of
Domain weight = which domain is the data hog
Timeline density = how loading is spread out over time
Requests by Domain
Every file your page loads comes from somewhere — and that “somewhere” is a domain. Some of those domains are under your control (first-party, like example.com), but many are not (third-party, like doubleclick.net or facebook.com). Tracking where the requests go gives you a map of your dependencies.
Top domains by requests
This view counts how many times the browser hit each domain.
Why it matters:
If you see a huge number of requests to third-party domains (ad networks, analytics, social media trackers), it usually means your site is leaning heavily on external services.
Even if each request is tiny, hundreds of extra hits add up in latency, CPU work, and energy consumption on the client device.
Browsers limit how many parallel connections they open to the same host (typically 6 per host for HTTP/1.1; HTTP/2 and HTTP/3 instead multiplex many requests over a single connection). Too many small requests = connection queuing = slower page loads.
Example scenario:
1gr.cz with 106 requests → likely a core content CDN serving images, scripts, or CSS.
onetag-sys.com with 23 requests → looks like a tracker or advertising service. Each one costs performance and privacy.
Rule of thumb:
Most sites should keep the number of domains low and the number of requests per domain reasonable.
If you see >50% of requests going to one tracker, that’s a red flag.
Optimization strategies:
Consolidate assets to fewer domains (preferably your own CDN).
Limit third-party scripts to only those that are business-critical.
Use resource hints like preconnect and dns-prefetch to reduce connection overhead.
Top domains by transfer size
This view ranks domains not by number of requests, but by total weight in bytes.
Why it matters:
Sometimes a domain only contributes a handful of requests, but each one is huge (for example, a video or a giant JS bundle).
Looking at weight highlights “data hogs” that inflate bandwidth usage.
On mobile networks, weight = money. Heavy third-party content can directly cost your users.
Example scenario:
securepubads.g.doubleclick.net → 3.5 MB. Even if it’s just a few ad files, this domain is dominating bandwidth.
www.googletagmanager.com → 1.8 MB. Tag managers can balloon in size when many scripts are added.
Rule of thumb:
Ideally, your own domain should dominate transfer size (first-party content).
If third-party domains top the list, they are literally costing you more than your actual content.
Optimization strategies:
Audit third-party weight: are those MB of ads worth the trade-off in bounce rate?
Use lazy loading for heavy resources (don’t download them until the user actually needs them).
Where possible, cache or serve content through your own optimized CDN.
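Both domain rankings can be derived from a HAR with a short script. As before, the sketch prefers Chrome's non-standard _transferSize field and falls back to bodySize, and the file name is a placeholder.

```python
import json
from collections import Counter
from urllib.parse import urlparse

def top_domains(har_path: str, limit: int = 10) -> None:
    """Rank domains by request count and by total transfer size."""
    counts, weight = Counter(), Counter()
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    for e in entries:
        host = urlparse(e["request"]["url"]).hostname or "unknown"
        counts[host] += 1
        weight[host] += e["response"].get("_transferSize") or max(e["response"].get("bodySize", 0), 0)

    print("Top domains by requests:")
    for host, n in counts.most_common(limit):
        print(f"  {n:5d}  {host}")

    print("Top domains by transfer size:")
    for host, size in weight.most_common(limit):
        print(f"  {size / (1024 * 1024):6.2f} MB  {host}")

# top_domains("example.har")  # placeholder path
```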
✨ Takeaway: The “Requests by Domain” section shows who you depend on and who’s slowing you down. High counts = too many moving parts; high weight = a few big elephants. Ideally, your own servers should dominate both, with third-parties kept lean and strategic.
Total Requests
This is the total number of separate files or data calls that the browser had to fetch in order to display the page. A single modern webpage can quietly pull in hundreds of resources:
HTML: the core page itself
CSS stylesheets: fonts, colors, layout
JavaScript files: both your code and third-party libraries
Images & icons: JPEGs, PNGs, SVGs, WebP
Fonts: webfonts often weigh more than images
APIs / XHR calls: data the page loads after initial HTML
Ads, analytics, trackers: the invisible swarm
Why it matters:
Each request adds overhead: DNS lookup, connection, TLS handshake, waiting, and downloading.
Too many requests make the “waterfall” long and fragmented. Even if each request is small, together they can drag performance down.
Best practice: consolidate. Use image sprites, bundle JS and CSS (but not too aggressively), lazy-load what isn’t critical.
Rule of thumb:
<100 requests → usually lean and efficient
100–300 → normal for content-heavy sites
500+ → red flag, often overloaded with ads, trackers, or poorly optimized assets
Total Load Time
This is how long it took from the very first request until the very last one finished. It’s basically the stopwatch for your page load.
Important nuance:
This metric is not the same as what a user perceives.
A page might be visually ready in 2 seconds but still making ad requests until 20 seconds.
A page might look broken because key scripts load last, even if total time is short.
Why it matters:
Long load times usually point to background scripts, ads, or deferred APIs dragging on.
Some SEO tools (like Google Lighthouse) don’t weigh “total load time” as much as metrics like Largest Contentful Paint (LCP) or Time to Interactive (TTI) — but it’s still a useful global measure of “how busy” the page was.
Analogy: It’s like the difference between when you can start eating dinner and when the kitchen has finished washing the last pan. Users care about the first part, but devs should track both.
Largest Request
This is the biggest single file downloaded, measured in kilobytes (KB) or megabytes (MB).
Usual suspects:
Images (uncompressed photos can easily hit several MB)
JavaScript bundles (especially if frameworks + vendor code are lumped into one file)
Video snippets or animations
Fonts (surprisingly heavy, especially if multiple weights/styles are included)
Why it matters:
Large assets block other requests. The browser’s bandwidth is not infinite — one oversized image can delay many smaller files.
Mobile users on 3G/4G connections suffer the most here.
Fix strategies:
Compress images (use WebP/AVIF instead of JPEG/PNG where possible).
Split large JS bundles into smaller chunks with code-splitting.
Load fonts asynchronously or only the needed character subsets.
Rule of thumb:
Any single request >1 MB should raise eyebrows.
>2 MB = unacceptable for critical content.
Slowest Request
This is the resource that took the longest to arrive, measured in milliseconds.
Common culprits:
Slow third-party ads or trackers
API calls to overloaded servers
Fonts/CDNs under heavy traffic
Misconfigured caching (forcing full reloads every time)
Why it matters:
Even one slow request can block rendering if it’s critical (like CSS or JS needed for layout).
Third-party scripts are dangerous here: if your site depends on them, their slowness becomes your slowness.
How to read it:
If the slowest request is non-critical (like an ad pixel), maybe you can live with it.
If it’s CSS or JS, that’s a serious performance bottleneck.
Fix strategies:
Use async/defer attributes for JS so it doesn’t block.
Move non-critical resources off the critical path.
Cache aggressively and serve from CDNs.
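If you want to compute these four headline numbers straight from a HAR yourself, here is a minimal sketch. "Total load time" is measured the way described above, from the first request's start to the last response's end; the file name is a placeholder.

```python
import json
from datetime import datetime

def general_metrics(har_path: str) -> None:
    """Compute total requests, total load time, largest request, and slowest request."""
    with open(har_path, encoding="utf-8") as f:
        entries = json.load(f)["log"]["entries"]

    def size(e):
        # Prefer Chrome's non-standard _transferSize, fall back to bodySize.
        return e["response"].get("_transferSize") or max(e["response"].get("bodySize", 0), 0)

    def start_ms(e):
        # startedDateTime is ISO 8601 with a trailing 'Z'; normalize it for fromisoformat.
        return datetime.fromisoformat(e["startedDateTime"].replace("Z", "+00:00")).timestamp() * 1000

    first_start = min(start_ms(e) for e in entries)
    last_end = max(start_ms(e) + e["time"] for e in entries)

    largest = max(entries, key=size)
    slowest = max(entries, key=lambda e: e["time"])

    print(f"Total requests : {len(entries)}")
    print(f"Total load time: {(last_end - first_start) / 1000:.1f} s (first start to last response end)")
    print(f"Largest request: {size(largest) / 1024:.0f} KB  {largest['request']['url']}")
    print(f"Slowest request: {slowest['time']:.0f} ms  {slowest['request']['url']}")

# general_metrics("example.har")  # placeholder path
```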
Takeaway: These four metrics don’t just describe what happened — they hint at where to optimize: fewer requests, smaller files, and reduced dependency on slow third-parties.