What is a cache?

October 18, 2020
10 min read
Do you have a vague idea of what a cache is, but want to really understand it? Want to learn how you can use caching to make your apps faster, more resilient, and even less resource-intensive for your clients? Then this article is for you.

In this article, we’re going to go through what a cache is, and which kinds of caching are relevant for most frontend developers. We’ll touch on how we can cache data in JavaScript, via service workers, the browser itself, and external caches, such as CDNs and backends. Finally, we’ll look at cache invalidation, and try to get a basic understanding of both what it is and why it’s so hard to get right.

What is a cache? 🤔

Before we dive into the many ways we can implement caching, we should look at some sort of technical definition of what a cache is. Put simply, a cache is a way to save data that you received earlier so that it’s easier to retrieve again later. I’ll explain this through an example.

Like most internet users, you’ve probably downloaded a file to your computer at some point in time. Perhaps it’s a document you’re working on with a few friends from school. Since it’s now on your computer, you can access it whenever you want, without fetching a new copy every time you want to work on it. This feature – having access to some resource in an easier (or cheaper) way – is the main idea of a cache.

We see this kind of technique used in most parts of a modern tech stack. We cache photos in our browser so they show up right away on subsequent visits. We cache the user JSON object in some sort of state management library, so we don’t have to ask the server for what the user’s name is every time we want to change what’s displayed. We even cache entire web apps in the browser so that they work without an internet connection (so-called progressive web apps or PWAs).

Why not cache everything forever, then?

With all of these upsides, you might ask yourself why we don’t cache everything forever! Why even bother fetching new data if we already have it locally? Well, as it turns out, the world isn’t static, and the data we download has the potential to change in the future. Therefore, we run the risk of dealing with out-of-date information whenever we cache it.

Knowing what to cache, and for how long, is one of those problems that requires you to really consider the use case of each piece of information, and how important it is to reflect changes right away. That’s why I’ve always thought of it as an art to get right. With all that said, we’ll go through some examples and give you some practical hints later on in this article.

The different types of cache

As a frontend developer, you’ll see quite a few different caching types as you progress through the stack. Here’s a description of each “layer” of cache, and when it shines.

JavaScript cache

The very first cache your code will encounter is the cache you typically make yourself. That is, some sort of way to keep the data from your API in memory.

A very simple cache with no invalidation (relax, we’ll come back to what that means later) could be implemented like this:

let cache = {};

async function getCachedValue(key, callback) {
  if (cache.hasOwnProperty(key)) {
    return cache[key];
  }
  const result = await callback();
  cache[key] = result;
  return result;
}

Here, we have a “global” cache object, which is persisted between calls to this caching function. We check if the cache contains the cache key, and if it does, we simply return the cached value. If it doesn’t, we call the provided callback function to somehow get a value, place it in the cache and return it to the user.

You would then call this function with a key, and a callback that would asynchronously fetch the data in question:

const user = await getCachedValue("user", async () => {
  const res = await fetch("/api/user");
  return res.json();
});

Here, we would fetch the user the first time this code was called. The second time, we would have found the user in the cache, and avoided the extra call to the server.

There are tons of libraries that help with this. I write mostly React code myself, and in that ecosystem, SWR and react-query are two great libraries that implement such a cache for you (in addition to a lot of other nice-to-have features).

HTTP cache

Caching is one of the most fundamental features in web browsers, and has been for decades. That’s why it’s built into the very protocol that transfers data from servers to users – HTTP. Via special header fields included with each response, the server can instruct the browser to cache certain files for certain periods of time. In particular, it’s the Cache-Control header you want to read up on.
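As a quick illustration, here’s roughly what a response for a fingerprinted static asset might look like. The exact values are just an example – `max-age` is in seconds, and `immutable` tells the browser it never needs to revalidate the file:

```
HTTP/1.1 200 OK
Content-Type: text/css
Cache-Control: max-age=31536000, immutable
```

With a unique (hashed) file name per release, a year-long `max-age` like this is safe, because a new release will reference a new file name anyway.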

This caching mechanism is the one most users think about when they hear caching. You’ve probably at some point heard the term “clearing your cache” as a way to fix some weird bug on a website, and this is the cache they referred to.

Caching resources via HTTP is an incredible tool for improving your site. By adding the correct cache headers, and perhaps creating unique file names for all static resources, you can cache all resources indefinitely on the client-side (well, until somebody tells your user to clear their cache, that is). Even dynamic content can be cached if done carefully.

I would love to dive deeply into the HTTP caching techniques, but MDN’s resource on the matter is too comprehensive to not recommend instead. Check it out here.

Service worker cache

Sometimes, you need the power of an HTTP cache, with the programmability of JavaScript. That’s where you can reach for so-called service workers. Service workers enable you (among other things) to cache all resources locally, but with full programmatic control over what gets cached when, and for how long.

Service workers act as an intermediary for all network requests. Whenever your web application requests a resource (let’s say, an image), you can intercept it, look up a cached version (or a fallback) and return it, all while you’re fetching an updated version in the background.
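The "look in the cache first, fall back to the network" part of that flow can be sketched as a small helper. This is my own illustration, not code from a real service worker: the lookup, fetch, and store operations are passed in as functions, so the same logic would work with the browser's `caches.match()`, `fetch()`, and `cache.put()` inside a `fetch` event handler.

```javascript
// A cache-first strategy, sketched with injected dependencies.
// In a real service worker, cacheLookup would be caches.match(),
// fetchFn would be fetch(), and cachePut would be cache.put().
async function cacheFirst(request, cacheLookup, fetchFn, cachePut) {
  const cached = await cacheLookup(request);
  if (cached) {
    return cached; // served locally, no network round-trip
  }
  const response = await fetchFn(request);
  await cachePut(request, response);
  return response;
}
```

Inside an actual service worker, you would wire this up in a `fetch` event listener via `event.respondWith(...)`, which is where the interception happens.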

Combined with a simple manifest file, service workers even let you create complete offline experiences for websites after the original visit. This is an immensely valuable feature in a world where data coverage isn’t as universal as you might think!

Let me add a final word of caution. Since service workers are so incredibly powerful, they also come with the possibility of ruining your web site for the foreseeable future. Since a service worker runs as a separate process from the rest of your site, it will persist from one version of your site to the next. In other words, you need to take special care to make sure you don’t screw anything up 😅.

Luckily, there are tools that help you create ready-made service worker caches. You can plug tools like Google’s workbox into your build pipeline, and have one generated for you. Job done!

Backend cache

The last piece of the caching puzzle for frontend devs has nothing to do with the frontend at all. Instead, it’s the caching that happens on the server-side of your application.

But why do we need caching on the backend as well? The servers typically have much more resources and network stability than even the most powerful clients, so why is there a need to cache stuff? Well, as it turns out, the server also asks other services for data.

Take a database query, for instance. Scanning through a database of millions of records to find the ones relevant for a particular query might take seconds. Instead of doing this work over and over again, a backend engineer might choose to cache those queries for a bit of time. Other external services outside of our control might also be great caching opportunities.

Caching on the server-side often includes a concept called distributed caches, which complicates things quite a bit. Since you’re probably running more than one server, and a request can be directed to any one of those servers, you need to have a shared cache between them. This has become easier to set up with tools like hazelcast, but is still a stumbling block for many.

I won’t dive into too much detail about this kind of caching, as I find it a bit out of scope for this article. But know there is a lot to learn here as well!

Removing things from the cache

Sometimes, you don’t want something to be cached anymore. There are typically three good reasons for this. It might have changed, it might be too old, or it might not be used often enough.

Seldom-used entries

Let’s start with removing entries that aren’t used often enough. Why would you want to be stingy about caching data that’s seldom used? Well, because of space. Put simply, caching is just a way to save data, and some of that data might be pretty large in terms of megabytes. At some point, depending on your system’s configuration, you will run out of space for these duplicate copies of data. Then, we need to somehow rank our cache entries by usefulness, and how often a cached resource is used is definitely a nice metric for usefulness. So if we’re trying to add a new entry to our cache, we need to remove the least used ones first.

There are several techniques for deciding which entry is the least useful, though – it could be the one that has been looked up the fewest times in a given time interval, or simply the least recently used entry. Which technique you choose is up to you and your specific requirements.
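The least-recently-used (LRU) variant is the classic one, and it fits in a few lines of JavaScript. This is a minimal sketch of the idea (the class and its names are mine, not from any particular library), relying on the fact that a `Map` remembers insertion order, so the first key is always the least recently used:

```javascript
// A minimal least-recently-used (LRU) cache sketch.
class LruCache {
  constructor(maxEntries) {
    this.maxEntries = maxEntries;
    this.entries = new Map();
  }

  get(key) {
    if (!this.entries.has(key)) return undefined;
    // Delete and re-insert to mark this key as most recently used.
    const value = this.entries.get(key);
    this.entries.delete(key);
    this.entries.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.entries.has(key)) this.entries.delete(key);
    this.entries.set(key, value);
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used entry – the first key in the Map.
      const oldest = this.entries.keys().next().value;
      this.entries.delete(oldest);
    }
  }
}
```

With `maxEntries` set to 2, adding a third entry silently evicts whichever of the first two you touched least recently.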

Old entries

Another approach to keep cache sizes in check, while also ensuring that your data is fresh enough, is removing cache entries based on how long they’ve been in the cache. You might want to cache images longer than your user data, since images rarely change, but at some point, you probably want to fetch a new version of the image as well – just in case.

If a cached resource is requested, and the cached item is expired, a new version will be fetched instead, and the old entry will be switched out, leaving the cache fresh again.
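This age-based expiry can be bolted onto the simple `getCachedValue` function from earlier. The sketch below is my own extension (the `ttlMs` parameter and the entry shape are invented for illustration): each entry remembers when it was stored, and an expired entry is treated exactly like a miss and switched out for a fresh one.

```javascript
// The simple cache from earlier, extended with a time-to-live (TTL).
let cache = {};

async function getCachedValue(key, ttlMs, callback) {
  const entry = cache[key];
  if (entry && Date.now() - entry.storedAt < ttlMs) {
    return entry.value; // still fresh enough
  }
  // Missing or expired: fetch a new value and swap out the old entry.
  const value = await callback();
  cache[key] = { value, storedAt: Date.now() };
  return value;
}
```

You might call this with a long TTL for images (`getCachedValue("avatar", 86_400_000, ...)`) and a short one for user data, matching the trade-off described above.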

Cache invalidation

I told you we’d get back to cache invalidation. So what is it, exactly?

Cache invalidation is the art of removing some subset of the data in your cache. You typically want to do this if the underlying data has been updated, and you want your application to go fetch a new version.

Depending on where you’re doing your caching, you’ll have different ways to do this as well. If you’re doing something programmatically (like in JavaScript), you can simply remove the cache entry, and request a new one in the background.
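For the simple JavaScript cache from earlier, invalidation really is that simple – deleting the key is all it takes, and the next lookup will fetch a fresh copy. A tiny sketch (the `invalidate` helper is my own name for it):

```javascript
// Invalidating an entry in a plain-object cache is just deleting the key.
let cache = { user: { name: "Ada" } };

function invalidate(key) {
  delete cache[key];
}
```

After calling `invalidate("user")`, a caching function like `getCachedValue` would see a miss and call the server again.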


Summing up

Caching is hard because caching is a lot of different things. You can cache stuff in your application, via HTTP, via service workers, and even in the backend itself. What to use when isn’t readily apparent to many, but hopefully this gave you some sort of idea of how it all works. Finally, we looked at why you would ever want to remove something from the cache, and different ways to do that, too.
