December 17

Is Your Cache Crispy Fresh?

Our big project these days is a major overhaul of our map tile servers for our Weather Overlays API. We’re moving the entire codebase over to Node.js and using mapnik to generate tile images.

Performance and resource usage are major concerns. We need to generate images for more than 20 different weather data sets, some of which update as often as every 2 minutes. To keep our maps snappy and not break the bank, we need to do a really good job of caching at every level of resource creation.

There are several great caching libraries out there, but we had trouble finding something that matched all of our requirements:

  • Can cache any type of resource (i.e., not just an HTTP cache)
  • Can serve “stale” data, while pre-fetching fresh data for the next request
  • Can control memory usage with configurable limits
  • Provides a clean separation between business logic and the cache layer
  • Locks cache misses, so we don’t have to worry about cache stampedes when a resource expires

Nothing we found quite fit the bill, so my colleague Seth Miller decided to roll his own, called CrispCache.

Using CrispCache

CrispCache can be installed via npm:
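```
npm install crisp-cache
```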

Let’s see how we could use CrispCache to get the current temperature using the Aeris Observations API. First, we’ll start with our business logic of fetching the data:
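Here’s a minimal, callback-style sketch of that function. It uses the request package, the Aeris Observations API endpoint, and its response.ob.tempF field; the client ID and secret are placeholders:

```js
var request = require('request');

/**
 * Fetch the current temperature (°F) for a location
 * from the Aeris Observations API.
 */
function currentTemp(loc, callback) {
  var url = 'https://api.aerisapi.com/observations/' + loc +
    '?client_id=CLIENT_ID&client_secret=CLIENT_SECRET';

  request(url, { json: true }, function (err, res, body) {
    if (err) { return callback(err); }
    // The observation reading lives under `response.ob` in the API payload
    callback(null, body.response.ob.tempF);
  });
}

module.exports = currentTemp;
```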

Next, create a cached version of currentTemp using CrispCache. I’ll often put these cached wrappers in a services.js module so I can easily swap out which implementation my app is using (e.g. for test mocks).
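A sketch of that module (the exact option names may differ slightly; see the CrispCache README for the full set). The TTLs here match the 15-minute staleness window used in the examples below:

```js
// services.js
var CrispCache = require('crisp-cache');
var currentTemp = require('./current-temp');

module.exports = {
  // Same signature as currentTemp(loc, callback), but cached
  currentTemp: CrispCache.wrap(currentTemp, {
    defaultStaleTtl: 1000 * 60 * 15,    // entries go stale after 15 minutes
    defaultExpiresTtl: 1000 * 60 * 60,  // and expire entirely after an hour
    // Map the function's arguments to a cache key, and back again
    // (CrispCache re-invokes the function by key for background refreshes)
    createKey: function (loc) { return loc; },
    parseKey: function (key) { return [key]; }
  })
};
```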

The great thing about CrispCache.wrap() is that it allows us to use our cached function just like we would the original function:
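For example (the location here is just an illustration):

```js
var services = require('./services');

// Called exactly like the original currentTemp()
services.currentTemp('55403', function (err, temp) {
  if (err) { throw err; }
  console.log('Current temp: ' + temp + '°F');
});
```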

This means our implementation code can be entirely agnostic to the caching layer. It wouldn’t be too much work to override our currentTemp service with a test mock or even disable caching entirely in a development environment.

Stale vs. Expired Cache

“But wait a minute,” I hear you say, “why would I want to serve my users stale data? What happened to ‘crispy fresh’?” Let me explain what happens when a cache entry is stale, and you will see how this makes our application as crispy fresh as a head of iceberg lettuce in the springtime.

A cache entry may exist in one of four states: empty, valid, stale, or expired. When a value is requested from the cache, the cache’s behavior depends on the state of the matching cache entry:

State     Cache Behavior
empty     Invokes the underlying function, resolves with the result, and saves the result to the cache.
valid     Resolves immediately with the cached value. The underlying function is not invoked.
stale     Resolves immediately with the cached value. Also invokes the underlying function in the background, saving the result for the next request.
expired   Removes the expired value from the cache. Invokes the underlying function, resolves with the result, and saves the result to the cache.

Consider this example in which we invoke currentTemp() at 3pm:
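(The temperatures and timings in these snippets are made up for illustration.)

```js
// 3:00pm -- cache is empty
services.currentTemp('55403', function (err, temp) {
  console.log(temp); // waits for the Aeris API round trip, then logs e.g. 31
});
```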

Because this is our first request, we get a cache miss, which means we have to wait a moment for the request to the Aeris Observations API to resolve. But if we try again at 3:01pm:
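```js
// 3:01pm -- the cache entry is still valid (it goes stale at 3:15pm)
services.currentTemp('55403', function (err, temp) {
  console.log(temp); // logs the cached 31 immediately; no API call is made
});
```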

The cache response is instantaneous because the API response was cached in memory. But what happens if we make another request at 3:16pm, one minute after our cache entry has gone “stale”:
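```js
// 3:16pm -- the entry went stale at 3:15pm
services.currentTemp('55403', function (err, temp) {
  console.log(temp); // still logs the cached 31 immediately...
});
// ...while CrispCache re-invokes currentTemp() in the background
```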

So even though our cache entry is stale, we still get an immediate response. But at the same time, we are firing off another request to the Aeris API in the background so that new data will be ready for our next request:
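```js
// 3:17pm -- the background refresh has already landed in the cache
services.currentTemp('55403', function (err, temp) {
  console.log(temp); // logs the fresh reading immediately, e.g. 32
});
```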

The result is that we are always providing the freshest data available, without making anyone wait for the data to be fetched.

Limiting Memory Usage (LRU Cache)

On our map image servers, all of our map tile images are cached in memory. Configuring the TTL values for the image cache can be a little tricky. If I set the TTL to an hour, will the cache devour all of the memory on my server? Can I squeeze more out of my server and cache for a little longer?

Wouldn’t it be nice if you could define the max memory usage of your caches and forget about it? Here’s the shape of the config on our map image server (a simplified sketch; the key names are illustrative):
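```js
// config.js (illustrative sketch -- only maxSize is referenced below)
module.exports = {
  tileImageCache: {
    // Cap the in-memory tile image cache at 100MB
    maxSize: 1024 * 1024 * 100
  }
};
```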

As you can see, we’ve set a memory limit of 100MB on our tile image cache. When we create our cache wrapper, we just need to reference the configured maxSize, and tell CrispCache how to determine an entry’s size:
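Here’s a sketch of that wrapper, assuming a renderTile(coords, callback) function that calls back with a PNG buffer (the renderer and key scheme are stand-ins for our real tile code):

```js
var CrispCache = require('crisp-cache');
var config = require('./config');
var renderTile = require('./render-tile'); // renderTile(coords, cb) -> PNG buffer

var cachedRenderTile = CrispCache.wrap(renderTile, {
  maxSize: config.tileImageCache.maxSize,
  createKey: function (coords) {
    return coords.z + '/' + coords.x + '/' + coords.y;
  },
  parseKey: function (key) {
    var parts = key.split('/');
    return [{ z: +parts[0], x: +parts[1], y: +parts[2] }];
  },
  // An entry's size is the byte length of the rendered image,
  // so maxSize works out to a real memory budget
  getOptions: function (tileBuffer) {
    return { size: tileBuffer.length };
  }
});
```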

So what is an LRU cache? I’m glad you asked. LRU stands for Least Recently Used: the cache automatically removes entries as it approaches its maxSize, evicting the least recently used entries first.

Take the following example:
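Here’s a sketch using the cache directly (the constructor, set, and get calls follow my reading of the CrispCache README; treat the exact signatures as assumptions), with a maxSize of 2 and each entry given a size of 1:

```js
var CrispCache = require('crisp-cache');

var cache = new CrispCache({
  fetcher: function (key, callback) { callback(null, null); }, // unused here
  maxSize: 2
});

cache.set('foo', 'fighters', { size: 1 });
cache.set('bar', 'baz', { size: 1 });

// Touch 'foo', leaving 'bar' as the least recently used entry
cache.get('foo', function (err, value) { /* value === 'fighters' */ });

// 'shazaam' pushes us past maxSize, so 'bar' gets evicted
cache.set('shazaam', 'kazaam', { size: 1 });
```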

When we add "shazaam" to the cache, we exceed the cache’s configured maxSize. As a result, CrispCache finds the least recently used entry (in this case 'bar') and removes it from the cache.

With the maxSize and getOptions: () => ({ size }) configurations, we can rely on CrispCache to manage our memory usage for us.


As a web developer, I am keenly aware of how much of my work is just a thin layer on top of existing open-source tools and platforms. So, I really enjoy the chance for our team to put something back out there for the community.

Give CrispCache a try the next time you’re in the mood for some crispy fresh caching. It does a lot more than I’ve been able to cover here, so check out the docs. And while you’re playing with it, open an issue or send a pull request, and we’ll keep building on it.
