Focused Topics in Redis Day 3 - #9

Caching & Keyspace Notifications

Redis provides a number of Least Recently Used (LRU) eviction algorithms in addition to its standard time-based expiration functionality, which is set on individual Redis keys.

To support post-processing when a Redis key is evicted from the datastore, you can subscribe to specific events using a special Pub/Sub mode in Redis.

In this topic …

Default Expiration

As an in-memory datastore, Redis's behavior when it runs out of memory can be adjusted to the needs of the application. Redis's default policy upon reaching the maximum available memory is noeviction, which raises an Out-of-Memory (OOM) error on writes.

One strategy to avoid OOM errors is to set EXPIRE on a key so that it will be evicted from the datastore when its time is up. The TTL command returns the remaining time before a key expires, while the PERSIST command clears an existing expiration on a key.

Setting a memory ceiling for your running Redis instance is accomplished by setting the maxmemory directive either in redis.conf or at runtime.

> CONFIG GET maxmemory
1) "maxmemory"
2) "1048576"
> CONFIG GET maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
> CONFIG SET maxmemory 1mb

We'll now create a Python function that adds data until the datastore is full:

>>> import uuid
>>> def add_id(redis_instance):
        redis_key = "uuid:{}".format(redis_instance.incr("global:uuid"))
        redis_instance.set(redis_key, str(uuid.uuid4()))  # str() because redis-py expects bytes/str/number values
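
As a sketch of what happens under the default noeviction policy, calling add_id in a loop will eventually cross the 1 MB ceiling and redis-py will raise a ResponseError carrying Redis's OOM message (the connection r from the sketch above is assumed; the exact message can vary by Redis version):

>>> import redis
>>> try:
        while True:
            add_id(r)
    except redis.exceptions.ResponseError as err:
        print(err)

OOM command not allowed when used memory > 'maxmemory'.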

Setting an expiration, polling time remaining, and clearing an existing expiration:

> SADD authors "David Foster Wallace" "James Clavell" "Jane Austen"
(integer) 3
> EXPIRE authors 180
(integer) 1
> TTL authors
(integer) 173
> TTL authors
(integer) 146
> PERSIST authors
(integer) 1
> TTL authors
(integer) -1
> EXPIRE authors 10
(integer) 1
> TTL authors
(integer) -2
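
The same expiration workflow can also be driven from redis-py; a minimal sketch using the connection r assumed earlier (return values are illustrative):

>>> r.sadd("authors", "David Foster Wallace", "James Clavell", "Jane Austen")
3
>>> r.expire("authors", 180)
True
>>> r.ttl("authors")
173
>>> r.persist("authors")
True
>>> r.ttl("authors")
-1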

Redis LRU Caching Options

Redis offers a number of different LRU (least recently used) caching options to better handle OOM cases in your running Redis instance.

The Redis LRU algorithm is not exact: Redis does not automatically choose the best candidate key for eviction, the least recently used key, or the key with the earliest access date. Instead, Redis's default behavior is to take a sample of five keys and evict the least recently used of those five. If we want to increase the accuracy of the LRU algorithm, we can change the maxmemory-samples directive either in redis.conf or at runtime with the CONFIG SET maxmemory-samples command. Increasing the sample size to 10 improves the accuracy of the Redis LRU so that it approaches a true LRU algorithm, with the side effect of more CPU computation. Decreasing the sample size to 3 reduces the accuracy of Redis LRU but with a corresponding increase in processing speed.
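
A quick sketch of adjusting the sample size at runtime from redis-py (connection r assumed as above):

>>> r.config_set("maxmemory-samples", 10)   # closer to true LRU, more CPU per eviction
True
>>> r.config_set("maxmemory-samples", 3)    # faster evictions, less accurate
True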

Under the volatile-lru policy, Redis evicts only keys that have EXPIRE set; if there are no such keys to evict when Redis reaches maxmemory, an OOM error is returned to the client. Note: under this policy, once Redis reaches maxmemory it will start evicting keys that have an expiration set even if the time limit on those keys hasn't been reached yet.

Testing volatile-lru
> FLUSHDB
> CONFIG SET maxmemory-policy volatile-lru

Python function to add keys and set an expiration on the first 75 of them:

>>> def add_id_expire(redis_instance):
        count = redis_instance.incr("global:uuid")
        redis_key = "uuid:{}".format(count)
        redis_instance.set(redis_key, str(uuid.uuid4()))
        if count <= 75:
            redis_instance.expire(redis_key, 300)
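
As a sketch of the test itself (connection r and import redis assumed as above; output is illustrative): filling the datastore with add_id_expire under volatile-lru evicts only the keys that carry an expiration, and once those 75 keys are gone, further writes return an OOM error while the non-expiring keys survive.

>>> try:
        while True:
            add_id_expire(r)
    except redis.exceptions.ResponseError as err:
        print(err)           # raised once no keys with an expiration remain to evict

OOM command not allowed when used memory > 'maxmemory'.
>>> r.ttl("uuid:1")          # had an expiration set: evicted, so the key is gone
-2
>>> r.ttl("uuid:500")        # no expiration set: never evicted under volatile-lru
-1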


The allkeys-lru policy evicts keys using the LRU algorithm across the whole keyspace, regardless of whether an expiration is set. allkeys-lru can delete ANY key in Redis and there is no way to restrict which keys may be deleted. If your application needs to persist some Redis keys (say for configuration or reference look-ups), DON'T use the allkeys-lru policy!

Testing allkeys-lru
> FLUSHDB
> CONFIG SET maxmemory-policy allkeys-lru
Running the add_id function in an infinite while loop with a counter:

>>> count = 1
>>> while 1:
        add_id(r)   # r: existing redis-py connection, assumed as above
        count += 1

Running the INFO stats command will show us the status of the Redis cache.
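
A brief sketch of reading those stats from redis-py (connection r assumed; the counter values below are illustrative):

>>> stats = r.info("stats")
>>> stats["evicted_keys"]                            # keys removed by the maxmemory policy
12038
>>> stats["keyspace_hits"], stats["keyspace_misses"]
(4021, 117)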

Redis offers two other types of non-LRU maxmemory policies. volatile-random and allkeys-random mirror the previous policies, but instead of calculating the LRU of the keys, random keys are evicted: drawn only from keys that have an expiration set in the case of volatile-random, or from any keys in the case of allkeys-random.

Keyspace Notifications Overview

Keyspace notifications give an application the ability to respond to changes that occur to the value stored at a particular key or keys. Redis provides a mechanism for client code to subscribe to a Pub/Sub channel that monitors events related to the data. This functionality, called keyspace notifications, can monitor events such as all the commands that change a given key, all keys receiving specific commands like HSET, or all keys that are about to be deleted because they expired.
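
As a minimal sketch of the mechanism (assuming the redis-py connection r from earlier and database 0): keyspace notifications are disabled by default, so they must first be enabled via the notify-keyspace-events directive, after which a client can psubscribe to the __keyspace@<db>__ and __keyevent@<db>__ Pub/Sub channels. The key name used below is hypothetical.

>>> r.config_set("notify-keyspace-events", "KEA")    # K = keyspace, E = keyevent, A = all event classes
True
>>> p = r.pubsub()
>>> p.psubscribe("__keyevent@0__:expired")           # fires whenever a key in db 0 expires
>>> r.set("session:42", "some data", ex=1)           # hypothetical key with a 1-second expiration
True
>>> for message in p.listen():                       # blocks; messages include the expired key's name
        print(message)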