Introduction to Redis

EXPIRE Redis command for setting a timeout on a key

A key in Redis can be given a timeout so that it is automatically deleted from the datastore when the timeout elapses.

Exercise: Setting a Timeout on a Key

> SET Person:334:ReadAction "Website"
OK
> EXPIRE Person:334:ReadAction 10
1
> TTL Person:334:ReadAction
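To make the SET / EXPIRE / TTL semantics above concrete, they can be sketched as a toy in-memory store in plain Python. This is an illustrative model only, not how Redis is implemented; ExpiringStore is a hypothetical name.

```python
import time

class ExpiringStore:
    """Toy key-value store whose keys vanish after a timeout,
    mimicking Redis SET + EXPIRE + TTL (illustrative only)."""

    def __init__(self):
        self._data = {}  # key -> (value, expires_at or None)

    def set(self, key, value):
        self._data[key] = (value, None)

    def expire(self, key, seconds):
        if key not in self._data:
            return 0
        value, _ = self._data[key]
        self._data[key] = (value, time.monotonic() + seconds)
        return 1

    def ttl(self, key):
        entry = self._data.get(key)
        if entry is None:
            return -2  # Redis convention: key does not exist
        _, expires_at = entry
        if expires_at is None:
            return -1  # Redis convention: key has no expiration
        remaining = expires_at - time.monotonic()
        if remaining <= 0:
            del self._data[key]  # delete lazily on access, like Redis's passive expiry
            return -2
        return int(remaining)

    def get(self, key):
        if self.ttl(key) == -2:
            return None
        return self._data[key][0]
```

Calling `expire` after `set` mirrors the exercise above: `ttl` then reports the seconds remaining, and once the timeout elapses the key reads as missing.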

Setting LRU Eviction

Redis supports a variant of the least recently used (LRU) approach to removing old data as new data is added to a Redis datastore. When Redis is used as an application cache, LRU eviction prevents the datastore from exceeding the available memory.

LRU Setup Options

Redis offers a number of LRU options that are set in redis.conf. First, the maxmemory configuration option must be set. If maxmemory is set to 0 (the default), Redis does not limit memory itself but is constrained by the operating system's available RAM on 64-bit systems and by an implicit 3 GB limit on 32-bit systems.
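The memory limit can also be set persistently in redis.conf before startup; a minimal sketch (the 1mb value here is only an example):

```
# redis.conf (fragment) -- example value only
maxmemory 1mb
```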

Exercise: Setting maxmemory at Runtime

> CONFIG SET maxmemory 1mb

Exercise: Setting LRU Policies


The noeviction policy is the Redis default. When Redis reaches maxmemory under this policy, it does not evict anything; instead, commands that would use more memory return an error to the client.

> CONFIG SET maxmemory-policy noeviction


The allkeys-lru policy evicts the least recently used keys, whether or not they have an expiration set.

> CONFIG SET maxmemory-policy allkeys-lru

The volatile-lru policy evicts the least recently used keys among those with an expiration set.

> CONFIG SET maxmemory-policy volatile-lru

The allkeys-random policy evicts keys at random, whether or not they have an expiration set.

> CONFIG SET maxmemory-policy allkeys-random

The volatile-random policy evicts keys at random among those with an expiration set.

> CONFIG SET maxmemory-policy volatile-random

The volatile-ttl policy evicts keys with an expiration set, preferring those with the shortest remaining time to live (TTL).

> CONFIG SET maxmemory-policy volatile-ttl

The Redis LRU algorithm is not exact: Redis does not automatically choose the best candidate key for eviction, such as the least recently used key or the key with the earliest access date. Instead, the default behavior is to take a sample of five keys and evict the least recently used of those five. To tune the accuracy of the LRU approximation, we can change the maxmemory-samples directive, either in redis.conf or at runtime with the CONFIG SET maxmemory-samples command. Increasing the sample size to 10 brings the Redis LRU closer to a true LRU algorithm, at the cost of more CPU computation. Decreasing the sample size to 3 reduces accuracy but increases processing speed.
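The sampling behavior described above can be sketched in Python. This is an illustrative model of the idea, not Redis's actual C implementation; evict_sampled_lru is a hypothetical helper name.

```python
import random

def evict_sampled_lru(keys_last_access, sample_size=5):
    """Pick an eviction victim the way Redis approximates LRU:
    sample a few keys and evict the least recently accessed one.

    keys_last_access: dict mapping key -> last-access timestamp.
    sample_size mirrors the maxmemory-samples directive (default 5).
    """
    keys = list(keys_last_access)
    sample = random.sample(keys, min(sample_size, len(keys)))
    # The victim is the sampled key with the oldest access time.
    return min(sample, key=lambda k: keys_last_access[k])
```

With a small sample the victim is only probably the oldest key; raising sample_size toward the total key count makes the choice converge on true LRU, which is exactly the accuracy/CPU trade-off maxmemory-samples controls.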

Under the volatile-lru policy, Redis evicts only keys that have an expiration (EXPIRE) set. If there are no such keys to evict when Redis reaches maxmemory, an OOM error is returned to the client. Note: under this policy, when Redis reaches maxmemory it will start evicting keys that have an expiration set even if their time limit hasn't been reached yet.

Testing volatile-lru

> FLUSHDB
> CONFIG SET maxmemory-policy volatile-lru

Python function to add and set keys

>>> import uuid
>>> def add_id_expire(redis_instance):
...     count = redis_instance.incr("global:uuid")
...     redis_key = "uuid:{}".format(count)
...     redis_instance.set(redis_key, str(uuid.uuid4()))
...     # Only the first 75 keys get a 300-second expiration, so under
...     # volatile-lru only those keys are candidates for eviction.
...     if count <= 75:
...         redis_instance.expire(redis_key, 300)


The allkeys-lru policy evicts keys based on how recently they were used, not on their TTL. allkeys-lru can delete ANY key in Redis, and there is no way to restrict which keys may be deleted. If your application needs to persist some Redis keys (say, for configuration or reference look-ups), DON'T use the allkeys-lru policy!

Testing allkeys-lru

> FLUSHDB
> CONFIG SET maxmemory-policy allkeys-lru

Running the add_id_expire function in an infinite while loop with a counter:

>>> count = 1
>>> while 1:
...     add_id_expire(redis_instance)
...     count += 1

Running the INFO stats command shows the status of the Redis cache, including counters such as expired_keys and evicted_keys.

> INFO stats

Redis offers two other non-LRU maxmemory policies: volatile-random and allkeys-random. These mirror the earlier policies, but instead of approximating LRU, keys are evicted at random: among keys with an expiration set in the case of volatile-random, or among all keys in the case of allkeys-random.
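The difference between the two random policies comes down to which keys are eligible as victims, which can be sketched in Python (an illustrative model; evict_random is a hypothetical helper, not part of Redis or redis-py):

```python
import random

def evict_random(keys_with_ttl, all_keys, policy="allkeys-random"):
    """Illustrative sketch of Redis's random eviction policies.

    volatile-random picks a victim among keys that have an expiration set;
    allkeys-random picks among every key in the database.
    """
    if policy == "volatile-random":
        candidates = keys_with_ttl
    else:
        candidates = all_keys
    if not candidates:
        # volatile-random has nothing to evict if no keys carry a TTL
        return None
    return random.choice(list(candidates))
```

As with volatile-lru, volatile-random can run out of candidates: if no keys have an expiration set, there is nothing it is allowed to evict.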

References and Resources

EXPIRE command — from the official documentation on the Redis website.
Using Redis as an LRU cache — from the official documentation on the Redis website.