Redis isn't a direct competitor to memcached. The great thing about Redis — that it's disk-backed — is actually a liability if you're using it as an L2 cache, which is how memcached is intended to be used.
To be clear: in memcached you can effectively set-and-forget keys and never worry about running out of memory. It uses an LRU to figure out what to throw away when memory gets full. In Redis, if you want that behavior you need to expire your keys, which is another round trip every time. Even then, expire alone is not enough, because you could set the key successfully but fail to set the expire. So you also need to keep track of all your keys and periodically do cleanup.
Redis is awesome, but I would definitely not use it to replace memcached except in the cases where I absolutely need the data to survive a reboot and am willing to go through the hassle of manually managing the cleanup of every key.
Hello, since 2.0 (RC1 is near now...) we have "SETEX", which is SET+EXPIRE in one operation, and with the maxmemory directive set, when Redis runs out of memory it will sample three random keys and delete the one that is going to expire soonest. So this usage is actually supported.
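A toy sketch of that sampling policy in Python (a simplified model, not the actual Redis internals — the key names, expiry times, and helper names here are illustrative):

```python
import random

def evict_one(expiries, sample_size=3):
    """Pick up to `sample_size` random keys and evict the one that is
    due to expire soonest (smallest absolute expiry timestamp)."""
    sample = random.sample(list(expiries), min(sample_size, len(expiries)))
    victim = min(sample, key=lambda k: expiries[k])
    del expiries[victim]
    return victim

# Toy cache: key -> absolute expiry time in seconds
cache = {"a": 100.0, "b": 50.0, "c": 200.0}
evicted = evict_one(cache)  # with only 3 keys, all are sampled
# "b" expires soonest, so it is the one deleted
```

The real SETEX call is just `SETEX key seconds value`, so the set and the expiry land atomically in a single round trip.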
What I think about Redis as a cache is that in many contexts the rich data types and operations supported allow better caching.
An example: oknotizie.virgilio.it is a large social news site. For this site I built a Redis cache where the "latest" news IDs are pushed into a Redis list, but they are also added to the MySQL DB that was already there.
So to paginate the "latest news" page I only use fast LRANGE calls against Redis, but if one returns a short read (the cache is empty because the Redis server was restarted) I fall back to the MySQL server. In practice this almost never happens and everything is served from Redis, but since no persistent storage is needed on the Redis side, it is actually a cache.
When a news item is deleted I use LREM to remove its ID. And so forth.
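The pattern above can be sketched like this (plain Python lists stand in for the Redis list and the MySQL table; the function and variable names are mine, not from the site's code):

```python
def latest_news(page, per_page, redis_list, mysql_rows):
    """Cache-aside pagination: serve from the cache list (as LRANGE would),
    fall back to the database on a short read (cold cache)."""
    start = page * per_page
    ids = redis_list[start:start + per_page]   # stands in for LRANGE
    if len(ids) < per_page and len(mysql_rows) > start + len(ids):
        # Short read, but the DB has more rows: cache is cold, use the DB
        return mysql_rows[start:start + per_page]
    return ids

cache = [5, 4, 3]             # Redis list after a restart: partially filled
db = [5, 4, 3, 2, 1]          # authoritative copy in MySQL
latest_news(0, 2, cache, db)  # -> [5, 4], served from the cache
latest_news(1, 2, cache, db)  # -> [3, 2], short read, falls back to MySQL
```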
Interestingly, I think if you tweaked it to be "randomly sample three keys and blow away the LRU among them" it would be pretty close to a full LRU implementation for a lot of typical workloads.
Consider a website where 10% of the cache is hot: the chance that all three sampled keys are hot is 0.1^3, so you'd discard a hot entry only about once every thousand cache evictions. I think for a lot of sites that is likely to be Good Enough, depending on the impact of a cache miss.
Bonus: you can trivially tweak how sensitive that is by increasing the number of items to randomly sample.
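A toy version of that tweak (the access-time bookkeeping is my own sketch; a real implementation would use clocks or counters rather than a plain dict):

```python
import random

def evict_lru_of_sample(last_access, sample_size=3):
    """Evict the least-recently-used key among a random sample.
    Raising sample_size makes this approximate true LRU more closely."""
    sample = random.sample(list(last_access), min(sample_size, len(last_access)))
    victim = min(sample, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

# Last-access timestamps: higher means touched more recently
access = {"hot": 90.0, "warm": 50.0, "cold": 10.0}
victim = evict_lru_of_sample(access)  # all 3 sampled here: "cold" is evicted
```

With only three keys the whole cache is sampled, so this degenerates to exact LRU; on a large cache the sample size is the knob trading eviction quality against bookkeeping cost.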
To add my 2 cents here: I've been using both memcached and redis in various projects and for me they serve slightly different purposes.
For plain old caching I haven't (yet) had a need to use redis, which is younger and by nature more complex. RAM is so cheap nowadays, we just tend to stuff two memcached machines with 64G, which goes a ridiculously long way.
On the other hand, as antirez pointed out, redis shines when you need "a little more than caching". We have rolled a few custom queueing solutions on top of redis (similar to resque) and working with lists and the SUBSTR operation is an extremely pleasant experience.
Redis seems to be the optimal "roll-your-own-queue" toolkit at this point. We had some strong delivery- and persistence-guarantee requirements in our project but those were not a problem to meet with a simple WAL on the clients and a redis-pair. So far our solution wins over the previous rabbitmq setup big time in all areas (performance, reliability, complexity, transparency). It could have been done with memcached, but less efficiently due to the lack of the aforementioned list-operations.
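A sketch of the list-based queue shape those solutions rely on (a `collections.deque` stands in for the Redis list; in Redis the equivalent is LPUSH on the producer side and RPOP or blocking BRPOP on the consumer side — the class and method names here are illustrative):

```python
from collections import deque

class ToyQueue:
    """FIFO queue modeled on Redis lists: LPUSH to enqueue, RPOP to dequeue."""
    def __init__(self):
        self.items = deque()

    def lpush(self, job):
        # Producer side: new jobs go on the left
        self.items.appendleft(job)

    def rpop(self):
        # Consumer side: oldest job comes off the right; None when
        # empty, matching Redis's nil reply
        return self.items.pop() if self.items else None

q = ToyQueue()
q.lpush("job-1")
q.lpush("job-2")
q.rpop()   # -> "job-1" (oldest first)
```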
LRU isn't a panacea. Maintaining a true LRU can involve non-trivial overhead, and it can have trouble with some access patterns. Random replacement can definitely outperform LRU, particularly for tree structures.
Do you know how memcached does it? Do they pay the performance overhead, or do they cheat in some way? I don't think perfect LRU is needed, but the general idea behind LRU works /very/ well for a memcached-style cache.
Trying to achieve something similar in Redis — by, for example, updating the expire on every read — would be a bit of a pain and would, I think, put extra load on both the client and the server. This isn't a complaint about Redis though; it has its own use cases, which are wonderful, and I have no objection to using both for their strengths.