ConcurrentDictionary<K, V> read latency is around 7-15ns when the data is hot in cache (scaling with key length for string-based keys), and anywhere from 75ns to 150ns when it has to come out of RAM. Alternate implementations like NonBlocking.ConcurrentDictionary can get down to 3.5-5ns for integer-based keys on modern x86_64 cores, assuming everything is in L1 and the branch predictor is fully primed.
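Numbers in this ballpark are easy to sanity-check yourself. Here's a crude single-threaded sketch in Java using ConcurrentHashMap as the equivalent (the class name and loop counts are mine; a serious measurement would use JMH to guard against JIT and dead-code pitfalls):

```java
import java.util.concurrent.ConcurrentHashMap;

public class MapReadBench {
    public static void main(String[] args) {
        ConcurrentHashMap<Integer, String> map = new ConcurrentHashMap<>();
        for (int i = 0; i < 100_000; i++) map.put(i, "value-" + i);

        // Warm up so the JIT compiles the read path and the caches get primed.
        long sink = 0;
        for (int round = 0; round < 5; round++) {
            for (int i = 0; i < 100_000; i++) sink += map.get(i).length();
        }

        // Time a large number of reads and report the per-read average.
        int iterations = 10_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            sink += map.get(i % 100_000).length();
        }
        long elapsed = System.nanoTime() - start;
        // Printing sink keeps the JIT from eliminating the loop as dead code.
        System.out.printf("~%.1f ns per read (sink=%d)%n",
                (double) elapsed / iterations, sink);
    }
}
```

The absolute number this prints depends on hardware and key distribution, but it's reliably tens of nanoseconds, not the hundreds of microseconds a network round trip costs.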
I've gotten in arguments over overuse of Redis for this very reason.
Redis is cool, but people use it too willy-nilly. It's expensive to run, and you pay the cost of (de)serialization and network latency. Sometimes that's necessary, especially if you need to share state across multiple nodes and you're afraid of hammering the database too much, but a lot of the time, I'd say even most of the time, you can get away with just a big ol' global ConcurrentDictionary (or whatever the equivalent is in your language of choice).
Basically any thread-safe dictionary is fine: no matter what locking strategy it uses, it will almost certainly be faster than anything that has to hit the network, by several orders of magnitude, and you have one less thing to manage in your application. You can figure out the best way to invalidate old entries, or find a library that does it for you (e.g. something like a Guava cache), and you'll likely get much better performance than arbitrarily farming everything out to Redis.
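To make the "figure out how to invalidate old entries" part concrete, here's a minimal sketch of an in-process cache with a per-entry TTL built on ConcurrentHashMap (the names `TtlCache` and `getOrLoad` are illustrative, not from any library):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal in-process cache with per-entry time-to-live expiry.
class TtlCache<K, V> {
    // Each entry stores the value plus its expiry deadline.
    private record Entry<V>(V value, long expiresAtNanos) {}

    private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
    private final long ttlNanos;

    TtlCache(long ttlMillis) {
        this.ttlNanos = ttlMillis * 1_000_000L;
    }

    // Returns the cached value, or invokes the loader and caches the
    // result if the entry is missing or expired.
    V getOrLoad(K key, Function<K, V> loader) {
        long now = System.nanoTime();
        Entry<V> e = map.compute(key, (k, existing) ->
            (existing != null && existing.expiresAtNanos() > now)
                ? existing
                : new Entry<>(loader.apply(k), now + ttlNanos));
        return e.value();
    }
}
```

`compute` runs the loader under the map's per-bin lock, which doubles as a stampede guard for that key; for very expensive loaders you'd want something fancier (or just use Guava's `CacheBuilder`, which handles TTL and size-based eviction for you).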
> (...) but a lot of the time, I'd say even most of the time, you can get away with just a big ol' global ConcurrentDictionary (or whatever the equivalent is in your language of choice).
That's fine if all you're doing is caching limited data independently on each node and you have no requirement for consistency or even durability.
Most of the use cases you stumble upon do not fit that scenario. That's why suggesting a dictionary "a lot of the time" is plain wrong.
> That's fine if all you're doing is caching limited data independently on each node and you have no requirement for consistency or even durability.
You've assumed you have multiple nodes here. Our point is that you can scale _significantly_ further with a ConcurrentDictionary (or equivalent) than you might think - far further than a single node backed by Redis will go. You have a point about durability, but in my experience, most apps that lean on Redis don't actually solve that problem: if the app goes down mid-request, it leaves broken state behind. The data in Redis itself might be valid, but the app can still have written partial data. The solution to this is the same as handling durability in the non-Redis case.
Also, running Redis on the same machine with persistence enabled is an option for durability.
> Most of the use cases you stumble upon do not fit that scenario. That's why suggesting a dictionary "a lot of the time" is plain wrong.
I disagree. Most of the use cases _do_ fit this scenario, and Redis is (over)used as a tool for horizontal scaling driven by an unwillingness to scale vertically. At the scale of Meta/X, yes, you absolutely need it. But a web app with around 10k concurrent users can be handled on a single node.