re: Writing a very fast cache service with millions of entries


I ran into this article that talks about building a cache service in Go to handle millions of entries. Go ahead and read the article; there is also an associated project on GitHub.

I don’t get it. Rather, I don’t get the need here.

The authors seem to want a way to store a lot of data (for a given value of "lots") that is accessible over REST. They need to be able to run 5,000 – 10,000 requests per second against it. They also need to be able to expire entries.

I decided to take a look at what it would take to run this in RavenDB. It is pretty late here, so I was lazy. I ran the following command against our live-test instance:

[Screenshot: wrk run from my laptop against the live-test instance, fetching the same document over 1,024 connections, with the live-test machine stats alongside]

This says to create 1,024 connections and get the same document. On the right you can see the live-test machine stats while this was running. It peaked at about 80% CPU. I should note that the live-test instance is pretty much the cheapest one that we could get away with, and it is physically far away from me.
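To make the setup concrete, here is a minimal sketch in Go of the same idea: open a pile of concurrent connections and fetch one document over RavenDB's HTTP interface. This is not the wrk command from the screenshot, and the server URL, database name, and document id below are placeholders, not the ones I actually used.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	// Placeholder endpoint: RavenDB serves documents over plain HTTP,
	// so any load tool (wrk included) can hit it directly.
	url := "http://live-test.ravendb.net/databases/Demo/docs?id=items/1"

	const connections = 1024
	duration := 30 * time.Second

	// Share one transport, but allow enough idle connections for reuse.
	client := &http.Client{Transport: &http.Transport{
		MaxIdleConns:        connections,
		MaxIdleConnsPerHost: connections,
	}}

	var requests int64
	deadline := time.Now().Add(duration)

	var wg sync.WaitGroup
	for i := 0; i < connections; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err != nil {
					continue
				}
				io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
				resp.Body.Close()
				atomic.AddInt64(&requests, 1)
			}
		}()
	}
	wg.Wait()

	fmt.Printf("%d requests in %v (%.0f req/sec)\n",
		requests, duration, float64(requests)/duration.Seconds())
}
```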

Ping time from my laptop to the live-test instance is around 230 – 250 ms, right around the numbers that wrk is reporting. I’m using 1,024 connections here to compensate for the distance: with a ~240 ms round trip, a single connection can only push about four requests a second, so you need a lot of requests in flight to generate any meaningful load. What happens when I’m running this locally, without the huge distance?

[Screenshot: the same wrk run against a local RavenDB instance, showing over 22,000 requests per second with a max latency of 5.5 ms]

So I can do more than 22,000 requests per second (on a 2016-era laptop, mind) with a max latency of 5.5 ms (which is what the original article called for as the average time). Granted, I’m simplifying things here, because I’m fetching a single document and not including writes. But 5,000 – 10,000 requests per second are small numbers for RavenDB. Very easily achievable.

RavenDB even has the @expires feature, which allows you to specify a time at which a document will automatically be removed.
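As a sketch of what that looks like (assuming expiration is enabled on the database; the REST path and names here are placeholders, and a client library would normally handle this for you), a cache entry is just a document whose metadata carries an @expires timestamp:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Expire this entry five minutes from now (RavenDB expects a UTC ISO 8601 timestamp).
	expires := time.Now().UTC().Add(5 * time.Minute).Format(time.RFC3339)

	// The @expires field lives in the document's @metadata.
	doc := fmt.Sprintf(`{
		"Value": "some cached payload",
		"@metadata": {
			"@collection": "CacheEntries",
			"@expires": "%s"
		}
	}`, expires)

	// Placeholder server, database, and document id.
	url := "http://live-test.ravendb.net/databases/Demo/docs?id=cache/my-key"

	req, err := http.NewRequest(http.MethodPut, url, bytes.NewBufferString(doc))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

Once the timestamp passes, RavenDB removes the document on its own, which covers the “expire things” requirement with no extra work on the caller’s side.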

The nice thing about using RavenDB for this sort of feature is that millions of objects and gigabytes of data are not something that is of particular concern for us. Raise that by an order of magnitude, and that is our standard benchmark. You’ll need to raise it by a few more orders of magnitude before we start taking things seriously.
