Building a social media platform without going bankrupt, Part II–Accepting posts

time to read 6 min | 1076 words

This design deals with creating what is effectively a Twitter clone and seeing how we can do that efficiently. A really nice feature of Twitter is that it has just one type of interaction: the tweet. The actual tweet may be a share, a reply, a mention, or any number of other things, but those are properties on the post, not a different model entirely. Contrast that with Facebook, where you have Posts and Replies as very distinct items.

As it turns out, that can be utilized quite effectively to build the core foundation of the system. There are two separate sides to a social network, the write side and the read side, and the read side is massively bigger. Twitter currently sees about 6,000 tweets a second, for example, but it has 186 million daily users.

We are going to base the architecture on a few assumptions:

  • We are going to favor reads over writes.
  • Read speed should be a priority at all times.
  • It is fine to take some (finite, small) amount of time to show a post to followers.
  • Favor the users’ experience over actual guarantees.

What this means is that when we write a new post, the process is going to be roughly as follows (see the sketch after the list):

  • Post the new message to a queue and send confirmation to the client.
  • Add the new post to the user’s timeline on the client side directly.
  • Done.
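To make that concrete, here is a minimal sketch of the accept path, assuming a Flask-style HTTP endpoint. The in-process queue and the submission_id token are illustrative stand-ins for the real message queue and whatever acknowledgement you return to the client; none of these names come from the design itself.

    # A minimal sketch of the accept path: enqueue and acknowledge immediately.
    # queue.Queue is an in-process stand-in for a real message queue
    # (SQS, Kafka, RabbitMQ, ...); the rest of the names are illustrative.
    import json
    import queue
    import uuid

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    post_queue = queue.Queue()  # stand-in for the real queue infrastructure

    @app.route("/posts", methods=["POST"])
    def accept_post():
        post = request.get_json()
        # A temporary, client-visible token; the real post id is assigned later.
        post["submission_id"] = str(uuid.uuid4())

        # Placing the raw post on the queue is cheap and absorbs load spikes.
        post_queue.put(json.dumps(post))

        # 202 Accepted: processing happens asynchronously. The client adds the
        # post to the user's own timeline locally right away.
        return jsonify({"status": "accepted",
                        "submission_id": post["submission_id"]}), 202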

A really important detail here: the process of placing an item on the queue is simple, trivial to scale out practically without limit, and can easily handle huge spikes in load.

At the same time, the fact that the client's code immediately shows the message in the user's own timeline is usually enough for a good user experience.

We still need to process the post and send it to followers ASAP, but that is "as soon as possible" in human terms. In other words, if it takes 30 seconds or two minutes, it isn't a big deal.

With just those details, we are pretty much done with the write side. We accepted the post and we pretend to the user that we are done processing it, but that is roughly it. All the rest of the work is to see how we can most easily generate the read portion of things.

There are some other considerations to take into account. We need to deal not just with text but also with images and videos. A core part of the infrastructure is going to be object storage with an S3-compatible API. The S3 API has become an industry standard and is widely supported, which helps reduce the dependency issue. If needed, we can run MinIO ourselves, use Backblaze, etc.
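For illustration, this is roughly what storing a media element looks like with boto3 against an S3-compatible endpoint. The endpoint URL, bucket name, and credentials below are placeholder assumptions; swapping MinIO for Backblaze or any other provider is just a different endpoint_url.

    # Sketch: uploading one media element to an S3-compatible store via boto3.
    # The endpoint URL, bucket name, and credentials are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",   # e.g. a self-hosted MinIO instance
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    def upload_media(post_id: str, file_name: str, data: bytes, content_type: str) -> str:
        """Store one media element and return the key the post will reference."""
        key = f"media/{post_id}/{file_name}"
        s3.put_object(Bucket="posts-media", Key=key, Body=data, ContentType=content_type)
        return key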

When a user sends a new post, any media elements of the post are stored directly in the S3 storage, and then the post itself is written to a queue. Workers fetch items from the queue and process them. Such processing may entail things like (see the sketch after this list):

  • Stripping EXIF data from images.
  • Re-encoding videos.
  • Analyzing content for language / issues. For example, we never want to have posts about Broccoli, so we can remove / reject them at this stage.
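Here is a sketch of what such a worker step might look like. strip_exif and reencode_video are placeholder stubs for the real media pipeline, and the blocked-topics check and mentioned-users reply rule simply mirror the Broccoli example above and the reply-permission example mentioned below.

    # Sketch of the per-post processing a worker runs after pulling a message
    # off the queue. The media helpers are placeholder stubs; the content and
    # reply-permission checks mirror the examples in the text.
    import json
    from typing import Optional

    BLOCKED_TOPICS = {"broccoli"}

    def strip_exif(media: dict) -> None:
        """Placeholder: remove EXIF metadata from the stored image."""

    def reencode_video(media: dict) -> None:
        """Placeholder: hand the video to the re-encoding pipeline."""

    def process_post(raw_message: str) -> Optional[dict]:
        post = json.loads(raw_message)

        # Content analysis: reject posts about topics we never want to host.
        text = post.get("text", "").lower()
        if any(topic in text for topic in BLOCKED_TOPICS):
            return None  # rejected, never published

        # Business rules we can afford to evaluate asynchronously, e.g. a parent
        # post that only allows replies from users mentioned in it.
        parent = post.get("parent")
        if parent and parent.get("replies_limited_to_mentions"):
            if post["author"] not in parent.get("mentions", []):
                return None

        # Media work, using whatever the real implementations end up being.
        for media in post.get("media", []):
            if media["kind"] == "image":
                strip_exif(media)
            elif media["kind"] == "video":
                reencode_video(media)

        return post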

This is where a lot of the business logic will reside, mind. During the write portion, we have time; this is an asynchronous process that can afford to take a while. Scaling workers that read from a queue is a cheap, simple, and easy technique, after all. That means that we can shift most of the required work to this part of the process.

For example, maybe a user posted a reply to a message that only allows replies from users mentioned in the post? That sort of thing.

Once processed, we end up with the following architecture:

[Image: architecture diagram]

The keys for each post are numeric (this will be important later). We can generate them using the Snowflake method:

[Image: Snowflake id bit layout]

In other words, we use 40 bits with 16 millisecond precision for the time, 10 bits (1,024 values) for the machine id, and 14 bits (16,384 values) for the sequence number. The 16 ms precision is already the granularity that you can expect from most computer clocks, so we aren't actually losing much by giving it up, and it means that we don't really have to think about clock resolution. A single instance can generate 16K ids every 16 ms, or about a million ids per second. More than enough for our needs.
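A minimal sketch of such a generator, assuming the machine id is assigned to each process elsewhere and using milliseconds since the Unix epoch as the time component:

    # Sketch of the id layout described above: 40 bits of time at 16 ms
    # precision, 10 bits of machine id, 14 bits of per-interval sequence.
    import threading
    import time

    class SnowflakeIds:
        TIME_BITS, MACHINE_BITS, SEQ_BITS = 40, 10, 14

        def __init__(self, machine_id: int):
            assert 0 <= machine_id < (1 << self.MACHINE_BITS)
            self.machine_id = machine_id
            self.lock = threading.Lock()
            self.last_interval = -1
            self.sequence = 0

        def next_id(self) -> int:
            with self.lock:
                # Milliseconds since the Unix epoch, in 16 ms intervals,
                # kept to the 40 bits reserved for the time component.
                interval = (int(time.time() * 1000) >> 4) & ((1 << self.TIME_BITS) - 1)
                if interval != self.last_interval:
                    self.last_interval = interval
                    self.sequence = 0
                else:
                    self.sequence += 1
                    if self.sequence >= (1 << self.SEQ_BITS):
                        raise RuntimeError("over 16,384 ids in one 16 ms interval")
                return ((interval << (self.MACHINE_BITS + self.SEQ_BITS))
                        | (self.machine_id << self.SEQ_BITS)
                        | self.sequence)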

The key property of those ids is that they are roughly sorted, which will be very nice to use later on. When accepting a post, we'll generate an id for it and then place the post in the key/value store under that id. All other work from that point on is about working with those ids, but we'll discuss that in more detail when we talk about timelines.

For now, I think that this post gives a good (and intentionally partial) view of how I expect to handle a new write (a sketch tying these steps together follows the list):

  • Upload any media to S3 compatible API.
  • Generate a new ID for the post.
  • Run whatever processing you need for the post and the media.
  • Write the post to the key/value store under the relevant id.
    • This also includes the appropriate references to the parent post, any associated media, etc.
  • Publish to the appropriate timelines. (I’ll discuss this in a future post)
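Put together, the worker side of a single write might look roughly like this sketch, reusing the hypothetical process_post and SnowflakeIds pieces from the earlier snippets; the dict and the no-op function stand in for the key/value store and the timeline fan-out that later posts in the series cover.

    # Sketch of the full write path on the worker side. process_post and
    # SnowflakeIds come from the earlier sketches; the dict and the no-op
    # below stand in for the key/value store and the timeline fan-out.
    import json

    id_generator = SnowflakeIds(machine_id=1)
    kv_store = {}  # stand-in for the real key/value store

    def publish_to_timelines(post: dict) -> None:
        """Placeholder: fan the post out to the appropriate timelines."""

    def handle_new_post(raw_message: str) -> None:
        post = process_post(raw_message)     # business rules and media work
        if post is None:
            return                           # rejected; nothing to publish

        post_id = id_generator.next_id()     # roughly time-sorted numeric id
        post["id"] = post_id

        # The post JSON, including references to its parent post and any media
        # already sitting in the S3-compatible store, lives under this id.
        kv_store[post_id] = json.dumps(post)

        publish_to_timelines(post)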

I’m using the term key/value store here generically, because we’ll do a lookup per id and find the relevant JSON for the post. Such systems can scale pretty much linearly with very little work. Given that we use roughly time-based ids, and given the time-based nature of most social interactions, we can usually move older posts to archive mode in a very natural way. But that is a separate optimization step that I don’t think is actually relevant at this point. It is good to have such options, though.

And that is pretty much it for writes. There are probably pieces here that I’m missing, but I expect that they are related to the business processing that you’ll want to do on the posts, not the actual infrastructure. In my next post, I’ll deal with the other side: how do we actually read a post? Given the difference in scale, I think that this is a much more interesting scenario.

More posts in "Building a social media platform without going bankrupt" series:

  1. (05 Feb 2021) Part X–Optimizing for whales
  2. (04 Feb 2021) Part IX–Dealing with the past
  3. (03 Feb 2021) Part VIII–Tagging and searching
  4. (02 Feb 2021) Part VII–Counting views, replies and likes
  5. (01 Feb 2021) Part VI–Dealing with edits and deletions
  6. (29 Jan 2021) Part V–Handling the timeline
  7. (28 Jan 2021) Part IV–Caching and distribution
  8. (27 Jan 2021) Part III–Reading posts
  9. (26 Jan 2021) Part II–Accepting posts
  10. (25 Jan 2021) Part I–Laying the numbers