Ayende @ Rahien

Oren Eini aka Ayende Rahien CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

time to read 6 min | 1082 words

A few weeks ago I wrote about the Hare language and its lack of generic data structures. I don’t want to talk about this topic again, instead I want to discuss something more generic (pun intended). In my view, any modern programming language that aims for high performance should have some form of generics in it. To not have that in place is a major mistake and a huge cause for additional complexity and loss of performance. One aspect of that is the fact that generic data structures get a lot more optimizations than one-off implementations. But I already talked about that in the previous post.

The other issue is that by not having generics, you put a huge barrier in front of optimizations. You lack the ability to build certain facilities at all. Case in point, let us take a topic that is near and dear to my heart: sorting. Working on sorted data is pretty much the one thing that makes databases work. Everything else is just details on top of that, nothing more. Let's consider how you sort data (in memory) in a few programming languages, using their standard definitions:

Using C:

void qsort (void *array, size_t count, size_t size, comparison_fn_t compare);
int comparison_fn_t (const void *, const void *);

Using C++:

template <class RandomAccessIterator>
   void sort (RandomAccessIterator first, RandomAccessIterator last);

Using Java:

public static void sort(int[] a);
public static void sort(long[] a);
public static void sort(Object[] a);

Using C#:

public static void Sort<T> (T[] array);

Using Hare:

type cmpfunc = fn(a: const *void, b: const *void) int;
fn sort([]void, size, *cmpfunc) void;

Using Rust:

impl<T> [T] {
    pub fn sort(&mut self)
    where
        T: Ord,
}

Using Zig:

pub fn sort(
     comptime T: type,
     items: []T,
     context: anytype,
     comptime lessThan: fn (context: @TypeOf(context), lhs: T, rhs: T) bool,
) void

I'm looking only at the method declaration, not the implementation. In fact, I don't care about how this is implemented at this point. Let's assume that I want to sort an array of integers, what would that look like in each of those languages?

Well, they generally fall into one of a few groups:

C & Hare – will require you to write something like this:
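(The original post embeds the C call here. As a stand-in, here is the same shape sketched in C#, with a delegate playing the role of the C function pointer; the array itself is made up for the example.)

int[] numbers = { 5, 3, 8, 1 };

// The comparison routine is passed in and invoked indirectly for every
// pair of elements, just like qsort invokes its comparison function pointer.
Array.Sort(numbers, (a, b) => a.CompareTo(b));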

In other words, we are passing a function pointer to the sorting routine and we’ll invoke that on each comparison.

C++, C#, Rust, Zig – will specialize the routine for the call. On invocation, this will look like this:
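(Again, a C# sketch of the shape rather than the original snippet:)

int[] numbers = { 5, 3, 8, 1 };

// The generic overload is specialized for int, so comparisons are direct
// (and typically inlined) calls rather than indirect calls through a pointer.
Array.Sort(numbers);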

The idea is that the compiler is able to emit code specifically for the invocation we use. Instead of emitting an indirect function call for each comparison, the compare call will usually be inlined and the cost of invocation is completely eliminated.

Java is the only one on this list that has a different approach. Instead of using generics at compile time, it is actually doing a dispatch of the code to optimized routines based on runtime types. That does mean that they had to write the same sort code multiple times, of course.

Note that this isn't anything new or novel. Here is a discussion on the topic from when Go got generics; in the benchmark there, there is a 20% performance improvement from moving to the generics version. That results from avoiding the call overhead as well as giving the compiler more optimization opportunities.

Going back to the premise of this post, you can see how a relatively straightforward decision (having generics in the language) can have a huge impact on the performance for what is one of the most common scenarios in computer science.

The counter to this argument is that we can always specialize the code for our needs, right? Except… that this isn't something that happens in practice. If you have generics, you get this behavior for free. If you don't, well, this work simply isn't being done.

I write databases for a living, and the performance of our sorting code is something that we analyze at the assembly level. Pretty much every database developer behaves the same way, I believe. The performance of sorting is pretty key to everything a database does. I ran into this post, talking about performance optimizations in Postgres, and one of the interesting ones there was exactly this topic: changing the implementation of sorting from using function pointers to direct calls. You can see the commit here. Here is what the code looks like:

[image: the relevant code from the Postgres commit]

Postgres is 25 years old(!) and this is a very well known weakness of C vs. C++. Postgres is also making a lot of sorting calls, and this is the sort of thing that is low-hanging fruit for a performance optimization.

As for the effect, this blog post shows 4% – 6% improvement in overall performance as a result of this change. That means that for those particular routines, the effect is pretty amazing.

I can think of very few scenarios where a relatively simple change can bring a 6% performance improvement to a 25-year-old, well maintained and actively developed codebase.

Why am I calling it out in this manner, however?

Because when I ran into this blog post and the optimization, it very strongly resonated with the previous discussion on generics. It is a great case study for the issue. Because the language (C, in the case of Postgres) doesn't support generics in any meaningful way, those sorts of changes don't happen, and their absence is very costly.

A modern language that is aiming for performance should take this very important aspect of language design into account. To not do so means that your users will have to do something similar to what Postgres is doing. And as we just saw, that sort of stuff isn’t done.

Not having generics means that you are forcing your users to leave performance on the table.

Indeed, pretty much all the modern languages that care for high performance have generics. The one exception that I can think of is Java, and that is because it chose backward compatibility when it added generics.

Adding this conclusion to the previous post about generic data structures, I think that the final result is glaringly obvious. If you want a high performance system, you should choose a language that allows you to express it easily and succinctly. And generics are a mandatory tool in the box for that.

time to read 2 min | 296 words

Consider an eCommerce system where customers can buy stuff. Part of handling commerce is handling faults. Those range from “I bought the wrong thing” to “my kid just bought a Ferrari”. Any such system will need some mechanism to handle fixing those faults.

The simplest option we have is the notion of refunds. “You bought by mistake, we can undo that”.

In many systems, the question is then “how do we manage the process of refunds”? You can do something like this:

[image: the refund approval workflow – Help Desk → Finance → Fraud → vice-CFO]

So a customer requests a refund, it is processed by the Help Desk and sent for approval by Finance, which consults Fraud and then gets sign-off from the vice-CFO.

There are about 12 refunds a quarter, however. Just the task of writing down the rules for processing refunds costs more than those refunds are worth.

Instead, a refund policy can state that anyone can request a refund within a certain time frame. At which point, the act of processing a refund becomes:

[image: the simplified refund flow]

Is there a potential for abuse? Probably, but it is going to be caught naturally as we see the number of refunds spike over historical levels. We don’t need to do anything.

In fact, the whole idea relies on two important assumptions:

  1. There is a human in the loop
  2. They are qualified to make decisions and can be relied upon to try to do the right thing

Trying to create a process to handle this is a bad idea if the number of refunds is negligible. It costs too much, and making refunds easy is actually a goal (since that increases trust in the company as a whole).

time to read 1 min | 164 words

In my previous post, I asked why this change would result in a better performing system, since the total amount of work that is done is the same:

[image: the code change in question]

The answer is quite simple. The amount of work that our code is doing is the same, sure, but that isn’t all the code that runs.

In the first version, we would allocate the string, and then we’ll start a bunch of async operations. Those operations are likely to take some time and involve I/O (otherwise, they wouldn’t be async).

It is very likely that in the meantime, we'll get a GC run. At that point, the string pointed to by the ids variable will be promoted (since it survived a GC). That means that it would be collected much later.

Using the new code, the scope of the ids string is far shorter. That means that the GC is more likely to catch it very early and significantly reduce the cost of releasing the memory.
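To make the difference concrete, here is a minimal sketch of the two shapes being compared. The method names, the URLs and the use of HttpClient are all hypothetical; only the lifetime of the ids string matters.

using System.Net.Http;
using System.Threading.Tasks;

class Example
{
    static readonly HttpClient Client = new HttpClient();

    // First version: ids is allocated before the async I/O and stays reachable
    // across all of it. If a GC runs while the requests are in flight, the
    // string survives, gets promoted, and becomes more expensive to collect.
    public static async Task FirstVersionAsync(string[] items)
    {
        var ids = string.Join(",", items);
        await Client.GetStringAsync("https://example.com/warmup");
        await Client.GetStringAsync("https://example.com/more-work");
        await Client.GetStringAsync("https://example.com/items?ids=" + ids);
    }

    // Second version: the string is built right where it is used, so its
    // lifetime is short and it is likely collected cheaply in generation 0.
    public static async Task SecondVersionAsync(string[] items)
    {
        await Client.GetStringAsync("https://example.com/warmup");
        await Client.GetStringAsync("https://example.com/more-work");
        await Client.GetStringAsync("https://example.com/items?ids=" + string.Join(",", items));
    }
}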

time to read 6 min | 1054 words

I ran into this blog post about the Hare language and its approach to generic data structures. From the blog post, we have this quote:

…it’s likely that the application of a higher-level data structure will provide a meaningful impact to your program. Instead of providing such data structures in the standard library (or even, through generics, in third-party libraries), Hare leaves this work to you.

And this one, at the end:

Hare doesn’t provide us with a generic hash map, but we were able to build one ourselves in just a few lines of code. A hash map is one of the simpler data structures we could have shown here, but even for more complex ones, you’ll generally find that it’s not too difficult to implement them yourself in Hare.

I… don’t really know where to begin. The relevant code is here, by the way, and you can see how this works.

A hash table is not a simple data structure, let's start with that. It is the subject of much research, and a ton of effort has been spent on optimizing hash tables. They are not the sort of thing that you roll yourself. To give some context, there are multiple CppCon talks dedicated to this subject.

So in a quick search, we can see that there is a lot to discuss here. For that matter, there are plenty of benchmark results comparing the various hash table implementations out there.

Why are there so many of those?

Well, because that matters. Each of those implementations is optimizing for something specific in different ways. There isn't just one hash table algorithm; the details matter. A lot.

The fact that Hare believes that a hash table or a map doesn't need a built-in solution is pure insanity, in my opinion. Let's look at the example that is provided in the post, shall we? You can see the raw code here.

[image: the Hare module cache code from the post]

Let’s take a look to understand what is going on here. There is a static array with 64 buckets that are used as the module cache. In each one of those buckets, you have an array of entries that match that bucket. The hash key here is the FNV32 of the AST node in question.

Let's see how many issues immediately come to mind here. Let's start with the fact that this is a statically sized hash table, which may be appropriate for this scenario, but won't fit many others. If we need to handle growing the underlying array, the level of complexity will shoot up significantly.

The code also doesn't handle deletes (another complicated topic), and the hash collision mode is chaining (via growing the array). In other words, for many other scenarios, you'll need to roll your own hash table (and see above about the complexities involved).

But let’s take it a bit further. The code is using FNV to compute the hash key. It is also making an assumption here, that the keys will never collide. Let’s see how well that holds up, shall we?
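To give a sense of how easy it is to find such collisions, here is a minimal C# sketch of the kind of brute-force search I mean. It assumes a 32-bit FNV-1a hash and a local word list; the word list path is an assumption, and the Hare code may use a slightly different FNV flavor.

using System;
using System.Collections.Generic;
using System.IO;
using System.Text;

class FnvCollisionFinder
{
    // 32-bit FNV-1a over the UTF-8 bytes of a string.
    static uint Fnv1a32(string s)
    {
        uint hash = 2166136261;
        foreach (byte b in Encoding.UTF8.GetBytes(s))
        {
            hash ^= b;
            hash *= 16777619;
        }
        return hash;
    }

    static void Main()
    {
        var seen = new Dictionary<uint, string>();
        // Any sufficiently large list of strings will do; with only 32 bits of
        // hash, the birthday bound makes collisions cheap to find.
        foreach (var word in File.ReadLines("/usr/share/dict/words"))
        {
            uint h = Fnv1a32(word);
            if (seen.TryGetValue(h, out var other) && other != word)
                Console.WriteLine($"collision: \"{other}\" and \"{word}\" -> {h:x8}");
            else
                seen[h] = word;
        }
    }
}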

In other words, it took me a few minutes and under 130 ms to find a hash collision for this scenario. The code above does not handle it. For example, here are a couple of collisions:

  • “intoxicated” and “tonsillectomy's”
  • “Helvetius2” and “Rochester0”

Those are going to be counted as the same value by the Hare code above. Fixing this requires a non-trivial amount of code changes.

For that matter, let’s talk for a second about the manner in which I found it. If I were trying to write the same code in Hare, what would I have to do?

Well, the answer to that is to write a lot of code, of course. Because I would have to re-implement a hash table from scratch.

And the design of the Hare language doesn't even allow me to provide that as a library. I have to fall back to code generation at best.

These sorts of things matter. In C, you don't have a hash table, and the most natural data structure is some form of a linked list. So that gets used a lot. You can bring in a hash table, of course, but adapting it for use is non-trivial, so they are used a lot less often. Try writing the same in Hare, and then compare the cost, both in the time it takes to write the code and in the time it takes to execute.

In modern languages, the inclusion of a hash table in the base language is a basic requirement. Languages like C++, Rust or Zig have that in the base class library and have the facilities to allow you to write your own generic data structures. That means that good data structures exist, and that it makes sense to spend the time writing them, because they'll be broadly applicable. Languages like C# or Java took this further and made sure that all objects have GetHashCode() and Equals() methods, specifically to support the hash table scenario. It is that important.
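As a small illustration of that contract, here is a hypothetical key type (not taken from any of the codebases discussed) that plugs straight into the generic dictionary:

using System;
using System.Collections.Generic;

// A value used as a dictionary key only needs Equals and GetHashCode to agree;
// the generic Dictionary<TKey, TValue> takes care of everything else.
public readonly struct CarKey : IEquatable<CarKey>
{
    public CarKey(string plate, int year) { Plate = plate; Year = year; }
    public string Plate { get; }
    public int Year { get; }

    public bool Equals(CarKey other) => Plate == other.Plate && Year == other.Year;
    public override bool Equals(object obj) => obj is CarKey other && Equals(other);
    public override int GetHashCode() => HashCode.Combine(Plate, Year);
}

class Program
{
    static void Main()
    {
        var owners = new Dictionary<CarKey, string>
        {
            [new CarKey("123-45-678", 2019)] = "Oren",
        };
        Console.WriteLine(owners[new CarKey("123-45-678", 2019)]); // prints "Oren"
    }
}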

Even Go, before it had generics, had a dedicated syntax carved out in the language to allow you to use maps natively. And now that Go has generics, that is actually far faster.

In many systems, a hash table is one of the core data structures. It is used everywhere, and its absence makes the ecosystem a lot more painful. Take a look at how Hare handles query string parameters:

[image: Hare's query string handling code]

I mean, I guess it would be nice to have a way to do streaming on query strings? But the most natural way to handle them is to use a hash table directly. The same applies to things like headers in web requests; how would you even model that in Hare?

I couldn't disagree more with the premise of the original post. A hash table is not something that you should punt on; the consequences for your users are dire.

time to read 4 min | 742 words

RavenDB has the ability to analyze your queries and generate the appropriate indexes for you automatically. This isn’t a feature you need to enable or a toggle to switch, it is just the way it works by default. For more advanced scenarios, you have the ability to write your own indexes to process your data in all sorts of interesting ways.  Indexes in RavenDB are used for aggregation (map-reduce), full text search, spatial queries, background computation and much more. This post isn’t going to talk about what you can do with RavenDB’s indexes, however. I’m going to discuss how you’ll manage them.

There are several ways to create indexes in RavenDB; the one that we usually recommend is to create a class that will inherit from AbstractIndexCreationTask. If you are using C# or TypeScript, you can create strongly typed indexes that will be checked by the compiler for you. If you are using other clients (or JS indexes), you will have the index definition as constant strings inside a dedicated class. Once you have the indexes defined as part of your codebase, you can then create them using a single command: IndexCreation.CreateIndexes();
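For instance, a minimal strongly typed index might look like the sketch below. The Order class and the index are made-up examples rather than code from a real project.

using System.Linq;
using Raven.Client.Documents.Indexes;

public class Order
{
    public string Company { get; set; }
    public decimal Total { get; set; }
}

// A strongly typed index definition; the compiler checks the fields we map.
public class Orders_ByCompany : AbstractIndexCreationTask<Order>
{
    public Orders_ByCompany()
    {
        Map = orders => from order in orders
                        select new { order.Company, order.Total };
    }
}

// At startup (or from a deployment tool), create every index in the assembly:
// IndexCreation.CreateIndexes(typeof(Orders_ByCompany).Assembly, store);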

What I described so far is the mechanics of working with indexes. You can read all about them in the documentation. I want to talk about the implications of this design approach:

  • Your indexes live in the same repository as your code. Whenever you check out a branch, the index definitions you'll use will always match the code that queries them.
  • Your indexes are strongly typed and are checked by the compiler. I mentioned this earlier, but this is a huge advantage, worth mentioning twice.
  • You can track changes on your indexes using traditional source control tools. That makes reviewing index changes just a standard part of the job, instead of something you need to do in addition.

RavenDB has a lot of features around index management: side by side index deployment, rolling indexes, etc. The question now is when you should deploy those indexes.

During development, it's standard to deploy your indexes whenever the application starts. This way, you can change your indexes, hit F5 and immediately work against the latest index definition without having to take any other action.

For production, however, we don't recommend taking this approach. Two versions of the application using different index definitions would "fight" to apply the "right" version of the index, causing version bounce, for example. RavenDB has features such as index locking, but those are there to save you from a fall, not for day-to-day activity.

You should have a dedicated endpoint / tool that you can invoke that would deploy your indexes from your code to your RavenDB instances. The question is, what should that look like? Before I answer this question, I want to discuss another aspect of indexing in RavenDB: automatic indexing.

So far, we discussed static indexes, ones that you define in your code manually. But RavenDB also allows you to run queries without specifying which index they will use. At this point, the query optimizer will generate the right indexes for your needs. This is an excellent feature, but how does that play in production?

If you deploy a new version of your application, it will likely have new ways of querying the database. If you just push that to production blindly, RavenDB will adjust quickly enough, but it will still need to learn all the new ways you query your data. That can take some time, and will likely cause a higher load on the system. Instead of doing all the learning and adjusting in production, there are better ways to do so.

Run the new version of your system on a QA / UAT instance and put it through its paces. The QA instance will have the newest static indexes, and RavenDB will learn what sort of queries you are issuing and what indexes it needs to run. Once you have completed this work, you can export the indexes from the QA instance and import them into production. Let the new indexes run and process all their data, then you can push the new version of your application out. The production database is already aware of the new behavior and has adjusted to it.

As a final note, RavenDB index deployment is idempotent. That means that you can deploy the same set of indexes twice, but it will not cause us to re-index. That reduces the operational overhead that you have to worry about.

time to read 5 min | 810 words

[image: four sample documents – a car title, a parking ticket, a garage order and an insurance claim]

I got an interesting question from a customer recently and thought that it would make for a fun blog post. The issue the customer is facing is that they are aggregating data from many sources, and they need to make sense of all the data in a nice manner. For some of the sources, they have some idea about the data, but in many cases, the format of the data they get is pretty arbitrary.

Consider the image above: we have four different documents, from separate sources:

  • titles/123-45-678/2022-01-28 – The car title
  • tickets/0000000000000000009-A – A parking ticket that was issued for a car
  • orders/0000000000000000010-A – An order from a garage about fixes made for a customer (which includes some cars)
  • claims/0000000000000000011-A – Details of a rejected insurance claim for a car

We need to make sense of all of this information and provide some information to the user about a car from all those sources. The system in question is primarily interested in cars, so what I would like to do is show a "car file": all the information at hand that we have for a particular car. The problem is that this is not trivial to do. In some cases, we have a field with the car's license plate, but each data source named it differently. In the case of the Order document, the details about the specific service for the car are deep inside the document, in a free form text field.

I can, of course, just index the whole thing and try to do a full text search on the data. It would work, but can we do better than that?

A license plate in the system has the following format: 123-45-768. Can we take advantage of that?

If you said regex, you now have two problems :-).

Let’s see what we can do about this…

One way to handle this is to create a multi map-reduce index inside of RavenDB, mapping the relevant items from each collection and then aggregating the values by the car’s license plate from all sources. The problem with this approach is that you’ll need to specialize for each and every data source you have. Sometimes, you know what the data is going to look like and can get valuable insight from that, but in other cases, we are dealing with whatever the data providers will give us…

For that reason, I created the following index, which uses a couple of neat techniques all at once to give me insight into the data that I have in the system, without taking too much time or complexity.

This looks like a lot of code, I know, but the most complex part is in the scanLicensePlates() portion. There we define a regex for the license plate and scan the documents recursively trying to find a proper match.
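To illustrate the scanning idea, here is a standalone C# sketch of the approach, not the actual RavenDB index code: walk an arbitrary document graph and collect anything that looks like a license plate.

using System.Collections;
using System.Collections.Generic;
using System.Text.RegularExpressions;

static class LicensePlateScanner
{
    static readonly Regex Plate = new Regex(@"\b\d{3}-\d{2}-\d{3}\b");

    // Recursively walk nested dictionaries, lists and strings, yielding every
    // value that matches the license plate pattern, wherever it hides.
    public static IEnumerable<string> Scan(object value)
    {
        switch (value)
        {
            case string s:
                foreach (Match m in Plate.Matches(s))
                    yield return m.Value;
                break;
            case IDictionary dict:
                foreach (var nested in dict.Values)
                    foreach (var plate in Scan(nested))
                        yield return plate;
                break;
            case IEnumerable items:
                foreach (var item in items)
                    foreach (var plate in Scan(item))
                        yield return plate;
                break;
        }
    }
}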

The idea is we'll find a license plate either in a field directly (such as Title.LicensePlate) or as part of a field's contents (such as the Orders.Lines.Task field). Regardless of where we find the data, in the map phase we'll emit a separate value for each detected license plate in the document. We'll then aggregate by the license plate in the reduce phase. Some part of the complexity here is because we are building a smart summary; here is the output of this index:

As you can see, the map-reduce index results will give us the following data items:

  • The license plate obviously (which is how we’ll typically search this index)
  • The summary for all the data items that we have for this particular license plate. That will likely be something that we’ll want to show to the user.
  • The ids of all the documents related to this license plate, which we’ll typically want to show to the user.

The nice thing about this approach is that we are able to extract actionable information from the system with very little overhead. If we have new types of data sources that we get in the future, they’ll seamlessly merge into the final output for this index.

Of course, if you know more about the data you are working with, you can probably extract more interesting information. For example, we may want to show the current owner of the car, which we can extract from the latest title document we have. Or we may want to compute how many previous owners a particular vehicle has, etc.

As the first step in aggregating information from dynamic data sources, that gives us a lot of power. You can apply this technique in a wide variety of scenarios. If you find yourself doing coarse-grained searches and trying to regex your way to the right data, this sort of approach can drastically improve your performance and make it far easier to build a system that can be maintained over the long run.

time to read 6 min | 1134 words

When you have a distributed system, one of the key issues that you have to deal with is the notion of data ownership. The problem is that it can be a pretty hard issue to explain properly, given the required background material. I recently came up with an explanation for the issue that I think is both clear and shows the problem in a way that doesn’t require a lot of prerequisites.

First, let’s consider two types of distributed systems:

  • A distributed system that is strongly consistent – such a system requires coordination between at least a majority of the nodes in the system to do anything meaningful. Such systems are inherently limited in their ability to scale out, since the number of nodes that you need for a consensus will become unrealistic quite quickly.
  • A distributed system that is eventually consistent – such a system allows individual components to perform operations on their own, which will be replicated to the other nodes in due time. Such systems are easier to scale, but there is no true global state here.

A strongly consistent system with ten nodes requires each operation to reach at least 6 members before it can proceed. With 100 nodes, you'll need 51 members to act, etc. There is a limit to how many nodes you can add to such a system before it becomes untenable. The advantage here, of course, is that you have a globally consistent state. An eventually consistent system has no such limit and can grow without bound. The downside is that it is possible for different parts of the system to make decisions that each make sense individually, but not when taken together. The classic example is the notion of a unique username: a new username that is added in two distinct portions of the system can be stored in both, and we'll only later find out that we have a duplicate.

A strongly consistent system will prevent that, of course, but it has its limitations. A common way to handle that is to split the strongly consistent system in some manner. For example, we may have 100 servers, but we split them into 20 groups of 5 servers each. Now each username belongs to one of those groups. We can now have our cake and eat it too: we have 100 servers in the system, but we can make strongly consistent operations with a majority of 3 nodes out of 5 for each username. That is great, unless you need to do a strongly consistent operation on two usernames that belong to different groups.
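A tiny sketch of that kind of partitioning, where the group count and the choice of hash are arbitrary and purely for illustration:

static class UsernameGroups
{
    const int GroupCount = 20; // 20 consensus groups of 5 servers each

    // Every username deterministically maps to one group; only that group's
    // majority (3 out of 5) needs to agree in order to reserve the name.
    public static int GroupFor(string username)
    {
        var normalized = username.ToLowerInvariant();
        uint hash = 2166136261;                 // FNV-1a; any stable hash works
        foreach (char c in normalized)
        {
            hash ^= c;
            hash *= 16777619;
        }
        return (int)(hash % GroupCount);
    }
}

// Usage: UsernameGroups.GroupFor("ayende") always returns the same group, so
// two attempts to register "ayende" land on the same 5 servers and conflict there.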

I mentioned that distributed systems can be tough, right? And that you may need some background to understand how to solve that.

Instead of trying to treat all the data in the same manner, we can define data ownership rules. Let's consider a real world example: we have a company that has three branches, in London, New York City and Brisbane. The company needs to issue invoices to customers, and it has a requirement that the invoice numbers be consecutive. I used World Clock Planner to pull the intersection of availability of those offices, which you can see below:

[image: World Clock Planner – the overlapping business hours of the London, New York City and Brisbane offices]

Given the requirement for consecutive numbers, what do we know?

Each time that we need to generate a new invoice number, each office will need to coordinate with at least one other office (a 2 out of 3 majority). For London, that is easy; there are swaths of time where the London and New York business hours overlap.

For Brisbane, not so much. Maybe if someone is staying late in the New York office, but Brisbane will not be able to issue any invoices on Friday past 11 AM. I think you'll agree that being able to issue an invoice at noon on Friday is not an unreasonable requirement.

The problem here is that we are dealing with a requirement that we cannot fulfill. We cannot issue globally consecutive numbers for invoices with this sort of setup.

I’m using business hours for availability here, but the exact same problem occurs if we are using servers located around the world. If we have to have a consensus, then the cost of getting it will escalate significantly as the system becomes more distributed.

What can we do, then? We can change the requirement. There are two ways to do so. The first is to assign a range of numbers to each office, which they are able to allocate without needing to coordinate with anyone else. The second is to declare that the invoice numbers are local to their office and use the following scheme:

  • LDN-2921
  • NYC-1023
  • BNE-3483

This is making the notion of data ownership explicit. Each office owns its set of invoice numbers and can generate them completely independently. Branch offices may get an invoice from another office, but it is clear that it is not something that they can generate.
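In code, that ownership rule is almost embarrassingly simple, which is exactly the point. The office codes come from the scheme above; the class itself is a made-up sketch.

using System.Threading;

// Each branch office runs its own generator with its own prefix and counter.
// No coordination with the other offices is needed to issue an invoice.
public class InvoiceNumberGenerator
{
    private readonly string _officeCode; // "LDN", "NYC" or "BNE"
    private long _lastNumber;

    public InvoiceNumberGenerator(string officeCode, long lastIssued)
    {
        _officeCode = officeCode;
        _lastNumber = lastIssued;
    }

    public string NextInvoiceNumber()
    {
        long next = Interlocked.Increment(ref _lastNumber);
        return $"{_officeCode}-{next}";
    }
}

// var brisbane = new InvoiceNumberGenerator("BNE", 3483);
// brisbane.NextInvoiceNumber(); // "BNE-3484", issued locally, at any hour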

In a distributed system, defining the data ownership rules can drastically simplify the overall architecture and complexity that you have to deal with.

As a simple example, assume that I need a particular shirt from a store. The branch that I'm currently at doesn't have the shirt I need. They are able to look up the inventory in other stores and direct me to them. However, they aren't able to reserve that shirt for me.

The ownership of the shirt is in another branch, so changing the data in the local database (even if it is quickly reflected in the other store) isn't sufficient. Consider the following sequence of events:

  1. Branch A is “reserving” the shirt for me on Branch B’s inventory
  2. At Branch B, the shirt is being sold at the same time

What do you think will be the outcome of that? And how much time and headache do you think you'll need to resolve this sort of distributed race condition?

On the other hand, a phone call to the other store and a request to hold the shirt until I arrive is a perfect solution to the issue, isn’t it? If we are talking in terms of data ownership, we aren’t trying to modify the other store’s inventory directly. Instead we are calling them and asking them to hold that. The data ownership is respected (and if I can’t get a hold of them, it is clear that there was no reservation).

Note that in the real world it is often easier to just ignore such race conditions since they are rare and “sorry” is usually good enough, but if we are talking about building a distributed system architecture, race conditions are something that happens yesterday, today and tomorrow, but not necessarily in that order.

Dealing with them properly can be a huge hassle, or a negligible cost, depending on how you set up your system. I find that proper data ownership rules can be a huge help here.

time to read 2 min | 367 words

Yesterday I gave a webinar about Database Security in a Hostile World and I got a really interesting question at the end:

If your RavenDB and app are on the same server, would you recommend using certificates?

The premise of the talk was that you should always run in a secured mode, even if you are running on a supposedly secured network. That is still very much relevant if you are running on a single server.

Consider the following deployment model:

[image: the application and RavenDB deployed on the same server]
(Icons from https://www.flaticon.com/)

As you can see, both the application and the server are running on the same machine. We can usually assume that there is no possibility for an attacker not running on the same machine to eavesdrop on the communication. Can we skip encryption and authentication in this case?

The answer is no, even for this scenario. That may be a viable model if you are running on your development machine, with nothing really valuable in terms of data, but it isn't a good model anywhere else.

Why is that? The answer is quite simple; there are two issues that you have to deal with:

  • At some future time, the firewall rules will be relaxed (by an admin debugging something, not realizing what they are doing) and the “local” server may be publicly exposed.
  • Even if you are listening only on 127.0.0.1, without authentication you are exposed to anything local that can be tricked into contacting you. That is a real attack.

In short, for proper security, assume that even if you are running on the local network, with no outside access, you are still in a hostile environment. The key reason for that is that things change. In two years, as the system grows, you'll want to split the database & application onto separate servers. How certain are you that the person doing this split (assume that you are no longer involved) will do the Right Thing, versus making the minimum configuration changes needed to make it "work"?

Given that the whole design of RavenDB’s security was to make it easy to do the right thing, we should apply it globally.

time to read 2 min | 245 words

In version 4.2 we have added an experimental feature to RavenDB, Graph Queries. That was quite a bit of effort and we were really excited about it. The feature was marked as experimental and had been in the product in that state for the past 4 years or so.

Unfortunately, while quite impressive, it didn’t graduate from an experimental feature to a stable one. Mostly because there wasn’t enough usage of graph queries to warrant it. We have seen its usage in some cases, but it seems that our target audience isn’t interested in graph queries for RavenDB.

Given that there isn't much use of graph queries, we also aren't spending much time there. We are looking at the 6.0 release (scheduled around July 2022) and we realize that this feature makes our life more complicated and that the support burden of keeping it outweighs its benefits.

For that reason, we have made the decision to remove the experimental Graph Queries from RavenDB in the 6.0 release. Before we actually pull the trigger on that, I wanted to get your feedback on the feature and its usage. In particular, are you using it, and if so, what are you using it for?

The most common scenarios for this feature are already covered via projection queries in RavenDB, which often can be easier to express for developers.

Regardless, the feature will remain in the 5.x branch, and the 5.2 LTS version will support it until at least 2024.
