Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Macto, defining Centralized Service, Distributed Service and Localized Component


I have lately come to the conclusion that I need a few new terms to describe some common ways of structuring the different components in my applications.

Those are Centralized Service, Distributed Service and Localized Component. Mostly, I use them as a way to express the distribution semantics of the item in question.

As you can probably guess, I am using the term service to refer to something that we make remote calls to, while I am using the term component to refer to something that is running locally.

Centralized Service is probably the classic example of a web service. It is a server that is running somewhere to which we make remote calls. As far as the system is concerned, there is only one such server. It may be implemented with clustering or load balancing, but logically (and quite often, physically), it is a single server that is processing requests. This is probably the easiest model to work with, since it more or less removes the entire question of concurrency conflicts from the system. Internally, the Centralized Service uses transactions or locks to ensure coherency in the face of concurrency.
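As a minimal sketch of that last point (the class and names are mine, not from the post): because a single logical server owns the state, a server-side lock or transaction is all that is needed to stay coherent; clients never coordinate with each other.

```python
import threading

class CentralizedCounterService:
    """Toy centralized service: one logical server owns the state.

    Clients would normally reach this over HTTP/RPC; the remote boundary
    is elided here so the concurrency story stands out.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        # The server-side lock stands in for a transaction: all
        # coherency is handled here, not by the callers.
        with self._lock:
            self._value += 1
            return self._value
```

Two clients calling `increment()` concurrently simply see successive values; neither needs to know the other exists.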

A Distributed Service is built up front to run on a set of servers, and the need to handle concurrency conflicts is built into the design of the system. That may be done using sharding, Paxos or other methods. Usually, we build a Distributed Service for very high scalability / reliability cases, since it tends to be the more complex solution. An example of a Distributed Service would be DNS, where we explicitly design the system to be resilient to failure, but accept the more complex concurrency issues (slow updates).
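To make the sharding option concrete, here is a hedged sketch (the function and key names are hypothetical, not from the post): each key is hashed to the server that owns it, so writes to different keys never conflict across servers.

```python
import hashlib

def shard_for(key: str, shard_count: int) -> int:
    """Map a key to the index of the server that owns it (illustrative)."""
    digest = hashlib.sha1(key.encode("utf-8")).digest()
    # Take the first 4 bytes of the hash as a stable integer, then bucket it.
    return int.from_bytes(digest[:4], "big") % shard_count
```

The same key always lands on the same shard, which is what lets each server handle its own keys without coordinating with the others.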

A Localized Component is a solution to the chatty interface problem. There are quite a few scenarios where we need to make calls to a separate subsystem, but the cost of the network traffic completely outweighs the cost of actually performing the operation on the other side. In this case, we may switch from a Centralized Service to a Localized Component. What this means is that instead of executing the operation on the other side, we perform it locally.

In practice, this means that we need to design our system in such a way that any data we would like to have is structured so that it can be brought locally or retrieved very cheaply. An example of such a system appears in this post, although that is a fairly complex one. A more common situation is a component that deals with a set of rules, where we simply need to get the rules from the rule repository and execute them locally.
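A minimal sketch of the rule-repository case (the repository, the rules and all names here are hypothetical): the rules are fetched in a single coarse-grained call, and all per-rule work then happens locally, so the chatty part never crosses the network.

```python
class RuleRepository:
    """Hypothetical rule store; in a real system this would be remote."""
    def __init__(self, rules):
        self._rules = rules

    def fetch_rules(self):
        # One coarse-grained call instead of a chatty per-rule round-trip.
        return list(self._rules)

def apply_rules(value, rules):
    # Purely local execution of the previously fetched rules.
    for rule in rules:
        value = rule(value)
    return value

# Example: two pricing rules fetched once, then applied locally.
repo = RuleRepository([lambda total: total * 0.9,   # 10% discount
                       lambda total: total + 5.0])  # flat shipping fee
rules = repo.fetch_rules()
final = apply_rules(100.0, rules)  # 100 * 0.9 + 5 = 95.0
```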

Another alternative for Localized Components is to structure them in such a way that retrieving and persisting the data is cheap, and processing it is done locally. That way, sharing the data and actually processing the data are two distinct issues, which can be resolved separately. A common issue that needs to be resolved with Localized Components is consistency: if the component allows writes, how do other instances of the component, running on different machines, get notified about them?
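One hedged sketch of that notification problem, using an in-process bus as a stand-in for whatever messaging the real system would use (all of the names here are mine): each instance keeps its own local copy, and a write is broadcast so the other instances can update theirs.

```python
class Bus:
    """In-process stand-in for a real message bus (assumption)."""
    def __init__(self):
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    def publish(self, sender, key, value):
        for fn in self._subs:
            fn(sender, key, value)

class LocalizedComponent:
    """Each instance holds a local copy; writes are broadcast to peers."""
    def __init__(self, bus):
        self._data = {}
        self._bus = bus
        bus.subscribe(self._on_remote_write)

    def write(self, key, value):
        self._data[key] = value
        self._bus.publish(self, key, value)  # notify the other instances

    def _on_remote_write(self, sender, key, value):
        if sender is not self:
            self._data[key] = value          # apply the remote change

    def read(self, key):
        return self._data.get(key)
```

This only shows the shape of the problem; a real system would also have to decide what happens when two instances write the same key concurrently.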

I tend to avoid Distributed Services in favor of Centralized Services and Localized Components, which tend to be easier to work with. It is also easier to lean on existing infrastructure than to write an implementation from scratch. For example, I am using Rhino DHT (which I consider to be a Distributed Service) to handle a lot of the complexity inherent in one of those.



am I the only one that didn't get it?

well, a picture would definitely be helpful, as would being more verbose in some parts (e.g. separating HW from SW, plus saying what each component is: a DB running? IIS? Linux? where's the cache?)

I am completely lost, the only thing I got from this is the vocabulary :), service, component and distributed.

well...hope I will be able to follow your posts oren



I think you might be trying to glean too much from this post. All he is really doing is defining some terminology that he'll be using in future posts.

I'm sure there'll be posts galore explaining where the DB, Cache, et al will be located.



the only thing I got from this is the vocabulary

Pretty much, that's what you were meant to get from this.

separating HW from SW plus telling each component what it is, a DB running? IIS? linux? where's cache?

Those are implementation details. For now I believe Ayende is establishing a way of describing the entire system from a 30,000 foot view. Whether these services and components run on different physical boxes or all on the same box is, for the most part, not relevant at this stage.

The important things to note from here are that the system will be made up of a set of distinct services that run as separate processes, and that they will communicate via some form of TCP/IP protocol. The centralized services may be REST or SOAP webservices. Say, as an example that I'm making up, the "customer" uses an Active Directory Service or LDAP server to store its user info. Ayende may interface with that using a web service for authentication, so that if the "customer" later changes how they do network authentication, only that webservice would need to be updated to use the new auth service.

As for the distributed services, you may want to look up some of Greg Young's presentation videos on the InfoQ site and watch them for a better understanding of what I think Ayende is describing here. He has not stated yet that these services will be message based or that he will be using Command Query Separation. Once again, those are details that would require diving too deep into the problem domain at this point.

First I suspect he will return to the auth spec from the previous post and describe how he plans on breaking up the authentication context and where each of those parts will be handled using the above component and service types.

Disclaimer: The above is my interpretation and in no way should be taken as coming from Oren. I'm not Oren and do not share his ability to intimidate both code and people with my sheer physical size :-)
