# Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969

## What is map/reduce for, anyway?

time to read 3 min | 600 words

Yesterday I gave a visual explanation about map/reduce, and the question came up about how to handle computing navigation instructions using map/reduce. That made it clear that while (I hope) what map/reduce is might be clearer, what it is for is not.

Map/reduce is a technique aimed at solving a simple problem: you have a lot of data and you want to go through it in parallel, probably on multiple machines. The whole idea is that you can crunch through massive data sets in a realistic time frame. For map/reduce to be useful, you need several things:

• The calculation that you need to run is one that can be composed. That is, you can run the calculation on a subset of the data and merge the result with that of another subset.
• Most aggregation / statistical functions allow this, in one form or another.
• The final result is smaller than the initial data set.
• The calculation has no dependencies on external input except the dataset being processed.
• Your dataset is big enough that splitting it up for independent computations will not hurt overall performance.
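To make the composability requirement concrete, here is a minimal sketch (my own illustration, not from the post) of a hypothetical word-count job: the map step counts words in one subset, and merging partial counts gives the same answer as processing everything at once.

```python
from collections import Counter
from functools import reduce

def map_phase(docs):
    """Count words in one subset of the data."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.split())
    return counts

docs = ["the quick fox", "the lazy dog", "the fox"]

# Split the dataset, process each subset independently, then merge.
# Counter addition is the "merge" that makes the job composable.
partials = [map_phase(docs[:2]), map_phase(docs[2:])]
merged = reduce(lambda a, b: a + b, partials)

assert merged == map_phase(docs)  # same result as a single pass
```

Because the merge is associative, the subsets can be processed on different machines in any order and combined at the end.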

Now, given those assumptions, you can create a map/reduce job and submit it to a cluster of machines that will execute it. I am ignoring data locality and failover to keep the explanation simple, although they do make for interesting wrinkles in the implementation.

Map/reduce is not applicable, however, in scenarios where the dataset alone is not sufficient to perform the operation. In the navigation computation example, you can’t really handle this via map/reduce because you lack key data points (the starting and ending points). Trying to compute paths from all points to all other points is probably a losing proposition, unless you have a very small graph. The same applies if you have a 1:1 mapping between input and output. Oh, map/reduce will still work, but the resulting output is probably going to be too big to be really useful. It also means that you have a simple parallel problem, not a map/reduce sort of problem.

If you need fresh results, map/reduce isn’t applicable either; it is inherently a batch operation, not an online one. Trying to invoke a map/reduce operation for a user request is going to be very expensive, and not something that you really want to do.

Another set of problems that you can’t really apply map/reduce to is recursive problems, Fibonacci being the best known among them. You cannot apply map/reduce to Fibonacci for the simple reason that you need the previous values before you can compute the current one. That means you can’t break it apart into sub-computations that can run independently.
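The sequential dependency is easy to see in code (my own sketch, not from the post): every iteration reads the values produced by the one before it, so there is no independent subset to hand to a mapper.

```python
def fib(n):
    # Each step needs the previous two values; the loop carries state
    # forward, which is exactly what map/reduce cannot parallelize.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 55
```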

If your data size is small enough to fit on a single machine, it is probably going to be faster to process it as a single reduce(map(data)) operation than to go through the entire map/reduce process (which requires synchronization). In the end, map/reduce is just a simple paradigm for gaining concurrency; as such, it is subject to the benefits and limitations of all parallel programs. The most basic of these is Amdahl's law.
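On a single machine, that reduce(map(data)) pipeline is literally one line; a hypothetical sum-of-squares job (my example, not the author's) shows there is no partitioning or merge step to pay for:

```python
from functools import reduce

# A single-machine reduce(map(data)) pass: no partitioning, no merge
# step, no cluster synchronization.
data = range(1_000)
total = reduce(lambda acc, x: acc + x, map(lambda x: x * x, data), 0)
print(total)  # 332833500
```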

Map/reduce is a very hot topic, but you need to realize what it is for. It isn’t some magic formula from Google to make things run faster; it is just Select and GroupBy, run over a distributed network.
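The Select + GroupBy analogy can be spelled out in a few lines (an illustrative sketch, not Google's API): "Select" projects each record to a key/value pair, "GroupBy" gathers values by key, and the reduce folds each group.

```python
from itertools import groupby

records = ["apple", "avocado", "banana", "blueberry", "cherry"]

# Select / map: project each record to a (key, value) pair.
pairs = sorted((word[0], word) for word in records)

# GroupBy / shuffle: gather the values for each key.
# (itertools.groupby requires the input to be sorted by key.)
grouped = {
    key: [w for _, w in group]
    for key, group in groupby(pairs, key=lambda p: p[0])
}

# Reduce: fold each group down to a result.
counts = {key: len(ws) for key, ws in grouped.items()}
print(counts)  # {'a': 2, 'b': 2, 'c': 1}
```

In a distributed setting the grouping step is the shuffle across the network; the projection and the fold are the same code, just running on many machines.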

M/R is unusable in the MS Windows world - the main advantage is that they have cheap hardware and a cheap OS (Linux). Forget it if you are a .NET developer ... or use Mono :-)

There is a good paper on this, covering the problem of merging. Quoting from the paper:

"This new Map-Reduce-Merge programming model retains Map-Reduce’s many great features, while adding relational algebra to the list of database principles it upholds. It also contains several configurable components that enable many data-processing patterns."

Google "Map-reduce-merge: simplified relational data processing on large clusters" for details.

One of your better posts lately. I like it.

j23tom, Why so?

In the grand scheme of things, a windows license is not that big an expense. Sure, if you have 1000+ machines it is. But this algorithm is equally usable on 5-10 machines, and it will allow you to scale out instead of having to scale up.

I do wish that they had made better support for it, so that it was possible to just use it out of the box. But I guess it is not that easy.

