Ayende @ Rahien

My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.

Performing joins without having all the data in memory

time to read 1 min | 143 words

Probably the easiest way to perform a join is with a nested loop: given datasets A and B, joining all matching rows into dataset C is simple:

for row_a in A:
	for row_b in B:
		if condition(row_a, row_b):
			C.add(join(row_a, row_b))

Supporting left/right/cross joins is a simple matter from here, but this approach requires having both datasets in memory. I ran into this while processing big files; I don't want to hold them in memory, especially if I need several levels of joins in order to process them correctly.

I thought about bypassing the entire issue by simply writing the data to a SQLite DB and doing the join there. While this is possible, I would rather avoid having to do so.

Any suggestions on where to look to solve this issue?


Jon Skeet

The way LINQ to Objects does it is to have all the data for the "inner" sequence in memory, but then stream the "outer" sequence. So in pseudo-code (apologies for not writing it in full Boo!)

CachedB = LoadB

for row_a in StreamA:
    for row_b in CachedB:
        if condition(row_a, row_b):
            yield join(row_a, row_b)

Where "yield" is the way of streaming the result out. All easy in C# 2 or 3, but I don't know how easy it will be in your actual situation.

Hope it helps though.
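For reference, Jon's pattern as a runnable Python generator (the sequences and the condition function here are placeholders; in practice the outer sequence would be a lazily-read file):

```python
def stream_join(outer, cached_inner, condition):
    """Stream the outer sequence, joining each row against the cached inner rows.

    `outer` may be any lazy iterator (e.g. a file object); `cached_inner`
    must be fully materialized, since it is scanned once per outer row.
    """
    for row_a in outer:
        for row_b in cached_inner:
            if condition(row_a, row_b):
                yield (row_a, row_b)
```

Because the function yields results as it goes, only one outer row plus the cached inner dataset is ever held in memory.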

Ken Egozi

I once needed to write a simple program to diff two not-too-well-formed text files, with some sort of logic in it.

They were way too big to work on in memory.

The solution was (do not laugh - it gave the best ROI):

  1. read the first file line by line, parsing it and inserting each parsed record into an MS Access table (takes about 3-4 seconds)

  2. do the same on the second file (another 3-4 seconds)

  3. run a SP on the tables to create the output table (under .5 second)

  4. export to csv file (another blink of an eye)

That client has run this app every day for over two years now.


know your toolbox. choose wisely.
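Ken's four steps map almost line-for-line onto Python's bundled sqlite3 module (standing in here for MS Access; the file names and the "id,value" record layout are made up for illustration):

```python
import csv
import sqlite3

def join_via_db(path_a, path_b, out_path):
    # Hypothetical record layout: each input line is "id,value".
    con = sqlite3.connect(":memory:")  # an on-disk database file works the same way
    con.execute("CREATE TABLE a (id INTEGER, value TEXT)")
    con.execute("CREATE TABLE b (id INTEGER, value TEXT)")
    # Steps 1-2: parse each file line by line and insert the records.
    for path, table in ((path_a, "a"), (path_b, "b")):
        with open(path) as f:
            rows = (line.rstrip("\n").split(",", 1) for line in f)
            con.executemany(f"INSERT INTO {table} VALUES (?, ?)", rows)
    # Step 3: let the engine do the join.
    cur = con.execute(
        "SELECT a.id, a.value, b.value FROM a JOIN b ON a.id = b.id")
    # Step 4: export the result to CSV.
    with open(out_path, "w", newline="") as out:
        csv.writer(out).writerows(cur)
```

The point is the same as Ken's: all the join logic lives in one SQL statement, and the engine worries about memory and algorithms.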


The most efficient way to join two large datasets is to partition them into several smaller datasets (or at least it was when I was working on databases in college).

This is easiest when your condition is something like row_a.id == row_b.a_id (luckily for us, it's a common scenario).

You open N new files, let's call them a1 to aN and use a hash function to divide all rows in A between those N buckets by row_a.id.

Then you do the same for B (by row_b.a_id), so you have two sets of files a1..aN and b1..bN.

Now you load a1 and b1 and join between them and repeat for a2,b2 up to aN,bN

Because of the way you divided the data between files, you know rows in a1 can only ever join with rows in b1, etc.

It's not easy but this is how the "big boys" do it.

There are simpler methods for special circumstances (for example if you know one of the datasets is small).

If you need more information you can e-mail me.
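A minimal in-memory sketch of the partitioning scheme described above, assuming an equality join on a key; plain lists stand in for the a1..aN and b1..bN bucket files:

```python
def partition_join(rows_a, rows_b, key_a, key_b, n=4):
    """Partitioned (grace) hash join sketch: split both inputs into n buckets
    by key hash, then join matching bucket pairs independently."""
    buckets_a = [[] for _ in range(n)]
    buckets_b = [[] for _ in range(n)]
    for row in rows_a:
        buckets_a[hash(key_a(row)) % n].append(row)
    for row in rows_b:
        buckets_b[hash(key_b(row)) % n].append(row)
    out = []
    # Rows whose keys hash to different buckets can never match, so each
    # bucket pair can be loaded and joined on its own (and fits in memory).
    for bucket_a, bucket_b in zip(buckets_a, buckets_b):
        index = {}
        for row in bucket_a:
            index.setdefault(key_a(row), []).append(row)
        for row in bucket_b:
            for match in index.get(key_b(row), []):
                out.append((match, row))
    return out
```

In the real, out-of-core version each bucket is a file on disk, written in one streaming pass over the input, so memory usage is bounded by the largest single bucket pair rather than the whole dataset.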

Luke Breuer

Are the files so big that you cannot load all of the key values & the associated character indexes for the line beginnings? You would have to scan the files twice, but I don't see that as prohibitive.
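Luke's suggestion can be sketched roughly like this, assuming a line-oriented file and a key that can be extracted from each line (the helper names are made up): one pass records the byte offset where each key's lines begin, and later passes seek straight back to them.

```python
def index_offsets(path, key_of):
    """First pass: map each join key to the byte offsets of its lines."""
    index = {}
    with open(path, "rb") as f:
        offset = 0
        for line in f:
            index.setdefault(key_of(line), []).append(offset)
            offset += len(line)
    return index

def rows_for_key(path, index, key):
    """Later passes: seek directly to the lines whose key matched."""
    with open(path, "rb") as f:
        for offset in index.get(key, []):
            f.seek(offset)
            yield f.readline()
```

Only the keys and offsets live in memory; the row data itself stays on disk until a match actually needs it.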


I agree with NirD - simplistically that's how Oracle would handle the join.

Derick Bailey

I would actually go down the SQLite route. I've never seen iterative code perform anywhere near what set-based SQL logic can do... even in a small system like SQLite. I would recommend running SQLite in in-memory mode, though, so you don't have to worry about file reads/writes; I think you can specify the connection string as ":memory:" to do that.

Philip Løventoft

The Wikipedia article about join algorithms ( http://en.wikipedia.org/wiki/Join_algorithm#Join_algorithms ) is actually pretty good and explains some of the common ways to make joins. Hope it helps.

Chad Myers

I think Ayende's point was that he didn't want to have to go down the route of writing a SQL/join engine in his app.

Is there some sort of other quick/dirty way to accomplish this with minimal effort?

I agree with Derick that SQLite is probably your best bet. Otherwise you're going to have to go down the NirD route and re-invent a SQL engine. If you're doing a SQL engine anyhow, might as well just use SQLite and let it do the complicated stuff.

Of course, I'm sure you could write a 3 line Boo script to implement a SQL engine, so...

Ayende Rahien


That would require some way of knowing ahead of time how this goes.

I am thinking of joins in a pipeline, which makes it more interesting.


Jon Skeet

For joins in a pipeline, LINQ to Objects is still a good model to follow - with the restriction that the "inner" dataset has to be small.

(LINQ to Objects actually builds up a Lookup based on the inner dataset, so it doesn't need to loop through it in memory for each row in the outer dataset, but it's only able to do that because the joins are equality-based. That may or may not be your situation :)
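A rough Python equivalent of that Lookup-based approach (the key-selector arguments are placeholders): build a dictionary over the inner dataset once, then stream the outer one.

```python
from collections import defaultdict

def hash_join(outer, inner, outer_key, inner_key):
    # Build the lookup over the (hopefully smaller) inner dataset once.
    lookup = defaultdict(list)
    for row in inner:
        lookup[inner_key(row)].append(row)
    # Stream the outer dataset; each row costs one dictionary probe
    # instead of a full scan of the inner dataset.
    for row_a in outer:
        for row_b in lookup.get(outer_key(row_a), ()):
            yield (row_a, row_b)
```

This is O(N+M) on average, but as Jon notes it only works because the join condition is equality on a key.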


Ayende Rahien


I need more than mere equality, so I am not sure what to do here; perhaps I'll limit it to equality joins, or I'll fall back to iterating the inner rows.


Jon Skeet

Well, for non-equality it's still doable - you just don't get quite as efficient an operation. If you fundamentally need to compare each outer row with each inner row, then it's inherently an O(N*M) operation anyway, so you're still okay to just keep a list of the inner rows in memory.


If you're doing equality, but not necessarily key equality (what I mean is, if there exist functions f and g so that a match is produced iff f(a) == g(b)) you can do a simple hash join. That would still require one of the rowsets to be in memory, but it'll perform better than O(N*M). Actually, O(N+M) on average if your hashes are "ideal".

For my experience with this, see http://anavish.livejournal.com/29734.html.

Jon Skeet

Sorry, yes - I should have specified that key equality was what I'd been considering in terms of equality all along. An equijoin basically :)

Luke Breuer

Ayende, can you define "how this goes"? Also, your blog engine is no longer sending me emails when you reply to my posts. :-(

Ayende Rahien

The blog engine doesn't do that, this is me taking care of it :-)

I meant, this would require knowing the relative sizes of the data, and would require me to scan it twice.

It is certainly possible, but I don't want to get there.

Luke Breuer

Sheesh, that's a lot of work. I'll bet you've spent more time emailing people your comments than it would take to make the blog engine do that. :-p

It does require a second scan, but that might happen no matter your approach. It's something to think about -- if you populate a table and then index a column, I believe you're doing a full scan of the data twice, although the second time should be more efficient.

Ayende Rahien


I can just dump it on a SQLite DB and don't have to deal with this, that would be better.

Luke Breuer

This is true, and OK as long as SQLite does what you want. I have no idea how well it operates on sets of data that cannot be kept entirely in memory, nor whether a hand-written approach could be made more efficient due to fewer requirements and more domain-specific knowledge.


I'd argue that, this being an ETL tool, you have just the same generic requirements as any database engine, and no more domain-specific knowledge than one.

Ayende Rahien

Yes, but as the sole developer, I don't have the time to build the full engine.


I wasn't implying that; I was replying to Luke's comment about a hand-written approach being better than using a standard in-memory engine due to increased domain knowledge.


Comments have been closed on this topic.

