Ayende @ Rahien

Refunds available at head office

Performing joins without having all the data in memory

Probably the easiest way to perform a join is with a nested loop. Given dataset A and dataset B, producing the joined dataset C is simple:

for row_a in A:
	for row_b in B:
		if condition(row_a, row_b):
			C.add(join(row_a, row_b))

Supporting left/right/cross joins is a simple matter from here, but this approach requires having both datasets in memory. I ran into this while processing big files; I don't want to hold them in memory, especially if I need several levels of joins to process them correctly.

I thought about bypassing the entire issue by simply writing the data to a SQLite DB and doing the join there. While this is possible, I would rather avoid it.

Any suggestions on where to look to solve this issue?

Comments

12/06/2007 10:33 AM by Jon Skeet

The way LINQ to Objects does it is to hold all the data for the "inner" sequence in memory, but stream the "outer" sequence. In pseudo-code (apologies for not writing it in full Boo!):

CachedB = LoadB()

for row_a in StreamA:
    for row_b in CachedB:
        if condition(row_a, row_b):
            yield join(row_a, row_b)

Where "yield" is the way of streaming the result out. All easy in C# 2 or 3, but I don't know how easy it will be in your actual situation.

Hope it helps though.

12/06/2007 11:14 AM by Ken Egozi

I once needed to write a simple program to diff two not-too-well-formed text files, with some sort of logic to it.

They were way too big for in-memory work.

The solution was (do not laugh - it gave the best ROI):

  1. read the first file line by line while parsing it, inserting each parsed record into an MS Access table (takes about 3-4 seconds)

  2. do the same for the second file (another 3-4 seconds)

  3. run an SP on the tables to create the output table (under .5 second)

  4. export to a csv file (another blink of an eye)

That client has run this app every day for over two years now.

Flawlessly.

Know your toolbox. Choose wisely.

12/06/2007 11:18 AM by NirD

The most efficient way to join two large datasets is to partition them into several smaller datasets (or at least it was when I was working on databases in college).

This is easiest when your condition is something like row_a.id == row_b.a_id (luckily for us, it's a common scenario).

You open N new files, let's call them a1 to aN, and use a hash function to divide all rows in A between those N buckets by row_a.id.

Then you do the same for B (by row_b.a_id), so you have two sets of files, a1..aN and b1..bN.

Now you load a1 and b1, join between them, and repeat for a2,b2 up to aN,bN.

Because of the way you divided the data between the files, you know rows in a1 can only join with rows in b1, etc.

It's not easy but this is how the "big boys" do it.

There are simpler methods for special circumstances (for example if you know one of the datasets is small).
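The bucketing scheme described above is essentially what databases call a grace hash join. A minimal Python sketch, assuming rows of A are tuples like (id, value) and rows of B are (value, a_id); the file names, helpers, and repr/eval serialization are made up for illustration only:

```python
def partition(rows, key, n, prefix):
    """Write each row to one of n bucket files, chosen by hashing its join key."""
    files = [open("%s%d.txt" % (prefix, i), "w") for i in range(n)]
    for row in rows:
        files[hash(key(row)) % n].write(repr(row) + "\n")
    for f in files:
        f.close()

def load(path):
    with open(path) as f:
        return [eval(line) for line in f]  # demo only; use a real serializer

def grace_join(rows_a, rows_b, n=4):
    partition(rows_a, lambda r: r[0], n, "a")   # bucket A by row_a.id
    partition(rows_b, lambda r: r[1], n, "b")   # bucket B by row_b.a_id
    for i in range(n):
        bucket_b = load("b%d.txt" % i)          # only one pair of buckets
        for row_a in load("a%d.txt" % i):       # is in memory at a time
            for row_b in bucket_b:
                if row_a[0] == row_b[1]:
                    yield (row_a, row_b)
```

Peak memory is bounded by the largest pair of buckets rather than by the full datasets, at the cost of writing everything to disk once.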

If you need more information you can e-mail me.

12/06/2007 01:22 PM by Luke Breuer

Are the files so big that you cannot load all of the key values & the associated character indexes for the line beginnings? You would have to scan the files twice, but I don't see that as prohibitive.

12/06/2007 01:51 PM by Jason

I agree with NirD - simplistically that's how Oracle would handle the join.

12/06/2007 01:54 PM by Derick Bailey

I would actually go down the SQLite route. I've never seen iterative code perform anywhere near what set-based SQL logic can do... even in a small engine like SQLite. I would recommend putting SQLite into in-memory mode, though, so you don't have to worry about file reads/writes. I think you specify the connection string as ":memory:", or something like that, to put SQLite into in-memory mode.
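For what it's worth, ":memory:" is indeed SQLite's special name for an in-memory database. A small sketch using Python's built-in sqlite3 module (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # special name: no file on disk
conn.execute("CREATE TABLE a (id INTEGER, name TEXT)")
conn.execute("CREATE TABLE b (a_id INTEGER, amount INTEGER)")
conn.executemany("INSERT INTO a VALUES (?, ?)", [(1, "foo"), (2, "bar")])
conn.executemany("INSERT INTO b VALUES (?, ?)", [(1, 10), (1, 20), (3, 99)])

# let the engine pick the join strategy
rows = conn.execute(
    "SELECT a.name, b.amount FROM a JOIN b ON a.id = b.a_id"
).fetchall()
```

An in-memory database obviously gives up the "don't hold it all in memory" goal, but with an on-disk file the same code handles datasets larger than RAM.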

12/06/2007 02:03 PM by Philip Løventoft

The Wikipedia article on join algorithms ( http://en.wikipedia.org/wiki/Join_algorithm ) is actually pretty good and explains some of the common ways to implement joins. Hope it helps.

12/06/2007 02:49 PM by Chad Myers

I think Ayende's point was that he didn't want to have to go down the route of writing a SQL/join engine in his app.

Is there some sort of other quick/dirty way to accomplish this with minimal effort?

I agree with Derick that SQLite is probably your best bet. Otherwise you're going to have to go down the NirD route and re-invent a SQL engine. If you're doing a SQL engine anyhow, might as well just use SQLite and let it do the complicated stuff.

Of course, I'm sure you could write a 3 line Boo script to implement a SQL engine, so...

12/06/2007 03:18 PM by Ayende Rahien

Luke,

That would require some way of knowing ahead of time how this goes.

I am thinking of joins in a pipeline, which makes it more interesting.

12/06/2007 04:04 PM by Jon Skeet

For joins in a pipeline, LINQ to Objects is still a good model to follow - with the restriction that the "inner" dataset has to be small.

(LINQ to Objects actually builds up a Lookup based on the inner dataset, so it doesn't need to loop through it in memory for each row in the outer dataset, but it's only able to do that because the joins are equality-based. That may or may not be your situation :)
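The Lookup approach translates directly to a streaming equi-join: index the inner sequence by its join key once, then stream the outer sequence against it. A sketch under that assumption, with hypothetical names:

```python
from collections import defaultdict

def equi_join(outer, inner, outer_key, inner_key):
    """Stream outer rows, matching each against a prebuilt index of inner rows."""
    lookup = defaultdict(list)          # the LINQ-style Lookup
    for row_b in inner:                 # the inner dataset must fit in memory
        lookup[inner_key(row_b)].append(row_b)
    for row_a in outer:                 # the outer dataset can be streamed lazily
        for row_b in lookup.get(outer_key(row_a), ()):
            yield (row_a, row_b)
```

This does expected O(N+M) work instead of the O(N*M) of the nested loop, but only because the condition is key equality.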

Jon

12/06/2007 05:09 PM by Ayende Rahien

Jon,

I need more than mere equality, so I am not sure what to do here; perhaps I'll limit it to equality joins, or fall back to iterating the inner rows.

Thanks.

12/06/2007 06:04 PM by Jon Skeet

Well, for non-equality it's still doable - you just don't get quite as efficient an operation. If you fundamentally need to compare each outer row with each inner row, then it's inherently an O(N*M) operation anyway, so you're still okay just keeping a list of the inner rows in memory.

12/06/2007 07:42 PM by Avish

If you're doing equality, but not necessarily key equality (what I mean is, if there exist functions f and g so that a match is produced iff f(a) == g(b)) you can do a simple hash join. That would still require one of the rowsets to be in memory, but it'll perform better than O(N*M). Actually, O(N+M) on average if your hashes are "ideal".

For my experience with this, see http://anavish.livejournal.com/29734.html.

12/06/2007 08:35 PM by Jon Skeet

Sorry, yes - I should have specified that key equality was what I'd been considering in terms of equality all along. An equijoin basically :)

12/07/2007 05:57 PM by Luke Breuer

Ayende, can you define "how this goes"? Also, your blog engine is no longer sending me emails when you reply to my posts. :-(

12/07/2007 06:11 PM by Ayende Rahien

The blog engine doesn't do that, this is me taking care of it :-)

I meant that this would require knowing the relative sizes of the data, and would require me to scan it twice.

It is certainly possible, but I don't want to get there.

12/07/2007 06:17 PM by Luke Breuer

Sheesh, that's a lot of work. I'll bet you've spent more time emailing people your comments than it would take to make the blog engine do that. :-p

It does require a second scan, but that might happen no matter your approach. It's something to think about -- if you populate a table and then index a column, I believe you're doing a full scan of the data twice, although the second time should be more efficient.

12/07/2007 06:19 PM by Ayende Rahien

Luke,

I can just dump it into a SQLite DB and not have to deal with this; that would be better.

12/07/2007 06:20 PM by Luke Breuer

This is true, and OK as long as SQLite does what you want. I have no idea how well it operates on sets of data that cannot be kept entirely in memory, nor whether a hand-written approach could be made more efficient due to fewer requirements and more domain-specific knowledge.

12/09/2007 06:26 PM by Avish

I'd argue that, this being an ETL tool, you have just the same generic requirements as any database engine, and no more domain-specific knowledge than one.

12/09/2007 06:40 PM by Ayende Rahien

Yes, but as the sole developer, I don't have the time to build a full engine.

12/10/2007 09:11 PM by Avish

I wasn't implying that; I was replying to Luke's comment about a hand-written approach being better than using a standard in-memory engine due to increased domain knowledge.

Comments have been closed on this topic.