Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969


Interview challenges: Decompress that


I’m always looking for additional challenges that I can ask people who interview at Hibernating Rhinos, and I ran into an interesting one just now.

Implement a Decompress(Stream input, Stream output, byte[][] dictionary) routine for the following protocol:

Prefix byte – composed of the highest bit + a 7-bit len.

If the highest bit is not set, this is uncompressed data; copy the next len bytes from the input to the output.

If the highest bit is set, this is compressed data; the next byte is an index into the dictionary, copy len bytes from that dictionary entry to the output.
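Here is a minimal sketch of how such a routine might look in C#, assuming that a compressed run always copies from the start of the selected dictionary entry, that the input ends cleanly on a prefix-byte boundary, and that every dictionary entry is long enough for the len it is asked to serve; the class name and error handling are illustrative, not part of the challenge.

using System.IO;

public static class SimpleDecompressor
{
    public static void Decompress(Stream input, Stream output, byte[][] dictionary)
    {
        var buffer = new byte[128]; // len fits in 7 bits, so at most 127 bytes per run
        int prefix;
        while ((prefix = input.ReadByte()) != -1)
        {
            int len = prefix & 0x7F; // low 7 bits are the run length

            if ((prefix & 0x80) == 0)
            {
                // Uncompressed run: copy the next len bytes verbatim from input to output.
                int read = 0;
                while (read < len)
                {
                    int n = input.Read(buffer, read, len - read);
                    if (n == 0)
                        throw new EndOfStreamException("Truncated uncompressed run");
                    read += n;
                }
                output.Write(buffer, 0, len);
            }
            else
            {
                // Compressed run: the next byte selects the dictionary entry; copy its
                // first len bytes to the output (copying from offset 0 is an assumption).
                int index = input.ReadByte();
                if (index == -1)
                    throw new EndOfStreamException("Missing dictionary index");
                output.Write(dictionary[index], 0, len);
            }
        }
    }
}

Note that len fits in 7 bits, so a single run, compressed or uncompressed, can never produce more than 127 bytes of output, while a compressed run costs only two bytes of input – a useful observation when answering the questions below.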

After writing the code, the next question is going to be: what are the implications of this protocol? What is it going to be good for? What is it going to be bad for?


Comments

Jake

Interesting. Do you ask candidates to solve this whilst you are watching them, or do you allow them the freedom to solve the problems in their own comfortable environment with their preferred tools?

Jahmai Lay

@Jake I imagine it's something like this: http://y2u.be/rUY8HysBzsE

Oğuzhan Eren

http://en.wikipedia.org/wiki/Dictionary_coder ?

Pop Catalin

I think this kind of compression would be good for JSON and XML documents; in a typical JSON or XML document the member names are highly repeated and not too many, so the 127 dictionary entries limit would be pretty good, and the max size of 255 for an entry (256 if you modify the algorithm to copy len+1 bytes) is pretty good for this use case.

I think it strikes a good tradeoff between speed (very fast) and size (good for certain document types).

Ayende Rahien

Jake, No, I usually let them either do that in a separate room with a development machine, or they do it at home.

Stefan

Resembles the WBXML work I did for the telco industry >8 years ago ;)


