Building a shared dictionary

time to read 4 min | 631 words

This turned out to be a pretty hard problem. I wanted to do my own thing, but for reference, FemtoZip is considered the authoritative implementation for this kind of work.

The idea of a shared dictionary system is that you take a training corpus, extract the common elements from it, and build a dictionary out of them, which you can then use to compress all the other data.
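To make the idea concrete, here is a minimal sketch in Python using zlib's preset-dictionary support. The dictionary contents below are purely illustrative (hand-picked common JSON fragments), not the output of any real training step:

```python
import zlib

# Hypothetical shared dictionary: common substrings go at the END,
# since zlib prefers matches against the most recent bytes.
shared_dict = b'"country":"Poland",{"id":,"name":"'

def compress(data: bytes, zdict: bytes = b"") -> bytes:
    c = zlib.compressobj(zdict=zdict)
    return c.compress(data) + c.flush()

def decompress(data: bytes, zdict: bytes = b"") -> bytes:
    d = zlib.decompressobj(zdict=zdict)
    return d.decompress(data) + d.flush()

record = b'{"id":21,"name":"Laura Ford","country":"Poland","email":""},'
plain = compress(record)
with_dict = compress(record, shared_dict)
assert decompress(with_dict, shared_dict) == record
```

Both sides must agree on the dictionary, of course, which is why it pays to train it once and ship it with the data format.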

In order to test this, I generated 100,000 users using Mockaroo. You can find the sample data here: RandomUsers.

The data looks like this:

{"id":1,"name":"Ryan Peterson","country":"Northern Mariana Islands","email":""},
{"id":2,"name":"Judith Mason","country":"Puerto Rico","email":""},
{"id":3,"name":"Kenneth Berry","country":"Pakistan","email":""},
{"id":4,"name":"Judith Ortiz","country":"Cuba","email":""},
{"id":5,"name":"Adam Lewis","country":"Poland","email":""},
{"id":6,"name":"Angela Spencer","country":"Poland","email":""},
{"id":7,"name":"Jason Snyder","country":"Cambodia","email":""},
{"id":8,"name":"Pamela Palmer","country":"Guinea-Bissau","email":""},
{"id":9,"name":"Mary Graham","country":"Niger","email":""},
{"id":10,"name":"Christopher Brooks","country":"Trinidad and Tobago","email":""},
{"id":11,"name":"Anna West","country":"Nepal","email":""},
{"id":12,"name":"Angela Watkins","country":"Iceland","email":""},
{"id":13,"name":"Gregory Coleman","country":"Oman","email":""},
{"id":14,"name":"Andrew Hamilton","country":"Ukraine","email":""},
{"id":15,"name":"James Patterson","country":"Poland","email":""},
{"id":16,"name":"Patricia Kelley","country":"Papua New Guinea","email":""},
{"id":17,"name":"Annie Burton","country":"Germany","email":""},
{"id":18,"name":"Margaret Wilson","country":"Saudia Arabia","email":""},
{"id":19,"name":"Louise Harper","country":"Poland","email":""},
{"id":20,"name":"Henry Hunt","country":"Martinique","email":""}

What I want to do is run over the first 1,000 records and extract a shared dictionary from them. Actually generating the dictionary is surprisingly hard. The first thing I tried was a prefix tree of all the suffixes. That is, given the following entries:


You would have the following tree:

  • b
    • ba
      • ban
        • bana
          • banan
            • banana
  • a
    • an
      • ana
        • anan
          • anana
      • ang
        • ange
  • n
    • na
      • nan
        • nana
      • nag
        • nage
  • l
    • le
      • lem
        • lemo
          • lemon
  • o
    • or
      • ora
        • oran
          • orang
            • orange
  • r
    • ra
      • ran
        • rang
          • range
  • g
    • ge
  • e

My idea was that this would allow me to easily find all the common substrings and then rank them. But the problem is: how do I select the entries that are actually useful? That is the part where I gave up on my simple-to-follow-and-explain code and dove into the real science behind it. More on that in my next entry; in the meantime, I would love it if someone could show me simple code to find the proper terms for the dictionary.