Project Proposal:
Parallelizing Huffman Decoding
by Aakash Sabharwal & Rokhini Prabhu
Proposal Page
Checkpoint Page
Final Report Page
Summary
We want to implement Huffman decoding in parallel on a multi-core system using two different techniques. We hope to achieve reasonable speedups while also ensuring space efficiency in our parallel solutions. We will also perform a detailed performance analysis comparing the two techniques and measure the scalability of our code by running it on Blacklight.

Background

Huffman encoding is a widely used, important lossless data compression technique. It encodes each symbol in a corpus as a binary string; the encoding of the entire corpus is therefore the concatenation of the binary string encodings of its symbols.

The binary string encoding of a symbol is obtained from the Huffman tree, which is generated from the corpus as follows (a sketch of this construction in C appears after the list):

  1. Create an empty priority queue (min-heap)
  2. Create a leaf node for each symbol in the corpus and insert it into the priority queue, with its priority given by the frequency of the symbol
  3. While the priority queue has more than one node
    1. Remove the two nodes with the minimum priorities (minimum frequencies)
    2. Create a new internal node with these two nodes as its subtrees and a priority equal to the sum of their frequencies
    3. Insert the new internal node into the priority queue
  4. The remaining node is the root, and the tree has now been built
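
To make the steps above concrete, here is a minimal sketch of the construction in C (the language we will use with pthreads). The byte-sized symbols, the caller-supplied frequency arrays, and the use of a simple linear scan in place of a real min-heap are illustrative assumptions for this sketch, not part of the proposal itself.

    /* Sketch: building the Huffman tree (steps 1-4 above).
     * Assumes byte symbols, caller-supplied frequencies, and count >= 1.
     * A linear scan of an array stands in for a real min-heap. */
    #include <stdlib.h>

    typedef struct Node {
        unsigned char symbol;       /* meaningful only for leaves */
        unsigned long freq;         /* priority: frequency of the subtree */
        struct Node *left, *right;  /* NULL for leaves */
    } Node;

    static Node *new_node(unsigned char sym, unsigned long freq, Node *l, Node *r) {
        Node *n = malloc(sizeof *n);
        n->symbol = sym; n->freq = freq; n->left = l; n->right = r;
        return n;
    }

    /* Remove and return the node with the smallest frequency from pool[0..*n). */
    static Node *pop_min(Node **pool, int *n) {
        int best = 0;
        for (int i = 1; i < *n; i++)
            if (pool[i]->freq < pool[best]->freq) best = i;
        Node *min = pool[best];
        pool[best] = pool[--*n];    /* fill the hole with the last element */
        return min;
    }

    Node *build_huffman_tree(const unsigned char *syms, const unsigned long *freqs, int count) {
        Node **pool = malloc(count * sizeof *pool);
        for (int i = 0; i < count; i++)         /* step 2: one leaf per symbol */
            pool[i] = new_node(syms[i], freqs[i], NULL, NULL);
        int n = count;
        while (n > 1) {                         /* step 3: merge the two minima */
            Node *a = pop_min(pool, &n);
            Node *b = pop_min(pool, &n);
            pool[n++] = new_node(0, a->freq + b->freq, a, b);
        }
        Node *root = pool[0];                   /* step 4: the last node is the root */
        free(pool);
        return root;
    }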

The binary string for each symbol is therefore a representation of the path from the root of the Huffman tree to the leaf containing that symbol: a 0 in the binary string means we take the left subtree of the current node, and a 1 means we take the right subtree.
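
As an illustration of how this root-to-leaf path becomes a symbol's code, the following sketch walks the tree built above, recording '0' for a left branch and '1' for a right branch. It assumes the Node type from the previous sketch and a 63-bit limit on code length, both assumptions made only for this example.

    /* Sketch: deriving each symbol's bit string from its root-to-leaf path.
     * Assumes the Node type from the construction sketch above and codes
     * no longer than 63 bits. */
    #include <string.h>

    void assign_codes(const Node *node, char *path, int depth, char codes[256][64]) {
        if (node->left == NULL && node->right == NULL) {    /* leaf: the path is the code */
            path[depth] = '\0';
            strcpy(codes[node->symbol], path);
            return;
        }
        path[depth] = '0';                                  /* 0 = take the left subtree */
        assign_codes(node->left, path, depth + 1, codes);
        path[depth] = '1';                                  /* 1 = take the right subtree */
        assign_codes(node->right, path, depth + 1, codes);
    }

Calling assign_codes(root, buf, 0, codes) with a 64-byte buf fills codes[s] with the bit string for every symbol s that occurs in the corpus.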

Observe that, by virtue of this construction, symbols with greater frequency tend to have shorter binary string representations, while those with lower frequency tend to have longer ones.



A traditional sequential decoding algorithm linearly translates the bit stream produced by Huffman encoding back into the original symbols by repeatedly traversing down the Huffman tree according to each bit read from the input stream until a leaf node (and its corresponding symbol) is reached. A 0 bit in the bit stream corresponds to the left subtree, while a 1 corresponds to the right subtree.
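A minimal sketch of this sequential decoder is shown below. It again assumes the Node type from the construction sketch; get_bit is a hypothetical helper that extracts the i-th bit of the encoded stream, and nbits is assumed to be the exact number of encoded bits (no trailing padding).

    /* Sketch: the traditional sequential decoder. */
    static int get_bit(const unsigned char *bits, long i) {
        return (bits[i / 8] >> (7 - (i % 8))) & 1;          /* i-th bit, MSB first */
    }

    long decode(const Node *root, const unsigned char *bits, long nbits, unsigned char *out) {
        long written = 0;
        const Node *cur = root;
        for (long i = 0; i < nbits; i++) {
            cur = get_bit(bits, i) ? cur->right : cur->left;  /* 0 = left, 1 = right */
            if (cur->left == NULL && cur->right == NULL) {    /* reached a leaf */
                out[written++] = cur->symbol;                 /* emit its symbol */
                cur = root;                                   /* restart at the root */
            }
        }
        return written;
    }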

In our project, we are going to implement two different parallel solutions to Huffman decoding:
  1. We will use the traditional encoding algorithm and decode the input bit stream in parallel across multiple cores of a machine.
  2. We will modify the Huffman tree built by the encoding algorithm by creating a left-sided tree, use it to encode the symbols of the corpus, and then use the left-sided tree to decode the encoding in parallel.
Challenge
The problem of Huffman decoding is challenging because of the inherently sequential nature of the algorithm. Due to the variable-length encoding of each symbol, we do not know where the encoding of any given symbol starts in the bit stream. We cannot simply split the bit stream into multiple chunks and decode them in parallel, because the bits for a single symbol might span two chunks; doing so could produce an incorrect decoding.
Resources
We are mainly referring to two published papers. The first decoding approach is based on the following:
Parallel Huffman Decoding

The second approach is based on the following:
A space-efficient Huffman decoding algorithm and its parallelism

We plan to start our codebase from scratch and will test our code using large corpuses obtained from the ML department at CMU.
Goals/Deliverables
PLAN TO ACHIEVE: Use multi-threading and the left-sided Huffman tree approach to achieve speedup relative to sequential Huffman decoding. We will present a detailed analysis comparing the two parallel decoding approaches and their pros and cons.

HOPE TO ACHIEVE: A plugin that compresses images using Huffman encoding and then decodes them using our Huffman decoder. This would provide a good visualization of the correctness of our decoder.
Platform
Multi-core execution using pthreads on the GHC machines and Blacklight.
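
As a rough sketch of how the pthreads version might be organized, the skeleton below hands each thread a chunk of the bit stream and runs the ordinary tree-walking decoder on it. It assumes the Node type and get_bit helper from the earlier sketches, at most 64 threads, and chunk boundaries that already fall on symbol boundaries; producing such boundaries (or removing the need for them) is exactly what the two techniques above address, so this is only an illustrative skeleton, not our final design.

    /* Sketch: a pthreads skeleton for chunked decoding. */
    #include <pthread.h>

    typedef struct {
        const Node *root;            /* Huffman tree (shared, read-only) */
        const unsigned char *bits;   /* whole encoded bit stream (shared) */
        long start_bit, end_bit;     /* this thread's chunk: [start_bit, end_bit) */
        unsigned char *out;          /* private output buffer for this chunk */
        long nout;                   /* symbols produced by this chunk */
    } ChunkArg;

    static void *decode_chunk(void *p) {
        ChunkArg *a = p;
        const Node *cur = a->root;
        a->nout = 0;
        for (long i = a->start_bit; i < a->end_bit; i++) {
            cur = get_bit(a->bits, i) ? cur->right : cur->left;
            if (cur->left == NULL && cur->right == NULL) {
                a->out[a->nout++] = cur->symbol;
                cur = a->root;
            }
        }
        return NULL;
    }

    /* Launch one thread per chunk and wait; the per-chunk outputs are then
     * concatenated in chunk order by the caller. */
    void decode_parallel(ChunkArg *args, int nthreads) {
        pthread_t tid[64];
        for (int t = 0; t < nthreads; t++)
            pthread_create(&tid[t], NULL, decode_chunk, &args[t]);
        for (int t = 0; t < nthreads; t++)
            pthread_join(tid[t], NULL);
    }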
Proposed Schedule

Week                  What We Plan To Do
April 5 - April 12    Research, set up testing infrastructure, obtain corpuses
April 12 - April 19   Sequential encoding and decoding
April 19 - April 26   Implement first parallel encoding and decoding algorithm
April 26 - May 3      Implement second parallel encoding and decoding algorithm
May 3 - May 10        Buffer time for debugging + performance analysis