Parallel Reverb Processor

Carl Doersch and Keith Bare


Proposal

March 20, 2008

Group Info:

Carl Doersch (cdoersch)

Keith Bare (kbare)

Email either of us using our Andrew IDs at andrew.cmu.edu.

Project Web Page:

http://www.contrib.andrew.cmu.edu/~cdoersch/15418/

Project Description:

Parallelize a MIDI synthesizer. The program will take MIDI files as input and use a sound font to produce WAV or similar audio output. Ideally, we will achieve near-linear speedups, at least for small numbers of processors (perhaps 32 or fewer). There are already sequential programs that handle this task (such as TiMidity++ and Fluid Synth); to avoid reimplementing sundry tasks like parsing MIDI files, we will build on one of these open-source frameworks.
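To make the kind of parallelism we have in mind concrete, here is a minimal sketch in C with OpenMP (the language both frameworks are written in). It assumes active voices can be rendered independently and summed; voice_t, render_voice(), and BLOCK_FRAMES are hypothetical placeholders of ours, not actual TiMidity++ or Fluid Synth APIs:

    #include <string.h>

    #define BLOCK_FRAMES 256

    typedef struct { float freq, amp; } voice_t;   /* toy per-note state */

    /* Placeholder voice renderer: a real one would run this note's
       oscillator and envelope, accumulating into out. */
    static void render_voice(const voice_t *v, float *out, int nframes)
    {
        for (int i = 0; i < nframes; i++)
            out[i] += v->amp * 0.0f;   /* silence stands in for synthesis */
    }

    /* Render one block of audio, with voices split across threads. */
    void render_block(const voice_t *voices, int nvoices, float *mix)
    {
        memset(mix, 0, BLOCK_FRAMES * sizeof(float));
        #pragma omp parallel
        {
            float local[BLOCK_FRAMES] = {0};
            /* Each thread renders a disjoint subset of the voices into
               its own buffer, so no synchronization is needed here. */
            #pragma omp for nowait
            for (int i = 0; i < nvoices; i++)
                render_voice(&voices[i], local, BLOCK_FRAMES);
            /* The per-thread buffers are then summed into the mix. */
            #pragma omp critical
            for (int j = 0; j < BLOCK_FRAMES; j++)
                mix[j] += local[j];
        }
    }

Summing per-thread buffers avoids contention on the shared mix buffer; partitioning by voice is only one option, and partitioning the output timeline instead is another candidate we would evaluate.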

For the 75% goal, the program will be parallelized and achieve measurable performance gains, though the gains may not be consistent across all MIDI files.

For the 125% goal, there are many possible extensions to the functionality. The program could implement optimizations like note dropping and note quality reduction. Such optimizations are necessary when the program must render output in real time and must produce output even when there is not enough CPU to render all notes. We could also attempt to deal with dynamically allocated CPUs; it may be possible to write a program that remains stable even when it loses CPUs, can use extra CPUs when they become available, or can cope with processors that run at different speeds. We could also spend more time examining different MIDI files to find which ones lead to suboptimal data partitions, and then find ways to select a partition adaptively.
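As an illustration of what note dropping might look like, here is a hedged sketch in C. The loudness heuristic and the fixed per-voice cost are assumptions of ours, not anything taken from TiMidity++ or Fluid Synth:

    #include <stdlib.h>

    typedef struct { float loudness; int active; } voice_t;

    /* Comparator for qsort: sorts voices in descending loudness. */
    static int louder_first(const void *a, const void *b)
    {
        float la = ((const voice_t *)a)->loudness;
        float lb = ((const voice_t *)b)->loudness;
        return (la < lb) - (la > lb);
    }

    /* When rendering every voice would blow the real-time budget for
       one block, keep only as many of the loudest voices as fit. */
    void drop_notes(voice_t *voices, int nvoices, double block_budget_sec,
                    double cost_per_voice_sec)
    {
        qsort(voices, nvoices, sizeof(voice_t), louder_first);
        int keep = (int)(block_budget_sec / cost_per_voice_sec);
        for (int i = 0; i < nvoices; i++)
            voices[i].active = (i < keep);
    }

A real implementation would need a better cost model than a constant per voice, and dropping by loudness is just one plausible heuristic.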

--Logistics--

Plan of Attack:

With respect to the division of labor, Keith and Carl will both always be up-to-date on how the code and algorithms work. We will divide work based on who is available at a particular time; most of the time, we will simply work together in a cluster.

Unfortunately, it is difficult to know how quickly we will progress without knowing exactly how TiMidity++/Fluid Synth work. The scope of the project will change dramatically depending on how much of the original program's functionality can be preserved.

Schedule:

3/27

Read source code for TiMidity++ and Fluid Synth; decide which one to use; learn the basic data structures.

4/3

Design parallel algorithms, begin writing.

4/10

Finish writing parallel algorithm, start debugging.

4/17

Finish debugging the parallel algorithm, so that there is a working version of the program. Begin performance testing; experiment with other versions of the parallel code.

4/24

Continue experimenting with other algorithms.

5/1

Decide which algorithm we will finally use. Finish all coding and testing, and prepare poster.

Milestones:

4/3: Decide between TiMidity++ and Fluid Synth. Have a general outline of how the chosen framework works and which data structures will be used in the parallel code. Have several designs for the parallel code and discuss the benefits of each. Decide which algorithm will be used, and have a general outline for how the code will work in terms of the data structures in TiMidity++ or Fluid Synth.

4/17: Have a working version of the parallel code. Have results from tests on that code. Analyze the results and decide what should be done to improve the algorithm, and whether it is worthwhile to implement a completely different algorithm.

Literature Search:

As far as we know, MIDI synthesis has never been parallelized. There is one paper on parallel audio synthesis in general that may be useful (we have not yet read it):


B. Jeff and K. Schwan. "PARSYNTH: A Case Study on Implementing a Real-Time Digital Audio Synthesizer." Proceedings of the 4th International Workshop on Parallel and Distributed Real-Time Systems, p. 143.

Otherwise, there are some general references on computer music, basic programming references, and the documentation for TiMidity++ and Fluid Synth that may be useful.

We may also enlist the help of Roger Dannenberg, who works in computer music.

Resources Needed:

The project may use either MPI or OpenMP, depending on the final algorithm we choose. There are two tasks: synthesizing files offline to produce file output, and synthesizing in real time. For file output, we will most likely use BigBen in the MPI case and Cobalt in the OpenMP case. To render in real time, however, we may need a different system, perhaps a 4- or 8-core workstation that actually has a sound card.
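For the offline case, the coarsest partition is simply splitting the output timeline across MPI ranks. The following is a minimal sketch under that assumption; render_range() is a hypothetical placeholder, and the sketch glosses over the real difficulty that a voice's state at a given frame depends on all earlier MIDI events:

    #include <mpi.h>
    #include <stdlib.h>

    /* Placeholder: render nframes of audio starting at frame `first`.
       A real renderer would synthesize here. */
    static void render_range(int first, int nframes, float *out)
    {
        (void)first;
        for (int i = 0; i < nframes; i++)
            out[i] = 0.0f;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int total = 44100 * 60;    /* one minute of mono at 44.1 kHz */
        int per_rank = total / size;     /* assume size divides total evenly */
        float *slice = malloc(per_rank * sizeof(float));
        render_range(rank * per_rank, per_rank, slice);

        /* Rank 0 gathers the slices in timeline order. */
        float *all = (rank == 0) ? malloc(total * sizeof(float)) : NULL;
        MPI_Gather(slice, per_rank, MPI_FLOAT,
                   all, per_rank, MPI_FLOAT, 0, MPI_COMM_WORLD);
        /* Rank 0 would write `all` to a WAV file here. */
        free(slice);
        free(all);
        MPI_Finalize();
        return 0;
    }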

Getting Started:

We have done some preliminary research into TiMidity++ and Fluid Synth. We have obtained the source code for both and are currently investigating their data structures and program flow, to see which parts can be reused and what needs to be modified in each case. Design and programming can only begin once this analysis is complete.