Ideas for Squeak Sound Architecture
Last updated at 2:51 pm UTC on 16 January 2006
By Eddie Cottongim

I'd like to address a few shortcomings in the current sound system.


Basically, what I'd like to achieve is separation and modularity between file formats and data formats. File formats like AIFF and WAVE support multiple data formats internally.

File Formats: Similar to ImageReadWriters, there should be AudioReadWriters: AiffReadWriter, WaveReadWriter, AuReadWriter, etc. These can hand clients pieces of SoundData (CompressedSoundData or PCMSoundData), and may have the ability to seek randomly within the file.
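By analogy with ImageReadWriter's class-side dispatch (ImageReadWriter formFromFileNamed: picks a concrete subclass by examining the file), the audio side could look something like the sketch below. Every class and selector here except FileStream's is hypothetical, not existing code:

```smalltalk
"Hypothetical sketch; none of these audio classes or selectors exist yet."
AudioReadWriter class >> soundFromFileNamed: fileName
	"Pick the concrete subclass (AiffReadWriter, WaveReadWriter, ...)
	 that recognizes the file's header, and let it build the sound."
	| stream readerClass |
	stream := (FileStream readOnlyFileNamed: fileName) binary.
	readerClass := self allSubclasses
		detect: [:cls | cls understandsStream: stream]
		ifNone: [self error: 'Unrecognized audio file format'].
	^ (readerClass on: stream) nextSound
```

Each subclass would implement understandsStream: by sniffing its magic bytes ('RIFF'/'WAVE', 'FORM'/'AIFF', '.snd', ...) and resetting the stream afterwards.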

Data Formats: We already have SoundCodecs (MuLawCodec, GSMCodec, etc.) that store CompressedSoundData, which is close to what I want. Create a PCMSoundCodec, which can store its data as PCM16BitUnsignedData, PCM32IEEEFloatData, etc. Additionally, SoundCodecs would optionally have the ability to be resident on disk, keeping only enough data in memory to play smoothly.
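Squeak's SoundCodec already speaks a decodeFrames:from:at:into:at: protocol, so a PCMSoundCodec could slot in by satisfying that same contract and delegating the sample-layout conversion to its data object. The PCM data classes, their selectors, and the return convention shown are all assumptions:

```smalltalk
"Hypothetical: a codec that is a pass-through except for sample layout."
PCMSoundCodec >> decodeFrames: frameCount from: aSoundData at: srcIndex into: aSoundBuffer at: dstIndex
	"Ask the data object to convert its native layout (16-bit unsigned
	 ints, 32-bit IEEE floats, ...) into the 16-bit samples that
	 SoundBuffer and the mixer expect."
	aSoundData
		convertFrames: frameCount
		startingAt: srcIndex
		to16BitSignedInto: aSoundBuffer
		startingAt: dstIndex.
	^ Array with: frameCount with: frameCount
```

The point of the delegation is that adding a new PCM layout means adding one data class, not touching the codec or the players.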

I'd keep the current SoundBuffer and 16-bit playback mechanism around - it's good enough for most things, and great for experimenting. I wouldn't want to create an overcomplicated scheme (Java's seems this way) that prevents people from having fun with sound. Also, if you really need 24-bit output, you probably need other advanced I/O too, which is beyond the scope of this design.

Let's walk through a couple of scenarios to see if this comes together.


1. Playing a very large .wav file from disk. We use a WaveReadWriter to read the file. WaveReadWriter determines that this is 16-bit unsigned PCM. It gives us back a SampledSound whose codec is PCMSoundCodec, and whose data is PCM16BitUnsignedData. You execute myWavSound play. SampledSound asks the PCMSoundCodec to interpret its PCM16BitUnsignedData and provide raw samples to play. PCM16BitUnsignedData actually keeps only a portion of itself in memory; when it needs more data, PCMSoundCodec asks WaveReadWriter to read more of the source file. Meanwhile, SampledSound plays the chunks using QueueSound or something similar.
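The demand-loading step of this scenario might look like the following. Only myWavSound play appears in the text; every class, selector, and instance variable below is an assumed sketch:

```smalltalk
"Hypothetical demand-paging inside the PCM data object."
PCM16BitUnsignedData >> samplesFrom: startFrame count: frameCount
	"Answer raw samples, paging a window in from the WaveReadWriter
	 (held in the readWriter instance variable) whenever the
	 requested range is not resident."
	(self isResidentFrom: startFrame count: frameCount) ifFalse:
		[residentBuffer := readWriter readFrames: self pageSize startingAt: startFrame.
		 residentStart := startFrame].
	^ self residentSamplesFrom: startFrame count: frameCount
```

SampledSound's play loop then just keeps asking for the next chunk and queues it, so the whole file never has to be in memory at once.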

Comments: It seems iffy that the codec can ask the AudioReadWriter to read from an arbitrary file location expressed in the domain of uncompressed samples. Especially for compressed codecs, that may not be easy. We don't want the AudioReadWriter trying to interpret the file to find a location; the codec may have to help seek to a location in the file, perhaps maintaining indexes to support random seeks. This could get messy, and is one reason I think implementing disk-resident sounds should be optional for a SoundCodec.

2. Streaming MP3 over the net (there may be an implementation of this already; this is for argument's sake). Very similar to the above, but the MP3ReadWriter (which probably won't implement writing, for patent reasons) reads from a socket stream instead of disk. Random seeks aren't possible under this scheme unless you buffer the downloaded data, and then you can only seek backwards within that buffer.
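Opening the same kind of reader on a network stream could be as simple as swapping the underlying stream. SocketStream is real Squeak; MP3ReadWriter and its selectors are hypothetical:

```smalltalk
"Hypothetical: same reader protocol, different underlying stream."
| stream reader |
stream := SocketStream openConnectionToHostNamed: 'radio.example.com' port: 8000.
reader := MP3ReadWriter on: stream.
reader canSeek.	"would answer false for a live socket"
reader nextSound play
```

A canSeek-style query would let clients like a scrub bar degrade gracefully instead of assuming random access is always available.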