Last updated at 4:10 pm UTC on 14 January 2006
I have an idea to make Squeak a bit simpler to use, especially for newbies. Some of you may have seen the infantile Resource framework I posted a couple of months ago. I was extremely excited and suffered from premature posting :) I am still excited about the idea but more pragmatic about the approach, so I have decided to fill in the blanks with expert assistance and mentoring (that means you).

I feel that Squeak suffers from a fragmentation of classes and could do with a tidy-up. I am sure most, if not all, of you agree. That is why we have a project for refactoring the kernel, one for Morphic, as well as others I don't know about.

I am proposing another one:

Resources ("Codec" is another, maybe better, term for the classes I will describe in a sec)

A "resource" is some object encoded on a data source external to Squeak: MP3, OGG, JPEG, MPEG, PNG, hypertext, rich text, Word documents, Illustrator files, etc.

The problem I see is that getting access to a "resource" is easy for some types and not so easy for others, and most types have different interfaces. ImageReadWriter is great but doesn't go far enough. Types are usually set up to read streams, but again sometimes in an inconsistent manner, which can make code that writes a JPEG to a file unable to write that same JPEG to an HTTP socket. Yes, I know there is a way to do most things with the existing classes, BUT there is no common way.

As an application developer (writing a new game, say) I want to be able to reference some address and get a Form back, simply and easily. The same goes for any sounds I want, or video, or my instructions page. I don't want to have to learn a bunch of other classes just to get my resource object back (lazy, aren't I?). I don't care that my object is a JPEG file on a hard disk, I just want it! :) I don't want to have to make sure I have closed files, opened them read-only, etc.

So to satisfy the lazy rude application developer:

An address is a URI. I know that will turn some people off and probably start a religious war, but I think if it is done right most people won't even know it's a URI. I am sure there will be problems in the details, but I am confident they can be worked out reasonably simply.
I already have a simple prototype which maps file URIs to the OS file system. I figure if it is good enough for Apache, it's good enough for a file system :)
Oh, and every type of Codec that works on some sort of directory of other resources will conform to the same spec. Remote directories should be NO different from a directory on the local machine.
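To make the addressing concrete, here is a minimal sketch of how a file: URI could resolve to a binary read stream. ResourceURI and its messages are hypothetical (part of the proposal, not existing Squeak API); only FileStream is real Squeak:

```smalltalk
"Hypothetical sketch: ResourceURI and its #scheme/#path messages are
 proposed names, not existing Squeak classes."
| uri |
uri := ResourceURI fromString: 'file:///home/alice/sounds/ping.ogg'.

"The file scheme maps straight onto the local file system,
 here via Squeak's existing FileStream:"
^ (FileStream readOnlyFileNamed: uri path) binary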

With a URI, you ask for a Codec (or Resource) which can map the resource returned from the URI to a Squeak object. And of course the reverse mapping holds too: SomeObject -> Codec -> URI.
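From the lazy developer's point of view, that could look like this. Everything here is a sketch of the proposed interface; #resource and #writeTo: are illustrative message names, not anything that exists yet:

```smalltalk
"Sketch of the proposed developer-facing API. ResourceURI, #resource
 and #writeTo: are all hypothetical names."
| form |
form := (ResourceURI fromString: 'file:///pics/logo.jpeg') resource.
"#resource finds a Codec for the data behind the URI and answers a Form"

"And the reverse: encode the object back through a Codec to another URI"
form writeTo: (ResourceURI fromString: 'http://example.org/upload/logo.jpeg')
```

The point of the symmetry is that the same Codec that decodes a JPEG stream into a Form also encodes a Form back into a JPEG stream, regardless of whether the far end is a disk file or an HTTP socket.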

In simple terms, an external source is an OS file, socket, etc. that can return a stream of binary data. To transform that binary data into something else, you apply one or more Codecs to the stream.
Also note that objects are only encoded or decoded when requested, so a request for an object from a URI results in the source being opened, read, and closed before anything is returned. Obviously optimisations might change that, but the user need not know; the user just gets the resource and doesn't care whether the source is open, closed, or partially read.
Streaming would change this model slightly and is a function for phase 2 !!
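The open-decode-close contract above could be captured in one template method on the proposed Codec class. All names here are hypothetical; the `ensure:` idiom is standard Squeak and guarantees the source is closed even if decoding fails or returns early:

```smalltalk
"Sketch of the lifecycle a Codec would honour. Codec, #openReadStream
 and #decodeStream: are proposed names, not existing Squeak API."
Codec >> decodeFrom: aURI
    | stream |
    stream := aURI openReadStream.     "source opened only on demand"
    [^ self decodeStream: stream]      "decode and answer the whole object"
        ensure: [stream close]         "source always closed before returning"
```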

Codecs would read a stream of data (not necessarily raw bytes) and output decoded objects. It makes sense to me that Codecs could be “stacked” to produce the data required.
For example, what people know as OGG files are actually a Vorbis bit stream embedded in Vorbis packets, embedded in OGG pages, embedded in an OGG file. To take the example further, the OGG file is embedded in an OS file.
So to decode an OGG file, I would need the following chain of objects:
OSFile <-> OggFile <-> OggPage <-> VorbisPacket <-> VorbisBitStream <-> SoundBuffer
The OggPage and OggFile stages are needed for storing Vorbis bit streams in an OS file, but they are not needed for RTP packets, since RTP provides its own framing and error correction. So to decode a Vorbis bit stream from RTP, I need:
RTPSocket <-> VorbisPacket <-> VorbisBitStream <-> SoundBuffer
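Stacking could be expressed by letting each Codec wrap the previous stage and present itself as a stream to the next. Every class name below is hypothetical; this is only a sketch of the chaining idea:

```smalltalk
"Sketch of stacking Codecs. All class names are proposed, not existing;
 each stage reads the stage below it as a stream."
| chain |
chain := OSFileCodec on: '/music/track.ogg'.
chain := OggFileCodec on: chain.        "strips the OGG container"
chain := OggPageCodec on: chain.        "reassembles pages into packets"
chain := VorbisPacketCodec on: chain.   "frames the Vorbis bit stream"
chain := VorbisDecoder on: chain.       "decodes to PCM samples"
^ SoundBuffer fromStream: chain

"For RTP the container stages simply drop out of the chain:"
^ SoundBuffer fromStream:
    (VorbisDecoder on: (VorbisPacketCodec on: (RTPSocketCodec on: aSocket)))
```

Because each stage only depends on the stream protocol of the stage beneath it, swapping OSFile for RTPSocket changes nothing in the Vorbis stages.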

Codec subclasses could include Text, Images (PNG, JPEG), Sound, Video, Zip, UUEncode, etc.

Codecs should be able to tell whether a stream is of their "type". For example, an Ogg codec needs at most 52 bytes to tell whether the stream starts with a valid Ogg header.
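Sniffing could be a class-side test that peeks at a bounded prefix and rewinds. The class and message names are illustrative; the "OggS" capture pattern at the start of every Ogg page is real:

```smalltalk
"Sketch of content sniffing. OggFileCodec and #matches: are hypothetical;
 every real Ogg page begins with the 4-byte capture pattern 'OggS'."
OggFileCodec class >> matches: aStream
    | header |
    header := aStream next: 4.          "peek at the magic bytes"
    aStream skip: -4.                   "rewind so decoding can start fresh"
    ^ header = 'OggS' asByteArray
```

A resolver could then walk the registered Codec classes, asking each `matches:` in turn, so the application developer never names a format at all.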

I think Traits would help a lot, but I don't want to be distracted at the moment. If Traits are going to be in 4.0 then I would port it.