Last updated at 11:10 pm UTC on 8 May 2017
Metacello repository: https://github.com/HPI-SWA-Teaching/Scamper
Current development head: https://github.com/HPI-SWA-Teaching/Scamper/tree/dev
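A hedged example of loading Scamper from the repository above with Metacello. The baseline name 'Scamper' is an assumption based on the repository name, not confirmed by this page:

```smalltalk
"Hypothetical Metacello load script; the baseline name is an assumption.
 #onConflict: allows the load to proceed past version conflicts."
Metacello new
    baseline: 'Scamper';
    repository: 'github://HPI-SWA-Teaching/Scamper';
    onConflict: [:ex | ex allow];
    load.
```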
Scamper architecture: https://github.com/HPI-SWA-Teaching/Scamper/wiki/Scamper-Architecture
Rendering pipeline (data flow)
Installation into Squeak 5.1 http://forum.world.st/How-do-I-install-the-Scamper-browser-td4945136.html
Installation into Squeak 6.0a http://forum.world.st/Scamper-in-Squeak6-0a-17230-td4945217.html
Display of simple tables works fine now. See Scamper-HTML-Morphs and HTMLTableMorph.
Scamper is a simple web browser which runs in Squeak. Currently it supports:
- a super kawaii name
- basic forms, including Swikis, Yahoo, and AltaVista
- most text emphasis tags
- HTTP redirects
Some things that don't work:
- serious frames
- some forms (including Amazon's, for some reason)
- META-based reloads and redirects
Enough is supported that you can browse around on the Web. Images with transparent backgrounds even look pretty cool.
But still, why use a Squeak browser at all, instead of Netscape? Two reasons:
On a deep level, Squeak is a nice place to be. The code and the objects are all there, easily accessible if you ever want to get at them. A Squeak user isn't limited to just looking at the pretty rendered HTML pages, but can get directly at the underlying objects which structure them; they can move within the web, instead of just looking at it through glass. Sure, most of the time users are content to just read; but when they want to do something more, shouldn't they have the opportunity?
On a pragmatic level, if you already do most of your work within Squeak, it would be nice if you could do mundane things like look up Web documents within Squeak, too.
The Improve Scamper page
The most current version of Scamper is in the Squeak release. There is no separate installation currently available. Run Scamper by choosing "open..." from the world menu and then selecting "web browser".
Here is an overview of the design, for anyone interested in web hacking.
- Network-URL category
- parsing absolute and relative URLs, and downloading Web documents through an abstract interface. Also includes a "MIMEDocument" class which combines data with its content-type and source URL.
- HTML category
- The biggest part. It includes four main subparts:
- HtmlTokenizer: divides a text stream into a sequence of HTML tags, text (handling character reference entities) and comments.
- HtmlParser: takes results from the above and generates a tree of HTML entities which is more convenient to manipulate.
- HtmlFormatter: takes result of the above and generates a formatted Text object suitable for display.
- FormInput hierarchy: handles the set of inputs from a Form.
- a simple Web browser with a mousey name. Display is handled via the pluggable views.
- I'm not sure that the simple click and wait cycle is the best way to navigate the web. But then I'm not exactly sure what to replace it with! In any case, it is hoped that one can write replacements for the browser and still find use in the HTML and URL support classes.
- ....dumb dee do....
- One possibility is a true website browser. By this I mean a browser rather similar in structure to the Squeak code browsers. What if I could go to a site (or a page) and get a list of the pages pointed to, the pages' titles, and a few lines of text to let me see if I would want to go to that page? Further, what if I could get links off of a page and have them become a quick list on my screen and be able to jump to those pages through the links without the irritation and delay of going back to the original page? Still another possibility: get a page, see a link, or links, that look interesting and select them: while I read the original page, my browser downloads the selected pages in the background so they are ready for me in my local cache. – Dwight Hughes
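The tokenizer-to-parser-to-formatter pipeline described above can be sketched roughly as follows. The selectors used here are assumptions for illustration, not Scamper's actual API:

```smalltalk
"Hypothetical sketch of the rendering pipeline (data flow).
 The selectors #on:, #parse:, and #format: are assumptions."
| tokens tree text |
tokens := HtmlTokenizer on: aDocumentStream.  "stream -> tags, text, comments"
tree := HtmlParser parse: tokens.             "tokens -> tree of HTML entities"
text := HtmlFormatter format: tree.           "tree -> formatted Text for display"
```

Each stage consumes the previous stage's output, so a replacement browser (as mused about above) could reuse the tokenizer and parser while swapping in its own formatter or display.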
Scamper was originally put together by Lex Spoon.
David J. Pennell has been working on dynamic HTML generation, and pointed out the existing Document Object Model at http://w3c.org/DOM/. Feel free to jump in and explain this better!
Hey, this is a Swiki after all. Any comments are quite welcome.
I just grabbed a copy of Squeak 3.4 (latest update #5170); I tried to load up http://meta.wikipedia.org/ in Scamper and found myself confronted with an exception. It seems that htmlEntity valueOfHtmlEntity: doesn't handle hexadecimal... Although a complete Smalltalk newbie, I managed (after much trial and error) to get it hacked in (yay!). I'm sure my code is a bit ugly, but it seems to work.
In-progress fixes: http://leuksman.com/smalltalk/Brion's%20Scamper%20fixes.2.cs
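For readers curious what the hexadecimal fix involves: numeric character references like `&#x41;` use base 16 while `&#65;` uses base 10, and both denote $A. A rough sketch of the distinction, not the actual patch:

```smalltalk
"Hypothetical decoder for a numeric character reference.
 'body' is the text between '&#' and ';'; not Scamper's real code."
| body code |
body := 'x41'.
code := (body first asLowercase = $x)
    ifTrue: [Integer readFrom: body allButFirst readStream base: 16]
    ifFalse: [body asNumber].
Transcript showln: (String with: (Character value: code)).  "prints 'A'"
```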
One thing annoying about Navelgator is the series of dialog boxes you get if you want to approve cookies before they are accepted. As an alternative, I propose a "cookie browser." The bottom of the page could contain a list of recently received cookies, with a check box by each one so the user can indicate whether they accept that cookie for sending back to the server. As cookies arrive, they get stuffed in the cookie browser. The default behavior (send or not send) could be an easy user-controlled switch.
Scamper Removal and Install Packages