Last updated at 5:25 pm UTC on 18 June 2018
Squeak 5 introduced a new object model and VM known as "Spur". Pharo and Cuis use the same new object model.
<email@example.com> Tue, May 16, 2017 at 5:24 PM
In Spur, on start-up the VM allocates enough memory for the heap plus one "growth increment" (currently 16 MB),
for new space (current default about 5 MB on 32-bit, 9 MB on 64-bit systems), and for the native code zone
(1 MB on x86, about 1.4 MB on ARM and x64).
Spur *does not* reserve address space for the heap. It requests memory for the heap in segments
(default 16 MB; controllable via a vmParameterAt:put: send),
and returns segments to the OS when GC empties them (the threshold being likewise controllable via a vmParameterAt:put: send).
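As a sketch, the relevant parameters can be read and set from the image like this. The indices used below (24 for the shrink threshold, 25 for the growth headroom) are taken from the method comment of vmParameterAt: in Squeak; verify them against the comment in your own image before relying on them.

```smalltalk
"Inspect the current heap-management parameters (indices assumed, see above)."
Smalltalk vmParameterAt: 24.    "free space above which empty segments are returned to the OS"
Smalltalk vmParameterAt: 25.    "headroom to allocate when growing the heap"

"Raise both, e.g. for an application with a large, spiky working set."
Smalltalk vmParameterAt: 24 put: 32 * 1024 * 1024.
Smalltalk vmParameterAt: 25 put: 32 * 1024 * 1024.
```

In Pharo the same parameters are reachable through the VM facade, e.g. Smalltalk vm parameterAt: 24.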
See also http://wiki.astares.com/pharo/189
Live programming, originally introduced by Smalltalk and Lisp and now gaining popularity in contemporary systems such as Swift, requires on-the-fly support for object schema migration: the layout of objects may be changed while the program is simultaneously being run and developed. In Smalltalk, schema migration is supported by two primitives: one answers a collection of all instances of a class, and the other, the become primitive, exchanges the identities of pairs of objects. Existing instances are collected, copies are created using the new schema with state copied from the corresponding existing instances, and the pairs of instances are exchanged with become, effecting the schema migration.
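The migration described above can be sketched in Smalltalk. The class and accessor names here are hypothetical; elementsExchangeIdentityWith: is the bulk form of the become primitive in Squeak and Pharo.

```smalltalk
"Sketch of schema migration, as the class builder performs it when a class
 is redefined: rebuild every OldShape instance with the new layout, then
 swap identities so all existing references see the migrated objects."
| existing migrated |
existing := OldShape allInstances.                 "primitive 1: all instances of a class"
migrated := existing collect: [:old |
	NewShape new
		copyStateFrom: old;                        "hypothetical per-slot state copy"
		yourself].
existing elementsExchangeIdentityWith: migrated.   "primitive 2: bulk become"
"Every reference to an old instance now points at its migrated copy."
```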
Historically, implementations of become have either required an extra level of indirection between an object's address and its body, slowing down slot access, or required a sweep of all objects, a very slow operation on large heaps. Spur, a new object representation and memory manager for Smalltalk-like languages, has neither deficiency. It uses direct pointers yet still provides a fast become operation on large heaps, thanks to forwarding objects, which when read conceptually answer another object, and a partial read barrier that avoids the cost of explicitly checking for forwarding objects on the vast majority of object accesses.