SUnit Test Notes
Last updated at 3:58 pm UTC on 15 February 2017
Some Common Errors
"NonBooleanReceiver: proceed for truth."
The expression being asserted evaluated to something other than true or false.
self assert: nil
Usually this can be resolved by ensuring the results are Boolean.
self assert: nil isNil
All test cases show up as errors and you don't get a Debugger window.
An exception in the setUp method will result in all test methods showing up as errors, and each test method will show up twice in the error listing. Even more confusing, when you click on one of the test methods you don't get a debugger window with an exception; you get nothing at all.
To solve this problem, wrap the setUp code in an exception handler, something like this:

	[...all set up code here...]
		on: Exception
		do: [:each | Transcript show: ' '; show: each asString; cr]
This will show what kind of exception is being raised. By successively narrowing the scope of the block [...] you can localize and resolve the exception.
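For example, a whole setUp can be wrapped this way (the PatchArchive fixture is borrowed from later in these notes; the exact fixture code is only an illustration):

```smalltalk
setUp
	"Wrap all fixture creation so any exception is logged to the
	Transcript instead of turning every test into a silent error."
	[aPatchArchive := PatchArchive new]
		on: Exception
		do: [:each | Transcript show: ' '; show: each asString; cr]
```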
The same test passes, fails, or errors on different executions.
This is the worst of all possible worlds for SUnit testing. Reliable feedback (pass or fail) is essential. There are three kinds of conditions that cause this to happen:
- Timing or race conditions within the test. See the last part of SUnit Tests Asserting Morphic State for one such example.
- Tests that compromise each other's independence. We try to ensure that running one test never has an impact on another; this is why setUp and tearDown run for every test. One potential area for compromising this is the use of TestResources: if anything in your TestCase affects the state of a test resource, you compromise the independence of your tests. I once had an Archive resource where I did "aRepository loadListing: self listing intoArchive: aPatchArchive" as part of my test case setUp, and each test I ran gave me stranger and stranger results.
- Tests that (implicitly) rely on some resource or image condition. This usually happens when you are first setting up a test. For example, the test assumes some preference is set one way, or that you are running on a particular platform. These bugs are also likely to reveal themselves when somebody else runs your tests; my first FileResources assumed everybody ran on Windows.
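A minimal sketch of this image-condition fragility, reusing the listing idea mentioned above (the test selectors here are hypothetical):

```smalltalk
testListingFragile
	"Fragile: hard-codes the Windows path delimiter, so this
	passes only when the image happens to run on Windows."
	self assert: (self listing first includes: $\)

testListingPortable
	"Better: ask the image for its own platform delimiter."
	self assert: (self listing first includes: FileDirectory pathNameDelimiter)
```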
Where the Debugger shows something like:
^ D....Test methodsFor: 'testing' stamp: 'tlk 10/16/2004 12:38'
You are trying to file in a change set with methods for "D....Test", but class "D....Test" does not exist in the image.
Some Common Questions
Instances of test fixtures not getting garbage collected
After running an SUnit test, I see a large number of instances in my image when, in the Browser, I click on the class list, right-click, select 'more...', and then select 'inspect instances'.
The number of instances you get is equal to the number of test methods in any testResults that the TestRunner is still hanging onto. Right-click on one of the instances in the inspector and select 'chase pointers' to see the owner. To get rid of the instances, just close the TestRunner. This is seldom a problem unless you open many TestRunners and execute TestCases with a large number of test methods and a large number of instance variables.
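The same check can be done in a workspace (substitute your own TestCase subclass for ArchiveMemberTest); after closing the TestRunner, the count should drop to zero:

```smalltalk
"Evaluate with print-it after running the tests."
Smalltalk garbageCollect.
ArchiveMemberTest allInstances size
```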
Some Nice Conventions
The following method illustrates two conventions that I try to follow when writing tests for Squeak. The first is a comment that allows a single test to be executed with a double-click on the line between the quotes and a print-it. See How to set up SUnit Tests so they can be run without a GUI for more details.
The second is that I always have a testSetUp method. This does two things: it verifies that the fixtures have been created as intended, and it lets me and the reader understand the basic access methods of the class being tested, methods that are not usually explicitly tested otherwise. When creating an SUnit test for a class that I'm not familiar with, I usually create this method first. I almost always discover that something isn't quite what I thought it was.
	testSetUp
		"ArchiveMemberTest run: #testSetUp"
		self assert: (aZipStringMember isKindOf: ZipArchiveMember).
		self assert: aZipStringMember contents = 'a member created for'
Another thing I often do is include an openInWorld in my testSetUp. If you do an openInWorld as part of setUp, you can get dizzy from all of the morphs flashing up; but if you don't do an openInWorld anywhere, tests of GUI applications just feel awfully abstract.
setUp and tearDown (1): Morphs
Unlike languages with explicit constructors and destructors, Smalltalk SUnit tests do not usually need to make much use of tearDown. There is, however, one case where you will want to use tearDown explicitly to help with garbage collection: when you create Morphs, even if you do not do an openInWorld, you should delete them.

	setUp
		aPatchArchive := PatchArchive new
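As a sketch of the pattern (the EllipseMorph fixture and the aMorph instance variable are hypothetical):

```smalltalk
setUp
	"Create the morph fixture; openInWorld is optional."
	aMorph := EllipseMorph new.
	aMorph openInWorld

tearDown
	"Delete the morph so that it, and everything it references,
	can be garbage collected."
	aMorph delete
```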
setUp and tearDown (2): Elements added to system collections
Another case where you will want to use tearDown explicitly is to remove objects that your test adds to system-level collections. You may think they do no harm, but clean up after yourself. In the following example, taken from ReviewerPostTest, the SUnit test would continue to run even without the tearDown; however, you would be left with a series of unwanted change sets, unnamed1, unnamed2, etc., one for each test you ran.
In the test method:

	aChangeSet := ChangeSet new.
	aReviewerPost addChangeSet: aChangeSet.
	self assert: aReviewerPost changeSets size = 1

and in tearDown:

	ChangeSorter removeChangeSet: aChangeSet
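Put together, the whole case might look like this (ReviewerPost and its protocol are taken from the example above; the nil check in tearDown is my own addition, so cleanup stays safe even if setUp fails part-way):

```smalltalk
setUp
	aReviewerPost := ReviewerPost new.
	aChangeSet := ChangeSet new

testAddChangeSet
	aReviewerPost addChangeSet: aChangeSet.
	self assert: aReviewerPost changeSets size = 1

tearDown
	"Remove the change set from the system-wide list so the image
	is not littered with unnamed1, unnamed2, ... after each run."
	aChangeSet ifNotNil: [ChangeSorter removeChangeSet: aChangeSet]
```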
Why should morphs be deleted at the end of a test? -Lex Spoon
In my image I have a package named TestUtilities which has a subclass of TestCase called ClassTestCase. It has a selector #testCoverage that I am interested in using, if it gives me a test-case code-coverage metric. Are there any usage docs/examples? Google has failed me :-) -SUnit Test Notes