No, no, I'm not being negative on testing; in fact, for the most part I am positive about testing. But what I want to talk about is a "negative test", where the desired result is a failure. Charles Miller calls it an Anti-Test, but that sounds a little too much like the Anti-Christ. Negative tests are a Good Thing™, and the way he describes them is almost exactly how they should be done.
In a previous job I maintained a rather complex bundle of code in which I had the nasty habit of fixing one bug only to break something else, often something I had fixed before. The code was complex, and I often set it down for months at a time while I went and chased the latest sales demand of the week. But every so often an actual paying customer would be scheduled to bring a new shipment of code online, and certain bugs QA found would have to be fixed.
So I developed my own little test suite and seeded it almost entirely with tests drawn from bug reports, often negative tests: things like "this shouldn't take more than 500ms." Creating a test case that came up red was often the first step. For bugs where the report was a positive statement, i.e. "Application Does Foo when you go Bar, Baz, Bif", I would write a test that did exactly that and detected the bug. The last step was to invert the final return value, or, if there were other code branches that would trap the bug not being there, to call fail("adfadfg").
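A minimal sketch of the timing-style negative test, in Python's unittest (the original may well have been JUnit, given the `fail(...)` idiom, but the pattern is the same). Here `process_order` is a hypothetical stand-in for the slow code path from the bug report; the assertion goes red while the bug exists and green once it is fixed:

```python
import time
import unittest


def process_order(items):
    # Hypothetical stand-in for the application code under test.
    return {"status": "ok", "items": items}


class Bug500msTest(unittest.TestCase):
    def test_negative_latency(self):
        # Negative test: the bug report said this call took more
        # than 500 ms. Assert the bad behaviour is absent.
        start = time.monotonic()
        process_order(["widget"])
        elapsed = time.monotonic() - start
        self.assertLess(elapsed, 0.5, "regression: call exceeded 500 ms")


if __name__ == "__main__":
    unittest.main()
```

Because the test encodes the bug report directly, it doubles as a regression tripwire: if a later "fix" reintroduces the slowdown, this test fails immediately.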
These tests can then be named something like BUG_12345_Negative to show (a) that the test checks something that shouldn't be happening and (b) where to look for verification. Standard fare for QA pros, I am sure, but I spend most of my day ogling loops, XPaths, and diffs.
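The invert-the-detection step above can be sketched the same way, again in unittest; `reproduce_foo_via_bar_baz_bif` is a hypothetical reproduction of "Application Does Foo when you go Bar, Baz, Bif", and the class name follows the BUG_12345_Negative convention so the bug tracker entry is easy to find:

```python
import unittest


def reproduce_foo_via_bar_baz_bif():
    # Hypothetical reproduction of the reported behaviour.
    # Returns True while the bug is present; False once fixed.
    return False


class BUG_12345_Negative(unittest.TestCase):
    def test_foo_no_longer_happens(self):
        # First write the positive detection, then invert it:
        # the test passes only when the reported behaviour is absent.
        if reproduce_foo_via_bar_baz_bif():
            self.fail("BUG 12345 regressed: Foo happened after Bar, Baz, Bif")


if __name__ == "__main__":
    unittest.main()
```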