Wednesday, August 31, 2011

Productive Antagonism: Or how to complain and get paid for it

So I was at this month's IWST meeting (for those of you who don't know what IWST is, go to these links http://indianapolisworkshops.com/ and http://www.meetup.com/indy-testers), and my primary lightning talk was about Antagonism.

Specifically, I was talking about Constructive Antagonism and how it can assist in the design of software systems (note that these are not actual terms used by anyone but me, I just enjoy capitalizing words to give them emphasis).

I define Constructive Antagonism as the process of challenging ideas for a software project in order to suss out potential problems with the design. Preferably, you would want to start this process as early as possible, in order to prevent wasted effort on bad ideas, ideas that are too tangential to the main product, or ideas that just aren't feasible for the project (at least in the current iteration).

It turns out that my idea was hardly unique, as a few people at the meeting had mentioned similar ideas:

Mike Kelly shared The Six Thinking Hats group discussion tool (Wikipedia link) and pointed out that it has a hat similar to my idea; the whole system seems to be similar to what I was thinking of (people play roles in order to get new perspectives on a problem or idea).

Rick Grey shared some of his own experiences, and advised that the antagonist role should be temporary (no one wants to get the label of 'the bad guy' on all of his or her projects), and that the role requires a bit of credibility within the group where you practice it (otherwise, the role won't be respected).

We also ended up talking about potentially taking the 6 Hats idea and applying it to an actual company's project in town at one of our next meetings. I'm hoping we get to, as I'm excited to see if my idea works out.

As for my personal work with this...I try to be the antagonist during testing efforts, or when discussing bugs. I try to find the worst case that could result from us not fixing a bug, or not paying attention to a feature or feature set. It seems to produce about the right mix of results:

1. There are a number of times when the idea I challenged was at the appropriate level (the bug didn't need to move up in priority, the feature didn't need more testing), but after the discussion, I think we all felt that the issue was fairly thoroughly discussed, and we were comfortable with it, which was part of the goal. I don't ever want to feel like I accepted an answer just because it was quick.

2. There are times when the discussion resulted in a bug getting pushed up, or new testing being done (which resulted in finding some good bugs).

3. Sometimes, I just enjoy being pedantic and argumentative. I'd like to think these are rare, but my coworkers would be the best people to ask. ;)

So, my question for the readers:

Have you ever found yourself in the situation of The Antagonist, and did it help, or hurt, what you were doing at the time? (Note that this does not need to be software-related).

Sunday, August 28, 2011

Baby Bug Report

So this blog post comes from a conversation I had at SEP's game night a few weeks ago...this is what happens when you let a software tester into a conversation about babies.

Enjoy!

List of defects for Human 0.0.1 (Iteration BABY):

Note: Guys, I know it's an early build, but still, maybe we should have held off until the next iteration or two...maybe we should shelve this until iteration TEENAGER...

1. Incorrect size.

    The baby does not meet minimum sizing standards, as stated in the requirements doc. Reference HMO_SPN_SZE

2. Facial I/O port does not follow communication standards

    Even checking for localization, none of the commands given to the test unit resulted in the expected behavior (though some giggling and drooling was observed); likewise, the unit could not communicate functionally. Reference HMO_SPN_TLK
   
3. Unit fails to correctly store inputs

    Sometimes, after feeding inputs into the unit, the inputs are rejected, usually expelled somewhat violently (and messily; my workstation has needed frequent cleaning this week). Used inputs BRST_MLK and STRND_PEAS, as recommended in the documentation. Reference HMO_SPN_VMT
   
4. Unexpected outputs require DIAPER workaround

    My test unit exhibited frequent (and odiferous) unexpected outputs. Despite several attempts to debug, I have not found the source of the issue, but at least with the DIAPER workaround, I can continue my testing with a minimum of mess. Reference HMO_SPN_POO
   
5. Volume control is broken

    At various points, the unit seemed to encounter an error and emitted an audible message (see next defect for more details), and I was unable to find the configuration for turning down the volume of this warning. While I cannot find a specific requirement on this, I've made a change request...as I doubt our users wish to be deafened. See Change Request CR_CRYBBY
   
6. Audible warning message too generic

    At various points, the unit requires inputs, changing of the DIAPER workaround, etc. It emits a piercing noise for each of these. The error message, however, is generic, and should be tailored to each fault. Reference HMO_SPN_WAIL
   
7. Unit has virtually no security

    During testing, I noticed that the test unit frequently caught viruses, specifically DIAPER RASH, SNIFFLES, and COLIC. While the unit recovers, the process is pricey, requiring specialists. Reference HMO_SPN_SCK

8. Locomotion is impaired

    While trying to execute the command WALK, the unit was unresponsive (save for some drool and gurgling). I then attempted the simpler command CRAWL; still no response. I believe this to be a separate bug from #2, though I could be wrong, as I have no way to verify the unit is receiving my commands. Reference HMO_SPN_TDDL

9. Installation time is far too long

    Installation of the test unit took 9 months. Even after attempting to allocate more resources, this was still the case. During this time, the installation hardware also seemed far more sluggish and unresponsive to certain requests. The last part of the installation was the worst, with much hard drive thrashing and almost violent reactions from the hardware. Please investigate. Reference HMO_SPN_LBR

10. General lack of functionality

    After extensive testing, the test unit seems to have little to no redeeming qualities, simply sucking up resources. While I normally don't like to give suggestions, this time I do have a Modest Proposal for its usefulness in the current state...