Sunday, February 16, 2014

Claims Testing for Google Chrome's TweetDeck Extension

I don't want to publish this blog post in 140 characters, nor do I claim that it will be. I use Twitter; my Twitter ID is @testinggarage. I have seen how Twitter has evolved across its web version, its TweetDeck version, and its Android version. Through this evolution it has changed in experience, in functionality, and in how it fits the various needs of the public world.

While doing so, it makes claims about what it now is by telling us about the changes it brings in. That is, claims are made about its present state, about the service or solution it offers, and about the experience. Now, testing has an attribute that looks at the claims a product, its service, or its solution makes to those who seek it.

I want to write briefly about what I understand Claims Testing to be, from one perspective. The next five paragraphs are about it. If you wish to go through the test report alone, scroll down the page a bit to the second-to-last paragraph. Now, back to one of my views on Claims Testing in the next paragraphs.

What is a claim, and how is it related to testing? This question started pounding me when I heard the word six years ago. Today, I understand and see it this way.

  • A product's solution or service exists. This existence itself is a claim.
  • Each element in that solution or service has its own claims to meet in delivering the intended service or solution.
  • When I look from the perspective of testing:
    • Each attribute or quality criterion of a product, when looked into, is a claim.
      • For example, the functionality of each component is a claim; the performance of each component, and of the product as a whole, is a claim.
  • I see two kinds of claims:
    • Explicit -- made by the product to those who need it, and to those who do not need it as well. For example, the requirement statements or documents; advertisements; sales and marketing campaign content.
    • Implicit -- expected by users and by anyone seeking a solution. For example, she should be able to use the product with ease once she learns what it is.
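The explicit/implicit split above could be sketched as a small claims register. This is only an illustration; the claim texts below are hypothetical examples, not claims taken from any actual TweetDeck documentation.

```python
# A minimal sketch of a claims register. Each entry records the claim
# text and whether it is explicit (stated by the product) or implicit
# (expected by users). All claim texts here are hypothetical.
claims = [
    {"text": "Columns synchronize with Twitter Web", "source": "explicit"},
    {"text": "Notifications arrive for new mentions", "source": "explicit"},
    {"text": "A new user can learn the layout with ease", "source": "implicit"},
]

def by_source(claims, source):
    """Filter claims so each kind can be tested and reported separately."""
    return [c["text"] for c in claims if c["source"] == source]

print(by_source(claims, "explicit"))
print(by_source(claims, "implicit"))
```

Keeping the two kinds separated like this makes it easier to report which explicit claims were validated and which implicit claims were recognized only during testing.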

Identifying the claims sounds like a simple job. Yes, it might be simple given the assumptions being made and the statements available about the product. But understanding the claims and actually identifying them needs patience, and the tests conducted specifically for claims need time. Because it is a time-consuming activity, I have witnessed most testers skip this testing or lose focus while doing it. Seen another way, software testing evaluates the claims of a product's purpose, existence, and activities, considering its environment.

From my little and limited testing practice experience, I can tell you that testing a product might look simple from one perspective; but when the word 'claims' is added to testing, to make it 'claims testing', it is no longer a light task from any perspective. It spans multiple dimensions, and covering all of them is impossible in the limited, fixed time we get for testing.

Then how do we get better coverage for the context when we explicitly do Claims Testing? How good is the coverage we claim for our Claims Testing in this context, and what should we consider when identifying the claims? This is a key question that misleads testers, as per my observations so far. Hence, stating the claims picked and the specific context in which they are evaluated, and getting this mutually agreed with stakeholders, helps.

As part of my testing practice, I picked Google Chrome's TweetDeck extension for claims testing. Now, the key question for me was: what will I test for claims? I picked the changes made in TweetDeck 3.5.5 and how it synchronizes with Twitter Web and Twitter's Android app.

This was not a simple one to start off. I was with a tester who observed what I was doing and questioned me about where I was heading. The practice activity became so intertwined within itself that I enjoyed it. I took two hours to identify and validate the claims. Later, in a second session, I tested these identified (explicit) claims and started recognizing the implicit claims. These observations are recorded in this report.

Though I spent around six hours on this exercise, I can still see that the claims testing could go on. But I had to stop at a point, on getting fair information based on the context's need, and I stopped after assessing the coverage accomplished.

Saturday, February 15, 2014

Plan to Digg and Explore the Digg Reader

I picked up this exercise, posted by Weekend Testing Australia New Zealand (@WTANZ_) of Weekend Testing (@weekendtesting) on 14th Sep 2013.

The mission given to the tester in this practice session is as below.

Create a test plan for and assume you were asked to test the new Digg Reader product. However the catch is: you only have 1 week to test the whole thing. If you only had one week to test the entire product, what would you test?

To proceed within the one-week constraint, I had to understand the condition of the Digg Reader product. On studying the changes and the risky areas in that day's version of Digg Reader, I came up with a master strategy and an initial plan to test the product. However, the strategy and plan keep evolving as I test.

Looking at the context of the product and the testing need, I felt functionality was where the tests should start, before covering other tests spanning different quality criteria. This report contains my session notes with the Test Plan for the given mission.