CAST2013 Tutorial

After spending almost the entire year looking forward to it, I finally attended not only my first conference but my first CAST! The first day consisted of four full-day tutorials. Attendees were given the choice to attend any one of them; they all looked intriguing, but the one I chose to register for was End to End Agile Testing with Paul Holland – and I was not disappointed. What I learned and got out of the tutorial is actually one of my highlights for this year's CAST. Now I've spoken to Paul before via twitter and email, and he had also informed me early on about a public RST class (completing RST is one of my goals) he was teaching in Ottawa earlier this year, which unfortunately I was not able to attend due to timing issues – but this is the first time I've actually met Paul in person. I will say this – Paul is an awesome teacher! Aside from the tutorial I also got a chance to speak with him about some of my goals as a tester and how I can go about accomplishing them, and he was able to give me some great insights. I also got a chance to joke around with him for a bit – he is one funny individual!

Getting back to the tutorial – there's one term that I feel has been ingrained in my head, and that is PCO, which stands for Product Coverage Outline. I work on many different projects at my company, and I am usually asked to create & document a test strategy. I have to create a test strategy that will be looked at by many different people (Product Owners, Project Managers, Developers, Architects, Systems Development Managers, Test Managers) with different technical knowledge & understanding, and different understandings of what software testing is or what I (the tester) actually do. This can be amplified even more at a big company such as where I work right now, where the application might interact with many other components and, due to the number of levels and teams who have worked on the different components, many people have a different understanding of where and what the risks are. I don't create 50-page documents explaining my test strategy (despite some in the company who may believe that is what needs to be written); instead I have been creating test strategies consisting of a short document divided into a few sections, and a diagram depicting the application and related components (which went over well) – but the challenge still remained: how can I do this better, in a way that different people with different understandings will better comprehend?

The PCO

I got my answer and learned how during the tutorial – the Product Coverage Outline. We were split into teams based on where we were sitting (more on this below). Paul explained to us (and then had us practice it in our teams during the tutorial) how a PCO can be created and used in different ways for different purposes – to explain testing, to guide testing, to divide testing amongst the team (if that's your aim), to show test progress (along with visual tracking using a whiteboard), to show test coverage, and as part of the final test report. We started off creating our own PCO, although during this stage my team and many others had started finding bugs and logging them into the shared bug list. The PCO would then help us focus our testing based on risks and features. Before lunch each team presented and explained their PCO in a timespan of 3 minutes.

Testing the application and filing bugs

After lunch Paul explained Test Reports and what was expected from them, and then we started (well, officially started) to test the application and log bugs. We had about 2.5 hours (which went by extremely quickly) to discuss the test strategy with our respective teams, test, file bugs and create the test report that each team would then present. One challenge here was to do all these things in the time allocated – a testing challenge we all face in the real world. Another was to find and file bugs while ensuring another team had not already filed them; this eliminated duplicate entries, and furthermore the team that filed a bug first would receive credit for it.

Presenting the Test Report

The final part of the tutorial was each team presenting their Test Reports. My team and I were nowhere near ready, as I had just finished writing the Test Report when we were called on as the first team to present. I presented the 5-minute test report for my team. Now this was nowhere near my best presentation – not even close, but I know I can do better because I have done better. More importantly though, Paul gave me some insight on how I can do better in areas I had not been aware of – how I can better deliver my test report verbally, telling a story about the status of the product, how we tested it, and how good that testing was.

Tough lesson learned

This being my first conference and my first tutorial of this type, I did learn a tough lesson on this first day. It has nothing to do with Paul, the content, or the tutorial itself. I will say this to anybody new to a conference and tutorial of this type – try to sit in the front with people who seem excited and passionate about being there (which can be difficult to do, as you've never seen or met these people before). I feel I made a mistake by, for some reason, sitting towards the back – now this is solely my opinion based on my experience, and it doesn't necessarily hold true for everybody or every time. There are different types of people who come to conferences for different reasons, with different levels of passion for testing and different goals or things they want to get out of something like this – which is fine, I guess. But for me, I want these types of sessions to require me to really think and learn how to think; I like them to challenge my thinking and my way of doing things. In team exercises I like working with passionate people with different ideas who want to share, learn, and push their thinking just like me – unfortunately this wasn't the case (teammates not on the same page goal-wise) for me during the tutorial. Despite this tough lesson, I was still able to learn, apply what I learned, and get feedback on how to improve, and I'm looking forward to using it on my projects at work and getting better at it through practice.

For the remainder of the conference, in every session and talk I attended, I sat in one of the first three rows – learned a ton, asked the speakers questions and had a blast! 🙂

Going to CAST2013

So it's now official – I'm going to CAST2013. Earlier this week I finalized my travel arrangements to and from CAST, which is being held in Madison, Wisconsin this year. I had booked my hotel about a month and a half ago and registered for the conference itself early this year, shortly after registration had opened – before any type of schedule or sessions had been posted. This will be my first time attending CAST. Prior personal commitments prevented me from attending CAST2012 in San Jose, California last year. I'm not sure what to expect in certain regards at the conference, but I do have a better idea now thanks to a blog post Erik Davis wrote about some of the things first-timers attending CAST can expect.

I do expect to learn – a lot! That's one of the main reasons I am going – to learn, to challenge myself, to listen to other software testers and learn from their experiences, to get ideas and modify those ideas if need be to apply them to my own testing projects with their own context, and to learn and get better from the tutorial I've enrolled in, from the speakers, the talks and the keynotes. I'm looking forward to meeting a lot of the other testers with whom I've had some great discussions via twitter, direct messages, and emails. Looking forward to meeting some smart & skilled testers who have taken their own personal time to help me out with feedback and advice on how to deal with and approach different test-related scenarios (from speaking about testing to management, to approaching testing under different circumstances). Looking forward to meeting some of the testers I've worked with this year for the Test Competition. I am also looking forward to meeting, learning from, and exchanging ideas with testers whom I've never had a chance to interact with yet.

Needless to say I am looking forward to the conference and what it offers, and expect to have a few blog posts covering different topics once I return.

Test Competition

On April 19th at 10am Eastern Time, the day and time had finally arrived for the NRG Global Test Competition. Matt Heusser posted the competition rules and off we went. A few weeks later, after a good amount of time and effort spent judging, comparing reports, and discussing, the results of the competition were posted (you can read them here). This was the first time I had worked with Matt, as well as with the other volunteers, including Jason Coutu, Smita Mishra, and Lalit Bhamare, among others. I spent a good amount of time involved with the test competition and every minute of it was worth it – first and foremost because I had fun, and furthermore because I learned a great deal working with the other volunteers in setting up the test competition and from the competition itself.

Matt first started floating the idea of organizing a test competition on twitter in early January. We had our first meeting via Google Hangouts in mid-January. Some of our meetings were held in the evening (Eastern Time), which worked out well for me as my mind was more than warmed up and flowing with ideas and thoughts after a day at the office. Other times our meetings were held in the mornings (7am Eastern Time), as some of our teammates are in the India Standard Time zone – it was much more challenging to get the mind warmed up and flowing with ideas before that morning coffee 🙂

It was a great learning experience being involved as a volunteer & test judge for the competition. We discussed what we wanted to do – and then potential solutions (the how-to part). For example: how would we communicate with the participants during the competition to answer questions? Where would participants log bugs? What did we want to consider when grading, and how would we use our grading scale? Working with the team to determine all of this was great; I learned a lot from it and now have a good range of knowledge to perhaps organize a similar event at the office – and have the team as external test judges (which would be awesome). I also had an opportunity to use and familiarize myself with Telerik TeamPulse in the weeks leading up to the test competition – this was the tool the participants used to log bug reports.

For the competition itself, we had 17 teams registered from four different continents. Teams varied in terms of the number of team members and the location of members within the teams. Teams were given 3 hours for the functional portion of the competition and had scheduled time-slots over the course of the weekend for the performance portion. Teams had 4 different websites to choose from to test. Some teams chose to execute tests on all 4 websites, while other teams chose to test a select 1, 2, or 3 of the 4 choices. Now the challenge here wasn't just to read the rules, ask questions, select websites to test, coordinate with team members, implement a test strategy, do the actual testing, log bug reports, and write test reports – it was doing all of this (and possibly even more for some teams) in 3 hours! This is an actual and real challenge we as software testers face every day – we don't have all the time in the world (and often very little time) to test, so we have to choose what we test, and how, wisely.

I reviewed every bug report and test report that was submitted at least twice, comparing the bug reports logged and the content in those reports to our grading scale, and to bug reports logged by other teams. I reviewed how well and clearly the test reports were written, and how valuable the information in the test reports was for me reading as a stakeholder. I went into the websites and tried to reproduce a lot of the bugs that were logged, following the repro steps provided. I was very impressed with some of the bug reports and test reports we received, and that those teams were able to produce and submit this information in a timeframe of 3 hours. I even got a few ideas from one or two test reports that I may be able to apply to certain applications I write test reports for.

Having fun and learning – for me, being involved in the test competition as a volunteer & test judge, these two factors went hand in hand. As Matt mentions in the Test Competition Results post, it's the rest-and-regroup stage for the time being, but I am looking forward to what comes next!

Congrats to all the winners – well done!

A Skilled Software Tester

A few months ago an acquaintance of mine asked me what I did for a living, and I replied “I'm a Software Tester … A Skilled Software Tester”. There was a noticeable pause between the two parts of my answer – mostly because I feel there's sometimes a false belief amongst people who don't know any better that being a Software Tester doesn't require an individual to be very skilled (this is something I believe is changing as the community of skilled software testers continues to grow, learn, promote & share skills, challenge old ways of doing things, and focus on testing that has value).

I continued my answer with a brief explanation of what I meant, without going on for an extended period of time and making it seem like a speech or a lunch & learn. I explained that I don't spend my testing time doing things “the old way” (I actually found a better term for it thanks to Keith Klain – more on this in a separate post): filling out templates, spending hours writing and executing heavily scripted test cases, or writing 30-page test plan documents that nobody will actually read. I spend my time testing; what & how I test will depend on the application and the different situations surrounding it. I don't test everything the same way because I don't believe in best practices – I believe in good practices in certain contexts. I work with developers, project managers and other testers to provide information about the quality of the software. That was my explanation.

In the weeks that followed, I spent some time thinking about that explanation and how I could explain it better in the future. The content of my explanation was drawn from conversations I've had with many people (testers and non-testers), explaining to them that there was more to testing than templates, documents and scripted test cases – that skilled software testing and testers did indeed exist and were really good at what they did. I had spoken about testing skills and shifting focus to testing that added value to the project with quite a few people and was looking into ways to do it better. I came across a post written by Keith Klain, The skilled testing revolution …, which explains that things are changing, the old ways of doing things are on their last legs, and skilled software testing is starting to gain momentum and recognition. I've sent the post to some people, as it highlights and better explains the message I've been aiming to get across.

As I continue to learn, apply what I'm learning, and talk to other testers – all of which contributes to me becoming a better and more skilled software tester every day – I'm glad to say that I'm part of the skilled testing revolution.

A Day Full of Errors & Crashes

Yesterday was an extremely productive day for me – I got a lot done. A lot of work involving a lot of effort, considerable time and a good amount of thinking (most of the things I put my energy and effort towards require thinking).  It was a good day in terms of productivity.  I also encountered a lot of errors & crashes – but I wasn’t testing.

Now let me explain – as a Software Tester I encounter (and generate) a lot of errors, crashes, and undesirable/inconsistent/unpredictable behaviours during my investigation of an application. The errors & crashes I encountered yesterday happened while I wasn't testing. They happened while I was using different software applications to complete or perform tasks – this type of thing isn't rare for me. What is rare is that yesterday they happened while I was using different software applications on all 3 of my devices.

A software application I had been using on my laptop as a project and test management tool generated an error and crashed 3 different times – first generating an error message and then ending abruptly seconds later. The error message was displayed on the screen for barely a second – I had no time to read even a word of it. The first time it happened, the process “crippled” my entire machine and I had to restart it.

A few hours later, after I had completed my “priority tasks” for the day, I was in relaxation mode. I thought I'd search for an action movie to rent or possibly purchase on my tablet. For some reason, every time I filtered the movie selection by action movies, the app would crash on my tablet. After 3 tries I gave up – it wasn't my night to watch a movie.

A few hours after that, when I wasn't asleep as I should've been (I guess the 80 km/h winds outside had something to do with it), I decided to reactivate the iMessage service on my phone. I had turned it off earlier in the day for a particular reason. I was prompted to enter my password to activate it, and for some reason I just wasn't able to. After a few tries (I think it was about 3) I decided to give up and leave it alone until the following morning. This morning I entered the password once and was able to reactivate it within seconds.

I guess there will be days like that – I just didn’t foresee it happening while I wasn’t officially testing, and on all 3 devices.

Lesson Learned: Applying a Testing Rule to a Car Scenario

Throughout my Testing career, one of the things I learned (and learned early on) was to never report or identify possible causes of a problem without proper knowledge, based on investigation and facts, to back up my claim. This “rule” can apply to a “finding” during testing or to a defect discovered during testing. Part of my job (and approach to testing) is to explore, discover, learn about the application I'm testing and investigate it – this includes investigating behaviours and defects I come across so that I can provide knowledgeable information about what I'm reporting. I'd never identify a defect and then list possible causes with uneducated guesses – without any investigation or facts of some kind to explain why I believe something would be the cause of the problem.

While I'd never do this during any type of testing, this is exactly what I did in a scenario involving my car recently. About a week ago I identified a burning smell coming from my car after I had driven it. I've had a few cars and encountered my fair share of different problems with them, but I had never encountered this type of burning smell before. I had just changed my wheels and tires, but I knew this wasn't the issue behind the problem – the tires were the correct size and the rims weren't rubbing against the shocks or the callipers in the front or back. The days went by and the burning smell was still present. For some reason I was convinced that the burning smell was the result of either: 1 – oil burning somewhere in the engine, or 2 – an electrical wire burning somewhere.

I'm not sure what I based the possible causes on – I don't have a mechanical lift or any other type of equipment to diagnose the cause of such problems. My reasons were based on uneducated guesses. I decided to visit a buddy of mine who's a mechanic with his own shop, and I told him the problem and what I believed the causes to be. We took a road test, came back to the garage, and after he smelled the burning smell he decided to put the car onto the lift right away. It took him less than 30 seconds to identify the cause of the issue (the burning smell) – a plastic bag or some type of plastic had gotten stuck under the car and melted onto the exhaust pipe towards the front of the car.

So much for what I thought and was almost sure the causes of the burning smell were. It was a good reminder for me to perhaps apply what I apply in my testing to other parts of life.

Lesson learned.

Is “Testing” a Bad Word?

There have been a few instances where I'd be working on a task related to my testing of a feature or application and I'd overhear something that would make me ask myself, “Is Testing a bad word?”

Things like “this will have to be qa'd” or “we're going to have to quality control this application” or “we'll send this off to qa”. Once I even heard a software tester say “hey, we're not quality assurance” – I smiled for an instant until I heard the second part of the statement: “we're quality control”. While I do realize these statements are made due to miseducation or a misunderstanding of what these “software testers” actually do and the purpose they serve, it doesn't stop me from asking myself if testing is a bad word.

There are some individuals with whom I've had conversations to try to help them distinguish between the two, and maybe even enlighten them to the fact that they aren't assuring quality or controlling quality as Software Testers. I've even used some examples I've learned studying Michael Bolton's work to illustrate my point: “The role for us is not quality assurance; we don't have control over the schedule, the budget, programmer staffing, product scope, the development model, customer relationships, and so forth.” I'm often told, “well, here we are quality assurance”.

But on the other side, there are other individuals I’ve spoken to – smart, enthusiastic Software Testers who want to think when doing their job, who want to do meaningful work and have the results of their efforts serve a good purpose, who are enlightened by some of the content I’m sharing with them – I like to refer to this as the bright side.

So is Testing a Bad Word?  Hmm I guess that depends on who you ask.

Testing shouldn’t take more than …

There have been a few times in my career when somebody (a Test Manager, Test Coordinator, Development Manager, etc.) told me something along the lines of “testing this shouldn't take more than n hours or n days.” This can be conveyed (and interpreted) in two ways: we have n hours or n days to test, or you should be able to complete your testing in n hours or n days. More often than not, it means the latter. Conveyed in that manner, the statement completely disregards the fact that complete testing is not possible (I won't be going into detail on this in this post). Furthermore, the statement is often made without a real understanding of the feature to be tested. Sometimes the test manager or coordinator makes the statement without knowledge of the technical details or the risks, and without any consideration given to the details of the specific feature or application. Other times the statement is made based on what somebody else – for example a developer or business analyst – has said.

In my opinion this is similar to somebody making a personal recommendation to try a restaurant based on what somebody else has said without ever having eaten at the restaurant themselves.

Either way it leads to unrealistic expectations, inaccurate estimates, and a lot of misunderstandings & confusion (not to mention a poor understanding of Software Testing) – unless the Software Tester takes action to prevent that. This can lead to a lot of explanations, disagreements, meetings and more, which ultimately cuts into testing time. The first time I was in this situation, and every subsequent time, I've made it a point to speak up and explain to the individual making the statement what the testing for the particular situation actually entails and why the statement may be inaccurate – those with a stake in the project (Product Owners, Business Owners, Project Managers) should be aware of what was developed, what was tested, and what wasn't tested, along with relevant and valuable information discovered during testing, so that they can make the appropriate decisions regarding the feature or application.

A good understanding of what Software Testing is and what it should set out to accomplish – the mission, goals, and purpose – is something those with an influence on testing activities within an organization should have, as it may dictate how valuable testing time is spent.

Testing is not Repetitive – Part 2

In Part 1 of this topic I wrote about the experiences that made me realize that Software Testing is not repetitive.  These are my experiences from over 7 years ago when I first started my career in Software Testing.

When I began to realize that skilled Software Testing was not a repetitive job, but one that required and made use of different types of skills & knowledge – a thinking person's job – I immediately knew I wanted to become a skilled Software Tester. I didn't yet know exactly what that entailed, or how I would start my journey towards becoming one, but that didn't matter – I knew where I wanted to go, so to speak, and now I would work on getting there.

I saw two types of Testers (there are a lot more, but for the scope of this post I'll stick with two): those who wrote and ran the same test cases over and over, and those who used different testing skills, knowledge, and experience in their testing to find some great bugs and report valuable information. The latter were the ones other individuals (testers, developers, product owners) would go to for input on how & what to test. They were the ones who would go ask developers for information and find information using the different channels available to them. They didn't tell other testers what to do, but made themselves available as a good source of knowledge.

I was assigned to and chosen to test more “complex” projects – this is what led me to start testing different types of applications. By “complex” I mean applications for which there were no pre-existing test cases – applications that nobody within the Testing Team knew much about in terms of functionality or technicality. I would talk to (ask questions of) and work with developers and data analysts to determine what the application did, how it would be used and, most importantly (and a learning experience for me), how I was going to test it. I wasn't just testing the application from the UI; I was now testing the different components that made up the application. I enjoyed being assigned and chosen for these projects. Not only did I work hard to be the one the test manager chose for these projects, but I showed that I wanted to learn and was both willing and able to do so.

From this point on – about 7-8 months into my Testing career – I knew that I wanted to learn more and get better at what I did, not just sit and execute the same tests over and over – a job that a robot, or even somebody with less knowledge & skill than me, could do.

www.qualitycaptain.com