Testing is not Repetitive – Part 1

When I first started my career in Software Testing (about 7 and a half years ago now), one of the most common phrases I would hear was that “testing is a repetitive job”.  It wasn’t uncommon to read job descriptions with the statement “the candidate doesn’t mind doing repetitive work”.  At the time I didn’t know any better – here were people with tons of experience and skill, who knew far more about the profession than I did, so I believed it.

This belief didn’t hold true in my mind for long.  At my second job (about 6-7 months into my career) I had picked things up fast and started to work on many different types of applications.  I hadn’t yet read much about Software Testing, but I quickly realized that each type of application was used differently, that users would interact with it differently depending on its type, and that different applications may be subject to different rules and regulations.  I was working at a company developing and distributing mobile content – games, ringtones, wallpapers, mobile web pages.  As I tested these applications and their features across different projects, I realized it didn’t make sense to run the same test cases against each type of application, because the use cases for one type of application weren’t really applicable to another.  It didn’t even make sense to execute the same test cases repeatedly against the same application – while I understood the need for regression testing, I wasn’t sure how much of it was needed, when it should be done, or whether spending hours and days executing the same tests over and over was actually increasing the quality of the software.

At this point I wasn’t yet aware of the context-driven school of testing – that it was a highly skilled approach to which some great & knowledgeable Software Testers have contributed, and that there was a community of skilled context-driven Software Testers. I was aware, though, that while some testing may be repetitive (this may actually be Checking and not Testing – I won’t go into that difference in this post), skilled testing that required one to think, be creative, and learn about the technology behind the application in order to test it better was anything but repetitive.

Even after I came to this realization I would still hear that “testing is repetitive” and would often see the statement in job offers. I was even asked in a few job interviews during those years if I was willing to do repetitive work. Even though I knew I was a tester who did more than just execute tests over and over based on test scripts I wrote, I would still answer yes – at that time I didn’t yet have the confidence, skill set, experience, and knowledge to say otherwise; to hold a viewpoint of software testing that differed from that of somebody with more experience than me who believed the statement.

That was over 7 years ago, and a lot has changed – including my confidence level, skill set, experience, and knowledge.  While I don’t read through as many job descriptions as I used to, I’ll still read over a select few from the ones I do receive – and one thing I don’t see often (if ever) anymore is a statement along the lines of “repetitive work”.  I think a lot more people (although maybe not enough) within the Software/IT industry realize that Testing is actually a skilled & knowledgeable profession.

In Part 2 I’ll write about the things that made me realize early on in my career that I didn’t want to be a Tester who would execute test scripts over and over and define that as “Testing”.

Test Tools

I’ve used & learned how to use quite a few test tools over the course of my career thus far (and continue to do so), and one thing I’ve always believed is that a test tool should aid you in your testing but shouldn’t define or limit the way you test or what you test.  About a week ago a fellow Software Tester I follow on Twitter, Benjamin Yaroch, tweeted “A tool should help you accomplish a task, not dictate how the task is done.” I couldn’t agree more – but I’ll keep the scope of this post on test tools (as opposed to tools in general), and bug/defect tracking tools in particular. I think my next post will be on the SOA test tools I’ve used.

I’ve been fortunate to have worked at companies and with teams that chose test tools that aided in what we did and what we were aiming to accomplish. I’ve also seen instances where, simply because a particular test tool was available, it was used on teams or projects where it wasn’t the best fit – which resulted in the way things were done being built around the capabilities & features of the tool.

I’ve been asked “what is the best bug/defect tracking tool to invest in and use?”  My answer is that it depends on different things. Who will use it? How will they use it? Who will read the bugs? Who will update the bugs & the information in the bugs? Does the tool need to do more than track bugs?

A few years ago I was the principal software tester on a Software R&D team (one of the best teams I’ve ever had the opportunity to work with). We worked using an Agile Scrum methodology, and one of the tools we used was Rally Project Manager (which does a lot more than just serve as a bug/defect logging tool). For how we used it (user stories; associated bugs/defects; work effort and time estimates; task breakdowns; acceptance criteria; sprint planning; burn down charts; measuring velocity; etc.) I can’t think of a better tool for the purpose we needed it for. The tool aided us in what we wanted & needed to accomplish – we didn’t modify the way we worked around the tool.

Currently I’m working on a project where I’m using HP Quality Center. The tool is a great fit for what we need it for (tracking new features & associated info; screen captures, bugs/defects). In this instance Rally Project Manager would not be the best tool – it would be overkill to use it.

I’ve worked on some independent projects with a small team where we used Microsoft Excel to keep track of bugs/defects.  It was easy for every member of the team to view, read, update information. It was great for how we intended to work and use it.
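The same idea works with any plain tabular format a whole team can open. As a hypothetical sketch (the columns, file layout, and sample bugs below are my own invention, not from any real project), here’s what a minimal spreadsheet-style bug list looks like in CSV form, written and then filtered the way any team member might:

```python
import csv
import io

# Hypothetical columns for a lightweight bug tracker kept in a spreadsheet.
FIELDS = ["id", "summary", "severity", "status", "assigned_to"]

bugs = [
    {"id": "1", "summary": "Login button unresponsive", "severity": "high",
     "status": "open", "assigned_to": "dev1"},
    {"id": "2", "summary": "Typo on About page", "severity": "low",
     "status": "closed", "assigned_to": "dev2"},
]

# Write the bugs out the way a shared spreadsheet/CSV file would store them.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(bugs)

# Any team member can read the same file back and filter it, e.g. open bugs only.
buffer.seek(0)
open_bugs = [row for row in csv.DictReader(buffer) if row["status"] == "open"]
print(len(open_bugs))  # 1
```

Nothing fancy – and that’s the point: for a small team, a format everyone can already read and edit beats a heavier tool nobody needs.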

All this to reiterate: test tools should serve a purpose – aid in what you’re doing, not define how you’re doing it.


Learning with colleagues

Depending on certain variables, it can be anywhere from rare to common to meet somebody at work, on your team, who is as passionate about the craft (and about getting better at it) as you are. These variables include (among many others) the company, company culture, team, team structure, projects, communication methods, technology, and of course the colleagues themselves.

Working as a consultant you meet many people. I recently met a fellow Software Tester who is very interested in learning more about certain approaches to testing and in building on his current skills to become a better, more skilled Software Tester – very similar to myself.

One of the areas of testing he’s worked in and is getting better at is Performance Testing. Of course Scott Barber’s name came up, as we’ve both read a lot of his work and apply what we’ve learned from it where applicable. My colleague has more knowledge and skill than me in Performance Testing – it’s actually an area where I want to improve my own skill level (and I’m working towards that goal).

One technology I’ve worked with considerably over the last 4-5 years – and where I’ve improved (and continue to improve) my knowledge & skill – is testing applications built on SOA. I’ve worked with both SoapUI Pro and SOAPSonar as my primary testing tools. I’ve created and executed operational and inter-operational tests based on what I was testing and the way the back-end service would be used (or called) in a production environment by actual users. Here I have more knowledge and skill than my colleague – testing applications built on SOA is something he’s very interested in and wants to improve at.
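For readers who haven’t tested SOA services: at bottom, a tool like SoapUI Pro or SOAPSonar builds a SOAP envelope, posts it to the service endpoint, and asserts on the response. A minimal hand-rolled sketch of the first part (the service namespace, operation, and field names here are hypothetical, purely for illustration):

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
# Hypothetical service namespace and operation, invented for this sketch.
SVC_NS = "http://example.com/orders"

def build_envelope(order_id):
    """Build a SOAP 1.1 request envelope for a hypothetical GetOrder operation."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetOrder")
    ET.SubElement(op, f"{{{SVC_NS}}}OrderId").text = order_id
    return ET.tostring(envelope, encoding="unicode")

# An operational test would POST this XML to the service endpoint and assert
# on the response; here we only verify the request we'd be sending.
request_xml = build_envelope("12345")
parsed = ET.fromstring(request_xml)
order_id = parsed.find(f".//{{{SVC_NS}}}OrderId").text
print(order_id)  # 12345
```

The test tools take care of this plumbing (plus WSDL parsing, response assertions, and data-driven runs), which is exactly what lets you focus on designing the operational and inter-operational tests themselves.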

Over the next few weeks we’ve agreed to share and exchange knowledge: test approaches, technical knowledge of different tools & how to use them, the different types of applications we’ve tested, and documentation we’ve created to aid in test setup & execution. And of course there is the intersection where SOA-based tools may be used to do performance tests.

Looking forward to the learning curve ahead!


Testers Signing Off

Not too long ago, while working on a project at a company, a certain practice that was common and expected of testers caught my attention – not because it was interesting, but because it made no sense to me.  That practice was Software Testers “signing off” on a project or fixes.  Why would I sign off on a product or on fixes for defects? I’m not a Product Owner, I’m a Software Tester.  I test (explore, discover, investigate) in order to provide the product owners/stakeholders valuable information about the state of the product so that they can make informed decisions about how to proceed with product delivery.  A blog post by Michael Bolton that I had read about a year earlier immediately came to mind (http://www.developsense.com/blog/2010/05/testers-get-out-of-the-quality-assurance-business/).

As I dug deeper to understand why this was being done and where it all started, I had a few conversations with Michael, and he was kind enough to help me out and lead me down the path to finding some answers.  He suggested that instead of viewing and presenting tester sign-off to management as wrong, I should figure out what its goal was – alternatives would then arise.

This is exactly what I did, and I soon discovered that what management believed about software testing – its goal, purpose, and role – was quite different from what I believed and practiced. Management believed that the QA Team (or Test Team, as I prefer to call it) was responsible for assuring the quality of the product; that we were the gatekeepers of quality; that testers could and should assure the quality of work done by others.

At that moment I understood (understanding being different from agreeing) why testers at the company were required and expected to sign off on the product and on defect fixes.  As Michael states in his post, “it’s time for our craft to grow up.”

Pride In Testing

So here we are on Christmas morning – I’m finally writing my first blog post.  The idea and motivation to set up somewhere I could post blogs and ideas had been there for months, maybe even close to a year – I finally got started about a week ago at 2am. I couldn’t sleep and my mind was flowing with ideas – the perfect state of mind to get started.  It was fun, I learned a lot, and it was productive, although the next morning was anything but.

As I did a bit of setup each night, I thought about what the topic of my first post would be. I had tons of ideas but decided to go with one based on one of the first things a Professional Software Tester ever said to me. It was over 7 years ago, at my first job in Software Testing. I was a young kid happy to have found a job but not satisfied – I was eager to learn, to develop testing skills and strategies, to write test cases, test plans, and bug reports; I wanted to learn how to use the different software tools and applications the company used and to learn more about the products I’d be testing.

I was part of the QA Team (the Test Team, that is), since QA was an incorrect term for the work the team did and the purpose it served – a difference I learned a few years ago reading content written by Scott Barber and Michael Bolton.  The team was composed of the manager, a few test leads, and the rest of the software testers. I was new (to the job and to the industry) and fell into the latter category.  The attitude towards testers and what we did was terrible, especially towards the testers who weren’t “leads”. We were seen as the scumbags who slowed things down, didn’t understand anything, and made life miserable for everybody else. The attitude towards testing in the company was terrible. As Gerald Weinberg once said when asked what the biggest weakness is in the way companies test software: “To me, the biggest weakness is not considering software testing anything but a (barely) necessary evil. Testing is seen as something that could be done by a troop of monkeys, so serious testers are treated like third-class individuals.”

Anyhow, I was in the kitchen one morning when this particular software tester (who happened to be one of the leads) walked in and asked how I was enjoying the job so far. I replied that I was enjoying it and learning a lot (I was).  He said that was good, offered his help anytime I needed it, and said something that has always stayed with me: “There’s a lot of pride in testing”.  His behavior, his skills, his effort to help others (new testers included) get better and build their own skill sets, and the respect he garnered (from testers, developers, and project managers) all exemplified professionalism and the value he was able to bring to the team as a proud and skilled Software Tester.  I stayed at the company for 6 months, learned a lot, and then moved on to better things – and I haven’t stopped learning and getting better ever since.

www.qualitycaptain.com