As I write on this Blog on a (somewhat) regular basis, I have only one topic that I consider to be truly off-limits, and that is politics. It’s not that I couldn’t write about politics if I wanted to (to the contrary, I could probably write way too much about politics), but I make a conscious effort to avoid it as much as possible, mostly because there’s way too much of the stuff going around these days. Besides, I’m pretty sure that if I started writing about what I really think about some of the politicians we’ve got these days, 65% of my audience (Editor’s note: Completely made-up statistic) would think I’m a lunatic and go find a different Blog to read. But ultimately, that’s all beside the point.
Another thing I don’t write about here very much is what I do for a living. Sure, I may throw something about things I’ve been working on (dubious though they may be) into the occasional post here and there, but by and large I’ve avoided writing much about my job. This isn’t for any particular reason; it just doesn’t normally occur to me to blog about my job very often. I’ve actually been giving some thought to changing that and putting together a series of posts that explains some of the basics of how software gets tested, but I haven’t quite gotten around to starting that one yet. I’m sure I’ll be getting to that one at some point (noting, with some chagrin, the number of things that I was supposed to be “getting around to” on this Blog years ago that haven’t been touched yet) but I thought for the time being, I’d write a little bit about a situation that I encountered at work today that might be instructive to consider.
As you might know, in my employment I work as a Software Quality Assurance Engineer, which basically means that it’s my job to test things. Currently, I am working with a team that is developing a new version of our company’s advertising platform, which is designed primarily to facilitate ad services for mobile applications and websites that require them. In particular, I have spent the past several weeks doing tests on a new component of the product which generates reports that are used to supply statistical data to some of the various users within the system. Although this process has included some time spent testing the user interface along the way, mostly I have been directly testing the underlying webservice that takes requests and returns the raw data. This particular webservice uses REST (Technobabble warning on that link) which basically means that I put in a really long URL with all the parameters of the report I want back, and assuming I did everything right when generating the request and the server is reasonably happy at the time, I’ll get a (potentially very lengthy) page of raw XML with all the requested data in it. From that point, one of a number of different web interfaces takes the raw data and parses it into a (comparatively) much more user-friendly report.
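To give you a rough idea of what that looks like in practice, here’s a minimal sketch of the request-and-parse cycle I described above. The endpoint, parameter names, and XML layout are all made up for illustration; the actual service has its own (and considerably longer) versions of all of these.

```python
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

# Hypothetical report endpoint -- not the real service's URL.
BASE_URL = "https://ads.example.com/api/reports"

def build_report_url(base, params):
    """Assemble the (potentially very long) request URL from the report parameters."""
    return base + "?" + urlencode(params)

# Hypothetical parameter names, just to show the shape of the request.
url = build_report_url(BASE_URL, {
    "report": "impressions",
    "start": "2010-06-01",
    "end": "2010-06-30",
    "format": "xml",
})

# A tiny made-up sample of the kind of raw XML such a service might return.
sample_response = """<report>
  <row date="2010-06-01" impressions="1042" clicks="37"/>
  <row date="2010-06-02" impressions="998" clicks="41"/>
</report>"""

# The front-end's job, in miniature: parse the raw data into something usable.
root = ET.fromstring(sample_response)
total_impressions = sum(int(row.get("impressions")) for row in root.findall("row"))
print(total_impressions)  # 2040
```

In real testing the interesting part is comparing that raw XML against what the request parameters say it should contain, but the basic loop is just this: build a URL, fire it off, pick apart what comes back.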
Although it seems like a bit of a contradiction to say it, in a way I find it a lot easier to test things at this low level than it would be to test the same thing through a more proper user interface, mostly because things tend to be a lot more predictable at this level than they would be if you’re looking at the same data through some sort of front end. In short, there are a lot fewer things that can create problems for you, so if you’re seeing something that doesn’t look right, it generally isn’t necessary to look too far in order to find the problem and fix it. To make a long story short, over the past few weeks I’ve familiarized myself with this report webservice, and created and executed a test plan and a bunch of test cases for it. I’d say that I’ve been through the whole thing pretty thoroughly by now, and I have a pretty good idea of what is and isn’t working, and what needs to be done to fix whatever isn’t working. Ultimately, a QA tester’s goal for any particular piece of software is to be able to develop this overall picture of the quality of the system being tested, which can then be used to make an informed judgment call on whether or not the system in question is suitable for the task at hand (note that it isn’t the job of a QA tester to ensure that the system is perfect, but that’s a topic for another post).
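If you’ve never seen what a test case looks like at this level, here’s a stripped-down sketch. The parsing helper and the XML shape are invented for the example (the real product’s schema isn’t shown here), but the idea is exactly the predictability I was describing: a known input should always produce the same known output, and the test simply checks that it does.

```python
import unittest
import xml.etree.ElementTree as ET

# Hypothetical helper -- the function name and XML attributes are
# made up for illustration, not taken from the actual product.
def parse_report(xml_text):
    """Turn a raw XML report into a list of plain dicts, one per row."""
    root = ET.fromstring(xml_text)
    return [
        {"date": row.get("date"), "impressions": int(row.get("impressions"))}
        for row in root.findall("row")
    ]

class ReportParsingTest(unittest.TestCase):
    def test_rows_parse_in_order(self):
        xml_text = ('<report>'
                    '<row date="2010-06-01" impressions="1042"/>'
                    '<row date="2010-06-02" impressions="998"/>'
                    '</report>')
        rows = parse_report(xml_text)
        self.assertEqual(len(rows), 2)
        self.assertEqual(rows[0]["impressions"], 1042)

    def test_empty_report_yields_no_rows(self):
        self.assertEqual(parse_report("<report></report>"), [])

if __name__ == "__main__":
    unittest.main(argv=["report_tests"], exit=False)
```

Multiply that by a few hundred cases covering every parameter and every way a request can go wrong, and you have a rough picture of what the last few weeks of my working life have looked like.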
So after all this planning and testing, I’m reasonably confident that I have a good handle on the reporting webservice that I’m working on, and where we stand with it. There’s just one little problem here: Last week, my test lead informed me and a couple of other testers who have been working on this project that we would be making a presentation to the rest of the QA organization that I’m a part of, to give them a better idea of what we’re working on, and it will be my responsibility to discuss the stuff I’ve been working on and what it’s used for. I had put this on the back burner for a while until the subject came back up today, with further clarification on what I would need to be able to discuss. It was at this point that I came to the sudden realization that even though I’ve been testing this webservice for weeks and I’ve familiarized myself to a significant degree with how it functions, I had hardly any idea of what any of the stuff I was working on was actually supposed to be used for. In all the time that I have spent doing functional testing on this particular webservice, somehow I had managed to go the entire time without giving more than a passing thought to its overall purpose.
Fortunately, in this case it wasn’t too difficult to go back and do some reading to fill in some of the gaps with the information I will need for the presentation, but as I think about this more, I begin to realize that it’s an easy trap to fall into. When you’re an engineer, whether you’re designing refrigerators, cars, rockets or software, one of the fundamental principles you rely on is that the things you design must behave in predictable ways. By the same token, when you’re a software tester you need that predictability in order to be able to determine what kind of output you are supposed to get out of a system when you put a particular type of input into it. The trouble with this approach is that, as noted above, it can become very easy to develop a very narrow focus on one thing, and manage to completely miss the big picture in the process. This isn’t necessarily a bad thing, as sometimes you need to employ a strategy like this to be able to solve a problem without getting too distracted by the other things surrounding it, but there are definitely drawbacks to the approach.
By their very nature, computers can only do exactly what you (or some programmer) tell them to do, and they can only repeat those instructions verbatim, for better or for worse. This degree of predictability is vital for both programmers and testers to do their jobs. One of the things you learn quickly when you start testing software is that when you file a bug and submit it to a developer, that developer won’t be able to do much with the bug unless the issue being described is something that can be reproduced on a consistent basis. You have to rely on this predictability in order to be able to determine whether or not a system is working in the manner it is intended. But at the same time, as a tester you also need to be able to think like someone who will actually be using the software that you’re testing. And it can be surprisingly easy at times to lose sight of that fact, and end up in a situation much like the one I encountered above. Fortunately, in this case it ended up not being that big of an issue, but it is something that as a software tester you do need to watch out for.
As I have noted above, eventually I plan to start making a series of posts that try to explain in layman’s terms how software gets tested. I imagine for most people reading this Blog it’s one of those things you never really cared much about, but hopefully I might be able to get someone to learn a few things in the process.