Testing software is one of those topics that people often talk about, write about (entire books, even), and evangelize during conferences, meetups, and so on. And rightly so – testing is one of the most important aspects of building something that’s going to be used by others.
Sometimes though, I wonder if we don’t do more talking about testing than actual testing.
By that, I mean we all understand its importance and I’d venture to say that we’re relatively familiar with the tools that are available for testing, but the act of testing in and of itself is kind of a beast:
- It requires that you install additional software onto your computer
- It requires that you learn how to write tests using the above software
- It introduces more code, and thus more time, into a project, which doesn’t always translate well for stakeholders
- …and more
In short, there’s a lot working against it. I get it. Even more so, we talk about all kinds of testing – anything from unit tests to beta tests to release candidates and so on. All of these are important and they all have their place, and testing in WordPress is no different.
That said, there’s at least one method of testing that I think is applicable but rarely employed when it comes to creating themes or plugins. It’s an intermediate step of testing that I would say fits between user testing and beta testing: Use case testing (perhaps there’s a better title for it, but that’s what I have for this post).
Use Case Testing in WordPress
First, I want to be clear that this type of testing in WordPress is meant to be in addition to the usual steps. That is,
- This comes after unit tests (with various types of data)
- And this comes before – and perhaps even in between – beta tests
In short, this type of testing consists of using a spreadsheet to document all of the various use cases and features of a theme, which you then work your way through when testing the project.
As you can see in the screenshot above, I’ve got nine columns, each of which corresponds to a different aspect of any given feature:
- Feature Group (such as installation, default options, etc.)
- Feature Name
- The expected output
- The actual output that occurs
- Did it pass?
- Did it fail?
- Notes regarding this specific case
- When the issue was reported
- When (as in, which date or which build or which version) was the reported issue fixed
It’s pretty simple, isn’t it? On top of that, the methodology allows us to create a separate sheet each time we’re performing a new iteration of tests. Ideally, we’ll end up with a collection of tabs along the bottom of the spreadsheet, the first of which likely includes a lot of failures and the last of which includes nothing but passes.
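If it helps to see the structure concretely, here’s a minimal sketch of the nine-column sheet as code. This is purely illustrative – the `UseCase` class, field names, and `summarize` helper are my own naming, not part of any WordPress API – but it shows how each row maps to the columns above and how each iteration (each tab) can be tallied for passes and failures:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class UseCase:
    """One row of the test sheet; fields mirror the nine columns above."""
    feature_group: str               # e.g. "Installation", "Default Options"
    feature_name: str
    expected_output: str
    actual_output: str = ""
    passed: bool = False
    failed: bool = False
    notes: str = ""
    reported: Optional[str] = None   # date/build the issue was reported
    fixed: Optional[str] = None      # date, build, or version the fix landed in

def summarize(sheet):
    """Count passes and failures for one iteration (one sheet/tab)."""
    passes = sum(1 for case in sheet if case.passed)
    fails = sum(1 for case in sheet if case.failed)
    return passes, fails

# A hypothetical first-iteration sheet: one pass, one open failure.
sheet_1 = [
    UseCase("Installation", "Activate plugin",
            expected_output="No errors on activation",
            actual_output="No errors on activation",
            passed=True),
    UseCase("Default Options", "Defaults saved on first load",
            expected_output="Default settings written on activation",
            actual_output="No default settings saved",
            failed=True,
            notes="Defaults never persisted",
            reported="build 0.1.0"),
]

print(summarize(sheet_1))  # (1, 1)
```

In practice the spreadsheet itself is the tool – the point of the sketch is only that the first tab’s summary will skew toward failures and the last tab’s toward passes.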
Who Does This?
In terms of who should do this, I think there’s value in having both yourself and an objective – or as objective as can be – third party.
When you test it yourself, you see your work through the lens of how others will be experiencing it. This is true, at least, to some degree. This helps not only expose the issues that others will encounter, but it also helps you experience the frustration they’ll have when trying to get something to work.
And if you can get help from someone who didn’t build the project, then that’s all the better. Ideally, they should be someone technical enough to know how to navigate around whatever it is you’re having them test, but they don’t have to be a programmer (especially because those types tend to say, you know, “Here’s what the bug is and here’s how I would fix it.” As if the latter matters right now. :).
It’s Too Simple
Yes, it’s a simple way of testing work. It’s even a little tedious, but it’s worth it. It’s something I’ve had done on numerous occasions with several different projects, and I’ve never regretted it.
There’s likely more that could be added to this approach, but if you’re not doing something like this at the bare minimum, then I think it’s worth starting somewhere – even with a foundation as simple as the screenshot above.