In the first two parts of this series we spoke about the need to evaluate tester applicants, and why. We also covered the tell-tale signs we should be looking for in the run up to the test period.
This third and final part covers all the red flags we should be prepared to observe during the test-proper, and we'll close off by taking a look at some overarching housekeeping principles that will further help us to improve our results over time.
I recommend you start with part 1 of this series if you haven't read the first two articles yet.
The lesser evil
At this point we've reached the start of the test period.
Decisions we make, or don't make, from this point forward will impact the test to some degree.
When noticing any of the issues we'll be discussing below, depending on the severity and timing of the occurrence, it's generally more beneficial to cut your losses and dismiss that tester from your test, rather than just blacklisting them for future ones.
However, many times you need to weigh this against the risk of causing a big scene in the group, which can impact morale. Of course you wouldn't intentionally call someone out publicly, for various reasons. However, even if you have that conversation in private, the tester might vent their frustrations loudly and make a dramatic exit.
All situations are different so use good judgement.
Also...and this is extremely important...no matter how grievous the problem may be, or how frustrated and angry you may feel, always always ALWAYS take the high road.
Never retaliate or be vengeful.
First of all, you are a nice person (...I hope). Asserting yourself and standing up for your interests does not mean you stop being respectful and well-mannered.
Secondly, you never know what's happening on the other side of that conversation that may have triggered the problematic behaviour in the first place. What you may perceive as a slight to your process may simply be the result of unfortunate circumstances on the tester's part.
Finally, you yourself are also being assessed. By your other testers. Make sure you're above reproach. For your own self-respect, and for the reputation of your brand.
The test period
All of these behaviours are not just red flags. They're crimson red.
Depending on when the behaviours occur, they might be grounds for immediate dismissal from the test. At the very least, the wrongdoer should not be chosen for future tests, as this kind of behaviour is often repeated.
Extremely late to start, without advance warning
Generally speaking, there's no expectation for everyone to start on day 1. This behaviour refers to negligent tardiness, where testers wait until the last few days of, say, a 4-week period to start working on your pattern.
This person is not someone you'd want on your team. Not only will the validity of their test results be dubious at best, but their late entry will have an impact on your other testers' performance as well.
Think about any meeting you've attended where someone walks in half an hour late. They interrupt the meeting with their entrance, jostle people around as they find their seat, and have a quick, not-so-hushed chit-chat with their seat neighbours as they settle in. They'd probably expect to be filled in on the things they missed, too.
The same applies to your test.
Being rude and inconsiderate is their problem, but the disruption they cause is yours.
Of course, we should account for situations where the tester lets us know in advance, before the beginning of the test period, that they expect to have a bit of a late start. Life just gets in the way sometimes. Receiving a timely heads-up is indicative of the right attitude. Timely being the key word here. Receiving such a message two weeks into the test period is not much of a forewarning.
There are ways to mitigate against extreme lateness by reducing its impact or even the chance of it happening in the first place.
For instance, you can set overt or covert check-in milestones (like asking testers to send a rough work-in-progress photo after a certain step in the pattern, or asking them to confirm the yarn they're using right before they start). You can then ping people from whom you haven't heard within a few days of the test start.
There's quite a lot you can do, and none of it requires much (or any) effort, but that's beyond the scope of this particular article. Leave me a comment below if this is something you'd like to learn more about.
Making unapproved modifications to your pattern
This is an extremely big deal!
A test is run for a particular design, which could have multiple pattern variants, but all of which would be very specific. A test also has particular targets and a collective goal.
Changing the pattern would, at best, make that tester's output completely invalid and waste everybody's time. At worst, it could cause confusion within the test group and mess up the entire test.
Whether testers are paid or not has no bearing on this issue (not that I want to open that particular can of worms right now). Being a volunteer does not give a tester carte blanche to take these kinds of liberties.
Any changes a tester might want to make to your pattern, whether justified or not, should first be run by you and are subject to your approval.
Anything short of that is unacceptable.
Not communicating issues in a timely manner
By timely I mean as soon as reasonably possible. The sooner the better.
What this translates to is testers sharing their discovery on the test's chat group, or directly to you via whatever communication channels you have set up so that you, in turn, can share it with the other testers.
That way everyone can be on the lookout for it, saving them the frustrations of having to frog too much of their work to correct the issue in case they missed it. This is especially true if the problem is not immediately obvious in the written pattern as it's being worked, but becomes apparent at a later stage when things start to not add up.
With an early enough warning, you would even have enough time to correct the mistake and make any necessary design adjustments. You would ideally release a new pattern version to your testers to make sure everyone has and is working with the latest, correct version.
That is good. It's also the reason why not communicating issues in a timely manner is such a big problem.

Pattern Orchard lets you create multiple pattern variants (e.g. UK and US terms, multiple garment sizes, translations into different languages) and manage them separately. You can upload your updated pattern and the system will organise and track each version. This ensures your testers are directed to the latest pattern while allowing access to previous versions if necessary.
Unfortunately, you won't know that someone failed to speak up until long after the fact, when it's too late. That's because the red flag you're looking for is a tester reporting something they noticed in their end-of-test comments/feedback, but never did raise the issue during the test period itself.
Even then, you have to be careful. It's not a definite indicator that that tester neglected to report the issue. If someone else raised the issue with the group earlier, then they wouldn't have had much reason to speak up. Mentioning the issue in their test comments might just be them confirming that they, too, noticed the issue. Which makes it a good thing, not a bad one.
You can only be certain in the worst circumstances, which is when someone reports an issue in their test feedback which no one else picked up on during the test. This means it was not addressed during the test, plus your testers are not picking up (or reporting) issues. Hence the worst case scenario.
What adds insult to injury is that you would probably have explicitly asked your testers to speak up the moment they notice something isn't as it should be. If you don't already do this, make sure you start!
This highlights the importance of picking up on red flags as early as the test call, which I covered in part 1 under the subheading Ignoring/not reading instructions, as they're all symptoms of the same underlying behaviour.
And if you thought it stopped there...
What can exacerbate the matter is when a tester notices an issue and doesn't just fail to report it, but takes it upon themselves to alter your pattern to "make it work". Now we also have the knock-on effects of altering the pattern, which we've covered above.
I abhor this issue with a passion. Such a simple act of omission that is hard to detect or prevent, but that can be so detrimental to your test and your testers' time.
The best defence against this is to make sure you have experienced makers in your test group. This somewhat lessens the risk of having an unreported mistake, as surely someone else would have picked up on it at some point.
Then again, wouldn't you be aiming for experienced testers anyway? And if you're not an established designer, you might not necessarily have that luxury either.
The best you can do is salvage what you can from the situation, part of which is making sure you never choose that tester again for a future test.
Horrible, horrible issue to have. A darker shade of crimson, this one. Purely because of how convoluted and elusive it can be.
Test conclusion and submission of comments/feedback
Ghosting
Ghosting is the colloquial term commonly used to refer to situations where a tester abruptly cuts off contact without any explanation or prior notice.
It also implies unresponsiveness when reaching out to re-establish communication. So just because someone is uncommunicative, don't immediately jump to the conclusion that they've run out on you. You should first assume positive intent, as many times something may have come up beyond their control, or they just got distracted.
Sure, a small heads-up would be nice, but that's a separate issue and there may be other factors in play that you may not be aware of. A pattern test ranks low on the totem pole of priorities when compared to personal or family matters, as indeed it should be.
So reach out and have a bit of patience. Often you get a response and everything gets back on track.
Sometimes though, the wall of silence could indeed be malicious. Well, perhaps malicious is too strong a word, but we're deep into dark red territory now, so let's go with that.
The more experienced you get with identifying and acting on red flags earlier in the process, the less frequently you find yourself being ghosted.
Such people can normally be weeded out earlier, as deserting you in the middle of a test is often the result of an underlying attitude or character trait that would have manifested itself earlier on in the process. Furthermore, someone who intentionally plans on ghosting is not generally the type of person who is very communicative, nor diligent with their application details.
Still, it does happen from time to time, even to the more established designers among us.
Motivations for ghosting vary. That person could just have been trying to get a free pattern, or maybe they got fed up or frustrated part-way through the test.
Nevertheless, the effects and implications of ghosting are similar to those stemming from unapproved changes to your pattern. Likewise, ghosting should be grounds for permanent exclusion from future tests.
At all times, be graceful. Communicate privately and do not create a scene. Certainly never, ever name and shame.
For clarity, ghosting is not the same as someone letting you know, well in advance of the test deadline, that they have to back out. Although frustrating, it should be noted as a positive behaviour. You should thank the tester for being upfront and for their (hopefully) timely communication. It is behaviour that you want to encourage. Your positive reaction is what builds trust and reputation, leading to a good sense of community. And make no mistake, word travels around!
Naturally, a last-minute message telling you they can't finish the test is not what I'm referring to here. Which brings me to the next red flag...
Flaking out
Just a colourful way of saying that a tester does not complete the test.
It's a superficial variation on ghosting where rather than going radio-silent, the tester lets you know that they'll be unable to finish.
This excludes genuine cases of personal issues for which you are given appropriate advance notice that the tester cannot continue.
Rather, flaking out tends to happen towards the end of the test period, and usually involves someone who did not make much progress earlier on (see the previous section on extremely late starts, which can be an early warning sign for this situation, making it somewhat preventable).
Typically, you get a flimsy excuse or a simple "Sorry, but I have to back out from the test,"
which is rich considering it's sometimes sent to you the day before the deadline.
Otherwise, flaking is just like ghosting, including its impact on your test outcome. Your response should be the same, too.
Meaningless feedback
"Everything OK."
"No problems found."
"Beautiful pattern. Thank you!"
That's what we all love to hear. Except when it's the only thing we hear.
Post-test comments should give you some insight into what was wrong and what could be improved. They're meant to be constructive feedback to help you improve your pattern and fix mistakes.
Even if we did a fantastic job of writing and thoroughly checking our pattern, I find it hard to believe there wasn't one single thing which could have been done better.
If something was already discussed in a chat group during the test period itself, you should still expect comments on it at the end of the test. If not by everyone, at the very least by the tester/s who brought up the issue in the first place. You should communicate this upfront to set proper expectations and avoid misunderstandings.
It may very well be that someone wants to avoid confrontation, or is acting under the misguided notion that they shouldn't be negative. This does not give you the value you need. Their intentions may be good, but you need testers who give you actionable feedback you can use to improve your pattern (or future patterns).
Depending on the situation and the degree of detail (or lack thereof), you might not feel like you should exclude someone from future tests just for lazy feedback. At the very least you should lower your internal rating for that tester and, in future, make sure to prioritise those who give detailed, useful feedback over those who just say "Good."
After all, what's the point of the whole testing exercise if you don't get any results back?
This is all so overwhelming. Is it worth it?
Don't be intimidated by the volume of information. It is meant to provide context and an understanding of the implications of picking the wrong testers.
Putting it all into action is really quite simple. You just have a list of red flags you need to look out for.
You've (hopefully) read all three articles in this series, you understand why it's a process of exclusion not inclusion, you know each of these red flags has a reason for being there, and what those reasons are. If you accept those reasons, all you need going forward are the red flags:

Test call
- Ignoring/not reading instructions
Run up to test
- Missing answers
- Disqualifying answers
- Not communicative
The test period
- Extremely late to start, without advance warning
- Making unapproved modifications to your pattern
- Not communicating issues in a timely manner
Test conclusion and submission of comments/feedback
- Ghosting
- Flaking out
- Meaningless feedback
They're quite intuitive and will quickly become second nature. If you ever have any doubt about what each means, you can always come back to these articles to refresh your memory.
Putting it all in motion
Firstly, use a tester pool! You'll have a list of pre-vetted testers, assuming you regularly maintain and curate your pool. That means an established audience that sees your calls, whom you already know, have a relationship with, and don't need to spend time evaluating. It also means fewer unknowns, and therefore less risk.
Secondly, effective designers track whom they've rejected from previous test calls, so they don't get past the filter in future tests. So keep track of whom you've previously worked with even if they're not in your tester pool, or mailing list, or whatever you want to call that special list of trusted testers.
Have a list. It could even be identical in structure to your tester pool, except it's a list of other, known testers. Keep notes on them. Mark those you've blacklisted or said no to, and why.
Don't just (and I'm quoting lots of designers here) "delete them from your list."
That's not only ineffective, it's a sure-fire way for someone to slip through the net next time round.
Make this list easy to use and accessible, then use it!
Pattern Orchard can help you with that.
Thirdly, keep referring to the list of red flags at every step of your evaluation process.
You can bookmark this article or copy the list to your computer or phone and keep it somewhere accessible.
Better yet, print it out! No, I'm not a luddite. You just can't argue with the immediacy of a printed list on your desk that you can just glance at, versus having to fish it out from some digital folder somewhere.
Finally, use a tester pool. Yes, this is the same as point #1. It's just that important.
As always, you can reach out to me at any time if you ever have any questions or need some help or advice. I'm here to help!

Stephen