    How to Choose the Right Testers - Part 2

    by Stephen - 6th October 2022


    In part 1 of this article series we spoke about why we need to evaluate potential testers, and how it's a process of exclusion rather than inclusion. I recommend you start there if you haven't read it already.

    In this second part we'll jump right into the tell-tale signs we should be looking for, grouping them into the different phases of the testing process.

    Test call

    It all starts at the very beginning, during a potential tester's first (test-related) contact with you.

    Ignoring/not reading instructions

    Every test call has some type of instruction included in it. At the very least, the test call specifies how people should apply for the test.

    Now think about how many times you've received "applications" (air quotes intentional) outside of the requested channels. Let me give you some examples:

    Scenario #1: You announce your test call in a social media post. You ask testers to fill in an online form to apply. Someone leaves you a comment on that post, or perhaps sends you a private message, that says "I'd like to test please," or my personal favourites: "Interested" or "👍."

    Scenario #2a: You send an email out to your mailing list and provide a link to an online application form with clear instructions that this is how they should apply. Someone replies to your email instead, with a vague expression of interest.

    Scenario #2b: You send out an email as above, but this time you ask for email replies. For reasons that I can never comprehend, instead of hitting the reply button, someone chooses to perform all sorts of inconvenient digital gymnastics to seek you out on social media and leave you a comment on some random post from the previous week.

    Would you consider choosing these individuals for your test group?

    Your answer should be a resounding "No."

    Of course, the same concept applies to any other instructions you might provide in the call.

    Some people can't follow simple instructions, not even when they want something from you (in this case, for you to include them in the test group).

    Why, then, would we assume that they'll be meticulous during the test, when this time it's us asking something of them: to carefully test our pattern and provide quality feedback at the end?

    This is your first filter, and it's a very good one because it reveals natural behaviour and it does so early on in the process. Plus, it requires no action on your part.

    Inattentive

    A derivative of the above situation.

    You provide certain details (such as dates, deadlines, the type of yarn you require, etc.) and people contact you to ask for the very information you've provided.

    I'm not being snobbish here, and I'm not looking down on requests for further clarification. If anything, those are good signs! Rather, it's the questions that give away that the person didn't take a minute to properly read the details you provided, or that they simply couldn't care less.

    When you specifically state that a particular type of yarn must be used in the test, you wouldn't expect people to then ask you "What type of yarn can I use?" And yet they do.

    Sometimes these questions disguise themselves as others, such as "I have such-and-such yarn in my stash. Can I use that instead?"

    It's this bad attitude (inattention, laziness, or perhaps a misguided sense of entitlement) that gives rise to eventual issues like changing the pattern without asking, ignoring mistakes or not pointing them out, and missing deadlines.

    I know it sounds petty and intolerant, but these are the small signs we should be looking for. Not because we want to be spiteful, but because they are indicative of much larger problems further down the line.

    Applications

    I'll be direct: you need to have some sort of application form.

    What details you should be asking for, and how you should ask for them, is a larger discussion deserving of its own dedicated article.

    Certain questions, and the way they're structured, each serve their own specific purposes (which we'll cover in that separate article I mentioned). In combination, they will also allow you to monitor for a few more red flags.

    Missing answers

    Optional fields left blank, or dismissive replies in required fields.

    A question such as "What types of projects have you tested before?" might not seem very useful. People tend to write what they think you want to hear, and you can't really verify it. However, the way they reply is quite telling of the type of person they are, particularly if the reply is optional.

    Do they leave it blank? You asked for that information, so it must be important to you. It's relevant to the matter at hand, not something private or personal. Why then wouldn't they want to let you know that they're experienced testers? Perhaps they haven't tested before, which is perfectly fine. Do they say that, or is the field left ambiguously empty?

    If someone were to ask you this question in person, face to face, would you just stare at them without saying anything?

    There's always context to give, and it can always be given nicely, even if it's a simple "I haven't tested any patterns before."

    Missing answers are always a red flag.

    Unnecessarily short or lazy answers

    Similar in concept to the above. Asking for testing history and receiving a vague one- or two-word reply such as "Socks" is indicative of what you can expect from that person in their testing feedback. That is, if you were to select them for your test group, which I hope it's clear by now I'm recommending against.

    Disqualifying answers

    Again, most people will tell you what they think you want to hear. So if you ask them whether they agree to respect deadlines, not to alter the pattern without consent, and to communicate with you regularly, they will almost always say that they will.

    Usually tester agreements like this are presented as a checkbox they need to tick. A formality, rather than an actual question. Instead, what if you were to give them the option to refuse? A yes/no option, rather than a required tick?

    Of course, almost everyone will say yes to each one of those agreements.

    Someone might, for example, say no to you using their photos on your social media accounts. You could be OK with that, since it's not a critical aspect of the test. Or you might not be, and if so it would be better to know that upfront, rather than discover it later just because they couldn't skip the obligatory tick box.

    On the other hand, if someone refuses to agree to respect deadlines, you might want to double-check with them in case they mis-clicked. On the off chance that it was intentional, I don't think there can be a clearer reason to exclude someone from your test.


      A threshold (prologue)

      At this point you've reached the stage where you need to select your test group. This means picking the testers you want to work with and informing all applicants of the outcome.

      Quick detour to rant on acceptance and rejection notices

      Don't be one of those designers who are "too busy" to contact applicants who have not been picked for the test.

      Too often do I see test calls declaring that only chosen testers will be contacted, and that if applicants don't hear back by a certain date, they should assume that they didn't make the cut.

      It's disrespectful.

      If this is something you do in your test calls, I would urge you to reconsider.

      People spent time applying for your test. The fact that they may have been rejected because of multiple red flags should have no bearing on this matter whatsoever. Not to mention the people you would have otherwise included in your group but couldn't, simply because there were more good applicants than places.

      If you're so busy that you can't send the simplest of canned replies, then you need to fix your workflow or stop running pattern tests. You have other options too, such as our own platform, Pattern Orchard, which can do the communicating for you. I'm not trying to find an excuse to plug our tool here; I do that shamelessly enough plenty of times, and this is not one of them. I'm genuinely saying that you must get back to every single one of your applicants.

      You don't have to give reasons for rejecting them if you don't want to. Just let them know of the outcome. Think of how you'd feel if a designer didn't contact you back. Remember that time you interviewed for that job but never heard back?

      You only need to draft a standard reply once, and you can use it in all your future tests for everyone you didn't pick. Keep it short, don't sugar-coat or faff around, and be polite.

      It's good for your brand's image and, much more importantly, it's the decent thing to do.

      Rant over. Back to our regularly scheduled programming...

      The threshold

      You've committed to your selection and made it official by informing all of your applicants. Great job!

      So is that it? Are those the much-ballyhooed red flags to which we dedicated a whole article (part 1 of this series)? We've made our selection now. What else could there possibly be left to evaluate?

      As it so happens, quite a lot!

      You don't stop evaluating your testers just because you've put together your test group. The test hasn't even started, let alone ended, not to mention any future tests these same people might apply for.

      As they say, the proof of the pudding is in the eating, and you're about to start interacting with your testers in a more direct way. You'll be experiencing their behaviour first hand. The clearest indicators are yet to come.

      The one thing that's definitely changed at this point is that any bad group selections you made are going to carry some kind of cost for your pattern test, big or small. That's the threshold you've just crossed.

      On the simpler side of things, you might just have someone who doesn't submit very detailed feedback. There are no real knock-on effects if your pattern doesn't have many issues and you otherwise have a strong group. The issue is localised to that particular individual, and is quite clear-cut.

      Conversely, things could be more extreme. A couple of testers could be causing trouble within the group and impacting other testers' experience. (In case you were wondering, you should absolutely remove such testers from the group, even mid-test. We'll get to that later on.)

      Either way, the impact could be huge, depending on your kind of test and the size of your group. Having one fewer test result could nullify your test.

      For example, it's not uncommon for certain pattern variants (e.g. larger sizes for garments) to only have one or two testers. In that case, losing one of the two is quite significant. You might not be able to find a replacement.

      The bottom line is that changes at this stage are disruptive to your test, are not without repercussions, and are going to require more of your time.

      Even so, there absolutely is value in continuing to evaluate your testers!

      First off, the sooner you pick up on and deal with a potential issue, the less it's going to impact you.

      Secondly, remember that you will have a lot of repeat applicants in future tests. So you'll be protecting those tests by observing behaviour in your current one.



      Run-up to the test

      There exists a kind of limbo of relative inactivity (depending on your workflow) between notifying your testers that they've been selected and the actual start of the test.

      Which means you might be able to make some adjustments without too much of an impact. It all depends on whether you happen to pick up on a couple of red flags that can pop up at this juncture.

      Not communicative

      Perhaps not quite a red flag. A yellow one maybe, if we can extend the metaphor a little bit.

      You ask a quick question via email, or in the chat group, and you don't hear back from someone.

      It's not a big deal, and I'm assuming you're not bombarding them with emails and expecting them to respond every time. After all, the expectation you've set is for the test to start on a particular date that you've not yet reached.

      All I'm saying is that some testers might have applied for the test not yet knowing whether they want to (or actually can) commit to it or not. Shocking, I know! 🙄

      Someone who doesn't pop up in the test's chat group, or never acknowledges an update email from you, might (and I stress it again, might) be a bit less excited to participate in the upcoming test.

      Just something to keep an eye out for, nothing more.

      Inattentive

      You may be asked for details that you've been providing since your initial test call.

      This is the same issue as inattentiveness during the test call, except at this stage it's even more egregious.

      Again, I'm not talking about clarifications, but about explicit things you've repeated often. Obvious things like the date the test starts.

      This particular flag is potentially a darker shade of crimson than the rest.

      Asking these sorts of questions now, well after applying and being selected for the test, is not just inattentive, but negligent and disrespectful.

      It might seem trivial, but it exposes an underlying attitude problem which will almost surely manifest itself in all sorts of troublesome behaviour down the line.

      A threshold (reprise)

      At this point you're about to reach the start of the test period. You're moving out of limbo and into the test proper.

      Everything that happens from this stage on will have some sort of impact. Which is why it's very important to keep your eyes open.

      The stakes for you will get higher, and your patience will start to run thin.

      Things are about to get interesting!

      We'll pick this up in How to Choose the Right Testers - Part 3.

      Stephen

