
    How to Choose the Right Testers - Part 1

    by Stephen - 22nd September 2022


    To run regular pattern tests, we need to be constantly attracting new testers. We also need to evaluate them so we can pick the right ones.

    "But I already have enough testers, and they're all great! I'm all set."

    Our pattern testing data show otherwise.

    We've spoken with dozens of designers as part of our ongoing research into the challenges of pattern testing. Issues that are the direct result of choosing the wrong testers stood out as a constant.

    The ones mentioned the most are:
    • Late feedback/ghosting
    • Inadequate feedback
    • Not following instructions
    • Modifying the pattern without permission
    • Using the wrong yarn type/weight
    These issues were consistently ranked amongst the 3 biggest pains of running pattern tests.
    • One of the top 3 problems for 96% of the designers
    • Two out of the top 3 problems for 73% of the designers
    • #1 problem for 68% of the designers

      What if you're a new designer with a limited reach? Can you afford to be picky? You're probably feeling relieved that a tester has noticed you. After all, beggars can't be choosers, right? You just know that things will be different once the ball gets rolling. Testers from far and wide will be throwing themselves at you.

      Right?

      Wrong.

      When you're just starting out, it's very tempting to accept anyone and everyone under the sun. This is a dangerous attitude to take.

      (Well, that last bit about testers throwing themselves at you might come true once you've made a name for yourself. But as we'll see below, more testers rarely means better ones. Most times it just creates extra noise.)

      Picking the wrong testers will prove costly in terms of time, effort and frustration.

      Whether you're an established designer or a newcomer, this problem is clear and ever-present, and it affects both groups alike. It just manifests differently: two sides of the same coin.

      Established designers vs. newcomers

      So how exactly do these issues present themselves?

      Known designers generally tend to have a much larger audience than newcomers. They have established a reputation and therefore potential testers see them as more reliable. Indeed, testers are also evaluating you. At least the discerning ones are, and that's the calibre of tester you want to have in your group anyway.

      The downside of that wider reach and popularity is that it also attracts individuals from the other end of the spectrum, and lots of them. That means a lot more noise to filter out. Not only do you need to be better at evaluating potential testers, you also need to be efficient at it. This article will help you do both.

      On the other hand, the newcomer will have fewer testers to sift through, but each mistake might prove costlier:
      • One person could make up a third or a quarter of the test group. That's high. A wrong fit can potentially derail the whole test.
      • Less experience in running pattern tests means having a harder time putting out fires or anticipating them before they happen.
      There is one other, very big differentiator between someone who's new to the game and those who've been around for a while.

      Over time, smart designers curate a list of reliable testers they can pull from every time they test. A tester pool takes most of the pressure off each individual test.

      However, a pool of testers tends to diminish over time if not maintained properly. Some testers stop testing, others opt out, and a few are intentionally removed by the designer. So it needs to be replenished from time to time. Additionally, you want to keep it fresh by constantly adding new people into the mix.

      Which still means, you guessed it, evaluating testers.

      A baseline understanding

      When speaking with designers, I hear a lot of (sometimes quite explicit) variations of the following:

      "Testers are unreliable."

      "I have bad testers."

      "I don't know where to find good testers."

      Unfortunately, the phrasing generally implies, or at least invites, the attitude that not only are testers the ones to blame, but that they are the only ones to blame. I understand where it's coming from and the frustration behind it. And it is complete nonsense.

      Yes, some testers are indeed inexperienced, unreliable, misguided, self-serving or downright dishonest.

      I can say the exact same thing about some designers who do not know how, or do not want to run a pattern test properly, fairly and ethically.

      There are also testers who go above and beyond, who are eager to learn, who have a great attitude, who are expert crafters, who elevate each and every test they are involved in, who rally the troops...

      Just like there are designers who are there to serve their testers, who help them at every step, show up every day, give the benefit of the doubt and assume positive intent, champion their testers...

      You find all kinds of people on both sides of the equation. There are good and bad actors, and everything in between. But this is not a blame game.

      So when talking about evaluating testers, I'm speaking from the baseline understanding that the designer is taking care of their part of the deal. As designers we should first take a look at ourselves and our processes and make sure we're a) not evil, and b) constantly trying to make the testing experience better for ourselves and our testers. If you follow this blog regularly, then you're probably already doing that. Good job! 😉 Keep checking in regularly...I've got you covered!

      Then, choosing the right testers for your pattern test becomes all about finding the right match.



      Finding the right match

      We tend to look for reasons to greenlight a tester. We try to identify some attribute or aspect that singles them out and puts them ahead of the pack. The reasoning is that in a sea of unknowns, identifying a positive trait or two tips the scale in the right direction, making that tester a safer bet.

      This is incorrect logic, and it's the reason many designers have such a hard time finding good testers.

      We fail at this task so many times (because we've essentially reduced it to a gamble) that we come to the conclusion that we're either not good at finding the right match, or that the process is too intricate and complex with too many unknowns.

      It all feels so ephemeral. Something that we can't fully understand, let alone master. A Hail Mary shot, where we try to play the odds and keep our fingers crossed that a hunch turns out to be right.

      So we become disheartened as we resign ourselves to this false reality.

      The good news is that there is a right way of doing this, and it's very simple: you should be doing the exact opposite of the above!

      At the risk of sounding like a negative nelly, evaluating testers needs to be a process of exclusion, not inclusion. Instead of identifying positive traits, what you really need to be doing is looking for reasons to rule out ill-suited testers.

      This is a simplification of Karl Popper's falsifiability principle, and an easy rule of thumb that you can use over and over again. It's cutthroat and extremely effective.

      Here's the rough theory behind it.

      It's easy to obtain confirmations for just about everything, if we look for them. Put differently, we can always find some reasons to say "yes" when evaluating a tester:
      • They take great photos
      • They're nice people
      • They were recommended
      • You like them
      • They seem experienced
      • They said such-and-such in their tester application
      All of that is anecdotal, at best. You can't really determine whether they'll be a good fit or not. They could be the best at spotting mistakes, but they might not be able to meet a deadline if their life depended on it. You simply don't have all the information at your disposal, and you need ALL of it to verify that fit.

      By contrast, you only need to find one single reason to say "no" in order to falsify the claim that a particular tester is a good fit for you.

      This is captured in an oft-quoted example: the hypothesis that all swans are white can be falsified by observing a single black swan.

      Effective designers look for problems, not strengths. If you start looking for strengths, you'll find them. Everybody has them, including the bad fits.

      There is nothing ambiguous about identifying problems. It is not a gamble. You successfully avoid testers who've already shown a propensity for causing issues, intentionally or otherwise.
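The exclusion rule above can be expressed as a very simple filter. Here's a minimal sketch of the idea: the red-flag names and applicant attributes below are hypothetical placeholders (the actual red flags are the subject of the rest of this series), and the code is illustrative, not part of any real tool. Note that the filter never looks at strengths at all; a single observed red flag is enough to rule someone out.

```python
from dataclasses import dataclass, field

# Hypothetical red flags, stand-ins for the ones covered later in this series.
RED_FLAGS = {
    "missed_deadline_before",
    "modified_pattern_without_permission",
    "used_wrong_yarn",
    "gave_inadequate_feedback",
}

@dataclass
class Applicant:
    name: str
    strengths: set = field(default_factory=set)       # deliberately ignored
    observed_issues: set = field(default_factory=set)

def passes_exclusion_filter(applicant: Applicant) -> bool:
    """Exclusion, not inclusion: one observed red flag rules an
    applicant out, no matter how many strengths they have."""
    return not (applicant.observed_issues & RED_FLAGS)

applicants = [
    Applicant("A", strengths={"great_photos", "recommended"},
              observed_issues={"missed_deadline_before"}),
    Applicant("B", strengths={"experienced"}),
]

shortlist = [a.name for a in applicants if passes_exclusion_filter(a)]
print(shortlist)  # ['B'] — A is excluded despite their strengths
```

Applicant A has two positive traits and still gets excluded, which is exactly the point: strengths are easy to find for everyone, so they carry no signal, while a single disqualifying behaviour does.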

      This does not mean that we'll be able to eliminate all chances of having issues crop up in our tests. However, by excluding testers who exhibit undesirable behaviour or traits, what we're left with are good testers and the possibility that a few bad fits may have slipped through our net.

      Which is a far cry from the other, flawed approach in which we resign ourselves to the certainty of having to deal with a plethora of issues every single time.

      Red flags

      You'll be surprised at how many things you'll be able to pick up on, from very early on.

      In the rest of this three-part series, I will walk you through all the red flags you can pick up on and where to look for them. Where relevant, I'll also point out how you can make small adjustments to your testing workflow so that it's easier for you to identify any issues as early as possible.

      Continued in How to Choose the Right Testers - Part 2

      Stephen

