Here are some guidelines to help you decide how many participants will be enough to get reliable insights while keeping your analysis manageable.
On our paid plan, there's no limit to the number of participants you can have for each study. If you're on the free plan, remember that you can still get useful insights from the 10 participants you can recruit.
Please note, the number of participants is capped at 20,000. If you reach 20,000 participants, your study will be automatically closed.
For Treejack and Chalkmark, 50–100 participants per task will give you trustworthy data (the more the better)
For Treejack and Chalkmark, aiming for 50–100 completed studies is likely to give you trustworthy data. From what we've seen, the trends in participant responses start to become clear at around 50 participants.
The more participants you recruit from this point, the more confident you can be that the data is representative of your users.
Keep in mind that 50+ completed tasks is the ideal — if you get fewer than 50 completed tasks, you'll still get plenty of actionable insights when you dig into the results, especially when looking at participant paths in detail.
In Treejack, the smaller the confidence interval, the more accurate the results
In Treejack task results, you'll see a line on the Success and Directness scores that represents the confidence interval, and by clicking the + you'll see the lower and upper limits. The smaller the confidence interval (the shorter the line), the more confident you can be in the statistical accuracy of the result.
For example, 9 people completed the following task with a 44% success rate. As you can see, the line is quite long: the lower limit is 19%, and the upper limit is 73%.
In contrast, 108 people completed this task with a 43% success rate, and the confidence interval is much smaller: the lower limit is 34% and the upper limit is 52%:
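To get a feel for why the interval shrinks as participant numbers grow, you can compute it yourself. The sketch below uses a Wilson score interval (a common choice for proportions from small samples) with hypothetical success counts of 4/9 and 46/108, which match the percentages above; this isn't necessarily the exact method Treejack uses, but it reproduces both examples closely.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a success rate.
    z=1.96 corresponds to a 95% confidence level."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 9 participants, 44% success (4 of 9): wide interval, roughly 19%-73%
lo, hi = wilson_interval(4, 9)
print(round(lo * 100), round(hi * 100))

# 108 participants, 43% success (46 of 108): much tighter, roughly 34%-52%
lo, hi = wilson_interval(46, 108)
print(round(lo * 100), round(hi * 100))
```

Note how twelve times the participants narrows the interval from 54 percentage points to 18 — the same success rate becomes far more trustworthy.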
For OptimalSort, 30–50 participants makes for manageable analysis (but don't be afraid of more)
Running an open card sort with OptimalSort is a generative exercise: the results give you lots of ideas for how you could label and organize your website content. As such, collecting the kind of quantitative numbers you'd aim for with Treejack and Chalkmark may not be your main goal.
Also, keep in mind that the more participants complete your card sort, the more complex your analysis can become. This is simply because narrowing down the most effective structure from 30 different suggested categorizations will probably be easier than narrowing it down from 200.
If you want to gather as many suggested categorizations as you can, though, don't be afraid to recruit more than 50. It will take longer to standardize categories, and you may not see a definitive pattern as clearly, but you'll still have access to more ideas and a wider cross section of the population, which you may find invaluable.
If you want statistical significance per your demographic population
While we provide error calculations for Treejack, you may want to base your participant numbers on your own calculations. The number of participants you need for statistical significance depends on the size of the population you want to survey, and the confidence interval (margin of error) and confidence level you are comfortable with. If you don't have a set population size, the confidence interval and confidence level alone will do.
For example, if you want to test the 500 users on your mailing list, and will accept a margin of error of 10% and a confidence level of 95%, then you will need 81 people to complete your tasks. Or, if you don't have a set population size, and are happy to accept a confidence interval of 15%, you'll need 43 people to complete your tasks.
You can calculate the ideal sample size by using a simple online calculator.
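If you'd rather see the arithmetic behind those numbers, the standard sample-size formula (with a finite population correction when you know the population size) reproduces both examples. This is a sketch assuming a 95% confidence level (z = 1.96) and the most conservative response proportion, p = 0.5:

```python
import math

def sample_size(margin_of_error, population=None, z=1.96, p=0.5):
    """Minimum completed responses for a given margin of error.

    n0 = z^2 * p(1-p) / e^2, with a finite population correction
    applied when the population size is known. p=0.5 gives the
    most conservative (largest) sample size.
    """
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    if population is None:
        return math.ceil(n0)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# 500-person mailing list, 10% margin of error, 95% confidence:
print(sample_size(0.10, population=500))  # 81

# No set population, 15% margin of error, 95% confidence:
print(sample_size(0.15))                  # 43
```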
How many participants to invite so you get enough completed studies
Once you've decided the minimum number of participants you want for your study, you can then start recruiting.
If you recruit people with specific demographics through our recruitment service, then go ahead and invite the number of participants you've decided will be enough for your study. What you order is what you get.
If you're recruiting people via your own channels, it's a good rule of thumb to invite more participants than you need — sometimes many more, depending on the channel. For example, response rates are notoriously low for email invitations (as low as 5% or less), so if you want 100 completed responses, you may need to send 2,000 invitations or more.
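The arithmetic is simple enough to sketch — divide your target by the response rate you expect from that channel and round up (the 5% figure here is just the illustrative rate from the example above):

```python
import math

def invitations_needed(completed_target, response_rate):
    """Rough number of invitations to send, given a target number of
    completed studies and the expected response rate for the channel."""
    return math.ceil(completed_target / response_rate)

# 100 completed responses at a 5% email response rate:
print(invitations_needed(100, 0.05))  # 2000
```

If your channel's response rate is uncertain, plan for the low end — extra completed studies only strengthen your data.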