-
AI-Powered User Research: Fraud, Quality & Ethical Questions

This article is part of a series of articles focused on AI in user research. To get started, read about the State of AI in User Research and Testing in 2025.
AI is transforming how companies conduct user research and software testing. From automating tedious analysis to surfacing insights at lightning speed, the benefits are real—and they’re reshaping how teams build, test, and launch products. But with that transformation comes a new layer of complexity.
We’re entering an era where AI can write surveys, analyze video feedback, detect bugs, and even simulate participants. It’s exciting—but also raises serious questions: What happens when the testers aren’t real? Can you trust feedback that’s been filtered—or even generated—by AI? And what ethical guardrails should be in place to ensure fairness, transparency, and integrity?
As AI grows more human-like in how it speaks, behaves, and appears, the line between authentic users and synthetic actors becomes increasingly blurred. And when the research driving your product decisions is based on uncertain sources, the risk of flawed insights grows dramatically.
Here’s what you’ll learn in this article:
- Trust and Identity Verification in an AI-Driven World
- Loss of Creativity & Depth in Research
- Bias in AI-Driven Research & Testing
- Transparency & Trust in AI-Driven Research
- Job Displacement: Balancing Automation with Human Expertise
- The Risk of Fake User Counts & Testimonials
- The Ethics of AI in Research: Where Do We Go From Here?
Trust and Identity Verification in an AI-Driven World

Note: This person does not exist!
As AI gets smarter and more human-like, one of the biggest questions we’ll face in user research is: Can we trust that what we’re seeing, hearing, or interacting with is actually coming from a real person? With AI now capable of generating human-like voices, hyper-realistic faces, and entire conversations, it’s becoming harder to distinguish between authentic human participants and AI-generated bots.
This isn’t hypothetical—it’s already happening. Tools like ChatGPT and Claude can hold detailed conversations, while platforms like ElevenLabs can clone voices with startling accuracy, and This Person Does Not Exist generates realistic profile photos of people who don’t exist at all (ThisPersonDoesNotExist). As impressive as these technologies are, they also blur the line between real and synthetic behavior, and that poses a significant risk for research and product testing.
“Amazon is filled with fake reviews and it’s getting harder to spot them,” reported CNBC. And that was in 2020, before the rise of generative AI.
Across the web, on platforms like Amazon, YouTube, LinkedIn, and Reddit, there’s growing concern over bots and fake identities that engage in discussions, test products, and even influence sentiment in ways that appear completely human.
In research settings, this could mean collecting feedback from non-existent users, making flawed decisions, and ultimately losing trust in the insights driving product strategy.
That’s why identity verification is quickly becoming a cornerstone of trust in user research. Tools like Onfido and Jumio are leading the charge by helping companies verify participants using government-issued IDs, biometrics, and real-time facial recognition (Onfido, Jumio). These technologies are already standard in high-stakes industries like fintech and healthcare—but as AI-generated personas become more convincing, we’ll likely see these safeguards expand across every area of digital interaction.
For companies conducting user research and testing, it’s critical to have confidence that you’re testing with the right audience. At BetaTesting, we’ve implemented robust anti-fraud and identity controls, including identity verification, IP validation, SMS validation, a ban on VPNs for testers, behavioral analysis, and more. We’ve seen fraud attempts increase firsthand over the years, and we’ve built tools to ensure we address the issue head-on and continue to focus on participant quality.
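To make the layered approach concrete, here is a minimal, purely illustrative sketch of how checks like these could be combined into a single screening pass. The field names, threshold, and logic below are hypothetical assumptions for illustration only, not BetaTesting’s actual implementation.

```python
# Hypothetical sketch of layered participant screening (illustrative only).
from dataclasses import dataclass
from typing import List

@dataclass
class Applicant:
    id_verified: bool           # passed a government-ID or biometric check
    sms_verified: bool          # confirmed a real phone number via SMS
    using_vpn: bool             # VPN or proxy detected on signup
    ip_country: str             # country inferred from IP address
    declared_country: str       # country the applicant claims to live in
    behavior_risk_score: float  # 0.0 (human-like) to 1.0 (bot-like)

def screening_flags(a: Applicant, max_risk: float = 0.5) -> List[str]:
    """Return reasons to reject; an empty list means the applicant passes."""
    flags = []
    if not a.id_verified:
        flags.append("identity not verified")
    if not a.sms_verified:
        flags.append("phone number not confirmed via SMS")
    if a.using_vpn:
        flags.append("VPN or proxy in use")
    if a.ip_country != a.declared_country:
        flags.append("IP location does not match declared location")
    if a.behavior_risk_score > max_risk:
        flags.append("behavioral signals look automated")
    return flags

applicant = Applicant(True, True, False, "US", "US", 0.12)
print(screening_flags(applicant) or "accepted")
```

The point of the sketch is that no single signal is decisive on its own; it is the combination of identity, network, and behavioral checks that makes synthetic participants much harder to slip through.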
Looking ahead, identity verification won’t just be a nice-to-have—it’ll be table stakes. Whether you’re running a beta test, collecting user feedback, or building an online community, you’ll need ways to confidently confirm that the people you’re hearing from are, in fact, people.
In a world where AI can walk, talk, type, and even smile like us, the ability to say “this is a real human” will be one of the most valuable signals we have. And the platforms that invest in that trust layer today will be the ones that thrive tomorrow.
Loss of Creativity & Depth in Research

While AI excels at identifying patterns in data, it struggles with original thought, creative problem-solving, and understanding the nuance of human experiences. This is a key limitation in fields like user research, where success often depends on interpreting emotional context, understanding humor, recognizing cultural cues, and exploring new ideas—areas where human intuition is essential.
Text-based AI analysis tools can efficiently categorize and summarize feedback, but they fall short in detecting sarcasm, irony, or the subtle emotional undertones that often carry significant meaning in user responses. These tools rely on trained language models that lack lived experience, making their interpretations inherently shallow.
“Is empathy the missing link in AI’s cognitive function? Thinking with your head, without your heart, may be an empty proposition.” (Psychology Today)
Organizations that lean too heavily on AI risk producing surface-level insights that miss the richness of real user behavior, which can lead to flawed decisions and missed opportunities for innovation. True understanding still requires human involvement—people who can read between the lines, ask the right follow-up questions, and interpret feedback with emotional intelligence.
Bias in AI-Driven Research & Testing
AI models are only as objective as the data they’re trained on. When datasets reflect demographic, cultural, or systemic biases, those biases are not only preserved in the AI’s output—they’re often amplified. This is especially problematic in user research and software testing, where decisions based on flawed AI interpretations can affect real product outcomes and user experiences.
Amazon famously scrapped its AI recruiting tool after it showed bias against women.
“If an algorithm’s data collection lacks quantity and quality, it will fail to represent reality objectively, leading to inevitable bias in algorithmic decisions.” This research article from Nature reports that discrimination in AI-enabled recruitment practices exists because the training data is often drawn from past hiring practices that carried historical bias.
Similarly, Harvard Business Review highlighted how AI sentiment analysis tools can misinterpret responses due to an inability to understand nuances with language, tone, and idioms. This leads to inaccurate sentiment classification, which can distort research insights and reinforce cultural bias in product development (Harvard Business Review).
To reduce bias, companies must regularly audit AI systems for fairness, ensure that models are trained on diverse, representative data, and maintain human oversight to catch misinterpretations and anomalies. Without these checks in place, AI-powered research may reinforce harmful assumptions instead of surfacing objective insights.
Transparency & Trust in AI-Driven Research
As AI becomes more deeply integrated into research, transparency is no longer optional—it’s essential. Participants and stakeholders alike should understand how AI is used, who is behind the analysis, and whether human review is involved. Transparency builds trust, and without it, even the most advanced AI tools can sow doubt.
Among those who’ve heard about AI, 70% have little to no trust in companies to make responsible decisions about how they use it in their products. (Pew Research).
To maintain transparency, companies should clearly state when and how AI is used in their research and user testing processes. This includes disclosing the extent of human involvement, being upfront about data sources, and ensuring participants consent to AI interaction. Ethical use of AI starts with informed users and clear communication.
Job Displacement: Balancing Automation with Human Expertise
One of the most prominent concerns about AI in research and software testing is its potential to displace human professionals. AI has proven to be highly effective in automating repetitive tasks, such as analyzing large datasets, summarizing survey results, detecting bugs, and generating basic insights. While this efficiency brings clear productivity gains, it also raises concerns about the long-term role of human researchers, analysts, and QA testers.
A 2023 report from the World Economic Forum projected that technology adoption and automation will be a major factor in displacing up to 83 million jobs globally by 2027 – read the full report here.
However, the same report highlighted a more optimistic side: an estimated 69 million new jobs could emerge, with fast-growing roles including Data Analysts and Scientists, AI and Machine Learning Specialists, and Digital Transformation Specialists.
This duality underlines an important truth: AI should be seen as a collaborative tool, not a replacement. Companies that effectively balance automation with human expertise can benefit from increased efficiency while preserving critical thinking and innovation. The most successful approach is to use AI for what it does best—speed, scale, and consistency—while entrusting humans with tasks that demand creativity, ethical reasoning, and user empathy.
The Risk of Fake User Counts & Testimonials

AI can generate highly realistic synthetic content, and while this technology has productive uses, it also opens the door to manipulated engagement metrics and fake feedback. In research and marketing, this presents a significant ethical concern.
A 2023 report by the ACCC found that approximately one in three online reviews may be fake, often generated by bots or AI tools. These fake reviews mislead consumers and distort public perception, and when they make their way into research, they can invalidate findings or skew user sentiment. The FTC has also recently banned fake reviews and testimonials.
In product testing, synthetic users can create false positives, making products appear more successful or more user-friendly than they really are. If left unchecked, this undermines the authenticity of feedback, leading to poor product decisions and damaged customer trust.
To maintain research integrity, companies should distinguish clearly between real and synthetic data, and always disclose when AI-generated insights are used. They should also implement controls to prevent AI from producing or spreading fake reviews, testimonials, or inflated usage data.
The Ethics of AI in Research: Where Do We Go From Here?
As AI becomes a staple in research workflows, companies must adopt ethical frameworks that emphasize collaboration between human expertise and machine intelligence. Here’s how they can do it responsibly:
Responsible AI Adoption means using AI to augment—not replace—human judgment. AI is powerful for automation and analysis, but it lacks the intuition, empathy, and real-world perspective that researchers bring. It should be used as a decision-support tool, not as the final decision-maker.
AI as a Research Assistant, Not a Replacement is a more realistic and productive view. AI can take on repetitive, time-consuming tasks like data aggregation, pattern detection, or automated transcription, freeing up humans to handle interpretation, creative problem-solving, and ethical oversight.
Ethical Data Use & Transparency are critical to building trust. Companies must ensure fairness in AI-driven outputs, openly communicate how AI is used, and take full accountability for its conclusions. Transparency also involves participant consent and ensuring data collection is secure and respectful.
AI & Human Collaboration should be the guiding principle. When researchers and machines work together, they can unlock deeper insights faster and at scale. The key is ensuring AI tools are used to enhance creativity, not limit it—and that human voices remain central to the research process.
Final Thoughts
AI is reshaping the future of user research and software testing—and fast. But for all the speed, automation, and scalability it brings, it also introduces some very human questions: Can we trust the data? Are we losing something when we remove the human element? What’s the line between innovation and ethical responsibility?
The truth is, AI isn’t the villain—and it’s not a silver bullet either. It’s a tool. A powerful one. And like any tool, the value it delivers depends on how we use it. Companies that get this right won’t just use AI to cut corners—they’ll use it to level up their research, spot issues earlier, and make better decisions, all while keeping real people at the center of the process.
So, whether you’re just starting to experiment with AI-powered tools or already deep into automation, now’s the time to take a thoughtful look at how you’re integrating AI into your workflows. Build with transparency. Think critically about your data. And remember: AI should work with your team—not replace it.
Ethical, human-centered AI isn’t just the right move. It’s the smart one.
Have questions? Book a call on our calendar.
-
When to Launch Your Beta Test

Timing is one of the most important things to consider when it comes to launching a successful beta test, but maybe not in the way that you think. The moment you introduce your product to users can greatly impact participation, engagement, and your ability to learn from users through the feedback you receive. So, how do you determine the perfect timing? Let’s break it down.
Start Early Rather Than Waiting for the Perfect Moment

The best time to start testing is as soon as you have a functional product. If you wait until everything is fully polished, you risk missing out on valuable feedback that could shape development. Before you launch, there’s one crucial decision to make: What’s your primary goal? Is your beta test focused on improving your product, or is it more about marketing? If your goal is product development, iterative testing will help you refine features, usability, and functionality based on real user feedback.
Beta testing is primarily about making improvements—not just generating hype. However, if your goal is to create buzz, a larger beta test before launch can attract attention and build anticipation. This marketing-driven approach is different from testing designed to refine your product (see Using Your Beta Launch to Go Viral, below).
Make Sure Your Product’s Core Functionality Works
Your product doesn’t need to be perfect, but it should be stable and functional enough for testers to engage with it meaningfully. Major bugs and usability issues should be addressed, and the product should offer enough functionality to gather valuable feedback. The user experience must also be intuitive enough to reduce onboarding friction. Running through the entire test process yourself before launching helps identify any major blockers that could limit the value of feedback. Additionally, make sure testers can access the product easily and get started without unnecessary delays.
At BetaTesting, we emphasize iterative testing rather than waiting for a “seamless user experience.” Our platform is designed to help you gather feedback continuously and improve your product over time.
Iterate, Iterate, Iterate…

Testing shouldn’t be a one-time event—it should be an ongoing process that evolves with your product. Running multiple test cycles ensures that improvements align with user expectations and that changes are validated along the way. At BetaTesting, we help companies test throughout the entire product development process, from early research to live product improvements. Since we focus on the beta testing phase, we specialize in testing products that are already functional rather than just mockups. Testing is valuable not just before launch but also on an ongoing basis to support user research or validate new features.
Have The Team Ready

A successful beta test requires a dedicated team to manage, analyze, and act on feedback. You should have a team ready to assist testers, a feedback collection and analysis system should be in place, and developers should be on standby to address critical issues. Assigning a single point of contact to oversee the beta test is highly recommended. This person can coordinate with BetaTesting, manage schedules with the development team, and handle tester access.
We also encourage active engagement with testers, as this helps increase participation and ensures quick issue resolution. However, BetaTesting is designed to be easy to use, so if your team prefers to collect feedback and act on it later without real-time interaction, that’s completely fine too.
Align with Your Business Goals
Your beta test should fit seamlessly into your overall product roadmap. If you have an investor pitch or public launch coming up, give yourself enough time to collect and analyze feedback before making final decisions. Planning for adequate time to implement feedback before launch, considering fixed deadlines such as investor meetings or PR announcements, and avoiding last-minute rushes that could compromise testing effectiveness are all essential factors. For situations where quick insights are needed, BetaTesting offers an expedited testing option that delivers results within hours, helping you meet tight deadlines without sacrificing quality.
Using Your Beta Launch to Go Viral
For some companies, a beta launch is viewed more as a marketing event: an opportunity to generate hype and capitalize on FOMO and exclusivity in order to drive more signups and referrals. This can work amazingly well, but it’s important to separate marketing objectives from product-focused objectives. For most companies, your launch is not going to go viral. The road to a great product and successful business is fraught with challenges, and it can often take years to really find product-market fit.
Final Thoughts

Don’t wait for the perfect time to start testing. While you can use your beta launch as a marketing tool, we recommend instead focusing most of your effort on testing for the purpose of gathering feedback and improving your product. Think about your product readiness, internal resources, and strategic goals. Iterative testing helps you gather meaningful user feedback, build relationships with early adopters, and set the stage for a successful launch. Start early, stay user-focused, and keep improving—your product (and your users) will thank you!
Have questions? Book a call on our calendar.
-
Global creative agency adam&eve leads with human-centered design
Award-winning creative agency adam&eve (voted Ad Agency of the Year by AdAge) partners with BetaTesting to inspire product development with a human-centered design process.
In today’s fast-paced market, developing products that resonate with users is more critical than ever. A staggering 70% of product launches fail due to a lack of user-centered design and insight. This statistic underscores a fundamental truth: understanding and prioritizing the needs and experiences of users is essential for success.
As adam&eve works with enterprise clients to create and market new digital experiences, they have often turned to BetaTesting to power real-world testing and user research.
Understanding Traveler Opinions & Use of Comparison Booking Tools

For a large US airline client, BetaTesting recruited and screened participants across a representative mix of demographic and lifestyle criteria. Participants completed in-depth surveys and recorded themselves answering questions in selfie videos. Later, participants recorded their screens and spoke their thoughts out loud while using travel comparison tools to book travel. The BetaTesting platform processed and analyzed the videos with AI (producing transcripts, key phrases, sentiment, and summaries), and the professional services team provided an in-depth custom summary report with analysis and observations.
Sara Chapman, Executive Experience Strategy Director, adam&eve:
“Working with BetaTesting has allowed us to bring in a far more human centered design process and ensure we’re testing and evolving our products with real users across the whole of our development cycle. The insights we’ve gained from working with the BetaTesting community have been vital in shaping the features, UX and design of our product and has enabled us to take a research driven approach to where we take the product next.”
Beta Testing for an Innovative Dog Nose Scan Product

Every year, millions of pets go missing, creating distressing situations for families and pet owners alike. In fact, it’s estimated that 10 million pets are lost or stolen in the United States annually. Amid this crisis, innovative solutions are essential for reuniting lost pets with their families. Remarkably, recent advancements in pet identification have highlighted the uniqueness of dog nose prints. Just as human fingerprints are one-of-a-kind, each dog’s nose print is distinct due to its unique pattern of ridges and grooves.
Adam&eve worked with an enterprise client to develop a new app which leveraged the uniqueness of dog nose prints as a promising solution to the problem of lost pets.
BetaTesting helped organize numerous real world tests to collect real-world data and feedback from pet owners:
- Participants tested the nose scan functionality and provided feedback on the user experience scanning their dog’s nose
- The software was tested in various lighting conditions to improve the nose print detection technology
- Hundreds of pictures were collected to improve AI models to accurately identify each dog’s nose
Learn about how BetaTesting.com can help your company launch better products with our beta testing platform and huge community of global testers.
-
Sleepwave Iteratively Improves Sleep Tracking App with BetaTesting
Sleepwave earns a 4.7/5 rating in the App Store and was voted “Best Alarm App” of 2023.

The breakthrough sleep tracking app Sleepwave tests its product in the real world with real people through BetaTesting.
Problem
Sleepwave, a sleep tracking app that tracks your sleep directly from your phone, needed to validate its data and refine its product before launching in the App Store and Google Play store.
Sleepwave developed an innovative solution to track your sleep without wearing a device – by using your smartphone. Sleepwave’s breakthrough technology transforms your phone into a contactless motion sensor, enabling accurate sleep tracking from a phone beside your bed.
Sleepwave needed to test the accuracy of its technology in the real world, to see how it performed across a range of environments, such as sleeping alone, with a partner, with pets, fans, and other criteria that could affect its data.
Solution
Sleepwave turned to BetaTesting to recruit a wide range of people to test the product across multiple iterations, running week-long tests in which participants tracked their sleep using the Sleepwave app and reported results related to the user experience and the accuracy of their actual sleep each night.
BetaTesting recruited testers across a wide mix of ages and locations, and used screening surveys to find people with a variety of sleep environments.
Testers uploaded their sleep data each night, and answered questions about their actual sleep experience and how accurate the results were. They also answered a series of user experience questions related to how the app helped improve their sleep quality, and shared bugs for any issues during the week-long tests.
According to Claudia Kenneally, Sleepwave’s User Experience Manager:
“BetaTesting provides access to a massive database of users from many different countries, backgrounds and age groups. The testing process is detailed and customisable, and the dashboard is quite easy to navigate. At Sleepwave, we’re looking forward to sharing our exciting new motion-sensing technology with a global audience, and we are grateful that BetaTesting is helping us to achieve that goal.”
Results
In the first few months of using BetaTesting, Sleepwave’s product improved considerably and was ready for a public launch on iOS and Android. The series of tests helped to:
- Refine the accuracy of the app’s motion sensing across different sleep environments
- Improve the onboarding and overall mobile experience by identifying pain points
- Rapidly iterate on new features, such as smart alarm clocks, soundscapes, and more.
Claudia Kenneally, Sleepwave User Experience Manager, said:
“Based on insightful and constructive feedback from our first test with BetaTesting, our team made significant improvements to our app and added new features. We saw an increase in positive feedback on our smart alarm, and more users said they were likely to choose our app over other competitors. These results proved to us that testing with real users in real-time, learning about their pain points, and improving the user experience based on that direct feedback is extremely valuable.”
Over the past 2 years, Sleepwave has continued running tests on BetaTesting’s platform, from multi-day beta tests to shorter surveys and live interviews to inform their user research. Sleepwave has become the #1 alarm app in the App Store, with a 4.7 star rating and over 3,000 reviews.
According to Claudia Kenneally: “The support team at BetaTesting has always been very helpful and friendly. We have been working closely with them every step of the way. They are always willing to provide suggestions or advice on how to get the most out of the testing process. Overall, it’s been a pleasure working with the team, and we look forward to continuing to work with them in the future!”
Learn about how BetaTesting.com can help your company launch better products with our beta testing platform and huge community of global testers.
-
BetaTesting Manages TCL In-Home Usage Testing for New TV Models Around the World

TCL is a global leader in television manufacturing. With new models being released every year, TCL needed a partner to help ensure their new products worked flawlessly in unique environments in real homes around the world. BetaTesting helped power a robust In-Home Usage Testing program to uncover bugs and technical issues, and collect user experience feedback to understand what real users like and dislike about their products.
Televisions manufactured for different geographic markets often have very different technology needs, including cutting edge hardware, memory, graphics cards, and processors to provide the best picture, sound, and UI for customers. Additionally, each country has its own unique mix of cable providers, cable boxes, speakers, gaming systems, and other hardware that must be thoroughly tested and seamlessly integrated to provide high quality user experiences.
BetaTesting and TCL first worked together on a single test in the United States. The BetaTesting test experts worked hand-in-hand with the TCL team to design a thorough testing process on the BetaTesting platform, starting with recruiting and screening the right users and having them complete a series of specific tasks, instructions, and surveys during the month-long test.
First, BetaTesting designed screening surveys to find and select over 100 testers, focusing on which streaming services they watched and what external products they had connected to their TVs – such as soundbars, gaming devices, streaming boxes, and more. Participants were recruited through the existing BetaTesting community of 400,000 testers and supplemented with custom recruiting through partner networks.
TCL’s Product and User Research teams worked closely with BetaTesting to design multiple test flows for testers to complete. After TVs were shipped to qualified and vetted testers, the test process included recording the unboxing and first impressions of the TV, followed by specific tasks each week, such as connecting and testing all external devices, playing games, testing screencasting and other functionality from phones, and more.
Testers also collected log files from the TV and shared them via detailed bug reports, and completed in-depth surveys about each feature. In the end, TCL received hundreds of bug reports and a wide mix of quantitative and qualitative survey responses to improve their TVs before launch.
Following the success of the US Test, TCL began similar tests in Italy and France, and ran additional tests in the US – often expanding the test process over multiple months to continue collecting in-depth feedback about specific issues, advanced TV settings, external devices, and more.
TCL is now expanding the testing relationship with BetaTesting to begin testing in Asia, as well as continuing their testing in the US and Europe as new products are ready for launch.
The TCL hardware tests represent a comprehensive testing process that underscores the robust capabilities of the BetaTesting platform and managed services. The BetaTesting team coordinated with different departments and stakeholders within TCL, and the test design covered everything from onboarding to back-end data collection. Finally, testers provided hundreds of bug reports and qualitative and quantitative data to make this test – and the new product launch – a success for TCL.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
MSCHF Runs Manual Load Tests with BetaTesting Before Viral Launches: Case Study
MSCHF (pronounced “mischief”) is an American art collective based in Brooklyn, New York. MSCHF has produced a wide range of artworks, ranging from browser plugins and sneakers to physical products and social media channels.

Goal:
MSCHF, a Brooklyn-based art collective and agency for brands, has a cult-like following that results in viral product launches – ranging from real-life products for sale to web apps designed to get reactions and build social currency.
For two of MSCHF’s recent product launches, the MSCHF team needed to load test their sites for usability, crashes, and stability before a public launch. Testing with real users required finding a mix of devices, operating systems, and browsers to identify bugs and other issues before launch.
The first product was a robocaller, described as an Anti-Robocalling Robocalling Super PAC. The product was similar to the election-style super PAC dialers that consumers are subjected to during election season, but designed to “make robocalls to end robocalls.”
The second product was an AI bot that rated your attractiveness and matched you with other users via chat. Both products were designed to poke fun at common social trends.
Results:
MSCHF turned to BetaTesting to help them recruit a wide range of people to test their products at the same time to understand load issues. BetaTesting recruited testers across a wide mix of demographics and device types, who agreed to join the test at the same time across the US.
Testers prepared for the test in advance by sharing device information and completing pre-test surveys, and were ready to join the live test at the exact same time.
During the first live test, testers uploaded pictures to the AI bot, matched with other testers to chat online, and received feedback about their pictures. During the full hour, they were given specific tasks to upload different types of pictures and interact with different people and features across the site.
For the second test, testers received phone calls throughout the hour and reported feedback about call clarity, frequency of calls, and more to help refine the super PAC dialer product.
During the live tests for MSCHF, the team identified:
- Crashes and product stability issues across various browsers and devices
- Pain points related to the usability of each product
- User experience feedback about how fun, annoying, or engaging the product was to use.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
Osmo / Disney Launch Innovative Children’s Worksheet Product: Case Study
BetaTesting powered testing and research for an innovative children’s worksheet product, earning 10M+ downloads and a successful launch on Amazon.

“Magic Worksheets featuring Disney were launched in the App/Play Store and on Amazon with dramatic success, earning a 4.7 / 5 stars on iOS and > 10M downloads on the Play Store”
Recent studies show that the educational games market is growing 20%+ per year, driven by the rise of e-learning during the pandemic, inexpensive global internet availability, and parents seeking engaging ways to teach their young children.
Disney has been a leader in this category for years and has been able to make unique games based on its large library of intellectual property and characters that kids love. Osmo is an award-winning educational system for the iPad and Android tablets. The two companies have now partnered to create an innovative product combining the Osmo system with real-world worksheets powered by the Early Learn app (by Osmo parent company BYJU’S), along with engaging content and IP from Disney.
Osmo / BYJU’S first approached BetaTesting multiple years prior to the launch of the worksheet product. Already a staple in India, BYJU’S was interested in bringing its popular product to the US and further expanding globally. Initial testing focused on the BYJU’S Early Learn app.
Initial Early Learn App Testing & User Research
The primary goal for the test was to connect with parents and children in the US to measure engagement data and collect feedback on the user experience and on culture-specific issues with the content and quizzes in the app. If there were any issues like confusing language or country-specific changes required (locations, metric vs. imperial measurements, etc.), it would be important to address those issues first.
Recruiting
The Early Learn team worked with BetaTesting to recruit the exact audience of testers they were looking for: parents of 1st, 2nd, and 3rd graders, with quotas for specific demographic targeting criteria – household income, languages spoken at home, learning challenges, and a geographic mix of regions across the US.
BetaTesting used its existing community of 400,000+ testers, along with custom recruiting through our market research partners, to find over 500 families interested in participating in the test. BetaTesting worked closely with the Early Learn user research team to design and execute the tests successfully.
Test Process & Results
Each child was asked to complete various “quests” in the app, which were educational journeys taken by their favorite characters. Each quest included videos, tutorials, and questions around age-appropriate modules, such as math, fractions, science, units of measurement, and more.
Children completed quests for over 3 weeks, and parents facilitated the collection of comprehensive feedback about the quality of the content, age appropriateness, and any parts of the user experience they found confusing or lacking in engagement.
The Early Learn team also collected in-depth feedback from parents about their opinions on technology, educational apps, and different educational approaches. The test helped the team develop insights into how different user personas approach their children’s education and how various factors impact parents’ decision-making around technology use, educational choices, and purchase behavior.
The feedback was overwhelmingly positive, and children seemed to love their favorite characters taking them on an educational quest. The Early Learn team also found dozens of bugs that needed to be fixed, and changes to the user experience and onboarding experience to make the games and quizzes easier and more enjoyable to play.
Ongoing User Feedback & QA Testing
After the initial test proved successful, BYJU’s continued to iteratively test and improve the app over 12+ months through the BetaTesting platform. Testing focused on both UX and QA: User experience tests were conducted with parents/children, while a more general pool of testers focused on QA testing through exploratory and functional “test-case driven” tests.
“The testing results have been very helpful to fix bugs and issues with the app, especially in cases where we need to replicate the real environment in the USA for the end user” – BYJU’s Beta Manager
Here are some examples of the types of tests run:
- Collect user experience feedback from parents/children as they engage with the app over 1 week – 4 weeks
- Ensure video-based content loaded quickly and played smoothly without skipping, logging bug reports to flag any specific videos or steps that caused issues
- Test the onboarding subscription workflow for free trials and paid subscriptions
- Complete specific quizzes at various grade levels to explore and report any bugs / issues
- Test content on Wi-Fi or only on slower cellular coverage (3G / 4G)
- Testing on very specific devices in the real world (e.g. Samsung Galaxy S22 devices) to resolve issues related to specific end-user complaints
Testing continued on a wide range of real-world devices, including iPhones, Android phones, iPads, Android tablets, and Fire tablet devices, which are popular for families with young children.
Development and Testing for an innovative Worksheet product
As the Early Learn app neared readiness, another team at BYJU’S was responsible for the development and testing of an innovative new worksheet product. The product combined the Osmo system with a new worksheet mode in the Early Learn app, allowing children to complete real-life worksheets that were automatically scored and graded. Animated characters provided tips and encouragement to create an engaging environment for learning.
Initial testing for the Worksheets product included:
- Recruiting 50+ families with children spanning various grade levels (PreK, K, 1, 2, and 3).
- Coordinating and managing product shipment to each family
- Recording unboxing and initial setup experiences on video
- Daily / weekly feedback and bug reporting on engagement and technical issues
- Weekly surveys
Results:
Testing revealed that families loved the worksheet product, and children were captivated by the engaging learning environment. However, there were bugs and user experience issues to resolve related to setup, as many users were confused about how to set up the product and use it for the first time. There were also technical issues related to camera accuracy and correctly detecting and grading worksheets. Lastly, some of the magic worksheet markers were damaged in shipment or simply not strong enough to draw lines that the camera could read.
All these issues were addressed and improved through subsequent iterative testing cycles. Additional tests also included the following:
- Tests to capture the entire end-user experience, from reviewing product details online (e.g. on Amazon) to purchasing the product and receiving it in the mail.
- Tests with premium users that opted into the paid subscription.
- Continued QA and UX testing with targeted groups of real-world users.
Ongoing testing revealed that the product was ready to launch, and the launch was a massive success.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
100+ criteria for beta tester recruiting. Test anything, anywhere with your audience.

At BetaTesting, we’re happy to formally announce a huge upgrade to our participant targeting capabilities. We now offer 100+ criteria for targeting consumers and professionals for beta testing and user research.
Our new targeting criteria make it easier than ever to recruit your audience for real-world testing at scale. Need 100 Apple Watch users to test your new driving companion app over the course of 3 weeks? No problem.
Here’s what you’ll learn in this article:
- How to recruit beta testers and user research participants using 100+ targeting criteria
- Learn about standard demographic targeting vs Advanced Targeting criteria
- How specific should I make my targeting criteria?
- How and when to use a custom screening survey for recruiting test participants
- How niche recruiting criteria impacts costs
- When custom incentives may be required
About the New Platform Functionality
Video overview of BetaTesting Recruiting functionality:
Check out our help video for an overview of how to use BetaTesting’s recruiting and screening functionality.
Where can I learn about using the new targeting features? Check out our Help article, Recruiting Participants with 100+ Targeting Criteria, for details and a help video on exactly how to use our new Advanced Targeting criteria.
What are the different criteria we can target on? See this help article (same referenced above) for all of the specific criteria you can target on!
Do we have access to all the targeting functionality in our plan? Yes – all plans (including our Recruit / pay-as-you-go plan) have access to our full recruiting criteria when planning tests that target our audience of participants.
Do the targeting criteria impact costs? Normally they do not, but there are a few instances where costs can increase based on who you’re targeting. If you select targeting criteria that we consider “niche,” your costs will typically be 2X higher. Audiences are considered niche if fewer than 1,500 participants are estimated to meet your targeting criteria. In those cases, recruiting is much more difficult, and that is reflected in pricing. Targeting professionals by job function or other employment criteria is also considered “niche” targeting and costs 2X more, because it’s a more difficult audience to recruit (we’ve spent years building and vetting our audience!) and these audiences typically have higher salaries and require higher incentives to entice them to apply and participate.
How to recruit beta testers and user research participants using 100+ targeting criteria

At BetaTesting, we have curated a community of over 400,000 consumers and professionals around the world who love participating in user research and beta testing.
We allow for recruiting participants in a number of different ways:
Demographic targeting (Standard & Advanced)
Target on age, gender, income, education, and more. We offer standard targeting, and we recently added new features that allow targeting on 100+ criteria (lifestyle, product ownership, culture and language, and many more) using Advanced Targeting.
Standard targeting screenshot:

New Advanced Targeting criteria screenshot. You can expand each section to show all the various criteria and questions participants have answered through their profiles. Each section contains many different criteria.

See expanded criteria for the Work Life and Tools section. Note, you can use the search bar to search for specific targeting options.

As you refine your targeting options, you’ll see an estimate on how many we have available in our audience:

What is the difference between the Standard targeting and the Advanced targeting criteria?
The Advanced Targeting criteria include a wide variety of expanded profile survey questions organized around various topics (e.g. Work Life, Product Usage and Ownership, Gaming Preferences). See above for some examples. The Advanced criteria are part of a tester’s expanded profile, which they can keep updated to provide more information about themselves and connect with targeted testing opportunities.
The Standard criteria are part of every tester’s core profile on BetaTesting and include basic demographic and device information.
In general, using the Advanced criteria provides more fine-tuned targeting for your audience in cases where this is needed.
How specific should I make my targeting criteria?
We recommend keeping your targeting and screening criteria as broad as possible while still reaching the audience you need. Think about the most important, truly required criteria, and start there. Having a wider audience usually leads to more successful recruiting for several reasons:
- We can recruit from a wider pool of applicants, which means there are typically more available Top Testers within that pool of applicants. Generally this will lead to higher quality participants overall, because our system invites the best available participants first.
- The more niche the targeting requirements are, the longer recruiting can take
- More niche audiences typically require custom (higher) incentives (available through our Professional plans).
How and when to use a custom screening survey for recruiting test participants
A screening survey allows you to ask all applicants questions and only accept people who answer in a certain way (a short illustrative sketch of an automatic acceptance rule follows the list below). See this article to learn more about using a screening survey for recruiting user research and beta testing participants. You can learn about using Automatic or Manual participant acceptance.
There are a few times it makes sense to use a screening survey:
- If there are specific requirements for your test that are not available as one of the standard or advanced targeting criteria.
- If you need to collect emails upfront (e.g. for private beta testing). When you select this option in the test recruiting page, we’ll automatically add a screening question that collects each tester’s email. Once you accept each tester, you will have access to download their emails and invite them to your app.
- If you need to distribute an NDA or other test terms
- If you need to manually review applicants and select the right testers for your test, for example, if you are shipping a physical product out to users. In that case, you can use our Manual Screening options to collect open-text answers from applicants.
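As a small illustration of the automatic acceptance option mentioned above, here is a hedged sketch of how an accept/reject rule over screening answers might work. The question keys and expected answers are hypothetical examples, not BetaTesting’s actual logic or API.

```python
# Hypothetical sketch of automatic screening acceptance (illustrative only).
REQUIRED_ANSWERS = {
    "owns_apple_watch": "yes",      # made-up screening question keys
    "books_travel_online": "yes",
}

def auto_accept(applicant_answers: dict) -> bool:
    """Accept only applicants whose answers match every required response."""
    return all(applicant_answers.get(q) == a for q, a in REQUIRED_ANSWERS.items())

print(auto_accept({"owns_apple_watch": "yes", "books_travel_online": "yes"}))  # True
print(auto_accept({"owns_apple_watch": "no", "books_travel_online": "yes"}))   # False
```

Manual screening works the same way in spirit, except a person reviews open-text answers and decides instead of a rule like the one above.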



How niche recruiting criteria impacts costs
If your defined recruiting criteria are very specific, you may see that we estimate fewer than 1,500 participants in our audience matching your criteria. In this case, we consider the targeting “niche.” On our Recruit (pay-as-you-go) plan, your per-tester pricing would then show as 2X the normal cost. You can always see the price update as you change the criteria.

We also consider a test “niche” if you’re using the professional employment targeting options (e.g. job function).
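To show how these rules combine, here is a minimal sketch of the pricing logic described above. The 2X multiplier and the 1,500-participant threshold come from the text; the base per-tester price is a hypothetical placeholder, not actual BetaTesting pricing.

```python
# Illustrative only: base_price is a placeholder, not real BetaTesting pricing.
NICHE_THRESHOLD = 1500   # fewer estimated matches than this counts as "niche"
NICHE_MULTIPLIER = 2     # niche audiences cost 2X per tester

def per_tester_price(base_price: float,
                     estimated_matches: int,
                     targets_professionals: bool) -> float:
    """Apply the niche multiplier when the audience is small or professional."""
    is_niche = estimated_matches < NICHE_THRESHOLD or targets_professionals
    return base_price * (NICHE_MULTIPLIER if is_niche else 1)

print(per_tester_price(50.0, estimated_matches=12000, targets_professionals=False))  # 50.0
print(per_tester_price(50.0, estimated_matches=800, targets_professionals=True))     # 100.0
```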
If your company has a Professional or Enterprise/Managed plan, you may have the ability to customize the incentives that you offer to participants. In these cases, you’ll see a higher recommended incentive any time we notice that you may have niche targeting requirements.
When custom incentives may be required
There are a couple cases where it will be important to customize the incentive that you’re offering to participants:
- You are targeting a very niche audience (e.g. programmers) with high incomes. In this case, you probably need to increase your incentive so your test is more appealing to your audience.
- You are planning a test with hundreds or thousands of participants.
In both those cases, we recommend getting in touch with our team and we can prepare a custom proposal for our Professional plan or higher. This plan will allow you to save money and to recruit the participants you need with custom incentives.
Have questions? Book a call on our calendar.