• 6 Reasons Why The Chrome Link Underline Change is Not So Hot

    Google embraced a new CSS text property and made it the default link underline style in its Chrome browsers. This may not have been the right move.

    A few months ago, I was looking at a web page (minding my own business) when I noticed that the links looked odd to me. They were just simple blue text links with underlines… but the underline style was different than it had been the day before.

    Instead of one continuous line, the underline stopped and restarted to avoid crossing over the descenders of lowercase letters like g and y.

    Depending on text size, font, etc., the broken underlines were sometimes innocuous, but sometimes really snagged my eyes while reading across the page.

    What was up?

    text-decoration-skip-ink

    It turned out that, starting with Chrome 64 (on desktop and mobile), the browser supports a new CSS text property called text-decoration-skip-ink. This property allows for the underline style that “skips” letter descenders, punctuation, etc. that cross below the text baseline.

    Chrome not only supports the new style (let’s call it “skip-ink”) but made it the default for all text underlines.
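    For the curious, the property itself is a one-line declaration. Here is a minimal sketch of what Chrome now effectively applies to underlined text (the selector and comment are mine, for illustration):

        a {
          /* Chrome 64+ uses this value by default; underlines skip descenders */
          text-decoration-skip-ink: auto;
        }

    The spec also defines the value none, which switches the behavior off and restores the continuous underline. More on that below.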

    I’m not convinced that Google’s change is a good one. I’ll explain why.

    But first, a few qualifications:

    • To be clear, the Chrome link underline style change is not a big huge deal. It’s just a deal.
    • Skip-ink is not a new visual approach. Some websites have accomplished the same or a similar effect for several years.
    • Chrome is not even the first browser to do this. Safari and iOS made this visual style the default by the end of 2014.

    Still: it’s a design change, it’s new to many, and its benefits are murky. This is worth thinking about, both for what it is and as an example of how we don’t want browser changes handled in the future.

    6 reasons to question the Chrome link underline change

    1. Additional visual distraction

    When it comes to cognitive load, a solid line is less information than a broken line. Irregular breaks in a line draw extra attention to the spots where the underline stops and restarts. In some cases, bits of underline may look out of place, or resemble other symbols.

    The level of additional cognitive load will vary based on user, font choice, text size, browser implementation, etc.

    Of course, not all web designs use underlined links. If your website’s links don’t use underlines at all, skip-ink does not affect you. If your links only display an underline on mouseover, then I’d wager the effects of skip-ink are minimized (just as the negative impact of the traditional underline is minimized in the same circumstance).
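    (For reference, the hover-only pattern is usually just a couple of rules along these lines. The selectors here are generic placeholders; your own stylesheet will likely scope them more specifically.)

        /* Hide the underline until the link is hovered or keyboard-focused */
        a {
          text-decoration: none;
        }

        a:hover,
        a:focus {
          text-decoration: underline;
        }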

    2. An unproven approach

    Use of the link underline on the web has always been a compromise. From a typography point of view, underlining is considered a scourge that arose from the age of typewriters and must be stamped out. On the other hand, underlined hyperlinks have been helpful and even necessary for web page usability since the beginning of the world wide web.

    Underlined text is not as clean as non-underlined text, and is somewhat more difficult to read. Research indicates that old-school underlined links decrease readability by a small but statistically significant amount. We have, however, been addressing these concerns over the years with different approaches to link styling (see below).

    An underline crossing a descender (the bottom of a “y” for example) does make the character less recognizable to some degree. But it’s also worth remembering that we don’t read individual characters, we read words as chunks, and mentally correct in context as we go.

    So, compared to traditional underlining, skip-ink certainly makes certain individual letters and symbols easier to make out when inspecting them. But does the skip-ink approach help or hinder readability overall? It’s unclear. And without data on the suspected pros and cons, I am unconvinced that skip-ink is a great solution, let alone so clearly superior that it should be our default.

    3. There are other approaches

    Here are a handful of the possible visual treatments for links:

    Possibilities include removing the underline, styling it separately from the text, and using alternatives like border-bottom, which the CSS standards team happens to do for links on their own website.
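    To make that last option concrete, a border-based link treatment might look roughly like this (the specific color and thickness are placeholders, not recommendations):

        /* Swap the text-decoration underline for a thin bottom border */
        a {
          text-decoration: none;
          border-bottom: 1px solid #b0c4de; /* placeholder: lighter than the link text */
        }

    A bottom border sits a little lower than a true underline and can be colored and weighted independently of the link text, which is a big part of why designers have reached for it over the years.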

    Skip-ink is not obviously better than any of these other approaches for all applications.

    4. Defaulting to this approach may discourage others

    Making skip-ink the default Chrome link underline style may signal to designers that the problems of link underlines have been empirically solved in a one-size-fits-all fashion. This may discourage the continuation and exploration of better approaches.

    5. It causes new problems (aside from visual distraction)

    Users should be able to determine at a glance what is part of a link and what isn’t, and whether it’s a single link, or more than one.

    By breaking up the Chrome link underline, skip-ink can insert ambiguity into those rapid assessments. The user can stop to inspect a link more closely for clues—e.g., adjacent underlined space characters—to determine the answer, but users can’t be and shouldn’t be expected to do that.

    6. An external entity changed your site design instead of giving you an option

    Google, as did Apple a few years back, effectively changed your design on you. Whether this is a significant change or a minor change is up to you to decide. But Google did change an aspect of your web design without asking or warning, and without having your users opt in.

    Google could have instead provided you with a new tool to use (skip-ink), and let you apply it as you see fit. Or Google could have given their browser users skip-ink as an accessibility option to use if they wish.

    Google did neither. They provided users no option to opt in or opt out of their new link decoration effect. As a designer or developer, you are able to override their design choice, but you must modify the code of your current and old websites to do so.
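    The override itself is a single rule, something like the sketch below, but you do have to go add it to the stylesheets of every site you still maintain:

        /* Opt back out of skip-ink and restore continuous underlines */
        a {
          text-decoration-skip-ink: none;
        }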

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • What is a Longitudinal Study in Software User Testing?

    If a regular user test gives you a snapshot, then a longitudinal study provides you with a time-lapse video.

    If you want a not-clever mnemonic for remembering what makes a test “longitudinal”, just focus on the word “long”.

    A longitudinal study (or longitudinal testing) is a type of research that collects data from the same set of users on multiple occasions over an extended period of time.

    The term “longitudinal study” is commonly used in psychology and sociology contexts, where tests might run for many years or even decades. But here we’re talking about software user testing, where longitudinal tests don’t run so crazy-long. After all, you’re not aiming to uncover new universal truths about the human condition to publish in a scientific journal.

    You just want to validate and improve your software product. And the longitudinal study is a tool that can help you do that, in some ways that regular user testing cannot.

    Longitudinal study vs. regular user test

    User testing—whether it’s directed toward usability or user feedback more broadly—tends to occur over a very limited period for each participant. It’s often a single session. For the purposes of this article, we’ll call this “regular” user testing, and we’ll assume you are already familiar with the concept.

    Regular user tests provide data about the user’s experience with your product during the test. Feedback focuses on the initial learnability of a product or feature, and the user’s reactions to it, typically while performing set tasks in a limited timeframe. These types of tests and the feedback they yield are extremely valuable. There are, of course, limitations on what these tests tell you.

    Longitudinal studies, by contrast, last well beyond a single session, and collect a multitude of data sets from each participant over time. These two fundamental features of the longitudinal study—1) abundant time, and 2) multiple data sets—provide us the ability to discover more about our product and our users than we can with regular user testing.

    Some advantages of longitudinal testing

    A regular user test shows you a single “snapshot” of the user’s experience with your product. A snapshot can tell you what’s happening in the moment, but isn’t very good at indicating where things are headed. And if there are any twists and turns over the horizon? Forget it. You just can’t see them.

    Longitudinal tests collect a series of snapshots to create a time-lapse video of a user’s experience (so to speak), giving you both more time and more data sets to work with.

    Longitudinal studies can help you to find out things like:

    • How users’ attitudes toward your product change over time
    • Users’ usage patterns, and how they may change over time
    • Long-term learnability and forgettability issues in your product
    • How users handle the same tasks as they become more experienced with the product
    • How users naturally use infrequently-used features
    • How users handle long-term tasks that would not naturally occur in a single session
    • How users’ preferences and activity change after they add a critical mass of their own data
    • Etc.

    Flexibility

    A wide variety of insights are possible due to an abundance of time and data sets, but also because the longitudinal study is a pretty flexible tool. You can get creative and design your longitudinal test however you want to get the data you’re after.

    Your longitudinal study may use specific test procedures and/or general usage directives. You might periodically interview the participants, or observe test sessions, or neither. You may collect data on-site or remotely. You might instruct your participants to use the product only on a defined periodic schedule, or you might encourage them to use the product as much as they want. You might have testers perform the same tasks at every interval, or you may plan out a progression of different test procedures over time. Data collected may include surveys, or maybe you have each participant keep a usage diary.

    And so on.

    Some disadvantages of longitudinal testing

    Longitudinal studies take longer to produce conclusions than regular user studies do. Not a shocker. A longitudinal study is also, not surprisingly, more difficult and more expensive to execute—but not just because it requires more planning and data collection. Longitudinal studies also require additional work in selecting participants (particularly if you are recruiting participants yourself) and are sensitive to participant attrition.

    Participant selection and participant attrition are two sides of the same coin when it comes to a longitudinal study. Getting a long-term commitment from potential participants is always going to be more difficult than securing limited, one-time participation. In a regular user test, you can make up for a no-show participant by adding another relatively quickly (if you decide it’s even necessary). For a longitudinal study you’ll need to recruit more participants than your ideal minimum right at the outset—enough to account for drop-outs. The longer the study, the more participants you’ll lose by the end. And it’s not just a quality-of-participants issue; there are innumerable reasons why good people may drop out of your study somewhere in the middle. Life just gets in the way sometimes.

    It is also worth noting that participation in a longitudinal study may itself influence the behavior or attitudes of a user. This is not a concern that is particular to longitudinal studies. But keep in mind that your participants, intentionally or unintentionally, might act as they believe they are expected or obligated to act. That may include continuing to use your product well beyond the point they would have stopped, or skewing other behaviors you are trying to measure.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • A Harrowing Tale of User Acceptance Testing

    My first encounter with user acceptance testing will scare you straight.

    Recently, I wrote an article about user acceptance testing (UAT) and was reminded of my very first encounter with validation testing on a software project. Here is that story as I remember it.

    WARNING: You may want to skip this story if the thought of wasted money makes you feel faint, or if you are severely allergic to anecdotes that sort-of fizzle out at the end.

    A story of high-stakes validation

    Long ago, as a fresh-faced college graduate, I was assigned to a software project for a big international company that built engines. The company was developing a new Windows application to replace the text-only PC tool they were currently using to calibrate and test their truck engines.

    I joined the project relatively late in their long software development cycle. Most of my teammates had been on the project a little over a year, others much longer. We were mere months away from release, and I had been added to the project to help the team close out development on time.

    In one of our weekly team meetings, the project manager detailed what the final stages of development and testing would look like: we would finish the development scope, support functional testing, and fix bugs as needed. Then the application would go to validation testing. After that, a decision would come: release the application, or shelve it.

    Over the next few minutes, it became clear to me what “shelve it” actually meant. It did NOT mean “set it on a shelf for a while and then we’ll release it when the time is right.”

    No, this was “shelve it” in the Raiders of the Lost Ark sense: box it up and shove it in a warehouse, never to be seen again.

    Wait, what?

    So… after several years of effort and tons of money spent (I didn’t know if it was millions, but it was at least hundreds of thousands) we’re going to finish the software… and then maybe just toss it out?

    I remained astounded and baffled for the remainder of the meeting, as contingency planning began for how to responsibly document, package up, and store project materials if the decision was indeed to “shelve it”. Budget was set aside for a skeleton crew to handle the arrangements if the time came. (All of these plans were discussed with a calmness I could barely fathom at the time, but would later recognize as soul-crushed resignation.)

    After the meeting, I asked a manager what he thought the chances were that the product would be shelved. I expected him to say something like 15 or 20 percent—a sort-of low, but still-disturbing number.

    He shrugged and said, “like 50-50.”

    Noticing my reaction to this, he told me it would be fine. All we could do was finish the project, just like we planned, and then see how it fared. He then explained to me why our new application might or might not be “shelved”.

    Why we might throw away functioning software we just spent several years building

    The answer was validation testing.

    After the application was complete and it passed functional testing (verification) with flying colors, it would then be given to a subset of real users in the company to use. You may know this phase as beta testing, or user acceptance testing.

    If validation testing didn’t produce positive enough results, it would mean that the product wasn’t doing a good enough job fulfilling its purpose and meeting the needs of the actual users for whom it was created. This would indicate one or more fundamental flaws in the application, and the company had already decided it would pull the plug if that were the case. And for good reasons:

    1. The cost of rolling out and maintaining a new tool was significant. Rolling out a fundamentally flawed tool, however, would be even more expensive and could negatively affect production.
    2. The new application was replacing an existing application that, while clunky and out of style, still got the job done.
    3. The software project was already overdue and over budget, at least compared to executives’ original expectations. Patience was pretty low. Pulling the plug on a failed expensive project would be unpleasant but relatively low-profile. On the other hand, a troubled product roll-out would be more expensive, highly visible, and might just cost somebody their job.
    4. Fixing one or more fundamental flaws in the application would be another long development effort, which would have roughly the same likelihood of success. (This was the era of classic waterfall development.)

    So, did the product get released, or what?

    Don’t hurt me, but the answer is: I’m not sure.

    By the time validation testing started, I had already been assigned to a different project for a different client, so that part of my memory is haziest. The indelible effect the experience had on me, however, wouldn’t have been increased or lessened by either result.

    But, if I’d heard that the application was killed, I’d remember it… right?

    So, it very likely was released.

    Probably.

    50-50.

    Is User Acceptance Testing always that terrifying?

    Is validation testing always such a high-stakes all-or-nothing venture? Not really.

    The particular circumstances of the company, the project, and the software development practices of the day all conspired to make the validation phase a tightrope walk in which the fate of the product teetered on the edge of oblivion.

    I have to hope that was more common back then than it is today.

    We certainly have more knowledge, tools, and best-practices today to help us remove and mitigate a lot of the risk from user acceptance testing. In retrospect, it was a bit forward-thinking of the company in my story to incorporate a validation phase as part of their process, and to have a rational willingness to stop throwing good money after bad. Today we are also, hopefully, better at initial research and goals analysis, iterative development and testing, periodic usability testing, etc.

    But let my eye-opening experience from nearly 20 years ago serve as a reminder to you: user acceptance testing is worthy of your respect, and maybe a little healthy fear. You’re testing to see whether your product or feature actually fulfills its reason for being. If it performs badly, you may not have to throw everything in the trash, but there’s going to be some discomfort. You may need to postpone a launch, insert a development sprint to fix a flawed deployment, or, yes, maybe even go back to the drawing board.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • User Acceptance Testing (UAT): What It Is, and What It Shouldn’t Be.

    If you read one article describing User Acceptance Testing (UAT), you’ll walk away with a solid understanding of what it’s all about. If you read 5 or more articles, you might just wind up confused. That’s because “User Acceptance Testing” is one of those annoyingly overloaded terms that means different things to different organizations.

    Now, I’m not “that guy” who insists upon specific definitions for cloudy terms and tells everyone they’re wrong if they disagree. However, there are at least three different ideas out there about what UAT is, and one of them is the more useful concept to embrace, especially for you fine folks reading this. So, for the purposes of this article, I’ll present UAT by its most useful definition. But I’ll also address the other two definitions and explain where they are coming from.

    What UAT Is:

    User Acceptance Testing is a software testing activity in which actual users test the product to confirm that it works in real-life situations to fulfill its original intent.

    UAT is often the last phase of testing, following developer testing and QA testing. (Your organization may use different activity names in place of these, e.g. Unit Testing, Functional Testing, Integration Testing, System Testing, etc.)

    Actual users

    Here, actual users are:

    • the users already using a previous iteration of the product; or
    • users who will use the product once it is released; or
    • potential users of the kind you wish to attract to your product; or
    • sample users who are reasonable approximations of the above.

    The key is to realize that software developers, software testers, project managers, product owners, etc. are NOT the actual users of the software, and not who you should be targeting for UAT.

    Real-life situations

    UAT is planned testing, so it may not capture literal real-life use. Users may be instructed to perform certain tasks, but those tasks should reflect real-life scenarios in real-life user conditions as much as possible.

    Original intent

    There was, presumably, a reason why the product or feature came to be in the first place. Usually, some combination of user needs and business goals were deemed good enough reason to greenlight a software development effort. Software specs—requirements, designs, etc.—soon followed.

    Most software testing activities focus on whether a product or feature matches the software specs. UAT instead focuses on whether the product or feature sufficiently meets the original user needs and business goals. It’s the difference between verification and validation.

    Key concept: Verification vs. Validation

    In software testing, validation and verification are not interchangeable terms. Definitions of these two terms don’t get much pithier than these frequently referenced quotes from Barry Boehm:

    Verification is: “Are we building the product right?”

    Validation is: “Are we building the right product?”

    These definitions are memorable and strike at the heart of the difference between validation and verification. They are also a little too concise and clever for their own good—because, you know, what does that actually mean in practical terms?  So, let’s elaborate a little further…

    Verification is the act of determining how well something matches agreed-upon specifications.

    When you think of “software testing”, you’re probably thinking about verification activities. Verification confirms that software sufficiently meets its predefined software requirements, specs, and designs.

    Success or failure in a verification process is determined by the software’s behavior. Non-conformities (bugs) are found and captured. If a bug is considered important enough to fix as a software change, the specific desired software behavior is clear—it’s already been defined.

    Verification could be performed by actual users, but rarely is, as it is usually inefficient to do so. So, verification is performed by technical professionals using all sorts of methods, from automated testing to manual test scripts to ad-hoc testing.

    Validation is the act of determining how well something serves its intended purpose.

    When you think of “usability testing”, you’re probably thinking of validation activities, but that is not the only kind of validation. When it comes to usability testing, the focus is on how well the user deals with the interface against reasonable expectations. When it comes to UAT, the focus is on how well the product fulfills its stated purpose when in the hands of the user. That stated purpose was likely documented as user requirements and business requirements at the start of the project.

    Validation activities such as UAT may be planned and facilitated by experts, but need to be performed by actual users of the software in order to be effective. Success or failure in a validation process is determined by user behavior. Users’ issues and negative reactions (whether directly stated by the user, or observed by another party) are captured. If an issue is considered important enough to address in a software change, the desired software behavior is not immediately clear—thought and redesign are needed.

    Takeaways:

    • Project success runs on verification.
    • Product success is enhanced by validation.
    • UAT at its finest is predominantly a validation activity.

    What UAT Also Is (But Probably Shouldn’t Be)

    While many good sources on the subject of UAT are in line with the definition presented above, many others have a different idea of what UAT is. This isn’t a huge problem, but it’s unfortunate for, let’s say, three reasons: 1) it creates confusion; 2) there are other terms readily available that already mean what they use UAT to mean; and 3) using UAT in these other ways pushes aside an important concept that UAT embodies.

    This article wouldn’t be as helpful to you if I simply ignored the fact that individuals and organizations often use the term User Acceptance Testing to mean something different. So here goes.

    UAT as Client Acceptance Testing

    Many organizations treat UAT as equivalent to plain ol’ Acceptance Testing or Client Acceptance Testing. Acceptance testing of this sort is a process hurdle, where a software delivery is evaluated, and a sign-off is required in order to proceed. The agreed-upon acceptance process may consist of any kind of testing or no testing at all, and frequently has nothing to do with validation.

    Aside from the words they have in common, the reason for conflating acceptance testing and UAT is a matter of perspective—and sometimes a looser definition of “user” than is desirable.

    Let’s say you’re a software vendor for hire. Client acceptance of your software releases is how you close out projects and get paid. Business requirements and user requirements are the client’s responsibility, and may not even be fully shared with you. Your responsibility is to come to an agreement with the client on what is to be built, and to do a good job building it. You may have little or no access to the product’s actual end-users; indeed, you may even think of the client and the user as equivalent. For you, the goal of client acceptance is to confirm the client is satisfied enough to move forward: go, or no-go. There is often motivation on both sides to check that box as quickly as possible.

    When this is your perspective, it makes sense that your definition of UAT doesn’t include validation, and might not even include actual users. The activity takes place at the same points in the software lifecycle, but it has a different purpose for you.

    However, if your perspective is that of a savvy client, or if your company is building its own product, then your attention should be not only on how software milestones are approved, but on determining if the software product is actually a valid solution.

    UAT as Usability Testing

    If you scan the landscape, it’s not uncommon to see User Acceptance Testing defined effectively (or exactly) the same as Usability Testing—also referred to as User Testing.

    This is understandable. UAT and usability testing have a lot in common: both are validation activities performed by real users, and the practical feedback you’d get from Usability Testing and UAT might overlap quite a bit.

    The problem, however, is the focus and the goal of the testing. Usability testing is validation that a given UI design is intuitive and pleasant for the user; UAT is validation that the product as a whole is meeting the user’s needs. Again, there is some overlap here. But the point is, focusing on one risks sacrificing the other, especially if you institutionally view UAT and usability testing as equivalent concepts.

    Why UAT Matters

    A software product or feature could pass comprehensive verification tests with flying colors, please the client, show little-to-no issues in usability testing, and still fail validation during UAT. To me, that may be reason enough to keep the term “User Acceptance Testing” separate from “Acceptance Testing” and “User Testing”. But there’s another reason, too.

    If you convince yourself that UAT is something else, it’s easy to simply not perform that kind of validation at all. On real-life projects where money and jobs are involved, there is a ton of motivation to keep things in the “success” column and keep moving forward.

    Validation can be a scary prospect. It’s common for UAT to occur very late in the game, and it’s natural to not want validation testing to actually find problems.

    There are lots of reasons why serious issues could arise, seemingly out of nowhere, on a project that had been a glowing success right up until User Acceptance Testing. Maybe reasonable assumptions made early on about what to build were off the mark. Maybe users were wrong about what they wanted in the first place. Maybe other software applications negatively interact with yours in real-life situations. Etc.

    The issues uncovered in UAT can be fundamental and serious. It can be unclear how to correct the issues, and—for those organizations that think this way—it can be unclear who was “at fault”. It’s a big can of worms no one is eager to open.

    The choice not to perform validation-based UAT may not even be a conscious one. If your organization doesn’t institutionalize validation testing in its processes, then it may not even occur to people running software projects to do it.

    Final Thing

    Failing to discover validation-level issues in your product during testing might make your software development project run more smoothly, but it’s at the expense of the released product. The problems are still there—they’ve simply been deferred to the future, where they’ll be more expensive to deal with.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Bad UI or Bad UX? The Real Story Behind the Hawaii Missile False Alarm.

    The Hawaii missile false alarm was blamed on a moron operator and bad UI design. But what’s the real story?

    One Saturday morning in the state of Hawaii, everyone’s phone beeped and buzzed and displayed this official emergency message: “BALLISTIC MISSILE THREAT INBOUND TO HAWAII. SEEK IMMEDIATE SHELTER. THIS IS NOT A DRILL.” A similar Emergency Alert System warning also interrupted television programming.

    There was statewide confusion, alarm, and panic. Families ducked into their bathtubs. Hotel guests were ushered into basements. 911 dispatch systems were overwhelmed. Hundreds of thousands of people wondered if they were going to die that morning.

    It turned out to be a false alarm.

    Within fifteen minutes, word had begun to get out from various official sources that there was no threat. Thirty-eight minutes after the initial alert was issued, an official false alarm notice went out to everyone’s cellphones.

    The Big Question

    Once the panic subsided and everyone caught their breath, the big question was: how did this happen?

    The Governor of Hawaii issued a statement, reporting that the false alarm was not the result of hacking or other malicious intent. The story was simply that, during a routine drill at the Hawaii Emergency Management Agency (HI-EMA), the false alarm had been caused by “human error”: an employee “hit the wrong button.”

    The dots were connected, and a narrative immediately formed.

    Bad UI as the Culprit

    In the days that followed the missile scare, the blame was largely pinned on bad user interface design. Something like a cottage industry of content popped up on the subject of HI-EMA’s bad UI: from snide comments and gifs to vague think pieces and thoughtful analyses on the importance of usability.

    This in itself was a reminder of how the zeitgeist has changed over the past few decades: the public at large now seems to recognize that “human error” tends to mean “bad interface”. Humans make mistakes, but the computer applications we use at home and at work—even when we have been “trained” on how to use them—often invite our mistakes or exacerbate them.

    Of course, there were still people who declared the guy who issued the false alert to be a “moron”—it is the internet, after all. And there was plenty of mocking and scorn rightly and wrongly directed at officials who floundered in the 38 minutes after the alert went out.

    On the whole, however, the narrative around the cause of the false alarm was bad UI. Much was written about it, fueled in part by “screenshots” of the UI in question that showed up days later. Confusingly, each of the two UI snips released publicly was quickly determined not to be the actual interface; however, the version that depicted a dropdown menu was declared to be “similar” to the real thing. (The actual interface would not be distributed for security reasons.)

    Despite the confusion, the pretty-close-to-actual UI samples depicted a UI that was easy to criticize, and rightly evaluated as problematic and error-prone. The event was a teachable moment for bad UI.

    Then we learned that the missile alert wasn’t triggered by accident.

    It Wasn’t the Bad UI After All

    By the end of January 2018, state and federal investigations revealed the reason why an operator at the Hawaii Emergency Management Agency triggered the infamous alarm: he thought there was a real threat.

    It wasn’t a mistake due to an error-prone UI, it was an intentional action made by a confused man who thought he was warning his fellow citizens of impending danger and doing his job as he was trained.

    But, as we know, there was no actual threat; it was a drill.

    So, what the heck?

    A Different Kind of Human Error

    Initially, the world assumed—and not without reason—that the missile alert mishap was caused by one kind of user error, where the user’s action doesn’t match their intention. But—surprise!—the actual cause was a different kind of user error, where the user’s intention is incorrect.

    The employee’s intention was set incorrectly by something that happened just a couple minutes before he sent out the alert.

    This is where our story of human error winds up containing more old-fashioned human error than originally thought:

    The guy in question was sitting with a room full of people who all heard the same test exercise recording, played to them over a speakerphone. Everyone else in the room understood it to be a drill; this guy did not. (As investigators reported, it wasn’t the first time he’d confused exercises with reality.)

    There were errors up the chain that probably helped cause this man’s confusion. While the speakerphone message correctly included “EXERCISE, EXERCISE, EXERCISE” at the outset, the body of the message incorrectly included live missile alert language, including, “THIS IS NOT A DRILL.”

    Once this guy mistook the drill for a live threat (and didn’t talk to his coworkers about it), there was no process in place to stop his misconception from becoming the alert that went out to the masses. He went back to his workstation, intentionally selected the live alert option from the menu, and selected “Yes” at the confirmation prompt. Done.

    It did not matter that the UI was stodgy, or that confirmation prompts aren’t the best safeguard against mistaken actions. The degree to which one particular UI screen “sucked” was not the issue that morning.

    The user error wasn’t the kind the world had assumed it was. But either way, the systems in place—both people-powered systems and computer-driven systems—were not built to handle the outcome swiftly or gracefully.

    What Can We Learn from All This?

    Even though we were collectively wrong about the exact cause, many of the lessons we learned and conclusions we drew along the way are still applicable… except that we need to look deeper. Perhaps we should start associating “human error” with “bad system” instead of “bad interface”.

    In the end, a general lesson to draw is that robust scenario design and task analysis are important when we create and improve our software and processes. We should look broadly at whole systems—not narrowly at one software application—and appreciate the complexity of all the human-human, human-computer, and computer-computer interactions in that system.

    We should look for ways to minimize errors where we can, but we should always design with the assumption that errors will happen (of both kinds!), and then ensure that processes and functionality for dealing with those errors are an inherent part of the design.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Why Is Beta Testing Important?

    Why is beta testing important now more than ever before? The product development lifecycle has changed dramatically over the past 10 years, and beta testing is no longer a phase that definitively ends shortly before product launch.

    Product launch was traditionally the end of the product development lifecycle: (1) concept (2) design (3) build (4) launch. In this view, beta testing is something that lived and died in stage 3 (build and test). In contrast, with the rise of agile testing and continuous improvement, beta testing is more important than ever and can now be considered a process that not only occurs pre-launch, but also during the ongoing continuous improvement process prior to launching new features, design changes, or other product improvements.

    Prior to launching a new feature or a new product, the aim of the alpha and beta phases is to steadily increase the probability that the product will succeed when it is launched. Continuous improvement is an ongoing effort and philosophy that continues in perpetuity with an aim to constantly evaluate and improve products over time. All of these test phases depend on feedback from real people, using the actual product in unique environments, though each phase is driven by definitive processes and objectives.

    Sometimes, there is confusion over the distinction between alpha and beta testing. The two terms are often used interchangeably. However, they can be viewed as two consecutive phases of product testing with unique goals.

    Alpha Testing & Beta Testing: Important Differences

    Alpha testing seeks to evaluate product quality and ensure readiness for beta. The focus of these tests is to find bugs and affirm that the product generally works according to the expectations of the product development team. This phase occurs after preliminary QA testing and prior to beta testing. Ideally, the alpha phase should occur when the product is about 65%-85% complete—which means that it has adequate stability for technical testers, but is likely not feature complete.

    Typically, the alpha phase runs 1-2 weeks for each test cycle and may continue for numerous test cycles—varying with the number of issues the testers find and the number of new features in the release. For some teams, the entire alpha phase can even be 3 to 5 times the length of the beta phase that follows. The stakeholders that typically take interest in the alpha phase include the engineering, product management, and quality assurance teams.

    In alpha testing, the testing participants ideally include a range of customers and potential users who are capable of providing meaningful technical feedback during the test. The involvement of employees in alpha testing can also help tighten internal cohesion and prepare staff for live support after launch. Alpha testers should expect a product that contains many bugs, induces crashes or significant failures, and may be missing elements or exhibit glitchy features.

    The objective of alpha testing is to identify critical problems that are not testable in the lab and also capture show-stopper bugs that are likely to thwart upcoming beta testing. A product is ready for beta when it meets requirements and design specifications, all primary features function correctly, and the testers no longer find blocking issues.

    Beta Testing Process


    Beta testing evaluates the level of customer satisfaction and verifies readiness to release or deploy a new product, feature, or improvement. Beta tests typically include task and survey distributions to guide users in their engagement, and to allow each user to discover new and changed product features. The goal is to gather feedback and make a judgment as to whether or not the product is ready to be released into the wild.

    Typically, a beta testing phase may run from 1-12 weeks, which may include many smaller iterative cycles. This may vary widely according to the type of product that is under development. For example, an accounting system may require more than a month for a single cycle. By contrast, it may be quite sufficient to test a newsreader product on one-week test cycles—with new participants each week. More beta cycles may be necessary if additional features are added toward the end of the project.

    Key stakeholders in beta testing include people with user experience (UX), quality management, and product management expertise. In beta, it’s important to recruit fresh, independent users drawn from the target market for your product. Great value comes from this vantage point, since such testers can provide more objectivity and additional insights on product usage. This feedback can be a vital boost to the product’s chances of success, and help provide greater value to your current and prospective customers.

    Beta testers should expect a nearly feature-complete product that may still have some bugs, crashes, and incomplete documentation. The aim is to identify and fix critical / important issues, and suggest user experience improvements that are achievable before launching the product.

    Beta testing seeks to improve the success of the upcoming product launch or release by providing evidence-based recommendations for product improvement and a comprehensive perspective on the customer experience. Future product development is also heavily influenced by beta testing outcomes.

    A product is ready to move into a continuous improvement phase when typical target-market users are quite comfortable with the user interface, are satisfied with how the product functions, and indicate overall satisfaction with their experiences in using the product.

    Agile beta testing is important for continuous improvement

    With the rise of agile, iterative development approaches, together with the advent of continuous delivery, the conventional notion of a beta testing phase—in which product development ends while customers evaluate the software—is fading away. Today, most product teams are focused on customer feedback, analyzing data, and working to continually improve the user experience by launching new builds weekly. This is what makes beta testing important today as a key component of a continuous improvement strategy.

    Erlibird does not view each testing phase as having a strict beginning and end. We focus on simple, agile, and iterative testing phases—rather than one-and-done. We also strongly encourage and cultivate Continuous Improvement following any product launch. Beta testing important new features, design changes, or other major product changes is something that should occur with every release.

    Agile development is flexible and high-speed, and therefore requires a beta testing approach that is also fast and flexible. Iterative testing is frequently conducted with a unique set of testers for each iteration, to ensure each participant can engage and provide feedback with open eyes, without the influence of their previous experiences. However, for some products, it makes more sense to re-engage the same set of testers continually, so that each user can receive a build after each sprint and provide updated feedback over time before the final release ships to paying customers.

    Why Is Beta Testing Important For The User Experience?

    Like you, we at Erlibird encounter some surprising bugs. And sometimes we don’t fully delight our users. We’ve come to realize that users typically don’t have much patience in a competitive marketplace. If you ship something buggy, they won’t come back.


    Whether you are launching a website, a new app, or a new mobile device, it is important that you have independent users test the product thoroughly prior to shipment. Beta testing serves multiple purposes, though all of them lead to one thing – improving the customer experience.

    Beta testing important new products is also known as user acceptance testing—UAT—and occurs near the end of the pre-release product development cycle. Though often neglected or glossed over, this validation activity reveals critical insights about how customers will perceive, engage with, and operate the software. It assesses how well the product meets user expectations. In other words, beta testing moves the development team very far toward answering the ultimate product question: Is it shippable?

    The cost of shipping bad software will always be quite high. If a prospective user downloads your application, there is a very short window of time in which to convince that user of its value. In that short window, the user will become a customer only if they perceive that your software will help solve a problem or meet a need, according to John Egan, Pinterest’s growth engineering manager:

    “For activation, it all comes down to: Does the user get enough value from your product? For early stage startups, this means to figure out how to reach product-market-fit. And if you’re past that stage, it comes down to being able to communicate your product value to your users. When someone downloads your app, you have a couple of minutes to convince them that this is something they need to use on a regular basis. In this short time they need to understand how the app will help them accomplish whatever the product value is.”

    It’s always important to remember that the ultimate goal of any product is to build something that is better than other alternatives and either solves a problem or meets a basic human need. Beta testing helps you achieve that top-level product development goal. When done properly, beta testing helps you iron out the wrinkles and find deficiencies prior to launching your product.

    Gain Insights About Real-World Usage

    Beta testing provides insights into product functionality, and also helps you better understand user experience. Going beyond lab performance tests, beta testing reveals whether or not the same level of performance is achievable in actual user environments. Many products need to perform well in hundreds of various environments and many different usage contexts.

    By presenting the software to users that stand well outside the insular community of developers, beta testing serves to identify elements of functionality that are all too easily overlooked in the lab. In addition, your team benefits from feedback that can be very useful for improving future product versions—or even spawn ideas for entirely new products.

    Why Is Beta Testing Important For Your Product?

    Beta testing is more important now than ever before. There are many different reasons to beta test, but be sure to keep the following goals in mind:

    • Customer Feedback – Beta testing important new products and features is one of the best ways to determine if your product provides value and solves a problem or satisfies a basic human need.
    • Quality – Will the product function flawlessly on various devices and environments?
    • Usability – Will users be able to accomplish what they want in an intuitive and enjoyable way?
    • Performance – Will your product operate quickly and efficiently in the hands of real users?
    • Save Money – Save money by finding and fixing problems before they reach your customers.
    • Make Money – Improve usability and provide more value to users, leading to increased conversions and higher revenue.
    • Vetting Ideas – Prior to launching a new product or feature, it’s important to put it into the hands of new users.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • How Netflix Does A/B Testing (And You Can, Too)

    Netflix is big. Your company is—most likely—not quite as big. So, when I tell you that Netflix uses A/B testing in numerous ways to improve their service and enhance their business, you might say, “That’s nice for them, but how does that help me and my decidedly smaller business?”

    If you said that, I’m glad you did. Because the different ways that Netflix does A/B testing are things that you can do, too—or can inspire you to do something similar.

    Hey, real quick: what is A/B testing?

    A/B testing is a live experiment that compares two versions of a thing, to find out which version works better. Version A is the one your users normally experience, and Version B is the one you think might more effectively accomplish a particular goal. You run the A/B test on your live product by diverting a portion of your users to Version B, while the rest of your users continue to use Version A. You collect the results from both groups during the test, and then use that data to determine which version performed better.

    You’ll see what I mean in the examples below.

    1. A/B Testing to increase user sign-ups (and other macro goals)

    Netflix wants to add new paying customers to their service (of course), so they consider modifications to their website’s registration page. Whenever their product design team comes up with an improvement that they predict will lead to more user sign-ups, they prepare an A/B test to confirm their hypothesis.

    One such hypothesis went like this (paraphrased for our purposes): “If we allow non-members to browse our content before signing up, then we’ll get more people to join the service.” The idea came from actual user feedback, and it made sense to designers at Netflix.

    So, in 2013, the product design team tested their hypothesis using an A/B test. They built an alternate working version of the registration page that allowed content browsing (Version B). When they started their test, some visitors to the live site were unknowingly directed to Version B, while the rest saw the original design (Version A), which didn’t allow content browsing. Throughout the test, Netflix automatically collected data on how effective Version A and Version B were in achieving the desired goal: getting new user sign-ups.

    Netflix ran this test 5 times, each with different content-browsable Version B designs pitted against the original non-browsable design. All five Version Bs lost to the original design. The design team’s hypothesis turned out to be incorrect. However, Netflix learned valuable lessons from the tests. They also saved themselves bigger problems by testing their assumptions first, instead of simply rolling the change out to everyone. As Netflix Product Designer Anna Blaylock stated at a talk in 2015, “The test may have failed five times, but we got smarter five times.”

    How does this apply to me?

    Trying to get more users to sign up for your service is an example of a “macro conversion” goal, and this is perhaps the most common application of A/B testing. Macro conversion goals are the “big” goals that usually represent the primary purpose of the thing you’re testing, whether it’s a website, app, or marketing email.

    Examples of macro conversion goals:

    • Get users to use your site’s Contact form
    • Get users to complete a product purchase
    • Get users to respond to your sales email

    This kind of A/B testing is something you can do, too. Even though Netflix’s example involved testing a design with a whole new feature (browsing), the design changes that you test will often be much simpler—the text label on your call-to-action button, for example.

    Sure, your modest app doesn’t get as many user hits as Netflix does, but you can still set up and run A/B tests of your own. Just like Netflix, you’ll run your test as long as necessary to collect enough data to determine which design is more effective. For Netflix, this is often several weeks. You may need to let your test run for several months.

    2. A/B testing to improve content effectiveness (and other micro goals)

    Netflix also runs A/B tests to optimize “micro” conversion goals, like improving the usability of a particular UI interaction.

    For example, Netflix ran a series of A/B test experiments to determine which title artwork images (a.k.a. “thumbnails”) would be most effective to get users to view their content.

    First, Netflix ran a small test for just one of their documentary titles, The Short Game. The test involved assigning a different artwork variant to each experimental test group, and then analyzing which variant performed best—and by how much.

    As Netflix’s Gopal Krishnan wrote:

    “We measured the engagement with the title for each variant — click through rate, aggregate play duration, fraction of plays with short duration, fraction of content viewed (how far did you get through a movie or series), etc. Sure enough, we saw that we could widen the audience and increase engagement by using different artwork.”

    With this initial A/B test, Netflix established that significant positive results were possible by optimizing title artwork. Netflix then went on to run more elaborate tests with larger sets of content titles. These tests measured additional factors as well, to verify that the success of the optimized artwork was actually increasing total viewing time, and not simply shifting hours away from other titles.

    I can A/B test like that, too?

    Yes! If you have a hypothesis about a design change that might improve your users’ experience, then you can set up an A/B test to try it out.

    You’ll want to be careful to:

    • Minimize the differences between Version A and Version B. If there are multiple or unrelated design changes on the same screen, you won’t be sure which design element was responsible for better performance.
    • Make sure you can measure the results before you start the test. Determine what data would indicate success, and make sure you can automatically collect that data. Micro conversion goals are often not as trivial to track as macro conversion goals. You may need to set up a custom Goal in Google Analytics, or what have you.
    • Consider the bigger picture. Your redesigned button might get more click conversions, but perhaps users are also leaving your site sooner. Think about how your design change may affect your other goals, and make sure you’re collecting that data, too. Consider running additional tests.

    3. A/B testing custom algorithms

    A core part of the Netflix user experience is that much of the UI is customized for each user. Most of that customization is accomplished via algorithms. Algorithms can be rather simple or extremely complex, but their output can still be A/B tested.

    Remember a few paragraphs ago, when Netflix tested which thumbnails would work best for all their users? That’s already old news. In 2017, they moved to a system that selects thumbnails for each user based on their personal preferences. For example, if Netflix knows you like Uma Thurman movies, you’re more likely to get the Pulp Fiction thumbnail with Uma Thurman staring back at you, making it statistically more likely you’ll watch it. This personalization is driven by an algorithm that Netflix can improve over time.

    Any time Netflix wants to adjust one of their personalization algorithms (or their adaptive video streaming algorithms, or their content encoding algorithms, or anything) they A/B test it first.

    Do I need to test algorithms?

    You may not actually be employing any custom algorithms in your app or website. If you are, however, you should seriously consider running an A/B test for any algorithm adjustments you make.

    Let’s say your company makes a news aggregator app that recommends articles for the user to read. Currently, each article recommendation is based on the user’s pre-set preferences and their overall reading habits.

    If you had reason to believe that your users prefer reading articles of the same type in batches, you could modify the algorithm to give higher priority to articles that are similar to the article the user just read. You could run an A/B test to see if your new algorithm is more effective at achieving your goal of increasing app usage.

    Before you can run such a test, you’ll need to actually implement the new version of the algorithm (while not interfering with the current algorithm). This may be trickier than your average design change, so tread carefully.
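
    Here is one way that could look, sketched in Python with invented names. The new ranking lives alongside the old one behind a variant check, so users in the control group never touch the new code path.

        import hashlib

        def assign_variant(user_id, experiment, variants=("A", "B")):
            """Hash-based bucketing, as in the earlier sketch."""
            digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
            return variants[int(digest, 16) % len(variants)]

        def rank_articles_current(user, articles):
            """Version A: the existing ranking by pre-set preferences and habits."""
            return sorted(articles, key=lambda a: a["preference_score"], reverse=True)

        def rank_articles_batched(user, articles):
            """Version B: boost articles similar to the one the user just read."""
            last_topic = user.get("last_read_topic")
            def score(article):
                boost = 0.5 if article["topic"] == last_topic else 0.0
                return article["preference_score"] + boost
            return sorted(articles, key=score, reverse=True)

        def recommend(user, articles):
            # Each user is routed to exactly one version for the whole test,
            # so the control group keeps running the untouched original path.
            if assign_variant(user["id"], "batched-recommendations") == "B":
                return rank_articles_batched(user, articles)
            return rank_articles_current(user, articles)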

    More things to consider:

    • Make sure your Version B implementation has been sufficiently software-tested. Bugs in your new version will negatively affect the results and may require you to throw out the test.
    • Try to ensure that your Version B latency is very similar to the original’s. If your new algorithm takes noticeably longer to process than the original, then your A/B test will largely be testing the user experience implications of the slowdown, and not the output of the new algorithm vs. the old.
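
    One cheap way to check the latency point (sketched below with made-up fixture data, and assuming the two ranking functions from the previous sketch are defined) is to time both versions offline on realistic inputs before the live test begins.

        import random
        import statistics
        import time

        def median_latency_ms(rank_fn, user, articles, runs=200):
            """Median wall-clock time for one call to rank_fn, in milliseconds."""
            samples = []
            for _ in range(runs):
                start = time.perf_counter()
                rank_fn(user, articles)
                samples.append((time.perf_counter() - start) * 1000)
            return statistics.median(samples)

        # Made-up fixtures; in practice, use production-sized, realistic inputs.
        articles = [{"topic": random.choice(["tech", "sports", "culture"]),
                     "preference_score": random.random()} for _ in range(5000)]
        user = {"id": "user-123", "last_read_topic": "tech"}

        # rank_articles_current / rank_articles_batched come from the previous sketch.
        print("Version A:", median_latency_ms(rank_articles_current, user, articles), "ms")
        print("Version B:", median_latency_ms(rank_articles_batched, user, articles), "ms")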

    Conclusion

    Netflix is a monster at A/B testing. They’ve got data scientists, design teams, and engineers working on potential service improvements all the time, and they A/B test everything before it becomes the default experience for their massive user base. They know it’s too risky not to.

    You and your company probably do not have the resources to build the impressive experimentation platform that Netflix has. Not yet, anyway.

    But you, too, can reap the benefits of A/B testing for your products, at your own scale and at your own pace. And you can take Netflix’s core lesson to heart: always be looking for ways to optimize your products.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Neura / Beta Testing Case Study

    Neura delivers high-performance technology by optimizing battery consumption.

    Perhaps no technology will impact our future more than Artificial Intelligence. If 2017 was the “Year of The Bot”, 2018 may be the Year the Bot finally earns the title of “Smart”. A record-breaking holiday sales season saw the Echo Dot become the #1 selling product on Amazon, while Google has sold more than one Google Home per second since the product’s debut in late October.

    In recent years, AI technology has progressed from a joke to genuinely smart. Now it’s hard not to be impressed as our devices begin to integrate seamlessly into our lives, understand what we say, and actually improve our experience of interacting with the products around us. Despite these advances, it’s only the beginning, and the future for AI technology is bright.

    One company leading the charge is Sunnyvale, California-based Neura, Inc.

    Neura is a personal AI solution that changes the way digital products interact with people. Neura adds user awareness to apps and devices. This enables their customers to deliver a more personalized experience to their users based on their behavior, habits, and activity. With Neura, digital products can boost engagement and retention by personally reacting to critical moments throughout the user’s day.

    Because Neura is an ingredient technology embedded into other software, it’s vitally important that it operates flawlessly, quickly, and efficiently. When Neura needed a partner to manage research and testing for battery consumption in the real world, Neura CTO Triinu Magi turned to BetaTesting.com.

    “Because Neura’s APIs are accessed via its SDK, it is very important to keep our resources consumption on the user’s device as low as possible,” said Magi. “Keeping a low battery consumption is imperative to our customers as it has great impact on the user experience. We work very hard to be able to deliver our services with high performance levels and minimum battery usage (less than 1%). We run many tests internally and keep improving our performance but before we roll out a new version to our customers, we must check how it performs in the ‘real world’ on a variety of devices and users in different locations. Our ability to check the battery consumption is limited to the devices that we have and the internal testers we can use.”

    While testing products in-house or with automated testing tools can prove very helpful, there is no replacement for real-world testing on a wide array of devices and environments. BetaTesting.com helped recruit users within Neura’s target market, who ran the SDK for multiple days (both weekdays and weekends) to thoroughly exercise the product in the real world.

    “From experience we’ve learned that some issues are detected only after experiencing the SDK in different locations and devices. We need to make sure that problems are solved so they don’t happen with our customer’s users,” said Magi.

    BetaTesting’s test results allowed Neura to quickly diagnose and improve battery consumption, which led to multiple test cycles to further optimize battery performance.

    “During the first test cycle, we received valuable information that led to product changes. We went on to add another test cycle. The process was very easy and convenient. BetaTesting can move very fast and adapt to our changing needs. We informed BetaTesting that we want to lengthen the test and they were fast to respond and react with a sufficient number of testers. The testers that completed the test were good – completed all the questions and provided a lot of feedback. It helped us understand better how the product works in the real world. We improved our battery consumption and also our monitoring capabilities,” said Magi.

    As a company that focuses on continually improving the user experience, Neura has found a consistent partner in BetaTesting.com.

    “We have worked with BetaTesting a few times in the past and have been pleased with the process. BetaTesting offers a good value compared with the competition. They provide a guarantee to the numbers of testers that will finish the cycle and professionally manage the process, minimizing the resources required for us to manage it. The BetaTesting platform is also very easy to use (as a test manager). The results are shown immediately and clearly. The ability to connect with the user to ask specific questions is also valuable to us.”

    Learn about how BetaTesting.com can help your company launch better products with our beta testing platform and huge community of global testers.

  • Drupe: Beta Testing Case Study

    Drupe leverages a successful beta testing period to reach millions of users and a 4.6/5.0 rating on the Play Store.

    More than 23 billion text messages, 30 billion WhatsApp messages, 10 billion phone calls, and countless emails, IM messages, tweets, and Snapchats are exchanged around the world every day.

    Our mobile technology has forever changed the way we communicate. But as we deal with an increasing array of communication options, it has become difficult and time-consuming to manage interactions across apps with separate contact lists and unique experiences.

    Innovative Israel-based startup drupe has been working on solving that problem. Now with millions of downloads and an average user rating of 4.6/5 stars, drupe brings your apps and contacts together in an intuitive, unified experience that helps you connect with one swipe and less hassle.

    Since the company’s inception, drupe has been committed to understanding and improving the user experience. As part of the pre-launch research and marketing process, drupe ran an intensive three-month beta test before the official launch.

    “drupe’s beta stage took 3 months and was a critical element in our successful launch,” said drupe’s CEO, Barak Witkowski. BetaTesting.com played a key role in this beta testing stage, helping drupe secure over 300 quality real-world testers for feedback on the user experience.

    “BetaTesting.com helped us gain quality beta testers quickly and at scale. It’s pretty much the holy grail when you go on beta,” said Witkowski. “We tested the app’s visual language (color scheme, icon and so forth), text, UI and UX. We’ve done a lot of changes based on feedback we received in the beta. We were also able to work on our retention, based on real users’ feedback, before the public launch, which was great.”

    In addition to UX and UI feedback, drupe’s large beta pool allowed the team to catch difficult bugs and technical issues through a wide range of devices and real-world scenarios.

    “We identified some UX issues and bugs on specific devices that we didn’t catch before the beta,” said Witkowski. “We also changed some elements in the UX that were found to be either confusing or annoying, such as eliminating constant notifications in the ‘drawer’.”

    So what beta advice does the drupe team have for new product managers and marketers?

    “Running a successful pre-launch beta stage is a must have for every app that aims high. Our two cents are: Make sure you have enough quality testers so you get insights based on valid data, ask the right questions, and blend in your marketing – a beta launch is still a launch!”

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Xerox / Beta Testing Case Study

    Xerox Transportation launches mobility pilot after beta testing in select cities across the US.

    It shouldn’t be a surprise that the company most responsible for creating the modern personal computer is continuing to invest in research and development projects capable of changing our world. Since 1970, through its R&D labs, Xerox has built a distinguished reputation as an innovative technology leader, helping invent the graphical user interface, Ethernet, laser printing, and even object-oriented programming.

    Now Xerox is focused on revolutionizing the way we travel. Since 2010, the Xerox Innovation Group, spread across four research centers globally, has been designing new technologies to enable smarter mobility. Xerox is helping cities all over the world to optimize their transit systems and traffic flows, and get people where they need to be, quicker and with less hassle.

    A central part of this initiative is the Xerox Mobility Marketplace – an ambitious vision for a unified travel subscription for city dwellers and visitors, branded for a specific city, integrating all the different modes of public and private transportation seamlessly into one easy-to-use app.

    Leonid Antsfeld is in charge of Business Development for Transportation at Xerox Research Center Europe. As the Product Owner responsible for the Xerox Mobility Marketplace, Antsfeld faced a unique challenge: recruiting the right beta testers for the first real-world tests of the city-specific apps in Los Angeles and Denver.

    It was important that testers represented a range of demographics and device types, and included a mix of public and private transportation users, not only those who primarily relied on their cars.

    “The first question was: where do we find testers?” asked Antsfeld. “The key challenges were defining and recruiting the right audience, determining which features to test, what questions to ask, and how to interpret the answers. We had to analyze and understand the feedback – what is important and what is less important?”

    When searching for a beta testing partner, Xerox turned to BetaTesting.com. “BetaTesting left the best impression and they were recommended by our partners that used the service in the past,” said Antsfeld.

    From the early stages of planning through execution, BetaTesting worked with Xerox to design a successful beta testing campaign. BetaTesting helped define the target audience, create the beta tasks and feedback questionnaire, recruit and coordinate with the testers, and summarize and help analyze the results.

    “The BetaTesting.com campaign was extremely helpful, literally from the first minutes of the launch when we immediately started to receive instant feedback through the app,” said Antsfeld. “Besides fixing many bugs, we also learned what potential users liked and disliked in the app, and what direction they would like to see it going in the future.”

    Today the future for the Mobility Marketplace looks bright, and the vision is more important than ever. Urban millennials are interested in simplifying their lives with flexible, environmentally friendly travel options and participating in the sharing economy through the use of car- and bike-sharing services. Just as important, there are huge advantages for cities that prioritize the availability of different transport choices: pollution reduction, improved health and happiness, fewer accidents, and the potential for sustainable population growth with a high quality of life.

    “We took testers’ feedback very seriously and are already working on integrating requested functionality in the next versions,” said Antsfeld. “We aim to have bi-weekly updates with bug fixes and minor improvements, and once a quarter to release major version updates with new functionality. We are definitely planning to continue to work closely with BetaTesting before any major release.”

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.