• Announcing ErliBird + Instabug Integration & the New Instant Bug Test

    We’re happy to announce that ErliBird (now BetaTesting.com) and Instabug have joined forces to make it easier than ever to recruit bug testers and get actionable bug testing results in as little as one hour.

    On ErliBird, we have created a new Instant Bug Test package, which makes it easy to link ErliBird and Instabug. This new test provides a great way to recruit targeted testers from ErliBird to conduct on-demand bug testing for iOS and Android apps. Once you integrate the Instabug SDK in your app and connect it to ErliBird, your testers will be able to submit bugs directly in your app, with additional contextual information to help you discover and squash important bugs.

    The Instant Bug Test (designed exclusively for Instabug customers) is inexpensive, and once your company is approved for self-service submission, you’ll typically start to see results within just one hour of starting a test. When planning your timeline, note that we will manually review your company’s first test submissions, which can take 3-24 hours. Once you have a clear understanding of how to design bug tests using our tool, we’ll activate your account for completely self-service submissions.

    In addition to the Instant Bug Test, ErliBird is also offering a 15% discount on any ErliBird Beta Testing package to Instabug clients that require more in-depth testing. Just contact ErliBird and mention the Instabug discount. Likewise, Instabug is providing ErliBird customers with a 15% discount on all paid plans.

    What is possible with the Instant Bug Test?

    Exploratory Testing

    In an Exploratory test, your testers will explore and report bugs they discover, without any specific guidance from you. This is the easiest and fastest way to get started with an Instant Bug Test with minimal setup. You can optionally provide a basic description of what you’d like testers to focus on, along with a list of known bugs and issues to avoid.

    QA / Functional Test Cases

    Run a functional QA test with a set of test cases to test specific functionality and features within your app. When users run into an unexpected or failed result, they’ll submit a bug report detailing the issue. Mix functional test cases with UX- or UI-based survey questions for a robust and powerful test.

    UX Feedback (with Bug Reporting)

    Design a custom test process by creating a set of tasks and survey-based questions for your testers to follow. Whenever a tester runs into a bug or issue during this process, they will submit a formal bug report using Instabug. You can decide how general or specific you’d like the test process to be. For example, you can provide only a few high-level instructions (e.g., register, complete your profile, etc.), or you can provide very specific test cases (e.g., click on the package icon. Do you see an image of the package?).

    How ErliBird + Instabug Integration Works

    During the test setup process on ErliBird, you’ll be given a WebHook URL. Within Instabug (Settings->Integrations->WebHook), create the WebHook to send your bug reports to ErliBird. See this training video to learn how.

    Instant Bug Test FAQ

    What is the Instant Bug Test?

    The Instant Bug Test allows you to recruit real-world bug testers to test your app and generate bug reports and feedback. Once your company is approved for self-service submission (without ErliBird review), you’ll be able to launch tests on-demand and start to get results within 1 hour. Your test can be Exploratory (testers explore the app on their own to discover and report bugs), Functional (driven by formal tasks and test cases), or a mix of the two.

    Can we customize our test process?

    Yes, you can design your own set of tasks, questions, and formal test cases.

    Do we need to design our own test?

    No, if you’d like, you can use our Exploratory Bug Testing template which requires absolutely zero setup of the test design process. Users will explore your app on their own to discover and report bugs they encounter.

    Can we target a specific device or operating system version?

    At the launch of the Instant Bug Test, we’re allowing some customization of operating system versions and device types (e.g., most popular devices, older phones, etc.). Shortly, we’ll be launching the ability to fully customize operating system versions, device manufacturers, device models, and more. Better still, these are not lab-based devices or even cloud devices. These are real-world users who own and use these devices every day.

    Need More Robust Beta Testing?

    Instabug Customers Get a 15% Discount On Our Beta Testing Packages.

    Need something more in-depth? We also offer our full Beta Testing product with Instabug integration. Our beta tests provide in-depth testing and user experience feedback from targeted real-world users over 1-week test iterations. We provide a Project Manager and bring our industry expertise to designing a successful beta process. While our new Instant Bug Test is a single-session test (i.e., testers use your app in one sitting to explore and report bugs over 30-60 minutes), our Beta Testing process is perfect for tests that last days or even weeks and include multiple sets of tasks/questions during the testing process.

    This makes it possible to run tests that match your real-world organic usage scenario. For example, need 100 testers to play your new Steam game together and provide feedback on the experience? No problem. Need 50 testers to use your new News Reader app each day for a week? Solved. Need users to track their sleep with your sleep tracking app and help hunt down all of the bugs and technical issues? You get the idea (we can do that).

    About Instabug

    Instabug is a comprehensive bug reporting and in-app feedback SDK that is helping companies worldwide build better apps. With just one line of code, beta testers and users can report bugs and provide developers with detailed feedback simply by shaking their phones. The level of detail Instabug’s SDK captures with each bug report has attracted tens of thousands of companies, including Lyft, T-Mobile, and eBay, to rely on Instabug to enhance their app quality and iterate faster.

    Check this link to learn more about how Instabug can help you receive better feedback.

    About ErliBird

    ErliBird gives you the power to beta test with real people in real environments and collect on-demand user feedback for Android, iOS, websites, desktop apps, and tech products. Powered by a global community of 100,000 real-world testers.

    Learn more about ErliBird’s Beta Testing offerings.

  • Real-Life Lessons from Design Testing Failures

    Whether you’re a startup or a mature corporation, you should user-test new features and UI designs. Sometimes design testing will end in failure, but you can learn from your mistakes, as well as others’. Here are some lessons learned from real-life design testing failures.

    When you redesign your product, you do it to satisfy your users in a way that meets your business goals. If you fall short, it could cost you. So it pays to test your new features and UI designs with some of your users before rolling those changes out to the whole world.

    You’ll find, however, that design tests often fail—meaning that your new design didn’t perform better than the old one.

    Take solace in the fact that those failures happen to everyone. Take heart in the fact that there’s a lot you can learn from those failures. And take some of the lessons learned by others and use them as your own.

    Here are some real-life examples of design test failures and lessons learned from two companies: a startup you may have never heard of, and a juggernaut that you definitely have heard of.

    Groove’s neutral results

    Helpdesk software provider Groove wanted to increase their conversion rates: more landing page visitors signing up for the free trial, and more marketing email recipients actually opening the email.

    The startup experimented with different design and content tweaks. The changes included color and copy choices on website pages and subject-line text choices on marketing emails.

    The changes Groove experimented with were not random ideas. They came from conventional wisdom and credible sources who found legitimate success employing similar changes. All sounded promising.

    Groove individually tested their many design variations using A/B tests.

    Most of them failed.

    When failure looks like no difference at all

    Failures aren’t always dramatic.

    Sometimes you test out that new design you think will work better… and it works just about the same as the old design. It’s not a spectacular failure, it’s just “meh.”

    Groove experienced a lot of these shoulder-shrug “failures”. Company founder Alex Turnbull straightforwardly calls them “neutral results”.

    Groove’s design testing included trying out different colors on call-to-action buttons, and listing prices in different suggested forms (e.g., “$15” vs. “$14.99”). Groove tested these and other types of design variations, but often found the results to be inconclusive.

    When an individual test has a neutral result, it can be difficult to draw any meaningful conclusion.
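
    To make “neutral” concrete: a common way to judge whether an A/B difference is real or just noise is a two-proportion z-test. Below is a minimal sketch with made-up numbers for illustration; it is not Groove’s data or tooling.

    // Minimal two-proportion z-test sketch (illustrative numbers, not Groove's data).
    // A result is "neutral" when the observed difference could easily be noise.
    function zTest(convA: number, totalA: number, convB: number, totalB: number): number {
      const pA = convA / totalA;
      const pB = convB / totalB;
      const pooled = (convA + convB) / (totalA + totalB);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / totalA + 1 / totalB));
      return (pB - pA) / se; // z-score of the observed difference
    }
    // Example: 500 visitors per variant, 50 vs. 55 conversions.
    const z = zTest(50, 500, 55, 500);
    // |z| < 1.96 means the difference is not significant at the usual 95% level,
    // i.e., a "neutral result".
    console.log(`z = ${z.toFixed(2)}, significant: ${Math.abs(z) >= 1.96}`);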

    For Groove, lessons arose from the aggregate of their design testing efforts.

    Lessons

    • Expect failures. For each winning result, you’ll rack up a whole bunch of failures.
    • Don’t rely on “proven” design tricks. Tactics that worked great for other companies:
      • May not apply to your particular audience
      • May not work well in your particular design
      • May have lost effectiveness over time
    • Optimize your designs AFTER you’ve settled on a well-researched approach that is specific to your users.

    Quote

    From Groove Company founder Alex Turnbull:

    […] one of the most important lessons we’ve learned as a business is that big, long-term results don’t come from tactical tests; they come from doing the hard work of research, customer development and making big shifts in messaging, positioning and strategy.

    Then A/B testing can help you optimize your already validated approach.

    Netflix’s failed feature

    Netflix had an idea for a new feature to add to their home page that would increase new user sign-ups.

    The folks at Netflix are experts at A/B testing, so naturally they A/B tested their new design before rolling it out to the whole world.

    It failed.

    Undeterred, the product design team created variations on design and approach for the new feature. Over the course of a year they tried these variations out in four additional A/B tests.

    They all failed.

    What’s going on?

    The feature in question: allow potential users to browse Netflix’s catalog of movie and TV offerings before signing up.

    Netflix had very strong evidence to believe this feature would increase user sign-ups. Users were requesting the feature in significant numbers. The design team liked the idea, too. It made a ton of sense.

    But in every test, the home page with the browse feature performed worse than the home page without the feature. The sign-up rate was simply lower whenever the new feature was available.

    Over time, Netflix came to understand why the browse feature was a failure. This included learning more about their users and learning more about their own product. They were hard-fought lessons, but valuable ones.

    Allowing potential subscribers to see what Netflix had to offer? At its core, this was still a good idea. But the “browse” feature was the wrong approach.

    Part of the reason for this, the team learned, was that to really see what Netflix has to offer, you have to actually use Netflix. The customized experience of the service is the key, and that experience is not captured by impersonally browsing the media catalog.

    The barrier to entry for potential users was already low: a free one-month trial. So the goal of the Netflix homepage should be to direct potential users to the free trial. The browse feature, in all of its incarnations, was actually a distraction from the conversion goal, no matter how it was designed.

    In the end, a static background image that gave users an impression of Netflix’s breadth of offerings was better than a full-fledged browse feature. And directing visitors’ focus toward the free-trial sign-up was far more effective.

    Lessons

    • Users don’t always know what they need
    • Fully understanding what makes your product experience special to your users will help you make better design decisions
    • Don’t distract your users away from the goal
    • Test your new design before rolling it out, even if you’re *sure* it will work

    Quote

    As Product Designer Anna Blaylock said about the string of tests: “The test may have failed five times, but we got smarter five times.”

    Design testing: Better to try than not try

    All in all, testing designs before rolling them out to all your users is wise, and relatively inexpensive:

    • only a small portion of your user base sees the experimental interfaces while the tests are being run, so any negative impact is minimal;
    • despite the many likely failures, there are valuable lessons to be learned…
    • not to mention the occasional “big win” successes that make your efforts worth it.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Nvizzio / BetaTesting – Game Play testing Case Study

    Nvizzio Creations conducts successful beta game playtesting for new Early Access Steam game.

    Clicking the button to officially launch a game to the public is a thrilling and high-stakes moment for every game developer – especially in the age of instant consumer feedback. For apps and games launching into public marketplaces like the App Store, Play Store, and Steam, poor reviews and ratings can quickly wreak havoc on your brand and have a long-lasting impact that can be difficult to overcome. It’s important to get it right the first time, and this is never more true than with a new brand’s first major release.

    So when Montréal-based Nvizzio Creations set out to launch their first self-published game through Steam, they planned a thorough beta testing phase prior to their official Early Access launch date, and partnered with BetaTesting for remote multiplayer game tests with real gamers.

    “Player feedback is vital to any game, but even more so with a multiplayer experience,” said Brent Ellison, Game Designer.

    Testing a multiplayer game presents a unique challenge during the beta testing phase. To coordinate a successful real world beta test, BetaTesting worked to recruit targeted gamers and designed and executed multiplayer tests with 100+ gamers using a wide mix of real-world devices, graphics cards, operating systems, and hardware configurations.

    “It was important for us to see how people received the game in their home environment,” said Kim Pasquin, General Manager at Nvizzio Creations. “We simply knew we did not have the capability in house to see how the product would react with 100+ people playing together. Getting data from our real world users was something we really wanted to do before launching the game.”

    When choosing a beta testing partner, a key factor for Nvizzio was the ability for BetaTesting to recruit the right candidates for the test. “Finding candidates that would potentially be interested in our product in the real world and getting their feedback and data was our principal decision factor,” said Pasquin.

    When the test was complete, the results proved immediately helpful.

    “Working with BetaTesting was fantastic and the test was very useful,” said Pasquin. “During beta testing, we uncovered several technical issues with the game. We discovered lag, ping and server connection issues that we did not experience during in-house testing with high-end computers. Getting testing candidates with real-world equipment was invaluable to us.”

    “The results allowed us to greatly improve the game. We were able to fix most of our technical issues. We also tested those fixes with a second test where players saw huge improvements,” said Pasquin.

    The Early Access launch has proved incredibly successful to date, with a “Very Positive” rating and close to 90% of users recommending the game. With continued player feedback and ongoing development, the future is bright for Eden Rising.

    “Not only are we stepping into self-publishing for the very first time, but we have worked hard to create something truly special. I believe players will love the co-op experience we have created in this lush, vivid world.” said Jeff Giasson, President of The Wall Productions.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Try a 5 Second Test to See if Your UI Design Makes the Right Impression

    If your visual design needs to make a specific impression on your users, don’t rely on your project team’s assessment of the UI. Try this quick 5 second test with actual users to confirm your brand goals are being achieved.

    Is your product strategy dependent on achieving a specific initial brand impression?

    There might be any number of reasons why you want your website or app to immediately elicit a particular emotional reaction from your users.

    It might be that your research indicates your web app has to appear fun or interesting enough to keep your target audience from leaving. Or maybe your website needs to immediately feel modern and authoritative so that its content is taken seriously. Or perhaps it’s imperative that your mobile app seem professional and trustworthy so that your users are willing to create an account or enter their credit card information.

    Regardless, if your answer to the brand question was YES, then you should run a “5-Second Test” to see if your visual design is successful in achieving the desired effect with actual users.

    Your users didn’t sign off on your visual design

    Your product might have a very nice visual design that totally meets your stakeholders’ vision.

    And if your product doesn’t have specific high-priority branding objectives, then getting sign-off approval on your design might be all you really need. You can then move forward with usability testing and beta testing, and you’ll only make changes to your visual design if you encounter specific problems.

    However, if your visual design is meant to induce a specific emotional effect from your users, then you’ve got more work to do.

    Sure, your project team thinks the visual design accomplishes the required brand impression. That’s a nice place to start. But you need to test your design with actual users to make sure that they agree.

    The 5 second test

    The 5 second test is simple, and the name spoils a lot of the mystery.

    It goes like this:

    1. Prep the user to view a visual design
    2. Show your design to the user for just 5 seconds and then stop
    3. Ask the user questions about the design they just saw

    Despite the short, non-directed period of time viewing the design, users will be able to give you useful and meaningful data about their impression of the design.
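
    If you ever need to run one yourself rather than through a dedicated tool, the timing part is trivial to script. Here is a minimal in-browser sketch; the element IDs are assumptions for illustration, not part of any particular testing tool.

    // Minimal sketch of the 5-second exposure step in the browser.
    // Assumes a page containing an image of the design (#design) and a hidden
    // question form (#questions); both element IDs are illustrative.
    const design = document.getElementById("design") as HTMLElement;
    const questions = document.getElementById("questions") as HTMLElement;
    function startFiveSecondTest(): void {
      design.style.display = "block";      // show the design
      questions.style.display = "none";    // keep the questions hidden
      setTimeout(() => {
        design.style.display = "none";     // hide the design after exactly 5 seconds
        questions.style.display = "block"; // then reveal the questions
      }, 5000);
    }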

    But why does this work?

    Reliable opinions at a glance

    Meaningful split-second judgments are a real thing. Users can reliably form a judgment of a visual design in as little as 1/20th of a second (50 milliseconds).

    At a glance, users will subconsciously make an instantaneous judgment on the general visual appeal, and even some specific aesthetic factors, of the design. This judgment, made in a fraction of a second, is highly consistent with the judgment they would make if given more time to look at the design.

    The design-review time in your 5 second test is a hundred times as long as a 50-millisecond glance (math!), but is still, you know, a pretty short time.

    In 5 seconds, the user gets to take in the visual design, get an impression, and feel some of its impact. The user doesn’t really have time to inspect, read content, or dig deeper. Which is good—the impression stands alone, fresh in the mind.

    Example and tips

    The Nielsen Norman Group has a short video about the 5 second test, which includes some helpful tips.

    I’ll add an additional tip here: don’t make your post-view question period too long or too complex. You want to capture the information you need while the design is still fresh in the participant’s mind. The participant doesn’t (and shouldn’t!) get to see the design again while they answer.

    [Nielsen Norman Group (NNgroup) video on the 5-Second Test, from Youtube]

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • When Beta Testing, Verify Brand Experience at the End

    When beta testing your app or website with real users, you’ll want to save your brand experience questions for the end of the session. Here’s why.

    What are your users’ impressions of your product? How do they feel about it?

    Do they think of your product as fun? Corporate? Simple? Friendly? etc.?

    These types of questions relate to your users’ brand experience, which is something you can verify.

    Brand experience

    Your users’ brand experience with your digital product includes their attitudes, feelings, emotions, and impressions of that product. (Brand experience also affects—and is affected by—external interactions, your larger brand, and your company as a whole. But for this article we’ll focus on individual products.)

    Brand experience is generally important, though it may not be a high priority for every project or every company. Your product team, however, might find it important enough to directly and specifically evaluate. This might be because:

    • You want to collect brand-related feedback and experience data for future consideration; or
    • You have specific goals regarding brand-related attributes you need to verify are being met, so that your product can be successful.

    Early testing vs beta product testing

    If it’s vital to your business goals that your product elicits particular reactions from your users—then you are surely designing your product with the intent of giving those impressions and drawing out those emotions.

    But you can’t just rely on your intent. You need to test your product to see if real users’ interpretation of your brand matches your intent.

    Versions of such testing can occur early and late in the project lifecycle.

    Early testing

    Your visual design is key to your users’ first impression of your product.

    Fortunately, you can evaluate your users’ brand impressions rather early in a project, before your team writes a line of code. Once you have high-fidelity design mockups, or a high-fidelity prototype, you could, for example, run 5 second tests with your users to verify that your design makes the right first impression.

    Beta testing (or later)

    Within your product, your visual design is not the only thing that makes a brand impression on your users. The rest of your design—navigation, animation, interactions, etc.—as well as your content contribute to the user’s opinions and reactions as well.

    Once your product reaches Beta, you have an opportunity to verify your target brand traits in a deeper way, after your users interact with your working product.

    Emphasis on after.

    Brand experience assessments at the end of beta sessions

    In your beta test sessions, the best time to collect brand experience data is at the end… and only at the end.

    Why at the end?

    Users can get a pretty good brand impression from a simple glance at your visual design. But interaction with your product will affect your users’ brand perception as well.

    For example: an interface whose visual design gives off strong trustworthy and professional vibes might see those brand traits eroded by wonky UI interactions and amateurish text.

    Kathryn Whitenton of Nielsen Norman Group gives this more straightforward example: “a design that appears simple and welcoming at first glance can quickly become confusing and frustrating if people can’t understand how to use it.”

    After performing some realistic tasks in your product, your beta test users will have formed a deeper impression of your product’s brand traits. In your beta testing, the cumulative effect of your product’s design and content is what you want to capture. You maximize this by assessing brand experience at the end of the test session.

    Why not assess at the beginning AND the end?

    The instinct to evaluate twice and compare the data is a good one. Knowing where the brand impression starts and seeing how it changes after your users perform beta tasks might help you focus in on what you need to improve.

    But here’s why you shouldn’t assess at both the beginning and the end, at least not in the same session with the same user:

    When you ask users to describe and/or quantify their reactions to your design, they will then consciously form and solidify opinions about your design. Those opinions—while not unmovable—would self-bias the user for the remainder of the test.

    Thus, asking about brand experience at the beginning (or in the middle) is likely to affect the opinions you’d get from those same users at the end.

    However, if you do wish to compare “both kinds” of data, you could instead:

    • Split beta testers into two groups: those who are evaluated at the beginning, and those who are evaluated at the end; or
    • Compare your end-session beta test data with your previous pre-beta visual design impressions data (assuming you have some); or
    • Re-use some of your pre-beta visual design test participants in your beta testing. If it’s been a few months (for example) between the visual design tests and the beta tests, those participants probably don’t remember many details of the earlier testing, and the impact of self-bias would be minimized.

    One last thing: watch out for bugs

    You’re testing a beta product, so of course there might be some bugs your users encounter. Keep in mind that bugs may affect your users’ impression of the product for a number of brand traits you may care about.

    Factor the impact of beta bugs into your post-testing analysis, and/or retest with subsequent product iterations.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • How to Turn Beta Testers Into Real Users

    Find the right beta testers and they’re more likely to become real users.

    When testing for usability or UX, or running a QA bug test, you typically want to run the tests with participants who match your target audience as closely as possible. When you obtain feedback from users who are representative of the people who will actually be using your finished product, you can draw meaningful conclusions from the data and take the right actions to improve your product.

    Although testing with your true target audience is sometimes not possible (e.g. very niche audiences), when it is, you also get an additional benefit: Your testers are more likely to turn into real users. In fact, your testers are probably more likely to become active users of the end product than organic users that you might acquire through other methods.

    The reason the right testers are more likely to become real, active users and ambassadors of your product is that they will probably feel a sense of buy-in. If they have provided you with feedback and have helped you improve the product, they will feel like they have some ownership in the end result. This makes them more likely to continue using your product and even spread the word about it. They might brag to their friends about the fact that they got early access and helped with testing, which will result in more exposure for you.

    Now that we’ve established why it is a good idea to test with your target audience, let’s take a look at where to find the right beta testers.

    Here are some of the best FREE resources to find beta testers that might be a great fit for you:

    • Reddit – Reddit is one of the best resources to find any sort of niche audience. Subreddits are communities within Reddit for different interests, and it is very likely that there are subreddits related to your niche. You can promote yourself or your product by engaging with and adding value to these communities, or simply by running ads within them.
    • Hacker News – A social news website with an entrepreneurial / technical audience (developers, startups, etc.) that is generally eager to help. This could be a great website to promote your startup or product to potential beta users.
    • Blogs / Press – Here is a good list.
    • Daily featured startup email lists – Featuring your startup on popular sites like BetaList and Product Hunt can be a great way to add a lot of beta users to your email list. The users you find on these sites tend to be early adopters in general – so people who sign up for your list are likely to be enthusiastic about providing feedback and are self-selecting as part of your target market.
    • Beta testing platforms (not free) – Beta testing platforms like BetaTesting and others are a great option for finding the specific users that you’re looking to target. At BetaTesting, we can help you target your ideal audience through demographic targeting, and a screening survey for refined targeting on any criteria: interests, lifestyle, etc.
    • For specific countries – In addition to the above-mentioned resources, there are some things you can do to acquire users from specific countries. For example, you can advertise in those countries through Google AdWords or Facebook, post on blogs/press sources specific to that country, and target users in their local languages.

    Finally, just because you’ve managed to find the testers that match your target audience doesn’t mean your work is finished in turning them into real users. The likelihood that they become real active users depends on one thing: Are they excited about your product and do they want to continue using it? Treating your testers as customers from the very beginning can help immensely in ensuring that they remain excited.

    Here are some tips to follow to get better beta engagement:

    • Remove obstacles – Any inconvenience in using your product can turn off users easily. Go out of your way to make sure that the process of testing / using your product is as easy and enjoyable as possible.
    • Be thankful – Don’t take your users for granted. Appreciate the fact that they are taking time to provide you with valuable feedback. Always make sure to read their feedback and respond directly where applicable.
    • Communicate – Communication is key. Keeping the users updated is not only helpful in the feedback process, but it will also help in keeping them excited about the product. You can provide a quick recap on the test, update them on any changes, and let them know your launch date etc.
    • Distribute – Be sure testers have access to the full public product when you launch. They might have had access to the “beta” version through a third-party app like TestFlight, so let them know when the finished version is available through regular channels like the App Store, Play Store, etc.
    • Improve – If for some reason, you see that users aren’t engaging as much as you’d expect, it’s a sign that they are not interested / excited or that they don’t find the product useful. It might be a good idea to read some of their feedback and improve the product or make necessary tweaks.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Communicating UI Design: Problems with Lorem Ipsum Placeholder Text

    Avoid the pitfalls of pasting improbable dummy placeholder text (lorem ipsum or otherwise) into your user interface designs.

    When communicating UI design, you’ll sometimes use temporary placeholder text (“dummy” text) in your wireframes, mockups, or even interactive prototypes.

    Below I list some of the problems with pasting perfect pieces of placeholder text all over your UI design.

    Note: in this article, placeholder text refers to temporary dummy text (e.g., lorem ipsum) that appears in UI design mockups and documentation. It does NOT refer to the placeholder text labels that appear inside form fields in a published app or live website.

    Common placeholder text problems

    Unrealistically concise titles, labels, descriptions, etc.

    Intentionally or unintentionally, placeholder text often depicts headings, labels, text blocks, etc. as shorter (i.e., containing fewer characters) than real-life content will actually be in the final product.

    This might happen due to bad assumptions, wishful thinking, or a conscious or subconscious desire to make the design look nicer.

    • Invites designers and reviewers to ignore and defer design concerns such as:
      • Text overflow (e.g., text wrapping vs. ellipses vs. other)
      • Proper static font sizing
      • Whether to specify dynamic font sizing.
    • It will be more expensive to address such design issues later (and/or might result in a worse implementation than if those aspects of the design had been considered and specified earlier).
    • The design under review doesn’t have the same visual effect the real-life product will.

    Related: on occasion, placeholder labels are longer than the real-life text will typically be. Some of the same problems can result.

    Ambitious UI elements that no one will want to fill with real text

    Sometimes designs include well-intended text-filled elements… that are doomed. Perhaps a place in the UI is carved out for a page summary or detailed user instructions. These UI elements appear sensible and look nice with dummy placeholder text! But when it comes time to actually write something to put in there, problems become apparent.

    • In the abstract, design reviewers can lazily assume all sorts of compelling, life-changing, product-selling copy will fill the spaces reserved for text.
    • Reviewers might even be lulled into not thinking about that element at all. It looks good in the mockups… and it’s probably there for a reason, right?
    • When the time finally comes, however, filling those elements with text might not work out:
      • No one knows what to actually put in there; or
      • Any text that is put there is not useful or is unwanted noise.
    • Perhaps some pages can make good use of that text-filled element, but other pages just don’t need it at all. Design guidance is needed for when the element is empty.
    • Perhaps the amount of text actually written for this element in the final product results in an unbalanced or odd-looking UI. Not what the designer intended. Had this been known earlier, the designer could have improved the design before it was even implemented.

    Multiple elements copied and pasted with the same placeholder text

    Just about any app or website has the potential for repeated elements on the same page (e.g., article previews, product cards, etc.). It’s common for the designer to copy and paste the same components with the same lorem ipsum placeholder text for each instance in a UI mockup.

    Identical blocks can give an unrealistic impression of the design

    • Everything lines up nicely
    • Everything looks neat and tidy
    • The overall effect is not realistic

    Identical blocks can obscure design and layout issues

    • Allows designers and reviewers to misunderstand or ignore issues of alignment, sizing, layout rules, etc.
    • Many of these layout details would be immediately apparent if the placeholder text were simply varied instead of repeating the exact same text for each instance (see the sketch after this list)
    • For example, cloned box-elements displayed side-by-side are always the same height. What happens if one box has more text? Does it wrap to more lines and make that box taller? If so, do all the other boxes have to match that height? What if one box has way less text? Does the box get shorter? Is there a maximum and/or minimum height for boxes? Etc.
    • Again: it is better to consider, address, and specify around these issues sooner rather than later.
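
    One low-effort way to get that variation, if your mockup or prototype can consume generated data, is to produce placeholder strings of deliberately different lengths. Here is a minimal sketch; the word pool and length range are arbitrary choices for illustration.

    // Minimal sketch: generate placeholder strings of varying lengths so that
    // repeated components (cards, previews, etc.) don't all look identical.
    // The word pool and length range are arbitrary choices.
    const WORDS = ["lorem", "ipsum", "dolor", "sit", "amet", "consectetur",
      "adipiscing", "elit", "sed", "do", "eiusmod", "tempor"];
    function placeholder(minWords: number, maxWords: number): string {
      const count = minWords + Math.floor(Math.random() * (maxWords - minWords + 1));
      const words = Array.from({ length: count },
        () => WORDS[Math.floor(Math.random() * WORDS.length)]);
      const text = words.join(" ");
      return text.charAt(0).toUpperCase() + text.slice(1) + ".";
    }
    // Ten card titles of noticeably different lengths:
    const titles = Array.from({ length: 10 }, () => placeholder(2, 12));
    console.log(titles);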

    Filling every possible data slot with text, when in real life that won’t happen

    Mockups often include all of the data that is possible to include, in every instance. In the final product, however, all the data might not always be available or necessary.

    For example, a user account panel may need to be able to display address, phone number, email address, etc. But what if the address and phone number were optional when the user signed up, and she left them blank?

    For similar reasons as mentioned above, it helps to create and see mocked-up examples where optional data is not shown, in various configurations. This will clarify the impact (or lack of impact) missing fields have on layout. For example: should there be “holes” where the missing data would have been, or does the other data “move up” to fill the gaps?
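
    If you generate mock data for prototypes, it also helps to model optional fields as genuinely optional, so that some generated records omit them. A minimal sketch, with field names that are purely illustrative:

    // Minimal sketch: mock user records in which optional fields are sometimes
    // absent, so mockups and prototypes show the "missing data" configurations too.
    interface MockUser {
      name: string;
      email: string;
      address?: string;      // optional at sign-up
      phoneNumber?: string;  // optional at sign-up
    }
    function mockUser(i: number): MockUser {
      const user: MockUser = { name: `User ${i}`, email: `user${i}@example.com` };
      if (Math.random() < 0.5) user.address = "123 Example St";
      if (Math.random() < 0.5) user.phoneNumber = "555-0100";
      return user;
    }
    const sampleUsers = Array.from({ length: 8 }, (_, i) => mockUser(i + 1));
    console.log(sampleUsers);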

    Holding on to lorem ipsum for too long

    Content is very important. Depending on your product, it might be the most important thing.

    The further along you are in the project, the more you should be using and testing real content in conjunction with the design.

    As your initial design graduates from low-fidelity sketches and wireframes, more and more of the dummy content should be replaced with realistic content. As your mockups graduate to higher-fidelity prototypes, lorem ipsum should be banished, and you should be incorporating and testing the actual intended content wherever possible.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • How to Find Beta Testers for Small Business Software

    There are many benefits to beta testing your small business software before final release. Beta testing helps you test key processes and performance, get feedback on UX (User Experience) and UI (User Interface), identify bugs, and verify that your ideas are well-received.

    It’s always ideal to test your product with users who match your target audience, especially when seeking higher-level UX feedback. However, finding beta users for business software can be significantly more difficult than finding testers for consumer products – especially if you need the software to be tested by a business as a whole, or by a specific group within a business. For example, if you need an entire HR department or sales team within a company to test your app as a group, you are likely to have a very difficult time finding willing testers.

    Let’s look at why it is difficult to recruit small business beta testers, and some potential ways to get around this.

    It’s difficult because:

    • Incentives – Incentivizing a business is a lot more complicated than incentivizing an individual. Someone interested in a consumer app might be happy to provide feedback for a small reward or for free access to the app for a certain period of time. On the other hand, it is very unlikely that even a small business would find it worth their time and effort to test and provide feedback for anything less than $5K.
    • Reliability – Even if the financial incentive were adequate, businesses might still be reluctant to use software for testing purposes. The tasks or processes they would use the software for will probably have a direct impact on their customers’ experience or on operations within their company. This is why they want to use proven software, not beta software, which likely isn’t feature-rich and may have technical issues.
    • Logistics – It can be logistically challenging to coordinate the testing of your software with a business because it will often require buy-in and approval from multiple people.

    Potential solutions:

    • Individual testers – You may want to reconsider whether you really need to test your product with an actual business. In many cases, it may be possible to simulate the experience a typical business might have with your product using a group of individual testers. For example, if you have a to-do app, can you recruit individuals who work for small businesses to provide you with feedback? Also, for some types of testing (e.g., bug testing or UI testing) you can still get invaluable feedback without your true target audience. In any of these cases, you can follow the same advice we provide for finding beta users for consumer products. Also, at BetaTesting we can help you target professionals based on industry, company size, and job function.
    • Existing network – Can you test with your existing customers, partners, or other contacts in your network? Think about whether you have existing relationships with businesses that would benefit from the product you have built and whether it would make sense for them to test it. This would ideally be a situation where your software or app can add value to their business without disrupting any of their current workflow.
    • New customers – If you do not have any existing relationships with businesses that can help you with testing, it might be a good idea to start acquiring some of these customers. Some methods and resources to acquire B2B clients (not necessarily just for testing) are:
      1. Cold email – The important thing to keep in mind when reaching out to businesses through cold email is to not come across as spam. Keep the emails short and simple, point out how your software can benefit them (if you have an example of how a previous version of your software has helped another business, definitely include that), and avoid pushy marketing lingo. Also, make sure that you’re reaching out to the right person whenever possible.
      2. Content creation – Writing interesting and engaging content that is relevant to your target audience will help you attract more traffic to your site and establish you as an authority in your niche, potentially resulting in more customers for your product.
      3. Reddit, Quora – Great resources to find people within any interest group, engage with them, and promote your product. Reddit most likely has subreddits that relate to your niche where you can run ads.
      4. LinkedIn – You might already have people in your extended network (people you have shared connections with) who are in your target market. If you see someone you think would benefit from what you’re offering, send them an invitation to connect, along with a personalized message. If they accept, you can now consider them a warm lead and have a conversation with them through LinkedIn’s messaging system.
    • External help – If none of the above seem like feasible options for you, you will need the budget to pay for external help. This will probably mean offering sizable incentives to recruit willing businesses.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Adobe XD Update Adds Two Long-Awaited Features

    The June 2018 Adobe XD update improves the tool’s prototyping capabilities by adding fixed-position elements and overlay support, among other enhancements.

    Just last month, Adobe put their UX design and prototyping tool—Adobe XD—on a promising path of improvement while simultaneously making the tool free to use for everyone.

    This month’s update—available as of June 19—fulfills a bit of that promise with a handful of improvements, large and small.

    Two long-awaited features

    The June Adobe XD update brings two features that move the tool’s prototyping capabilities closer to those of its advanced peers.

    These features are: overlays and fixed elements.

    [Adobe video on Fixed Element and Overlay features on Youtube]

    Fixed elements

    This feature allows you to mark elements that should remain fixed to their position on the artboard, regardless of scrolling.

    This means that you can now have a fixed header, bottom navigation, etc. that acts properly in prototype mode for your scrollable screens.

    Unless, of course, you also wanted those elements to animate on scroll. XD still can’t do that.

    Overlay support

    With overlay support, artboards can now be displayed on top of other artboards.

    This is most straightforwardly useful for pieces of UI that temporarily appear “on top” of other content on a screen. For example: you can configure your prototype such that when the user taps a button, a small modal dialog appears in the middle of the screen.

    XD designers could also, for example, display a mobile keyboard over the current screen of an interactive prototype, without having to fake the effect by transitioning from one copy of the screen to another copy of the same screen with a mobile keyboard pasted on it. In addition, as an overlay the mobile keyboard can now be more realistically animated in the prototype—it could slide up from the bottom of the screen, for example—because the underlying screen doesn’t move in the transition.

    In addition, support for overlays allows for more flexibility in your prototype and more economy of artboards in your project file. For example, an XD project could include a single drop-down menu artboard that can be used by any number of other screens. Before overlay support, a designer would have to make a dedicated copy of any and every screen she wants to support displaying the drop-down menu.

    Other improvements in the latest Adobe XD update

    Private sharing (BETA)

    About private sharing, Dani Beaumont of Adobe writes:

    You are now able to create a new private link for your prototypes and invite someone to view it by email. You can include a message, and once they’ve received your email, they will be able to view and comment on your design.

    Adobe provides a video of private sharing in action.

    Ability to adjust an image after dropping it into a shape

    Adobe refers to this as “improved crop and place image fills.” You can now resize and reposition a masked image you dropped into a shape. (Video link)

    This beats the previous process of dropping an image into a shape, not liking the result, cropping/scaling the image in an external application, and then trying again.

    Improved Photoshop and Sketch image fill support

    Related to the previously mentioned improvement, adjustments to image-filled shapes from Photoshop and Sketch are now preserved when importing files into Adobe XD.

    Also:

    • Support for Typekit fonts on mobile
    • Math calculations in property fields

    About the math calculations, the update announcement on the Adobe blog provides some examples: “For math calculations in property fields, you can reduce grouped objects by 80 percent, quickly dividing an artboard area for tile design, or moving an object in an X or Y location by a specific number of pixels.”

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Project Tips for When Your UI Design Includes ALL CAPS Text

    From design and implementation to translation concerns, here are some tips for your project when your UI design includes ALL CAPS text elements.

    Whether you’re following a standard design language like Material Design, or have developed your own custom look and feel, your new website or app design might include UI elements that use uppercase text (“ALL CAPS”).

    Subheadings, button labels, tiny links, etc. are common candidates for ALL CAPS treatment.

    Below are some tips for you to apply to your project when handling ALL CAPS text in parts of your UI design. (These tips would also apply if your website styling used forced lowercase in some places, e e cummings style.)

    These tips might just save you time, money, and hassle!

    Note: By my reckoning, these tips are in rough order from more obvious to less obvious. Your mileage may vary.

    1. Make sure ALL CAPS isn’t used in places where there will be a lot of text to read.

    It is commonly believed that all-caps text is somewhat harder to read than regular sentence-case (“mixed-case”) text.

    I’ll spare you the controversy, conflicting cases, and misapprehensions on this matter. Suffice it to say, readers tend to read ALL CAPS text a bit more slowly, and tend to dislike reading long passages of ALL CAPS text.

    None of this means you can’t use ALL CAPS in your UI designs. But it does mean you should use it sparingly, and in places where there won’t be chunks of text for your users to read.

    Body text (article text, product descriptions, etc.) should obviously not be displayed in all uppercase. However, tab labels, button labels, and the like are supposed to be concise, so ALL CAPS is certainly an option for those elements.

    Further, elements that tend to be as long as medium-length sentences or phrases—such as certain heading types—may warrant an ALL CAPS treatment in your design. For example, ALL CAPS might help you establish visual contrast for one heading type versus other headings and body text, without having to otherwise exaggerate size, weight, or color differences. (This is not a recommendation, it’s just one way to go. It’s up to you what works for your overall design and brand aesthetic.)

    But again, the thing to watch out for is long blocks of text. Let’s say, for example, your H3 headings are styled in ALL CAPS. But if some pages of your website might use H3s for potentially long lines of text (e.g., the title + subtitle of non-fiction books), you may want to change the styling of your H3s, at least in those contexts.

    2a. Use CSS or string functions to convert text to ALL CAPS for display

    You don’t want to store your display text strings (see below) in ALL CAPS and just display them. Instead, you’ll want to use CSS or string transform functions to convert text to ALL CAPS for display only.

    CSS Example

    h3 {
      text-transform: uppercase;
    }
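
    If you take the “string functions” route instead of CSS, keep the transform at the display layer only, so the stored label keeps its normal capitalization. A minimal sketch (the label value is just the example reused from section 2b below):

    // Minimal sketch: uppercase only at the point of display, never in storage.
    const storedLabel = "Tools, resources, and help files";
    // Display-layer transform (e.g., applied just before rendering a heading or button):
    function displayAsCaps(label: string): string {
      return label.toUpperCase();
    }
    console.log(displayAsCaps(storedLabel)); // "TOOLS, RESOURCES, AND HELP FILES"
    // storedLabel itself is unchanged, so restyling later is a one-line change.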

    Reasons:

    • Future flexibility – You’ll be able to change your styling later with just a few changes to your CSS or presentation code, without having to hunt down and rewrite a bunch of text labels.
    • Reusability – There may be some instances of text labels you wish to reuse in different places throughout your site or app. In different contexts, the styling of the same text might be different.

    2b. Store text strings with normal (appropriate) capitalization

    Whether you’re keeping your UI text strings in a database, in text resource files, in your source code (I won’t tell anyone!), or a mix of these, you’ll want to store those text strings with appropriate capitalization, not in ALL CAPS.

    Ask yourself: if this piece of text wasn’t going to be styled in ALL CAPS, what kind of capitalization would it have? However you answered, that’s how that text should be stored.

    Most text strings should be stored using either:

    • Sentence-style capitalization – e.g., “Tools, resources, and help files”
    • OR Book-title capitalization – e.g., “Tools, Resources, and Help Files”

    …even if it will wind up being displayed as all uppercase (e.g., “TOOLS, RESOURCES, AND HELP FILES”) in the user interface.

    3. Tell your devs and writers what’s up!

    Make sure to inform your team that text strings are not to be stored in ALL CAPS.

    Depending on the project, UI text may originate from visual specs, prototypes, word docs, etc. UI text might initially be typed up by developers or by copywriters. Later on, UI text additions and modifications may come from change requests, bug reports, etc.—and perhaps handled by entirely different team members. You likely won’t be in the position to review every change.

    Everyone is busy and focused on getting their own work done. Writers might not know what implementation is possible; developers might not think about how best to handle text strings from your perspective. And neither is likely to automatically have your best-practice plans in mind when they are doing their work. It helps to let them know.

    Further, your style guide should include which UI elements should use sentence capitalization and which should use book-title capitalization.

    4. Double-check your text resources before you have them translated!

    Allowing ALL CAPS text to go out to translators could be an expensive mistake.

    Different languages have different capitalization rules. If you discover you’re storing ALL CAPS source text after it’s already been translated, you won’t be able to properly fix those instances yourself without sending them back to translators.

    That’s time and money you didn’t anticipate spending!

    Note: You might be able to check this yourself without having to pore over all your text resource files or poke through your CMS database. If you can temporarily turn off uppercase styling, you can run through your site or app and look out for labels that still appear in ALL CAPS.

    For example, you might be able to use your browser’s inspection tools to change text-transform: uppercase; to text-transform: none; for all buttons on the page, and then inspect the result.
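
    As a concrete sketch of that check, something along these lines can be pasted into the browser console; the selector is illustrative and should be adjusted to whatever elements your design uppercases:

    // Minimal console sketch: temporarily disable uppercase styling on buttons
    // and flag any whose *stored* text already appears to be ALL CAPS.
    document.querySelectorAll("button").forEach((el) => {
      el.style.textTransform = "none"; // undo the CSS-level transform
      const text = el.textContent?.trim() ?? "";
      if (text.length > 1 && text === text.toUpperCase() && /[A-Z]/.test(text)) {
        console.warn("Label appears to be stored in ALL CAPS:", text, el);
      }
    });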

    5. Don’t trick your users into entering data in ALL CAPS

    One last weird consideration.

    If your application supports user-entered content, make sure not to use ALL CAPS styling on the same kind of data your users will be entering into a form.

    For example, if you display message titles in uppercase in your UI, then users might naturally follow suit and enter their message titles in ALL CAPS to match. That is not desirable.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.