• Getting Started with Flutter SDK to Build Native Android and iOS Apps

    The Flutter SDK allows you to build native Android and iOS apps, with advantages over other cross-platform methods. Interested? Here’s how to get started with Flutter.

    Google’s Flutter SDK is a relatively new way to build native mobile apps for BOTH Android and iOS.

    Since being able to quickly produce apps for both platforms’ app stores may be very important to your business or your professional career, you may want to begin moving forward with Flutter, or simply try it out so that you know what’s up.

    Before you get started: learning about Flutter (and getting motivated)

    Flutter may have already piqued your interest. But that might not be enough motivation to push you through learning a new technology.

    So, before you jump in, you might first want to strengthen your resolve (and increase your excitement) about using Flutter.

    First, if you haven’t already, you can read my previous article about what Flutter is and why you might want to use it.

    Beyond that, you might want to listen to a couple of podcast episodes about Flutter.

    Rohan Taneja recommends two such podcast episodes as a great way to get started, adding:

    If I had not listened to these back in March, I probably wouldn’t have been excited to try out Flutter immediately. The two episodes include everything you need to know about the “What” and “How” of Flutter.

    Getting started

    Set up the Flutter SDK

    Google’s official Flutter site Flutter.io is a great resource for getting started.

    The first four steps walk you through getting the proper things installed and configured, taking the Flutter SDK for a “test drive”, and quickly building a simple app.

    Related to step 1 (Installation), Android developer Harshit Dwivedi has a recommendation that you may or may not want to keep in mind:

    Note: You don’t actually need libimobiledevice, ideviceinstaller, ios-deploy and CocoaPods unless you want to build an iOS app, so ignore flutter doctor’s advice on installing them and you’ll save a hell of a lot of time.

    “Flutter doctor” refers to a tool that you’ll be installing and running as part of the overall Installation step.

    Supported development environments

    Installing/configuring an IDE is step 2 of the setup process.

    If you wanted, you could develop Flutter apps using just the command line and your favorite code editor (and, if so, I will gladly refer to you as “hard core”). But, when getting started with Flutter, it’s advisable to use one of the supported Integrated Development Environments (IDEs).

    Supported IDEs:

    • Android Studio,
    • IntelliJ, and
    • Visual Studio Code

    Being overly stereotypical: If you’re an Android or Java developer, there’s a good chance you’re familiar with one of the first two IDEs; if you’re a web developer, there’s a reasonable chance you’re familiar with VS Code.

    However, if you’re not partial to any of the supported IDEs, then Android Studio seems to be your best option for getting started with Flutter.

    Per Mike Bluestein:

    Android Studio offers the most features, such as a Flutter Inspector to analyze the widgets of a running application as well as monitor application performance. It also offers several refactorings that are convenient when developing a widget hierarchy.

    VS Code offers a lighter development experience in that it tends to start faster than Android Studio/IntelliJ. Each IDE offers built-in editing helpers, such as code completion, allowing exploration of various APIs as well as good debugging support.

    (Note also that installing Android Studio is required for using the Flutter SDK anyway, whether you plan to use it as your code editor or not.)

    Trying it out to make sure you’re set up properly

    Following step 3 (“Test drive”) and step 4 (“Write your first app”) right away will help you make sure everything is installed and configured correctly. You’ll also, of course, get introduced to Flutter and learn a bit about how the tools generally work.
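
    If you’re wondering what Flutter code actually looks like, here’s a minimal sketch of a complete Flutter app, in the spirit of the “Write your first app” step (this is my own illustrative snippet, not Google’s starter code):

        import 'package:flutter/material.dart';

        void main() {
          runApp(MyApp());
        }

        // A complete, minimal Flutter app: one Dart codebase that
        // builds natively for both Android and iOS.
        class MyApp extends StatelessWidget {
          @override
          Widget build(BuildContext context) {
            return MaterialApp(
              home: Scaffold(
                appBar: AppBar(title: Text('Hello Flutter')),
                body: Center(child: Text('One codebase, two platforms')),
              ),
            );
          }
        }

    Run it with flutter run from the project directory, and it launches on whichever connected device or emulator you choose.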

    What next, though?

    After step 4, Google provides a number of different directions you might want to go next. This includes additional “codelab” tutorials you might want to work through—including adding enhancements to the starter app you just created—as well as various additional documentation on Flutter and its programming language Dart.

    While Google still has a lot of materials for you, you’re essentially on your own now. You’re still in the “getting started” phase, and yet you have to chart your own course for how you want to learn Flutter.

    Here are some ideas and advice on that front.

    Learn how Flutter compares to what you know best

    The next thing you might want to do is start bridging your current UI development experience to how UIs are built in Flutter.

    Flutter.io provides documentation sets that compare and contrast Flutter code with UI code that you’re used to. For example, comparing HTML/CSS code to the equivalent Flutter code if you’re coming from a web background.

    • Flutter for web developers
    • Flutter for Android developers
    • Flutter for iOS developers
    • Flutter for React Native developers
    • Flutter for Xamarin developers
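
    To give you a taste of those comparisons, here’s a rough sketch of how a CSS-styled box maps onto a Flutter Container (the specific values are arbitrary, and the official docs cover many more equivalences):

        import 'package:flutter/material.dart';

        void main() {
          runApp(
            MaterialApp(
              home: Scaffold(
                body: Center(
                  child: Container(
                    width: 320, // CSS: width: 320px;
                    padding: EdgeInsets.all(16), // CSS: padding: 16px;
                    color: Colors.indigo, // CSS: background-color: #3f51b5;
                    child: Text(
                      'A CSS box, the Flutter way',
                      style: TextStyle(color: Colors.white), // CSS: color: white;
                    ),
                  ),
                ),
              ),
            ),
          );
        }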

    Take Google’s Free Online Course at Udacity

    I don’t feel comfortable directing you to paid online courses. If that’s what you’d want to do next, I’m sure there are a bunch you can find on your own. I wish you good luck in evaluating whether they’re for you, and whether it’s worth your money.

    Here, however, is a free online course on Udacity produced by Google. This is separate from the materials provided at Flutter.io.

    [Screen capture of the free Flutter course on Udacity]

    This course is marked as “intermediate” (as opposed to “beginner”), but it doesn’t assume any prior knowledge of Flutter or Dart. Instead, the prerequisite is that you “should have at least one year of programming experience, ideally with an object-oriented language like Java, Python, or Swift.”

    The course is described as self-paced learning that might take you “approx. 2 weeks”. Google software developer Mary Xia—who contributed to the course—wrote an article describing the training in a bit more detail (as well as why you might be interested in learning Flutter); in it, she suggests you could conceivably finish the course “over a weekend”.

    Parting advice about getting started

    • Don’t try to learn about the entire catalog of supported Flutter widgets right away
    • Don’t get bogged down trying to learn Dart up front
    • Give yourself achievable goals you’ll be interested in doing, and use this method to learn as you go

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Android Pie Gesture Navigation Isn’t Such a Big Change

    Google’s pivot toward gesture-based navigation has been much discussed ahead of the release of Android 9 Pie. But Android Pie gesture navigation isn’t as big a change as it may seem, especially compared to how Android Pie behaves with the feature turned off.

    Android Pie gesture navigation: on or off?

    For all the talk about Android Pie’s new gesture-based navigation feature, you might be surprised to learn that, if and when you upgrade your device to Pie, the feature is not turned on by default.

    In the near future, phones that ship with Android Pie preinstalled—the Pixel 3 line of phones, for example—will use gesture navigation out of the box. In fact, users may not even be able to turn it off.

    But for phones that are upgradable to Android Pie, the established 3-button navigation bar will be displayed unless you explicitly turn on gesture navigation. (At least for now.)

    The path to gesture navigation: blending existing features

    The primary action of the new gesture navigation system is the upward swipe from the bottom of the screen. (There are supplementary gestures attached to the new system as well, but the primary action is the focus of this article.)

    The core of the new navigation system, however, is a blending of existing Android navigation functionality into a single more consistent feature. The application of the swipe gesture just takes the core idea further.

    If you have upgraded to Android Pie but are still using old-school 3-button navigation, then you can see the core of the change in the expanded behavior of the Recents button. (The Recents button is typically the square button at the bottom right, but your phone’s manufacturer—*cough* Samsung *cough*—may do this differently.)

    About the Recents button

    Note: Google now calls this button Overview, but they previously called it Recents. Since this button’s days are numbered, and more people seem to call it Recents anyway, let’s continue to refer to it as Recents in this article, out of respect or whatever.

    In previous versions of Android, the Recents button simply brings up a list of your “open” apps.

    In stock Android Oreo, for example, tapping the Recents button displays an overlapping vertical stack of large screen thumbnails. One can scroll through the list of open/recent apps, and then tap on one to open that app.

    Note that this is different from browsing all your apps via the App Drawer, which is accessible by swiping up from the bottom of the Home screen.

    Expanded behavior of Recents button

    Android Pie changes the behavior of the Recents button by combining it with the App Drawer.

    Thus, in Android Pie, tapping the Recents button now brings up a blended view:

    • a horizontal array of open/recent app thumbnails, and
    • a tray at the bottom that:
      • includes a Google search bar
      • includes a row of app shortcut icons (auto-populated by the OS based on context and usage)
      • can be swiped upward to display the App Drawer

    So the new blended view combines browsing recent apps with browsing all apps, though doing the latter requires an additional action.

    The upshot is this: one button now combines two versions of the goal “go to another app”: 1) go to an app that is in the open/recents view, or 2) go to any arbitrary app from the full list.

    Note that this does not cover every version of this user goal. Returning to your home screens to access your personally curated app shortcuts, for example, remains a separate interaction.

    Gesture navigation takes the core idea further

    Android Pie gesture navigation builds on the change to the Recents button and takes it further, providing a semi-consistent interface for accessing open and closed apps, activated in a consistent way.

    Consider the following two actions (assume gesture navigation is turned OFF):

    • If you’re on a Home screen and want to browse all apps, you swipe up from the bottom of the screen
    • If you’re anywhere and tap the Recents button, you can browse your open apps, and/or browse all apps by then swiping up from the bottom of the screen

    The Android Pie gesture navigation system combines these two different actions into one, activated by a consistent gesture.

    Gesture replaces button

    As a result, the Recents button is eliminated. Instead, you swipe up from the bottom of the screen.

    Note: The Android Pie gesture settings screen says “swipe up from the Home button”, but actually you can swipe up from anywhere at the bottom of the screen.

    The combined behavior is applied consistently to all contexts. For example, swiping up from the bottom on a Home screen now brings up the blended Recents/App Drawer interface.

    Aside from being a consistent action, using the swipe-up gesture affords Google an attempt to make accessing the App Drawer a single-step process.

    No need to stop on the way to the App Drawer

    With Android’s 3-button navigation—whether in Pie or in previous versions—accessing the App Drawer from inside an app is always two steps: Either tap Home and then swipe up; or (in Pie only) tap Recents and then swipe up. A tap and swipe, either way.

    With Android Pie gesture navigation, accessing the App Drawer can be done in a single step.

    Swiping up from the bottom of the screen:

    • A short swipe from the bottom reveals the open/recents view
    • Continuing the swipe motion upward pulls up the App Drawer

    Of course, you can still make this a two-step process. For example, one might do a short swipe up to view recents, fail to find the desired app, then swipe up again to access the App Drawer.

    Gesture navigation: it’s not crazy different, but…

    Android Pie gesture navigation combines similar functionality and user goals into one feature, and wraps it in a consistent activation mechanism.

    However, that doesn’t mean the feature is automatically better for all users. There are certainly advantages and disadvantages to the feature, as well as ways it falls short of its own goals.

    One example: existing Android users will be expected to develop new muscle memory for navigation actions, and unlearn some of the old. This is not fun. And it’s further complicated by the fact that users will still want to return to their home screens to access apps, too, and that is a separate interaction.

    This is one of the reasons that the Android Pie gesture navigation system, though seemingly a big departure, winds up feeling more like an intermediate step in the direction of further planned redesign (and experimentation).

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Does Color Affect Mood in User Interfaces?

    In physical spaces, color can unconsciously affect our mood. But does color affect mood in user interfaces? Not exactly.

    If you are about to paint a room, you’ll find no shortage of web articles ready to give you advice about what colors to choose. Some of these articles will assert that color can affect our mood, and suggest how you might select a paint color to produce a particular emotional effect you want your room to have.

    Some of this advice regarding color-mood effects is anecdotal or based on assumption, but some of it actually has some basis in science.

    There is good reason to believe that the colors on the walls around us can affect our mood somewhat. But should we seriously consider this when picking paint colors, or ignore it?

    And what about user interfaces? Does color affect mood there?

    Mood effects vs. other effects

    First, some clarification.

    While some web resources combine multiple kinds of color effects under a single umbrella of “color psychology”, in this article I’m intentionally isolating mood effects from other potential effects.

    So when I’m talking about the mood effects of color in this article, I’m referring to the idea that certain colors tend to have direct unconscious effects on a person’s general emotional state.

    For example: the idea (however true or untrue) that orange walls tend to cause people to become agitated, regardless of whether those individuals happen to prefer the color orange or not.

    What I’m not referring to:

    • Colors’ various meanings and inferences, which can vary based on culture, industry, context, etc.
    • Color preferences
    • Color’s influence on aesthetics
    • Color’s other psychological effects, such as impact on task performance

    Does color affect mood in the physical world?

    Yes. But…

    1. The effects of color on mood are often overstated

    Many psychologists are skeptical of the extent to which color actually affects us, compared to common claims. The reliability and amount of mood change caused by color are probably much lower than your average web article implies. There is also reason to believe that the effects are rather temporary.

    2. The effects of color on mood aren’t well understood

    Plenty of research shows that color can affect our mood, but a lot more research is still needed.

    As professor of psychology Andrew J. Elliot wrote in 2015:

    The focus of theoretical work in this area is either extremely specific or extremely general. […] What is needed are mid-level theoretical frameworks that comprehensively, yet precisely explain and predict links between color and psychological functioning in specific contexts.

    Color psychology is complex, and clearly not as simple as assigning colors to emotional reactions for all humans. For example, some color effects are tied to or affected by culture (e.g. life-long associations regarding color), and some color effects are highly dependent on context. But we don’t have nearly enough research to predict how these interacting factors come into play in any particular application of color.

    Does color affect mood in user interfaces?

    When it comes to software user interfaces, the color-mood effect isn’t really a thing.

    Some web articles, infographics, and resources appear to be based on the mistaken assumption that the color-mood effects of physical spaces can also be applied to media.

    However, for a color to affect your mood, you need to be surrounded by that color. Smaller patches of color, such as on mobile phones or computer screens, aren’t going to have that effect.

    Color on our screens can have interesting and unexpected psychological and physiological effects on users. For example, particular colored backgrounds can have an effect on the performance of certain kinds of tasks, and the bright bluish light that pours out of our phones and tablets can negatively impact our sleep patterns.

    But our understanding of how color affects mood in physical spaces—as limited as that understanding is—does not translate to user interfaces.

    Conclusion

    When it comes to selecting a paint color for a room, feel free to pay some attention to the color-mood charts. Or don’t. In most cases, color preferences will matter far more than pure mood effects would. If the room ends up looking pleasing and appropriate, you probably did fine.

    But when it comes to picking colors for your user interface (or logo, or what have you), just ignore the color-mood effect altogether.

    But don’t worry, there’s still plenty to worry about.

    With consideration of color-mood effects out of the way, you’re still left with all of the other important aspects of color in your UI. This includes aesthetics, setting user expectations, lending meaning to data and UI elements per culture and context, accounting for color vision deficiency in your users, etc.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • How to Prototype a Scroll Shadow in Adobe XD

    Adobe XD doesn’t provide scroll state settings or scripting, but you can still imitate scroll shadow behavior in your prototypes. Here’s how.

    Desired design

    Let’s say you’ve designed this scrollable screen.

    The header casts a shadow at all times when you run the prototype. But let’s say, for the purposes of keeping this article moving along, that this isn’t exactly what you want.

    What you really want is a scrollable screen that displays the header shadow only when the content isn’t scrolled all the way to the top.

    Adobe XD doesn’t support doing this directly via settings or scripting, but you can still simulate the effect.

    The basics of scrolling in Adobe XD

    First, let’s address the basics of how scrolling works in Adobe XD.

    If you already know this stuff, skip ahead to the “Prototype a scroll shadow” section.

    Scrolling elements and fixed elements

    Adobe XD allows for vertical scrolling when the artboard is taller than the viewport. This just means that you designed a screen with more content than fits in the viewable screen area.

    On a scrollable screen, every element on the artboard moves when you scroll, except the elements you’ve marked as Fixed Position. This means that if you want a particular element to stay put when you scroll, you’ll select that element and check the “Fixed Position” checkbox.

    Layers of elements

    Just because an element’s position is fixed doesn’t mean that the element will automatically appear in front of the scrolling content like you might intend.

    Elements appear in front of other elements based on their position in XD’s Layers panel. Typically, you want the fixed elements to appear higher on the list than the scrollable elements. Otherwise, the resulting prototype will look odd when, say, the scrolling content slides over a navigation bar instead of under it.

    Prototype a scroll shadow

    Adobe XD doesn’t support element settings that let you automatically toggle a shadow on or off based on scroll state. Adobe XD also doesn’t support event triggers or scripting to allow you to program things like that yourself.

    Still, you can pull off a good approximation of the scroll shadow effect in Adobe XD.

    Here’s how you do it:

    1. Create a sneaky element that obscures the shadow when the content is scrolled to the top
    2. Arrange things so that the sneaky element slips in between the shadow and the rest of the header

    1. Create an element to obscure the shadow

    This part is simple. Create a rectangle that matches the color of the background and position it to obscure the shadow.

    The sneaky rectangle covers the shadow until you scroll; as the rectangle moves upward, the shadow is revealed.

    However, when you scroll, our new rectangle will pass in front of the fixed header. That won’t look good, so we have to…

    2. Make a “sandwich”

    Your fixed header, including its shadow, is probably an element group. If you’re re-using the header on multiple screens, then you’ve made it a symbol, in which case it’s definitely a group.

    You want your sneaky rectangle to pass above the shadow but below the rest of the header. That’s only possible if you rearrange the header a bit.

    You need to make a layer sandwich: a top header layer without a shadow, a bottom header layer that displays a shadow, and the sneaky rectangle that will appear in between the two.

    “Sandwich” steps:

    a) Isolate the background of the header group.

    b) Copy it, exit the header group, and then paste it as a new “under-header” element. Check Fixed Position for this new element.

    c) Arrange the new element to appear below the header group on the Layers panel.

    d) Keep the shadow on the new under-header, but turn off the header’s shadow.

    e) Arrange the sneaky rectangle to appear between the header and the under-header.

    Limitations

    This approach approximates a real scroll shadow you might implement in your website or app, but it’s not quite the same thing. Here are some of the limitations of this approach.

    • The bigger the shadow, the weirder the effect of this approach—revealing the shadow from the bottom up—looks.
    • If the shadow already overlaps the topmost scrollable UI element, then it’s not possible to obscure the shadow cleanly.
    • If the background is an image or pattern or gradient, this approach isn’t very effective. Even if you copy a bit of the background to cover the shadow, you’ll get a “tearing” effect when you scroll.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • When Well-Meaning UI Text Makes Things Less Usable

    If you want users to be successful, it helps to know how users actually read your UI text (spoilers: they don’t), and how more text can ruin everything.

    A real-life example of well-meaning UI text making things worse

    Years ago, I joined a large desktop application project as the lead designer, taking over for another vendor.

    One of my first tasks was to observe usability testing on the existing UI design. A realistic prototype of the desktop application had already been developed for that purpose.

    One page of the application was a menu screen for navigating to discrete sections of the application. The menu consisted of 6 items. Each item was a large clickable icon button, with a button title and description text to the right of it.

    The menu screen exhibited a surprising problem in user testing.

    Usability Testing the Menu Screen

    During user testing sessions, one of the tasks given to users was to update their data by downloading the latest data records from their device.

    Usability results from the menu screen with regard to the download task included:

    • Several users clicked on the title label first, and then when that didn’t work, they clicked on the square icon button to the left of it
    • Several users clicked the 6th button (“User’s Manual”, lower right) to begin their download task, instead of clicking the 1st button (“Download”, upper left)

    The first problem was not surprising. The second issue was very surprising. In each of these cases, the users in question navigated to the User’s Manual section, were disappointed with what they found, backed out to the main menu again, and then picked the Download button.

    Why did they go to User’s Manual section first? Was it to try to read help documentation about their task?

    No. The users who clicked on the User’s Manual button did not do so with the intention of reading about how to download data. They thought they were navigating to the place where they could begin their task.

    So why would any users make that mistake?

    Well, part of it has to do with how users actually read UI text.

    The way users “read” your user interface

    Despite our earnest wishes, users don’t actually pore over your user interface and read all the UI text before carefully deciding what to do next.

    No, users don’t really read your UI, they scan it.

    Users have a goal. It might be paying a bill, or finding the answer to a question, or getting to content they want to actually read (maybe). Regardless, all the UI text they encounter along the way is just a means to an end. The user is scanning the landscape for words and elements that are applicable to their goal. And, if further navigation appears to be required, they are probably jumping at the first thing they notice that seems promising.

    Once you know how users read your UI, you can focus on making your app or website more scannable. This includes hierarchical design, concise text, headings and subheadings that stand out appropriately, and emphasizing things that are important to your users.

    Back to the problem with that menu screen

    Okay, so users “scan”. But how the heck does that explain why users would click on User’s Manual instead of Download for a download task?

    Scanning is part of it. Users’ knowledge and unintended consequences are also part of it.

    Aside from the button labels being neither clickable nor part of the button, the problem we observed with the menu screen was the “helpful” description text below each button title.

    Some users were clicking the User’s Manual button instead of the Download button because of that descriptive text, combined with some knowledge of what the task would entail.

    Many users already had a good idea that downloading data from their device would require connecting their device to the PC via cable.

    And some of those users, when scanning the UI text, saw the “connecting devices” bit, and—perhaps while thinking, “that’s what I need to do next!”—clicked the associated button.

    Note what these users (likely) did not do.

    These users did not circle back to the top to see if any other buttons were a better candidate for their download task. These users did not even look up a half inch and chew on the fact that this was the “User’s Manual” button, and therefore probably not what they want. A user’s attention can be extremely narrow.

    The description for the User’s Manual button could have been written any number of ways. The unintended consequence of how it happened to be written caused a surprise usability issue.

    But rewriting the description text would not have been the right solution to the issue.

    A problem of superfluous UI text

    The description text, while well-meaning, caused more problems than it solved.

    When it comes to scanning, the description text made scanning harder for users.

    • More text to scan through
    • The additional description text either took more time for users to scan or was actively ignored
    • More text competing to define the meaning of a UI item

    The description text was also a bad idea for these additional reasons:

    • It was mostly unnecessary – most of these buttons had no need for examples.
    • It feigned importance – the descriptions’ mere existence made them seem worthy of attention—more meaningful than the button labels, even—but they were not.
    • It diluted the meaning and importance of the button labels, which were pretty effective on their own.
    • It was misleading to users – once you start listing the contents of a section, if you can’t list everything (and you probably can’t) then users might think that a lack of mention means it’s not in there.

    Lesson: Optimize your icon-and-label pairings, and keep labels meaningful but concise. That should almost always be enough. Avoid the temptation to add text to help “explain” things. Or, at the very least, test with minimal text first, and then see if you need to add anything to address actual usability issues.

    The menu page that worked better

    An iteration on the UI design eliminated the Download / User’s Manual problem: simply removing the description text and letting the button labels stand on their own.

    This would not be the final design for the menu screen, but making this quick change to the prototype between rounds of user testing showed that the problem was indeed understood (well enough) and effectively corrected.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Announcing ErliBird + Instabug Integration & the New Instant Bug Test

    We’re happy to announce that ErliBird (Now BetaTesting.com) and Instabug have joined forces to make it easier than ever to recruit bug testers and get actionable bug testing results in as fast as one hour.

    On ErliBird, we have created a new Instant Bug Test package, which makes it easy to link ErliBird and Instabug. This new test provides a great way to recruit targeted testers from ErliBird to conduct on-demand bug testing for iOS and Android apps. By integrating the Instabug SDK in your app and connecting to ErliBird, your testers will be able to submit bugs directly in your app, with additional contextual information to help you discover and squash important bugs.

    The Instant Bug Test (designed exclusively for Instabug customers) is inexpensive, and once your company is approved for self-service submission you’ll typically start to see results within just 1 hour of starting the test. When planning your timeline, note that we will manually review your company’s first test submissions, which can take 3-24 hours. Once you have a clear understanding of how to design bug tests using our tool, we’ll activate your account for completely self-service submissions.

    In addition to the Instant Bug Test, ErliBird is also offering Instabug clients that require more in-depth testing a 15% discount on any ErliBird Beta Testing package. Just contact ErliBird and mention the Instabug discount. Likewise, Instabug is providing ErliBird customers with a 15% discount on all paid plans.

    What is possible with the Instant Bug Test?

    Exploratory Testing

    In an Exploratory test, your testers will explore your app and report bugs they discover, without any specific guidance from you. This is the easiest and fastest way to get started with an Instant Bug Test, requiring minimal setup. You can optionally provide a basic description of what you’d like testers to focus on, along with a list of known bugs and issues to avoid.

    QA / Functional Test Cases

    Run a functional QA test with a set of test cases to test specific functionality and features within your app. When users run into an unexpected or failed result, they’ll submit a bug report detailing the issue. Mix functional test cases with UX- or UI-based survey questions for a robust and powerful test.

    UX Feedback (with Bug Reporting)

    Design a custom test process by creating a set of tasks and survey-based questions for your testers to follow. Whenever a tester runs into a bug or issue during this process, they will submit a formal bug report using Instabug. You can decide how general or specific you’d like the test process to be: For example, you can provide only a few high-level instructions if you’d like (e.g. register, complete your profile, etc.), or you can provide very specific test cases (e.g. Click on the package icon. Do you see an image of the package?).

    How ErliBird + Instabug Integration Works

    During the test setup process on ErliBird, you’ll be given a WebHook URL. Within Instabug (Settings->Integrations->WebHook), create the WebHook to send your bug reports to ErliBird. See this training video to learn how.
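
    If you haven’t worked with webhooks before, the mechanism is simple: a webhook is just an HTTP POST of JSON data sent to a URL you specify. The sketch below (in Dart, purely for illustration) shows the general shape of a receiving endpoint; the port and the “title” field are assumptions, and in this integration ErliBird hosts the actual receiver for you, so you don’t need to build anything like this yourself:

        import 'dart:convert';
        import 'dart:io';

        // Generic webhook receiver sketch: accepts POSTed JSON reports.
        // The port number and the 'title' field are hypothetical.
        Future<void> main() async {
          final server = await HttpServer.bind(InternetAddress.anyIPv4, 8080);
          print('Listening for webhook POSTs on port 8080...');
          await for (final HttpRequest request in server) {
            if (request.method == 'POST') {
              final body = await utf8.decoder.bind(request).join();
              final report = jsonDecode(body); // the forwarded bug report
              print('Bug report received: ${report['title'] ?? report}');
              request.response.statusCode = HttpStatus.ok;
            } else {
              request.response.statusCode = HttpStatus.methodNotAllowed;
            }
            await request.response.close();
          }
        }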

    Instant Bug Test FAQ

    What is the Instant Bug Test?

    The Instant Bug Test allows you to recruit real-world bug testers to test your app and generate bug reports and feedback. Once your company is approved for self-service submission (without ErliBird review), you’ll be able to launch tests on-demand and start to get results within 1 hour. Your test can be Exploratory (users explore the app on their own to discover and report bugs), Functional (driven by formal tasks and test cases), or a mix thereof.

    Can we customize our test process?

    Yes, you can design your own set of tasks, questions, and formal test cases.

    Do we need to design our own test?

    No, if you’d like, you can use our Exploratory Bug Testing template which requires absolutely zero setup of the test design process. Users will explore your app on their own to discover and report bugs they encounter.

    Can we target a specific device or operating system version?

    At the launch of the Instant Bug Test, we’re allowing for some customization of operating system versions and device types (e.g. most popular, older phones, etc.). Shortly, we’ll be launching the ability for full customization of operating system versions, device manufacturers, device models, and more. What’s more, these are not lab-based devices or even cloud devices. These are real-world users who own and use these devices every day.

    Need More Robust Beta Testing?

    Instabug Customers Get a 15% Discount On Our Beta Testing Packages.

    Need something more in-depth? We also offer our full Beta Testing product with Instabug integration. Our beta tests provide in-depth testing and user experience feedback from targeted real-world users over 1-week test iterations. We provide a Project Manager and lend our industry expertise to design a successful beta process. While our new Instant Bug Test is a single-session test process (i.e. testers use your app in one sitting to explore and report bugs over 30-60 minutes), our Beta Testing process is perfect for tests that last days or even weeks and include multiple sets of tasks/questions along the way.

    This makes it possible to run tests that match your real-world organic usage scenario. For example, need 100 testers to play your new Steam game together and provide feedback on the experience? No problem. Need 50 testers to use your new News Reader app each day for a week? Solved. Need users to track their sleep with your sleep tracking app and help hunt down all of the bugs and technical issues? You get the idea (we can do that).

    About Instabug

    Instabug is a comprehensive bug reporting and in-app feedback SDK that is helping companies worldwide build better apps. With just one line of code, beta testers and users can report bugs and provide developers with detailed feedback just by shaking their phones. The level of detail Instabug’s SDK captures with each bug report has attracted tens of thousands of companies, including Lyft, T-Mobile, and eBay, to rely on Instabug to enhance their app quality and iterate faster.

    Check this link to learn more about how Instabug can help you receive better feedback.

    About ErliBird

    ErliBird gives you the power to beta test with real people in real environments and collect on-demand user feedback for Android, iOS, websites, desktop apps, and tech products. Powered by a global community of 100,000 real-world testers.

    Learn more about ErliBird’s Beta Testing offerings.

  • Real-Life Lessons from Design Testing Failures

    Whether you’re a startup or a mature corporation, you should user-test new features and UI designs. Sometimes design testing will end in failure, but you can learn from your mistakes, as well as others’. Here are some lessons learned from real-life design testing failures.

    When you redesign your product, you do it to satisfy your users in a way that meets your business goals. If you fall short, it could cost you. So it pays to test your new features and UI designs with some of your users before rolling those changes out to the whole world.

    You’ll find, however, that design tests often fail—meaning that your new design didn’t perform better than the old one.

    Take solace in the fact that those failures happen to everyone. Take heart in the fact that there’s a lot you can learn from those failures. And take some of the lessons learned by others and use them as your own.

    Here are some real-life examples of design test failures and lessons-learned from two companies: a startup you may have never heard of, and a juggernaut that you definitely have heard of.

    Groove’s neutral results

    Helpdesk software provider Groove wanted to increase their conversion rates: more landing page visitors signing up for the free trial, and more marketing email recipients actually opening the email.

    The startup experimented with different design and content tweaks. The changes included color and copy choices on website pages and subject-line text choices on marketing emails.

    The changes Groove experimented with were not random ideas. They came from conventional wisdom and credible sources who found legitimate success employing similar changes. All sounded promising.

    Groove individually tested their many design variations using A/B tests.

    Most of them failed.

    When failure looks like no difference at all

    Failures aren’t always dramatic.

    Sometimes you test out that new design you think will work better… and it works just about the same as the old design. It’s not a spectacular failure, it’s just “meh.”

    Groove experienced a lot of these shoulder-shrug “failures”. Company founder Alex Turnbull straightforwardly calls them “neutral results”.

    Groove’s design testing included trying out different colors on call-to-action buttons, and listing prices in different suggested forms (e.g., “$15” vs. “$14.99”). Groove tested these and other types of design variations, but often found the results to be inconclusive.

    When an individual test has a neutral result, it can be difficult to draw any meaningful conclusion.
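
    To see why neutral results are so common, it helps to run the numbers: at typical traffic levels, a modest lift is statistically indistinguishable from noise. Here’s a minimal two-proportion z-test sketch (in Dart; the visitor and conversion counts are invented for illustration):

        import 'dart:math';

        // Two-proportion z-test: is variant B's conversion rate genuinely
        // better than A's, or is the difference within random noise?
        double zScore(int convA, int nA, int convB, int nB) {
          final pA = convA / nA;
          final pB = convB / nB;
          final pooled = (convA + convB) / (nA + nB);
          final se = sqrt(pooled * (1 - pooled) * (1 / nA + 1 / nB));
          return (pB - pA) / se;
        }

        void main() {
          // Made-up numbers: 5.0% vs 5.5% conversion, 2,000 visitors each.
          final z = zScore(100, 2000, 110, 2000);
          print('z = ${z.toStringAsFixed(2)}');
          // Prints z = 0.71 -- far below the ~1.96 needed for 95%
          // confidence, so the test reads as a "neutral result".
        }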

    For Groove, lessons arose from the aggregate of their design testing efforts.

    Lessons

    • Expect failures. For each winning result, you’ll rack up a whole bunch of failures.
    • Don’t rely on “proven” design tricks. Tactics that worked great for other companies:
      • May not apply to your particular audience
      • May not work well in your particular design
      • May have lost effectiveness over time
    • Optimize your designs AFTER you’ve settled on a well-researched approach that is specific to your users.

    Quote

    From Groove company founder Alex Turnbull:

    […] one of the most important lessons we’ve learned as a business is that big, long-term results don’t come from tactical tests; they come from doing the hard work of research, customer development and making big shifts in messaging, positioning and strategy.

    Then A/B testing can help you optimize your already validated approach.

    Netflix’s failed feature

    Netflix had an idea for a new feature to add to their home page that would increase new user sign-ups.

    The folks at Netflix are experts at A/B testing, so naturally they A/B tested their new design before rolling it out to the whole world.

    It failed.

    Undeterred, the product design team created variations on design and approach for the new feature. Over the course of a year they tried these variations out in four additional A/B tests.

    They all failed.

    What’s going on?

    The feature in question: allow potential users to browse Netflix’s catalog of movie and TV offerings before signing up.

    Netflix had very strong evidence to believe this feature would increase user sign-ups. Users were requesting the feature in significant numbers. The design team liked the idea, too. It made a ton of sense.

    But in every test, the home page with the browse feature performed worse than the home page without the feature. The sign-up rate was simply lower whenever the new feature was available.

    Over time, Netflix came to understand why the browse feature was a failure. This included learning more about their users and learning more about their own product. They were hard-fought lessons, but valuable ones.

    Allowing potential subscribers to see what Netflix had to offer? At its core, this was still a good idea. But the “browse” feature was the wrong approach.

    Part of the reason for this, the team learned, was that to really see what Netflix has to offer, you have to actually use Netflix. The customized experience of the service is the key, and that experience is not captured by impersonally browsing the media catalog.

    The barrier to entry for potential users was already low: a free one-month trial. So the goal of the Netflix homepage should be to direct potential users to the free trial. The browse feature, in all of its incarnations, was actually a distraction from the conversion goal, no matter how it was designed.

    In the end, a static background image that gave users an impression of Netflix’s breadth of offerings was better than a full-fledged browse feature. And directing visitors’ focus toward the free-trial sign-up was far more effective.

    Lessons

    • Users don’t always know what they need
    • Fully understanding what makes your product experience special to your users will help you make better design decisions
    • Don’t distract your users away from the goal
    • Test your new design before rolling it out, even if you’re *sure* it will work

    Quote

    As Product Designer Anna Blaylock said about the string of tests: “The test may have failed five times, but we got smarter five times.”

    Design testing: Better to try than not try

    All in all, testing designs before rolling them out to all your users is wise, and relatively inexpensive:

    • only a small portion of your user base sees the experimental interfaces while the tests are being run, so any negative impact is minimal;
    • despite the many likely failures, there are valuable lessons to be learned…
    • not to mention the occasional “big win” successes that make your efforts worth it.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Nvizzio / BetaTesting – Game Playtesting Case Study

    Nvizzio Creations conducts successful beta game playtesting for new Early Access Steam game.

    Clicking the button to officially launch a game to the public is a thrilling and high-stakes moment for every game developer – especially in the age of instant consumer feedback. For apps and games launching into public marketplaces like the App Store, Play Store, and Steam, poor reviews and ratings can quickly wreak havoc on your brand and have a long-lasting impact that can be difficult to overcome. It’s important to get it right the first time, and this is never more true than with a new brand’s first major release.

    So when Montréal-based Nvizzio Creations set out to launch their first self-published game through Steam, they planned a thorough beta testing phase prior to their official Early Access launch date, and partnered with BetaTesting for remote multiplayer game tests with real gamers.

    “Player feedback is vital to any game, but even more so with a multiplayer experience,” said Brent Ellison, Game Designer.

    Testing a multiplayer game presents a unique challenge during the beta testing phase. To coordinate a successful real-world beta test, BetaTesting recruited targeted gamers, then designed and executed multiplayer tests with 100+ gamers using a wide mix of real-world devices, graphics cards, operating systems, and hardware configurations.

    “It was important for us to see how people received the game in their home environment,” said Kim Pasquin, General Manager at Nvizzio Creations. “We simply knew we did not have the capability in house to see how the product would react with 100+ people playing together. Getting data from our real world users was something we really wanted to do before launching the game.”

    When choosing a beta testing partner, a key factor for Nvizzio was the ability for BetaTesting to recruit the right candidates for the test. “Finding candidates that would potentially be interested in our product in the real world and getting their feedback and data was our principal decision factor,” said Pasquin.

    When the test was complete, the results proved immediately helpful.

    “Working with BetaTesting was fantastic and the test was very useful,” said Pasquin. “During beta testing, we uncovered several technical issues with the game. We discovered lag, ping and server connection issues that we did not experience during in-house testing with high-end computers. Getting testing candidates with real-world equipment was invaluable to us.”

    “The results allowed us to greatly improve the game. We were able to fix most of our technical issues. We also tested those fixes with a second test where players saw huge improvements,” said Pasquin.

    The Early Access launch has proved incredibly successful to date, with a “Very Positive” rating and close to 90% of users recommending the game. With continued player feedback and ongoing development, the future is bright for Eden Rising.

    “Not only are we stepping into self-publishing for the very first time, but we have worked hard to create something truly special. I believe players will love the co-op experience we have created in this lush, vivid world.” said Jeff Giasson, President of The Wall Productions.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • Try a 5 Second Test to See if Your UI Design Makes the Right Impression

    If your visual design needs to make a specific impression on your users, don’t rely on your project team’s assessment of the UI. Try this quick 5 second test with actual users to confirm your brand goals are being achieved.

    Is your product strategy dependent on achieving a specific initial brand impression?

    There might be any number of reasons why you want your website or app to immediately elicit a particular emotional reaction from your users.

    It might be that your research indicates your web app has to appear fun or interesting enough to keep your target audience from leaving. Or maybe your website needs to immediately feel modern and authoritative so that its content is taken seriously. Or perhaps it’s imperative that your mobile app seem professional and trustworthy so that your users are willing to create an account or enter their credit card information.

    Regardless, if your answer to the brand question was YES, then you should run a “5-Second Test” to see if your visual design is successful in achieving the desired effect with actual users.

    Your users didn’t sign off on your visual design

    Your product might have a very nice visual design that totally meets your stakeholders’ vision.

    And if your product doesn’t have specific high-priority branding objectives, then getting sign-off approval on your design might be all you really need. You can then move forward with usability testing and beta testing, and you’ll only make changes to your visual design if you encounter specific problems.

    However, if your visual design is meant to induce a specific emotional effect from your users, then you’ve got more work to do.

    Sure, your project team thinks the visual design accomplishes the required brand impression. That’s a nice place to start. But you need to test your design with actual users to make sure that they agree.

    The 5 second test

    The 5 second test is simple, and the name spoils a lot of the mystery.

    It goes like this:

    1. Prep the user to view a visual design
    2. Show your design to the user for just 5 seconds and then stop
    3. Ask the user questions about the design they just saw

    Despite the short, non-directed period of time viewing the design, users will be able to give you useful and meaningful data about their impression of the design.
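
    Incidentally, the 5-second timing is easy to automate if you’re showing the design on a device. Here’s a minimal Flutter sketch (the asset path is a placeholder for your own mockup image) that displays the design for exactly 5 seconds and then swaps to a question prompt:

        import 'dart:async';
        import 'package:flutter/material.dart';

        void main() => runApp(MaterialApp(home: FiveSecondTest()));

        // Shows a design mockup for exactly 5 seconds, then swaps to a
        // question prompt. The asset path below is a placeholder.
        class FiveSecondTest extends StatefulWidget {
          @override
          State<FiveSecondTest> createState() => _FiveSecondTestState();
        }

        class _FiveSecondTestState extends State<FiveSecondTest> {
          bool _showDesign = true;

          @override
          void initState() {
            super.initState();
            Timer(Duration(seconds: 5), () {
              setState(() => _showDesign = false); // hide the design
            });
          }

          @override
          Widget build(BuildContext context) {
            return Scaffold(
              body: Center(
                child: _showDesign
                    ? Image.asset('assets/design_mockup.png')
                    : Text('What do you remember about the design?'),
              ),
            );
          }
        }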

    But why does this work?

    Reliable opinions at a glance

    Meaningful split-second judgements are a real thing. Users can reliably make their judgement of a visual design in as little as 1/20th of a second (50 milliseconds).

    At a glance, users will subconsciously make an instantaneous judgement of the design’s general visual appeal, and even some of its specific aesthetic factors. This judgement, made in a fraction of a second, is highly consistent with the judgement they would form if given more time to look at the design.

    The design-review time in your 5 second test is a hundred times as long as a 50-millisecond glance (math!), but is still, you know, a pretty short time.

    In 5 seconds, the user gets to take in the visual design, get an impression, and feel some of its impact. The user doesn’t really have time to inspect, read content, or dig deeper. Which is good—the impression stands alone, fresh in the mind.

    Example and tips

    Nielsen Norman Group has a short video about the 5 second test, which includes some helpful tips.

    I’ll add an additional tip here: don’t make your post-view question period too long or too complex. You want to capture the information you need while the design is still fresh in the participant’s mind. The participant doesn’t (and shouldn’t!) get to see the design again while they answer.

    [Nielsen Norman Group (NNgroup) video on the 5-Second Test, from YouTube]

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.

  • When Beta Testing, Verify Brand Experience at the End

    When beta testing your app or website with real users, you’ll want to save your brand experience questions for the end of the session. Here’s why.

    What are your users’ impressions of your product? How do they feel about it?

    Do they think of your product as fun? Corporate? Simple? Friendly? etc.?

    These types of questions relate to your users’ brand experience, which is something you can verify.

    Brand experience

    Your users’ brand experience with your digital product includes their attitudes, feelings, emotions, and impressions of that product. (Brand experience also affects—and is affected by—external interactions, your larger brand, and your company as a whole. But for this article we’ll focus on individual products.)

    Brand experience is generally important, though it may not be a high priority for every project or every company. Your product team, however, might find it important enough to directly and specifically evaluate. This might be because:

    • You want to collect brand-related feedback and experience data for future consideration; or
    • You have specific goals regarding brand-related attributes you need to verify are being met, so that your product can be successful.

    Early testing vs beta product testing

    If it’s vital to your business goals that your product elicit particular reactions from your users, then you are surely designing your product with the intent of giving those impressions and drawing out those emotions.

    But you can’t just rely on your intent. You need to test your product to see if real users’ interpretation of your brand matches your intent.

    Versions of such testing can occur early and late in the project lifecycle.

    Early testing

    Your visual design is key to your users’ first impression of your product.

    Fortunately, you can evaluate your users’ brand impressions rather early in a project, before your team writes a line of code. Once you have high-fidelity design mockups, or a high-fidelity prototype, you could, for example, run 5 second tests with your users to verify that your design makes the right first impression.

    Beta testing (or later)

    Within your product, your visual design is not the only thing that makes a brand impression on your users. The rest of your design—navigation, animation, interactions, etc.—as well as your content contribute to the user’s opinions and reactions as well.

    Once your product reaches Beta, you have an opportunity to verify your target brand traits in a deeper way, after your users interact with your working product.

    Emphasis on after.

    Brand experience assessments at the end of beta sessions

    In your beta test sessions, the best time to collect brand experience data is at the end… and only at the end.

    Why at the end?

    Users can get a pretty good brand impression from a simple glance at your visual design. But interaction with your product will affect your users’ brand perception as well.

    For example: an interface whose visual design gives off strong trustworthy and professional vibes might see those brand traits eroded by wonky UI interactions and amateurish text.

    Kathryn Whitenton of Nielsen Norman Group gives this more straightforward example: “a design that appears simple and welcoming at first glance can quickly become confusing and frustrating if people can’t understand how to use it.”

    After performing some realistic tasks in your product, your beta test users will have formed a deeper impression of your product’s brand traits. In your beta testing, the cumulative effect of your product’s design and content is what you want to capture. You maximize this by assessing brand experience at the end of the test session.

    Why not assess at the beginning AND the end?

    The instinct to evaluate twice and compare the data is a good one. Knowing where the brand impression starts and seeing how it changes after your users perform beta tasks might help you zero in on what you need to improve.

    But here’s why you shouldn’t assess at both the beginning and the end, at least not in the same session with the same user:

    When you ask users to describe and/or quantify their reactions to your design, they will then consciously form and solidify opinions about your design. Those opinions—while not unmovable—would self-bias the user for the remainder of the test.

    Thus, asking about brand experience at the beginning (or in the middle) is likely to affect the opinions you’d get from those same users at the end.

    However, if you do wish to compare “both kinds” of data, you could instead:

    • Split beta testers into two groups: those who are evaluated at the beginning, and those who are evaluated at the end; or
    • Compare your end-session beta test data with your previous pre-beta visual design impressions data (assuming you have some); or
    • Re-use some of your pre-beta visual design test participants in your beta testing. If it’s been a few months (for example) between the visual design tests and the beta tests, those participants probably won’t remember many details of the earlier testing, and the impact of self-bias would be minimized.

    One last thing: watch out for bugs

    You’re testing a beta product, so of course there might be some bugs your users encounter. Keep in mind that bugs may affect your users’ impression of the product for a number of brand traits you may care about.

    Factor the impact of beta bugs into your post-testing analysis, and/or retest with subsequent product iterations.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.