• You Don’t Always Need a Search Box

    Sometimes including a search box in your application feels like the right move, even when it could actually be detrimental.

    Over the course of my career, I’ve found myself talking a client out of including search functionality in their product several times. I wouldn’t say it’s a frequent occurrence, but it has definitely been a recurring theme.

    Sometimes product owners include search as part of the product scope when the feature simply isn’t necessary. Other times search is requested when the feature would actually be detrimental to their goals. Either way—and especially in the latter case!—the project is better off without search.

    In my experience, the search/goals mismatch phenomenon is more common on web projects. I’ve also encountered this for at least one desktop application. (But not on any mobile app projects—yet?)

    When facing this urge to include search functionality—whether it’s coming from you or from a product owner who is not you—it helps to understand where the motivation comes from.

    Why would a client request search functionality if they don’t need it?

    There are, of course, many possible reasons, but I’ve noticed it’s often some combination of the following.

    The client:

    • Assumes they need search for their product;
    • Tends to think of a search element as “standard”;
    • Knows that their CMS supports search, so might as well include that feature, right?

    Let’s start with the last item.

    My CMS supports search, so we might as well include a search box, right?

    No.

    No to the general idea that if our third-party tool supports feature X, then we might as well include that feature in our application.

    For one thing, even if the implementation and integration of said feature is as simple as flipping a switch and requires zero configuration—and how often is that, really?—you will still need to test it, investigate it to mitigate risk, and so on. Feature X is never free. It’s always worth considering whether you actually need pre-fab functionality before adding it to your product.

    For another thing, feature X might actually be detrimental to your goals. And that’s often the case with search.

    A short list of how search might be detrimental:

    • It circumvents your messaging and presentation – a detriment to your business goals
    • It invites your users to abandon your well-designed navigation and try to figure out their own way through your site – leading to a bad user experience
    • Search functionality is often bad out of the box – leading to a bad user experience
    • Users having a bad experience with search will flee – bad for everybody
    • Good search can be difficult to design and implement – an unneeded expense

    The idea of search as “standard”

    It may be that when the client thinks of the “prototypical website”, that site includes a search box in the upper-right corner of the screen.

    Or it might be that the client’s vision for the website or web app they want to build is based on specific existing examples, and those examples include search.

    Either way, the search feature sneaks in through the side door and finds itself part of the product vision. It becomes part of the bulleted system requirements or rough design sketches used to communicate desired functionality at the start of a project.

    But soon the client and I (or you!) will be talking through what he or she wants to build, the business goals behind it, and the users the product is meant to serve. It’s not until that point (and hopefully this is early in the project lifecycle) that we discover that search might not actually be needed or desired.

    The assumption that search is needed for this product

    A mistaken assumption that search is needed may simply come from the idea, as discussed above, that it’s “standard” — a staple of good websites.

    Or it might come from the product owner’s experiences with other sites and applications. Bad content organization and/or badly designed navigational systems can make people presume that these aspects are always problematic, and that search needs to be there as a fallback. But this is not a good strategy.

    In this case, your job (or the job of the appropriate member of your team) is to convince the product owner that good design of content and navigation elements is both possible and essential. And then actually make that happen through user research, design, and testing.

    Another reason search may be on the menu when it shouldn’t be: someone has simply prescribed the wrong solution (search) for a legitimate goal. This happens all the time, certainly not just with search.

    You’ll likely find that offering better alternatives to fulfill the goal is very effective.

    A caveat

    Sometimes a search feature is needed, but only for a narrow data set in specific contexts. In that case, the better approach is to move from global search (the search box is on every page) to contextual search (the search feature appears only where it is applicable).

    A caveat to the caveat

    Sometimes employing filtering options is a better solution than contextual search. Filtering may take the form of tags, categories, attributes, etc. that your team (or your users themselves) may curate to better fulfill your users’ goals.


  • Why You Want to Use Longitudinal Studies for Your Product Testing

    The longitudinal study is a powerful tool for testing your product and learning about your users. Here are some examples of why you might use longitudinal studies.

    User testing tends to be single-serving research. Whether it’s remote, unmoderated testing or an in-person moderated session, user testing tends to span a short period and gather a single set of data from each participant—a “snapshot” of his or her experience with your product.

    A longitudinal study, on the other hand, is user testing in which you collect data from the same participants multiple times over an extended period.

    Longitudinal testing allows you to see how your users’ behaviors and attitudes change over time, and/or test out features and tasks that could not be performed naturally (or at all) in a single test session. It’s appropriate for both beta testing and post-release product testing.

    Longitudinal studies are sometimes referred to as “diary studies”, as if the two terms were interchangeable. However, having your participants enter data into a diary is just one method of collecting data. You might use other methods instead of, or in addition to, a data diary.

    In fact, you really have carte blanche to combine surveys, test data collection, and user instructions however you want. The flexibility of longitudinal studies is part of how they enable you to discover insights about your product and your users that you wouldn’t be able to otherwise.

    Sound intriguing?

    Below are some examples of why you might want to use longitudinal testing. The list may not include your team’s exact needs, but it should spark ideas about how you can use longitudinal studies to better understand your users and improve your product (or service or system or process).

    Sound confusing?

    Feel free to read more about longitudinal studies before or after you continue here.

    Examples of why you’d want to use longitudinal studies

    I don’t know you, but you might want to use longitudinal studies to help you to find out things like:

    How users’ attitudes toward your product change over time

    Longitudinal testing can show you how users’ attitudes about your product evolve over time. After you collect the data, you can map users’ journeys, and tie attitude changes to circumstances both inside and outside of the software application.

    Example insight goals:

    • Do users remain engaged with your app over time? If not, then where and why do your users become disenchanted?
    • Is your mobile game still fun for users after they master the core game mechanics?
    • How do task-completion failures and frustrations affect long-term engagement and enjoyment?
    • How do users perceive your brand after significant interaction with your product and business?
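
    To make the first of those goals concrete, here is a minimal sketch of how you might chart attitudes over the course of a study. It assumes Python with pandas and a hypothetical diary-study export (participant ID, week of check-in, and a 1–5 satisfaction rating); your own check-in schedule and measures will differ.

    ```python
    import pandas as pd

    # Hypothetical diary-study export: one row per participant per check-in,
    # with a 1-5 satisfaction rating captured at each point in time.
    entries = pd.DataFrame({
        "participant":  ["p01", "p01", "p01", "p02", "p02", "p02"],
        "week":         [1, 2, 4, 1, 2, 4],
        "satisfaction": [4, 4, 2, 3, 4, 5],
    })

    # Per-participant trajectory: how each tester's attitude moves over time.
    trajectories = entries.pivot(index="participant", columns="week",
                                 values="satisfaction")

    # Cohort view: average attitude at each check-in, to spot where engagement
    # starts to dip across the group.
    trend = entries.groupby("week")["satisfaction"].agg(["mean", "count"])

    print(trajectories)
    print(trend)
    ```

    A table like this is just a starting point: the interesting work is tying each dip or jump back to what participants reported was happening at that check-in.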

    Natural use and usage patterns over time

    Over a longer period of time, users have a chance to use a product or service in ways that are more natural to them. As a result, you have a chance to see how users settle into usage patterns, how they discover and use features without prompting, etc.

    Example insight goals:

    • When and how often do users use your product over time? Do they incorporate it into their daily or weekly habits?
    • Do internal and environmental prompts (content changes, internal and external notifications, etc.) have an impact on product usage?
    • Do users find infrequently used features on their own, without prompting? When and how often do they use them? Are there usability problems exposed by infrequent use?
    • Why do users stop using a particular feature? (e.g., Do they forget about it? Do they find the function useless or frustrating?)

    How users’ own data affects their behavior

    In the limited time of a traditional test, it’s often necessary to use canned data to facilitate tests and evaluate success. With the time available in a longitudinal study, the user might be able to use their own data, which will be more meaningful to them than test data would be.

    Example insight goals:

    • How do users behave differently when using their own data (versus using prepared test data)?
    • How long does it take for users to add their own data to your social media app when they are self-directed?
    • How do users’ preferences and activity change after they’ve added a “critical mass” of their own data?

    How users handle long-term tasks

    Some tasks take longer than can naturally occur within a single usability test session. For example, doing your taxes, or planning a vacation itinerary. In real life, users will start a task, leave, return, continue, leave again, etc.

    Example insight goals:

    • How well do users handle long-term tasks in your mobile app?
    • Do your users want or need to break up tasks across multiple usage sessions (particularly in flows that were designed assuming completion in a single session)?
    • How well do users handle disengaging from and reengaging with a long-term, multi-session task? Does usability decrease when users spend more time away between sessions?


  • 4 Reasons NOT to Use Polished Prototypes in Early Usability Testing

    Your usability testing might be negatively affected by an overly polished prototype.

    In our current age of prototyping tools, it’s cheaper and easier than ever to create beautiful, clickable prototypes for usability testing. It is now feasible to sit your very first usability test participants in front of a fully interactive prototype that looks and acts like a finished product. You may be tempted to do just that… but there are reasons why you shouldn’t.

    Here are four ways finished-looking prototypes may be harmful to your early-project usability testing.

    1) Beauty is skewing your usability testing feedback

    If a highly aesthetic design is alcohol, then the aesthetic-usability effect is “beer goggles”. Or, as Lidwell, Holden, and Butler stated more intelligently in Universal Principles of Design:

    The aesthetic-usability effect describes a phenomenon in which people perceive more-aesthetic designs as easier to use than less-aesthetic designs—whether they are or not.

    An aesthetically pleasing design makes people feel more positively about your product and consequently have a higher tolerance for design issues. This psychological effect is great news for when you start selling your shiny final product, but may mask usability issues and feedback in the meantime.

    Testing with a lower-fidelity prototype—which would presumably be less intoxicatingly beautiful than your final product—may help reduce the mismatch between how your users perform with the product and what they say about it.

    2) Root causes of issues are harder to determine

    If you jump right to a hi-fi prototype for your initial user testing, it becomes more difficult to determine what impact your surface visual design is having on usability.

    By running usability tests on lower-fidelity prototypes first, you can establish a usability baseline. From there, you can more accurately determine where the final visual design is helping or harming usability. Did the final design draw more attention to important information that users were occasionally missing? Did the final design make the buttons look less like buttons to your users? The answers to such questions will be clearer if you have a lower-fi usability baseline to compare to.

    3) Tighter-lipped users

    Your finished-looking prototype may be causing your users to hold back. A long-held tenet of usability testing is to use a prototype that intentionally looks a little rough.

    As simply stated in The Wiley Handbook of Human Computer Interaction, “users are less likely to provide useful feedback if the design already looks finished.”

    Reasons for this include:

    • politeness;
    • an inclination to focus on the details instead of the underlying structure when provided with a polished product;
    • and a general sense of futility in providing feedback on something that is essentially done.

    You can (and should) inform your testers that the product they are testing is unfinished and that their feedback will be helpful in improving it. But such instruction will not completely overcome users’ conscious and subconscious tendencies when faced with a fully-polished design.

    4) You’re distracting from your mission

    You need to get the most out of the usability testing you have available to you. In any test session, you have a targeted set of things you want to accomplish and a limited amount of time to do it.

    For an effective test, you need users to be able to focus on the parts you actually want to test, not be distracted by elements that are superfluous at the time.

    In early testing, you want to establish whether the skeleton of your design and the primary interactions are going to work. Extraneous visual design, text, and images have the potential to distract from the mission and become the subject of the feedback you receive from users.

    If you get the skeleton working well first, you can more effectively test the skin later.

    The upshot

    When preparing your product for testing, your goals should be different depending on what kind of testing you’re preparing for.

    Going into beta testing, for example, you typically want your test product to be as complete and close to market-ready as you feasibly can make it.

    When preparing for early usability testing, however, there are reasons why you don’t want the product to look quite so polished—even if it’s relatively easy to whip up high-fidelity, finished-looking prototypes.


  • GDPR Compliance for Startups: 9 Reasons Not to Freak Out

    There’s a lot for startups to manage when it comes to GDPR compliance. You may be behind schedule or just generally stressed about it. Here’s why you shouldn’t panic.

    [ Note & disclaimer: Learn about BetaTesting’s GDPR readiness and updated Privacy Policy here. The views in this article are those of the independent contributing writer and should not be taken as legal advice. With that said, did anyone ever accomplish anything worthwhile by freaking out?]

    As of May 25, 2018, the EU’s General Data Protection Regulation (GDPR) is in effect, affecting businesses and organizations worldwide.

    Hopefully you’ve already progressed through the Five Stages of GDPR Grief™ and have arrived at acceptance. But the emotions don’t end there. Chances are you’re behind on your GDPR compliance work and have added panic to the stress you were already feeling. (Arguably these could be two more stages of GDPR grief, but I’ve already trademarked “Five”, so they’re not.)

    The following list is here to make you feel a little better about your GDPR compliance status.

    9 reasons not to freak out about your current level of GDPR compliance

    Understand that you should definitely keep moving forward with your GDPR compliance work.

    But in the meantime, try to find some comfort in this list.

    1. It’s not just you

    A significant percentage of companies aren’t yet fully compliant with GDPR. (The Wall Street Journal says 60–85% are not.) Many businesses aren’t even fully aware of how GDPR affects them.

    You have lots of company when it comes to lack of preparedness.

    2. GDPR is not trying to destroy us

    The purpose of GDPR is to protect people’s private data. Its purpose is not to destroy economies or to fine into oblivion companies that were sincerely trying to comply.

    And there’s not really a deadline. GDPR isn’t like a nuclear bomb that goes off once and immediately decimates everyone who didn’t fully complete their bomb shelter in time.

    May 25, 2018 is the start, not the end. It’s an ongoing process.

    3. Regulators aren’t ready, either

    The regulators who will be policing the GDPR rules aren’t ready yet either.

    There is no single entity that enforces GDPR. Instead, it will be managed by a bunch of national and regional regulatory authorities. A recent Reuters report stated that “seventeen of 24 authorities who responded to a Reuters survey said they did not yet have the necessary funding, or would initially lack the powers, to fulfill their GDPR duties.”

    4. There won’t be a lot of proactive investigating by regulators

    In part because of their lack of readiness, authorities will be more reactive than proactive when it comes to identifying non-compliance.

    From that same Reuters report: “Most respondents said they would react to complaints and investigate them on merit. A minority said they would proactively investigate whether companies were complying and sanction the most glaring violations.”

    5. You’re one fish in a big ocean

    There are tens of millions of businesses in the US and the EU alone. That’s not even counting charities and other organizations that may also be subject to GDPR.

    Point is, there are tons of companies doing tons of business with EU residents. Yours isn’t likely to get more attention than any other.

    6. Some of what you’ve heard is just fear mongering

    Much of the reporting on GDPR is well-meaning but overly dramatic. But some groups, from security firms to sensationalist click-bait news sites, have a profit motive in scaring you about GDPR.

    Don’t let those jerks freak you out.

    7. There are other deterrents besides fines

    Regulators aren’t likely to immediately jump to monetary fines in any infraction case.

    UK Information Commissioner Elizabeth Denham says:

    And while fines may be the sledgehammer in our toolbox, we have access to lots of other tools that are well-suited to the task at hand and just as effective.

    […] GDPR gives us a suite of sanctions to help organisations comply – warnings, reprimands, corrective orders. While these will not hit organisations in the pocket – their reputations will suffer a significant blow.

    8. Actual fines will not be the maximum fine

    That terrifying fine of 20 million euros or 4% of your yearly revenue? That’s the maximum possible fine by law for the worst abuses, not the blanket cost for any non-compliance.

    In practice, when fines do happen, they’ll be proportionate to the actual infraction. The guidelines for how regulators should impose administrative fines state that, all things considered, the fine should be “effective, proportionate and dissuasive” to the offender. That doesn’t mean put the offender out of business.

    9. If you’re actually trying to comply, you’ll be in less trouble

    If you ever actually ran afoul of GDPR to the point of monetary fines, your prior efforts to comply with the law and take data privacy seriously will work in your favor.

    Regulators are instructed to look at each case individually and consider the circumstances. This includes your past and present behavior. If you were intentionally flouting the rules, you’ll receive a bigger fine. If you’ve been earnest in your attempts at compliance and you cooperate with regulators, you’ll be treated with more lenience.

    All your efforts are not for nothing. Stop freaking out and get back to work.


  • Now is the Time to Try Adobe XD… and Make it Better

    Adobe recently made its UX design and prototyping tool free to use. Now is a great time to help shape Adobe XD into the tool it needs to be.

    As you may have already heard, Adobe announced a free version of Adobe XD, their UX design and prototyping tool for web and mobile applications.

    The move to free is certainly a way for Adobe to gain market share in the splintered UI design / prototyping tools space. It also adds another entry point into Adobe’s Creative Cloud funnel, with potential for converting XD users into Creative Cloud subscribers down the road.

    [Adobe’s announcement video on YouTube]

    Still, Adobe’s shrewd business moves can also benefit you. In fact, right now is the perfect time for you to try out XD… and help Adobe eventually make it into the tool you need.

    The free version of Adobe XD

    The good news

    The good news is that the new free version of Adobe XD—called the “Starter Plan”—is almost the same as the paid version of the tool. Other than a few restrictions on your usage, the full range of features of the paid version is available to you in the free version. For the foreseeable future, anyway.

    The limitations of the Starter Plan are:

    • Share only one active project at a time (vs. unlimited);
    • 2 GB of cloud storage (vs. 100 GB);
    • Only a subset of TypeKit fonts—the FAQ says over 280—are available to you (vs. the full library of TypeKit fonts)

    That’s not bad for a free product. It’s comparable to InVision’s free plan, which gives full access to their tool but limits you to one active prototype project. (You must delete that project if you want to start a new one.) To be clear, the Adobe XD Starter Plan allows for unlimited design projects, but you can only share one at a time.

    For what it is, Adobe XD is a worthwhile application. It combines designing and prototyping in one tool, and it’s quick and easy to swap between the two modes. XD is clean, built from scratch for its purpose, and doesn’t carry the baggage of Photoshop or Illustrator.

    Adobe XD incorporates many de facto standard features—such as symbols and color swatches—that help designers create and maintain a large set of artboards. It also includes Repeat Grid, a not-so-standard but beloved tool that can really save you a bunch of time.

    The bad news

    The bad news is that Adobe XD is kind of incomplete. There are things it just doesn’t do. And if any one of those things is something you need for your work, then Adobe XD can’t fully replace your current tools.

    For example, I want the ability to prototype microinteractions, instead of being limited to full-screen transitions only. I’d at least like to be able to mark non-changing areas of the screen to be excluded from screen transitions. Say I have a fixed navigation area: I don’t want a new screen to slide in from the right with its own copy of the navigation bar; I want the navigation to stay put and the rest of the content to slide in. XD doesn’t currently give me any way to do that.

    Competing UX design tool Sketch—which seems to be the primary inspiration for Adobe XD—supports third-party plugins that expand the native capabilities of that tool. For Adobe XD, however, there are no workarounds for what XD can’t do natively, in part because plugin support is one of the things it can’t do…

    …yet. But it will soon.

    Which brings me back to my original thesis.

    You should try Adobe XD (and help them make it better)

    Now that Adobe XD is free, you should definitely try it… and help make it into the tool you want it to be.

    Adobe is clearly still working to expand and improve Adobe XD. And they seem to recognize that they must heed their users to get Adobe XD where they need it to be.

    The path to a better Adobe XD

    1) Feature requests via forum

    Adobe is listening. Users are encouraged to submit, vote, and comment on feature requests in the Adobe XD UserVoice forum. For example, at the time of this writing the biggest vote-getter is called “Fixed elements in scrolling artboards (header, nav bar, etc.)”.

    Adobe forum admins will note when a requested feature has started development, to eventually make it into one of the…


    2) Monthly updates

    Adobe updates Adobe XD every month and discusses the product roadmap. For example, the May 2018 update announcement discussed several requested features that were added to the tool this month, but also gave me hope that my wish for microinteractions will be at least partly addressed in the near future:

    Over the coming months you can expect to see significant progress in advanced prototyping and animation capabilities, new team collaboration features, and support for extending XD via plug-ins […].


    3) Plugin support and development

    So plugin support is coming soon, too. Developers are encouraged to join in on plugin development as soon as it’s ready. The newly announced $10 million Adobe Fund for Design will also support Adobe XD plugin developers, who can apply for a slice of the funds.

    Get on it.


  • You Might Build Your Next App This Way: Cross-Platform With Google's Flutter

    Google invests heavily in Flutter, a cross-platform framework for building native iOS and Android apps from a single codebase.

    Building an app for both iOS and Android

    If your business wants to create a mobile app, you likely want it to appear on both iOS and Android. Simply speaking, there are two ways to do this:

    1. Native development — develop a native iOS app and a native Android app, separately
    2. Cross-platform development — use a framework to develop both apps from the same codebase

    Native development

    Each platform holder provides a means for software developers to build native apps on their platform. Apple’s iOS SDK (Software Development Kit) enables developers to build iOS apps using Objective-C or Swift. Google’s Android SDK similarly allows devs to write Android apps in Java or Kotlin.

    Cross-platform solutions

    There are also third-party frameworks that allow developers to build apps for both iOS and Android (and sometimes other platforms as well) from one codebase. There might be some platform-specific code you have to (or want to) write, but the idea is that the vast majority of the code is shared across all platforms. Building with these frameworks allows devs to produce real apps that can be released and sold on the iOS App Store and the Google Play store.

    Pros and Cons

    The big-headline advantage of cross-platform development is producing your iOS / Android app faster and more cheaply than building two separate native apps. A single codebase for both apps is a benefit throughout the product lifecycle, as every new feature and change needs to be developed only once, instead of once for each platform.

    Other advantages include having a single development team for both apps. iOS and Android native development often require a separate development team for each platform, if not because of the separate knowledge bases, then because parallel development is required if you want apps or features to be available on both platforms simultaneously.

    There are disadvantages, of course, and they vary depending on the framework. I won’t go too deeply into those here. Suffice it to say that some frameworks result in slower-performing apps (particularly if they don’t actually get compiled into native code), display non-native look and feel, provide limited or delayed access to device features, and/or require specific expertise that your team may not have.

    Enter Flutter

    Remember how I said each platform holder provides an SDK for creating apps? Well Google has introduced another SDK for building mobile apps, in parallel with its existing SDK.

    It is called Flutter, and it’s a framework that allows developers to create native mobile apps for both Android and iOS.

    Relatively new, but maturing fast

    Google introduced Flutter a year ago, providing an alpha release and inviting developers to try it out. In February of 2018, Google announced the beta release of Flutter. While the framework is still in beta, developers and companies have already produced real applications with it. The most famous of these is the app for Hamilton the musical.

    Google is putting serious effort into Flutter. As stated in their beta 3 release announcement earlier this month, Google has “invested tens of thousands of engineering hours preparing Flutter for production use” over the past year. In addition, Google touted Flutter at their Google I/O 2018 conference, devoting several hours of livestream sessions to the framework.

    Google has labeled beta 3 as “ready for production apps,” and counts hundreds of Flutter apps already released through app stores.

    A promising option

    Flutter outputs compiled native apps on both platforms, which should largely eliminate the performance issues other cross-platform solutions exhibit. Flutter provides native UI controls for Android and iOS, so while you’re free to design a single branded look and feel for both platforms, you can also use native widgets that iOS and Android users would expect to see in their apps.

    With Flutter, Google has also prioritized ease and speed of development, from how widgets are used to lay out screens, to how you can see code changes reflected almost instantly while developing.

    It’s worth noting that Google chose Dart as the programming language for Flutter. Dart is a programming language few developers have experience with, but is said to be relatively easy to learn.

    The upshot

    A year after Flutter’s introduction, Google has put a lot of muscle behind the framework.

    The desire for rapid cost-effective cross-platform mobile development is not waning. Flutter is a promising framework for building iOS and Android apps in a way that maximizes development speed and minimizes the common drawbacks of non-native development.

    Flutter will definitely tempt many developers and product teams to at least try it out over the coming year. It’ll be interesting to see how much traction Flutter has gained by next May.

    Additional resources:

    What’s Revolutionary about Flutter

    Flutter.io

    Video sessions from the I/O 2018 conference

    Flutter Codelabs tutorials

    Dart programming language


  • Google Fixes Some Problems with Material Design

    Google moves Material Design from a design language to a flexible design system and addresses some longstanding issues.

    It was not a surprise that Google had announcements about Material Design at the Google I/O 2018 developer conference. Hints, clues, and evidence pointed to a mild “refresh” of the four-year-old design language. And the internet talks about that stuff.

    What was surprising about the conference was how little talk there was about the expected stuff, and how much talk there was about additions and changes that attempt to mitigate some of the problems of actually dealing with Material Design.

    Common problems with Material Design

    As Rich Fulcher (the Google guy, not the comedian) noted at the Developer Keynote, Google has heard two main complaints from product teams and developers over the years:

    1. Material isn’t visually flexible enough; different brands’ apps look too similar
    2. Engineering support is lacking

    These were familiar sentiments.

    Visual monotony certainly cuts both ways. While designers on Material UI projects feel hamstrung, users have to live in a world of same-y looking apps and websites.

    “Engineering” support affects both designers and developers. Lack of official support for design tools and development libraries has made dealing with Material Design more difficult.

    As a designer I was not too surprised when I had to rummage for Material symbol libraries, or roll my own in a particular design tool. But I was surprised when one of the developers on my project informed me that implementing one of the standard Material components would actually require a bunch of custom control development. Not fun for him, nor for me, who had to provide a bunch of extra specs to support that custom development.

    Changes to the Material Design ecosystem

    Design refresh

    Yes, there was a refresh of the Material Design standards. The differences are not very significant in appearance or approach. If you hated Material Design before, you still will.

    There are new components like the Bottom App Bar (maybe that’s the only new component?), and tweaks and variations on existing components, like the addition of outlined text fields, and the banishment of plain non-filled text fields.

    For all the speculation ahead of the conference, Google did not actually take much time to talk through their changes to the Material Design standards. That is, except in context of the other things that Google was clearly more excited to talk about, like…

    Material Theming

    Material Theming is based on the notion that teams should make apps that display a more distinctive style… without straying too far from the core principles of Material Design.

    Google seems eager for app makers to lean toward their own look and brand, and away from Google’s. Resources and case-study examples on Material.io offer implicit guidance and encouragement about which aspects designers should feel free to change, and which should remain standard.

    Material Theming attempts to make it easier to create visual themes for your apps that deviate from Google’s standard theme. Changes you make to color, typography, shape, etc. are automatically (or more easily) spread across the entire app design. Importantly, Material Theming is made real via tools and source code Google recently released for designers and devs.

    In addition, Google now provides 5 icon style sets: Filled, Outlined, Rounded, Two-Tone, and Sharp. They are not wildly different from one another—it’s just about how the icons are filled in and what their corners look like.

    Material Theming is an attempt to solve a design-monotony problem while still encouraging us to color inside the lines, as it were. We’ll see what actual teams do with it.

    Material Components

    Material Components (or MDC, for Material Design Components) is a code library to help developers implement web and mobile applications using Material Design UI. MDC is a replacement for previous Material support libraries from Google, but it can be imported into existing codebases without having to start your app from scratch.

    MDC supports Material Theming. For example, on a web project, a developer can modify a few MDC-provided Sass variables to intelligently affect the visual theme across the entire application, while still being able to override individual components as desired.

    Google says MDC is what they use in-house for development.

    Material Components are available for:

    • Web
    • iOS (Objective-C, Swift)
    • Android (Java, Kotlin)
    • Flutter, Google’s cross-platform development framework to build native iOS and Android apps

    Tools

    This part is bittersweet for those of us who can’t/don’t use Sketch.

    Google has released Material Theme Editor:

    The Material Theme Editor helps you make your own branded symbol library and apply global style changes to color, shape, and typography. Currently available for Sketch, you can access the Material Theme Editor by downloading the Material Plugin.

    It looks pretty cool. But whatever.

    Google has also updated Gallery, their tool to help teams share and collaborate on UI design work. The big deal is the new Inspect mode. Material elements in artboards uploaded from Sketch are inspectable. Team members can click on an individual component and view its width, positioning, font size, etc.

    Seems like that might be useful.

    *sigh*


  • When a Failed A/B Test is Still a Success

    If the current design beats the new design in an A/B test (a.k.a. “split test”), the experiment is called a failure. But it’s the best kind of failure there is.

    Let’s say you have an idea that you think will work. You try that idea out… and find out that no, it doesn’t work. In regular life, we tend to call that failure. In a scientific context, however, it’s not failure at all.

    A/B tests are little science experiments we run to improve our products and increase revenue. But we tend to use regular-life terms when discussing the outcomes: if the new design doesn’t beat the current one, we call it a “failure”.

    That’s okay, as long as we remember that in A/B testing, failures can still be successes.

    Success, and two kinds of failure

    In A/B tests (a.k.a. split tests), you try out a prospective design (B) concurrently against your existing design (A), resulting in one of these four basic outcomes:

    1. Positive result (Success) – B performed better than A (by enough to meet your threshold for success).

    Going into a test, your philosophy may simply be “better is better.” Or you might have predetermined a performance delta value that design B must exceed in order to be worth using.

    2. Neutral result (“Failure”) – B performed effectively the same as A (or insignificantly better for your purposes).

    3. Negative result (“Failure”) – B performed worse than A.

    4. Invalid test (Actual failure) – The test was compromised in some fashion such that the data is useless.

    Perhaps design B had software bugs that design A did not. Or mechanisms were not put in place at the start of the test to collect the data needed for evaluation.

    Whatever the reason, if you determine the test was compromised then you should toss the results, fix the test infrastructure problem, and run another test to determine the real outcome of B vs A.
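
    To make the first three outcomes concrete, here is a minimal sketch of how a team might classify a finished, valid test. It assumes Python, a simple conversion-rate metric, and a standard two-proportion z-test; the function name, thresholds, and example numbers are hypothetical, and your analytics tooling may already do this math for you.

    ```python
    import math

    def classify_ab_outcome(conv_a, n_a, conv_b, n_b, min_lift=0.0, alpha=0.05):
        """Classify a valid A/B test as positive, neutral, or negative.

        conv_a / n_a: conversions and visitors for the current design (A)
        conv_b / n_b: conversions and visitors for the new design (B)
        min_lift:     the predetermined performance delta B must exceed
        alpha:        significance level for a two-sided z-test
        """
        p_a, p_b = conv_a / n_a, conv_b / n_b
        # Pooled two-proportion z-test on the difference p_b - p_a.
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

        if p_value >= alpha:
            return "neutral"   # no statistically significant difference
        if p_b - p_a > min_lift:
            return "positive"  # B wins by more than the required lift
        if p_b < p_a:
            return "negative"  # B performs significantly worse than A
        return "neutral"       # significant, but below your practical threshold

    # Example: 4.0% vs. 4.6% conversion over 10,000 visitors in each group.
    print(classify_ab_outcome(conv_a=400, n_a=10_000, conv_b=460, n_b=10_000))
    ```

    The fourth outcome is different in kind: a compromised test has to be caught by checking the instrumentation and the data pipeline, not by running the statistics.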

    When B performs way better than A, it’s an obvious win. But actually, all three outcomes of a valid A/B test—positive, neutral, and negative results—should be considered successes.

    Here’s why.

    How A/B test “failures” can be successes

    You learn something from each A/B test

    At the very least, you learned that your design didn’t work as well as you thought it would.

    It sounds silly, but if you hadn’t tested that, you wouldn’t know. Seriously. You might’ve been arguing about the theoretical benefits of that design change in meetings for years to come. Now you can move on and argue about something else.

    If you go further than that simple lesson, there is…

    An opportunity to learn even more

    Okay, so your design didn’t perform as expected. Think about the results, make connections, capture conclusions, and figure out what to do next.

    Is the underlying idea still sound?

    Perhaps you should try a variation on the design and see if that works better. If that doesn’t work, maybe the idea could be expressed in a completely different way.

    Did your new design fail because you don’t understand your users as well as you should?

    Only you can figure this out. If you think you have a knowledge gap, you can plan some research to fill it.

    But if you honestly think your user research game is solid, you might be right; an A/B test failure doesn’t prove you didn’t do enough user research.

    Did you just learn something about your users that could only come from A/B testing?

    All the interviews, surveys, ethnography, and usability sessions in the world are not going to tell you whether your users are 12% more likely to click a button labeled “Free to Try!” over “Try for Free!”. That’s the kind of thing you can only learn about your users from A/B testing.

    A springboard for better ideas

    A/B test failures answer a question, pose new questions, and inspire new thinking. The more you learn about what works and what doesn’t work, the less scattershot your future design changes will be.

    It was probably cheaper than blindly rolling out the design

    If an A/B test has a negative outcome, the impact on your user base is minimal: only the “B” participants saw the change, over a limited period of time.

    If you had instead pushed that design directly out to all your users, it could’ve meant significant lost revenue. And because you couldn’t compare results to a concurrent control group, it would have taken longer to notice and would have been harder to determine the cause.

    For a neutral-outcome design, a direct roll-out might be a bit less expensive than testing it first—but you’d never really know if it was or not.


    Conclusion

    A/B tests allow you to:

    • measure the relative success of a new design;
    • keep failures from annoying your users and negatively impacting revenue; and
    • gain valuable understanding about your users.

    You’ll have more failures than big wins. That’s fine. Resist the temptation to bury the failures. By examining them, you can turn each one into a success worth sharing.

    Want to get more insight into how other companies conduct A/B testing? Check out our article: How Netflix Does A/B Testing (And You Can, Too)


  • How GDPR Might Not Affect Your Small Business

    The EU’s General Data Protection Regulation (GDPR) impacts businesses worldwide, but here’s how your startup or small business might avoid being part of its scope.

    The European Union’s General Data Protection Regulation (GDPR) is an overhaul of Europe’s data security rules. It gives EU citizens more transparency and control regarding personal data collected and processed by companies and other entities.

    After a two-year transition period, GDPR goes into effect May 25, 2018. And while many EU companies are working hard to be compliant with the law ahead of the deadline, many U.S. and other non-EU companies are surprised to learn that they are subject to the regulations as well.

    Read on to find out if your small business or organization should be freaked out by GDPR, or if you will be relieved to find you may be exempt.

    Look, seriously: I’m not a lawyer, do your own research, all that. But I think this information will be helpful to you. GDPR applies to thousands of businesses that don’t even realize what it is, but there is also an important exception case that is not being addressed much in the news.

    The worldwide impact of GDPR

    GDPR lays down a sweeping set of rules regarding the collection and handling of EU citizens’ personal data. Among other things, GDPR requires that EU citizens very explicitly consent to the collection and use of their personal data, and provides them an array of new powers over that personal data after it is collected.

    For example, businesses must be ready and able to provide any user a digital copy of all their personal data. Businesses must also delete all of a user’s personal data upon request (formerly known as “the right to be forgotten”).

    The costs of complying and not complying

    GDPR is good for consumers in the EU, and consumers outside of the EU will incidentally reap some of the benefits as well. By and large, GDPR forces organizations to prioritize doing right by their users when it comes to their personal data.

    However, as you can imagine, complying with these rules is not simple. Implementing new systems and processes, coordinating compliance with vendors, etc. in order to be compliant takes a lot of work.

    But there is strong motivation to comply. If you are discovered to be in violation, you are subject to legal warnings and then hefty fines: potentially 20 million euros or 4% of the business you did in the previous year, whichever is greater.

    Which is why businesses—particularly those that are late to realize that GDPR applies to them—are scrambling to comply.

    Surprise, this EU law might apply to you, too

    It’s an EU regulation, but it affects the whole world.

    GDPR doesn’t apply only to EU-based organizations, or multinational corporations that happen to have a presence in the EU. GDPR applies to people and organizations anywhere in the world that collect, store, or process the personal data of anyone in the EU.

    It doesn’t matter if you are a free service or a paid service. It doesn’t matter whether or not you share data with other organizations. It doesn’t matter if you are a huge corporation or a tiny entity. It doesn’t matter if you only collect a little bit of personal data—like email addresses for your marketing mail list.

    You might be surprised to find out that YOU collect personal data

    You may think, “I don’t collect, store, or process personal data. GDPR couldn’t possibly apply to me!”

    But under GDPR, “personal data” has a broad definition. It’s not just things like usernames and email addresses and social security numbers. It also includes things like IP addresses and cookie data that could be used to indirectly identify an individual.

    So, if you’re doing something as normal as using Google Analytics on your humble website, then in the eyes of GDPR you’re a “data controller”, and Google is your “data processor”. Both data controllers and data processors have responsibility under GDPR.

    And if your humble site can be visited by people in the EU (it can), you’re subject to GDPR.

    Maybe.

    Why GDPR might not apply to you

    In a Dec. 2017 Forbes.com article, guest author Yaki Faitelson points out an exception to GDPR’s extended scope. This exception is very important to small companies who don’t have a physical presence in any EU country and don’t intentionally market goods or services to the EU:

    The organization would have to target a data subject in an EU country. Generic marketing doesn’t count. For example, a Dutch user who Googles and finds an English-language webpage written for U.S. consumers or B2B customers would not be covered under the GDPR. However, if the marketing is in the language of that country and there are references to EU users and customers, then the webpage would be considered targeted marketing and the GDPR will apply.

    This is an important but not-often-repeated exception. Almost every article I read on the subject of GDPR simply pointed out that you must comply with GDPR if you have collected personal data from even one person in the EU.

    In fact, after finding essentially zero corroboration, I began to doubt Mr. Faitelson’s interpretation of GDPR’s coverage, and started to wonder if it was dangerous advice that I should warn readers to ignore.

    But after relentless digging on your behalf, I did eventually locate the GDPR language and interpretation that backs this up.

    About this exception…

    In short, in the official text of the regulation:

    • Article 3 indicates that GDPR applies to entities outside of the EU using personal data of persons inside the EU when related to: “a) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or b) the monitoring of their behaviour as far as their behaviour takes place within the Union.”
    • Recital 23 (at the top of the regulation document) elaborates that there needs to be some intent of offering goods or services to people or businesses in the EU for GDPR to apply. Some attributes and actions do not indicate intent, and some attributes and actions would be sufficient to conclude intent.

    If your business is completely outside the EU, and you’re not in the business of collecting data to track, predict, or monitor users’ behavior, then that just leaves the part about “offering of goods or services”.

    In that case, if you’re not actually marketing your goods or services to people in the EU, then you’re not subject to GDPR, even if people in the EU might come across and engage with your website or application or mailing list.

    From the “Who Must Comply” page at GDPREU.org:

    [Recital 23] therefore provides a safe harbor to firms that do not market goods or services to the EU, by calling out that they do not need to undertake potentially expensive processes to block EU IP addresses from accessing their websites or reject emails sent by EU mail servers.

    Of course, whether or not you’re intending to market to EU citizens is open to legal interpretation. The GDPR text itself offers some ideas as to what would cross the line and what wouldn’t. GDPREU.org offers some additional insight and interpretation.

    (Note, it looks to me like the GDPREU.org information was the source for the aforementioned Forbes article’s conclusion.)

    DOs and DON’Ts for NOT marketing to the EU

    Let’s say you’re not planning on complying with GDPR at this time, because you are legitimately not targeting the EU for your business. It needs to be apparent that you’re not actually marketing to the EU.

    Here are some examples where you’d be in the clear and where you could get into trouble:

    Okay: You are an American company with just one “.com” website written in English. This is okay, even though English is spoken in the EU, and even though EU residents might happen upon your site or wind up in your mailing list.

    Not okay: Your company has one or more specialized sites related to an EU country, e.g. a website with an “.fr” extension with text in French.

    Not okay: Your company has marketing materials that mention the EU as actual or potential customers.

    Not okay: Your company has marketing materials in a language that is particular to an EU member state, e.g. Bulgarian.

    Not okay: Your service lists prices in a currency specific to EU countries, e.g. Euros or British pounds (the U.K. is still in the EU for now).

    Not okay: Your company actually has a physical presence in the EU.

    When it comes to mobile apps, I strongly suspect that limiting the display language is not enough to avoid being subject to GDPR. Both the iOS and Google Play app stores let you modify the countries where your app will be available. I presume you’d need to remove all of the EU member states (and also the EFTA states of Iceland, Liechtenstein, and Norway) from the list.


  • User Experience Article Roundup – Better Personas and Fast Research

    Read our User Experience Article Roundup: UX rules, bold design, perspective on personas, and other articles and resources from the user experience world.

    The 15 Rules Every UX Designer Should Know

    An easy-to-read mix of rules, principles, and suggestions for anyone who makes products for users. #2 is “Know your audience” and #3 is “You are not the user”, but it gets more interesting from there.

    Even if you’re already familiar with 90% of this, you might find revisiting the ideas helpful and energizing.

    Along similar lines, but from another direction, there’s also…

    Good Designer, Bad Designer

    One man’s list of good morals and bad behaviors for the designers he manages, presented as a charming didactic comic.

    Predictive Personas

    It’s surprisingly easy to lose your way when creating personas. In trying to craft a realistic person, you stuff them full of attributes that don’t matter, instead of attributes that define whether or not they’d actually want to be your customer or use your feature to begin with. Laura Klein will set you straight.

    Note: the meaningful user attributes you uncover when developing personas will help you later when you are defining participant pools for beta testing and user testing.

    How to improve your design process with data-based personas

    A good companion piece to “Predictive Personas”. Tim Noetzel goes deeper into a process you can perform to create demonstrably useful personas, in part by making hypotheses and talking to people.

    He also references Laura Klein’s article. You’ll feel smart when you get to that part, because you already read it. (You read all of these articles in order, right?)

    Rapid UX Research at Google

    An interesting look at how Google’s “Rapid Research” teams do actionable user research with just a one-week turnaround. Heidi Sales details the rapid research process she built at Google, and includes some of the tools her team uses to keep things running smoothly.

    Brutalist Design is the Bad Influence We All Need

    Google’s Material Design and Apple’s Human Interface Guidelines are great, but they are certainly helping to make our web and mobile designs look safe and samey. Boldly breaking out of the monotony is tempting and inevitable. Here is the call to arms.

    Also, this

    Museum of Websites

    See how the designs of some well-known websites have changed over the years. SPOILERS: The old web was jam-packed with blue links and black text, with a seemingly hostile regard for margins. Today we can afford pretty pictures and more whitespace.

    By the way, my award for least-changed website goes to Reddit, which barely changed its visual design over its 12-year lifespan. Until this month’s new big redesign, that is. (Reddit did not follow UX Designer “Rule” #15.)

    20 cognitive biases that screw up your decisions

    Revisit this infographic of cognitive biases and think about how they may affect decisions you make about your product. Look for moments to question the limits of your own knowledge and perspective.

    Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.