-
Beta Testing a New Feature: When to Use the "Beta" Label
When a new product is in the beta phase, it’s normal to clearly identify this. But what about when beta testing a new feature in an already public product? Should you tell users that this is a “beta” feature?
First, let’s quickly define what “Beta” means:
“Beta” is the phase in software development between the alpha phase and the actual public release. In the beta stage, development is largely complete and the product has undergone some amount of initial alpha testing (usually with an internal QA team, or using emulators or online labs). However, the product is not ready for public release yet as it has not been thoroughly tested in the real world – that is what the “beta” phase is for. Apps, websites and new product features are often said to be in “beta” at some point in the development cycle.
When a new product (as a whole) is in the beta phase, it’s common to clearly identify this, and recruit users to participate in “beta testing”. But what about when launching a new feature in an already public product? Should you tell users that this is a “beta” feature?
Is it always a good idea to label a new product feature as “beta” when testing it? The answer, as you guessed, is “it depends”.
Different reasons you may want to use the “beta” label when beta testing a new feature:
- Identifying bugs – To warn users that there may be bugs / issues so that you can manage expectations. You could also incentivize users to identify any outstanding bugs. You can let them explore the feature on their own and report any issue they find, or give them specific tasks and processes to complete and report any issues within those tasks / processes.
- Getting feedback – To let users know up front that you are actively building and improving this feature, in order to get more feedback from the community. To get the highest quality of feedback / engagement, make sure that you’re making the process as easy and rewarding as possible – for example, by making it easy for the users to provide feedback, communicating regularly with them, helping them gain easy access to the new feature, and most importantly being thankful and respectful to them.
- Marketing – “Beta” features are exciting and new. You could label a feature as “beta” as a marketing tactic to attract new interest. When you release the beta to a certain group and encourage them to provide feedback, they feel a certain amount of privilege and “buy-in” in your product. They feel like they are an active part of the improvement process and this can encourage them to spread the word amongst their friends and families. Just make sure that the beta version is presentable and doesn’t have any major bugs.
- Testing – Beta testing a new feature by having users “test drive” it is a great way to test your product in the real world and get a feel for how it will be received by a larger audience when it is released. At BetaTesting we can help you get an in-depth understanding of the user experience, as well as organic feedback / engagement in a real world setting.
The reasons you may not want to use the beta label on something:
- Credibility – You may not want to disclose to potential customers that you’re beta testing a new feature. A product in beta may still have outstanding bugs, or just generally be more difficult to use and less valuable. Advertising that a product is in “beta” definitely won’t help sell it to enterprises that want stable and reliable software.
- Organic engagement – You want organic engagement from the users for your new feature, without additional messaging or hype around the fact that it exists.
- Continued development – You might be building software quickly, and don’t see any reason to publicly indicate where a particular feature exists in the development lifecycle. You may, after all, be continually refining all your features.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
TVCO: Beta Testing Case Study
TVCO Tests Live Broadcast Functionality And Builds a Community of TV Lovers.
Since the Golden Age of Television in the 1950s, nothing has brought the family together quite like gathering around the living room TV. But today, this pastime is changing. With limitless personalized entertainment options at our fingertips, more and more people are watching exactly what they want, when they want, and no longer fighting over the remote. While this is a welcome change for our hectic lives, watching TV just isn’t as much fun without sharing the experience with someone else.
Startup TVCO, based in New York, is a new social network (iOS and Android) seeking to reshape this experience and build a community of TV fans and celebrities so you never need to watch TV alone again. Through the app, TV lovers can participate in communities built around their favorite shows: join chat rooms, watch live video broadcasts to hear what someone else has to say, or host their own TVCO.
Since the early beta stages, TVCO has been working with BetaTesting to test key app functionality with each major release and collect feedback directly from their target audience. Broadcasting technology is complex, and presents a unique challenge during the testing phase, according to Co-founder Tyler Korff.
“Our mobile application allows any user to broadcast live from within the app, and it was particularly challenging to find a good number of testers who could all be available to test live at the same time,” said Mr. Korff. “We also required a flexible test that allowed for dynamic instructions, with testers alternating between broadcasting and watching. The BetaTesting platform supports such a complex testing environment. BetaTesting was able to recruit testers and coordinate a live test with over a hundred testers on short notice. The feedback allowed us to refine our processes and continue scaling.”
Through the flexible BetaTesting platform, TVCO found an ideal solution for both technical bug testing and user feedback. While initially searching for a company to help conduct a single beta test, TVCO instead found a partner for continued iterative and agile testing into the future with each major release.
“We were looking for a cost-efficient way to simultaneously test our app for bugs and obtain user feedback. BetaTesting’s nifty user interface provides the perfect combination of testing and user feedback, and allowed us to achieve our goals,” said Korff.
“We were looking for a testing company and we were happy when we found a testing partner.” said Korff. “With BetaTesting, we found a company that felt like an extension of our in-house team. The most important factors, for us, were responsiveness, collaboration, and flexibility. We spoke with a couple other testing platforms before finding BetaTesting, but as soon as we met the BetaTesting team, we knew we had found our partner.”
BetaTesting’s approach to beta testing and customer feedback is never hands-off, but instead is a collaborative effort with each client that includes planning, execution, and feedback analysis.
“Our BetaTesting project lead, Michael, was tremendously helpful in shaping our campaigns, providing valuable guidance on survey questions and tasks as well as tweaking our language to make the test clearer,” said Korff. “BetaTesting KNOWS the testing community – even little changes to our proposed test, improvements we never would have otherwise considered, made a HUGE difference both in terms of clarity and efficacy. From the very beginning, the BetaTesting team was quick to respond when we had questions about the process or structure of the tests or the BetaTesting platform; we greatly appreciated the quick turnaround time. I can’t imagine what testing would have looked like without BetaTesting.”
“One’s never quite sure what to expect with respect to the quality of beta testers, but any doubts we had about beta testing were immediately put to rest when we reviewed the results of our BetaTesting campaigns. BetaTesting found us testers who intuitively grasped our product and what we are trying to accomplish. Having such early adopters on board gave us the confidence to move forward with build releases and further development. BetaTesting testers are smart and sophisticated – and it was fun working with them. They provided us with all the detailed information we needed in order to pinpoint bugs and improve user experience.”
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
User Experience Article Roundup – Alternative User Research and Growing as a Designer
In this edition of User Experience Article Roundup: free user research alternatives, inspiration from a subway map, what happens when robots sound human, and more.
Cheap and free under-the-radar alternatives to field visits
It’s vital to understand your users, particularly when it comes to the context in which they use your product. Site visits with users (ethnography and contextual inquiry) are a great way to accomplish this.
But what if you are not given time or resources to do site visits? David Travis offers up some “under the radar” user research alternatives you can do on your own, quickly and cheaply.
There may be a time and place in your application to intentionally make the user slow down and consider what they are doing. In fact, the negative consequences of “oversimplifying” important workflows might be huge.
What London’s Underground Map Can Teach You About Design
Sometimes you have to know when to turn a map into a diagram, so to speak.
So you’re the only designer at your company
It’s common for startups to have only one designer on staff. If that’s you, this article offers you “a guide to surviving—and prospering—as a one-person design team.”
A 25-Step Program for Becoming a Great Designer
Good generalized advice for being a good designer, presented as 25 “steps”. You may have already internalized some of these principles, but revisiting good ideas can be beneficial.
Google Translate as a case study for improving user experience
While the article initially looks like an advertisement for Google Translate, it’s actually a case study about testing UI designs to improve user experience.
Perhaps you can relate: your product already has the features your users want, but your users don’t seem to realize it. In this article, a UX manager at Google discusses successes and failures in refining a user interface in order to get users to notice and use features they would actually find useful.
Also, this
This service provides an archive of videos and screenshots capturing a variety of user flows from a number of web sites and mobile apps. The service says it aims to reduce UI designers’ research time, or “to inspire you when you’re stuck.”
Note: There are a few free videos on the homepage, but otherwise it’s a paid service. I haven’t subscribed to it, nor can I vouch for it. Just pointing it out as a resource.
When robots sound human
Last month, Google unveiled an AI system called Google Duplex on stage at the Google I/O 2018 conference. In the presentation, two audio recordings demonstrated how an extension of Google Assistant could make phone calls and interact with (seemingly) unwitting human beings to make haircut appointments and dinner reservations.
I’ll admit that, while I found it interesting, my reaction was mostly skepticism that this functionality would actually work well and be available anytime soon. The internet, on the other hand, immediately started discussing the moral and ethical implications of Duplex.
Sure, it was interesting to witness a debate about whether Google was purposely moving toward making Blade Runner a reality. But even better, the discussion reminded me of an article from last year that thoughtfully discussed the implications of making digital assistants (particularly Amazon’s Alexa) sound more human:
The Surprising Repercussions of Making AI Assistants Sound Human
The way our digital assistants communicate with us influences what we expect from them, how we feel about them, and the way we communicate back to them. You won’t be surprised to discover that pros and cons abound.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
MORE Reasons to Use a Longitudinal Study to Test Your App
Here are more examples of insights that a longitudinal study can provide about your web or mobile application and its users, to help you improve your user experience and increase profitability.
Earlier this month, I revisited longitudinal studies and provided a number of specific-ish examples of valuable insights you might gain from running one.
To refresh your memory: the longitudinal study is a testing method whose defining characteristic is collecting data from the same users multiple times over a period longer than a typical user test. How much longer? Well, there’s no set duration for longitudinal studies. You might plan one to last three days, three weeks, or three months. Whatever it takes to fit your purpose. And, again, your purpose is to gain insights about your product and your users that you just can’t achieve with regular testing.
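To make the “same users, multiple times” idea concrete, here is a minimal, hypothetical sketch of how repeated check-in data from a longitudinal study might be tabulated per participant. The participant IDs, week numbers, and 1–5 satisfaction ratings below are invented for illustration, not drawn from any real study:

```python
from collections import defaultdict

# Hypothetical check-in data: the same participants surveyed at several
# points during the study (participant id, week number, 1-5 satisfaction).
checkins = [
    ("p1", 1, 4), ("p1", 4, 3), ("p1", 8, 2),
    ("p2", 1, 3), ("p2", 4, 4), ("p2", 8, 5),
]

# Group each participant's ratings in chronological order.
by_participant = defaultdict(list)
for pid, week, rating in sorted(checkins):
    by_participant[pid].append(rating)

# Change from first to last check-in: positive means attitude improved.
trend = {pid: ratings[-1] - ratings[0] for pid, ratings in by_participant.items()}
print(trend)  # {'p1': -2, 'p2': 2}
```

Even this toy example shows the kind of signal a single-session test can’t give you: participant p1’s satisfaction declined over eight weeks while p2’s improved, which is exactly the sort of divergence you’d then dig into with follow-up questions.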
In that previous article on longitudinal studies, the examples spanned four categories:
- How users’ attitudes toward your product change over time
- Natural use and usage patterns over time
- How users’ own data affects their behavior
- How users handle long-term tasks
Below are four MORE categories of reasons why you might want to use longitudinal testing to test your app. These examples of target insights might not fit your exact situation; they are meant to inspire you to think of all the ways you might use longitudinal testing for your own product.
MORE reasons to use a longitudinal study
You might want to use a longitudinal study to test your app and help you to learn about things like:
Long-term learnability and forgettability issues
Regular user testing will do a good job testing the initial learnability of your user interface when it comes to the tasks you give to your participants. Longitudinal tests can inform you about: long-term learnability and forgettability; how interfaces optimized for short-term learnability affect experienced users; and more.
Example insight goals:
- Are users still experiencing the same problems with the same tasks over time? Do new usability issues arise?
- Do users forget how to perform common tasks over time, or after some time away from using the product?
- Does your highly learnable tool still satisfy users after they become “power users”?
- Will your users forget how to use (or simply forget about) rarely-used features in your application?
The impact and effectiveness of your onboarding techniques
You can focus your testing to learn more about how users engage with your onboarding techniques, tutorials, and hint systems, and what long-term effect they have on product use.
Example insight goals:
- Do users who skip tutorials have a worse experience using the app? If so, do those users ever overcome it?
- Do external prompts—such as follow-up emails that explain app features and encourage their use—have the expected positive impact on the user?
- Do specialized first-time-use flows help the user understand the purpose/usefulness of the app? Does this translate into increased account creation?
How users handle expert / unlockable functionality
Sometimes functionality doesn’t apply to new users. This may be due to a user’s level of ability, or because your product can unlock features over time. With a longitudinal study you can see the evolution.
Example insight goals:
- When do your users feel comfortable tackling advanced tasks?
- Are users overwhelmed or confused when advanced tasks are available to them from the outset?
- What keeps users interested longer: unlocking features as they go, or starting with everything available?
- Does your mobile game’s progression system continue to keep your players engaged through to the end? If not, where and why do they lose interest?
- Does the availability of paid downloadable content keep users engaged with your app longer?
Other validation and discovery
With longitudinal testing you can validate your assumptions, and also just be open to discovering things about your users that you wouldn’t know to specifically target.
You have the opportunity to receive a ton of raw data and feedback from your participants. You can turn this data into insights, some of which you’ll be looking for, some of which you’ll simply stumble upon.
Example insights (as goals or happy accidents):
- What actually motivates your users to use your application?
- What is your user’s impression of your product before they use it? Does it match their understanding of the product after they use it?
- What influences how a user makes a decision in your app?
- What is most important to your users? Does that match your prior research and assumptions?
- What is a sustainable level of use in the long term? Does that meet your business goals?
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
How to Get More Beta Users to Test Your App or Website
“If we could get more beta users to sign up for our app test, we would be heroes!” said every startup founder.
It’s important to keep in mind that signups are only useful to the extent that they lead to engagement, feedback, referrals, or some other positive outcome. It isn’t just about getting more beta users to test your app. In the beta testing phase, your ultimate goal should be to improve some other key metric, for example active users or revenue, or simply to collect feedback so you can continue to improve. As you build your app, be sure you keep that end goal in mind, because signups are only the first step. Great products are usually born through iteration and customer feedback, which is something that we can help deliver with our beta testing packages.
With that said, we encourage every startup to hustle and make use of every free resource and proven marketing strategy to get more beta users to test your app.
Here are some proven ways to get more beta users to test your app:
- Free online exposure – Take as much advantage of free online exposure as possible. Some great options:
- BetaList – Great resource to get pre-launch signups
- ProductHunt – Once you’ve launched
- Facebook groups – There are most likely numerous Facebook groups that include your target audience. Joining these groups, actively engaging in conversation with members, and adding value to the group can be a great way to directly engage with your audience and add users to your beta list.
- Reddit – Reddit has a large number of very active audiences spread across many different areas of interest. There might be sub-reddits that are relevant to your product where you can introduce your product and get feedback. This is another great way to engage your target market and get the word out.
- Get press – This website has an awesome database of tech journalists, guest blogs, and influencers: http://tech-blogs-list.com/
- Blogging – Start blogging about your product even before you launch. Content marketing can be a great strategy to attract interest for your product launch. You can also add a link to your landing page on your blog posts where users can sign up for your beta email list.
- Email List – Start building an email list and keep your audience engaged by sending them valuable / interesting content. You should get started as you are building your product (not once you finish). That way, by the time you do launch you will have an audience that is already receptive to you and your product.
- Quora – Another great website to gain credibility with your audience and get exposure. You can search for questions that are relevant for your target audience and provide helpful answers and solutions.
- Advertise – Advertise using Google Adwords or Facebook – While you might be able to get some users for free, if you’re looking for a significant number of beta signups you’ll need to invest in user acquisition.
- Get help – There are several companies that specialize in beta testing and can help with this.
Things to watch out for during the Beta period:
- Don’t make users sit on your waiting list for too long. The longer they have to wait, the more likely they are to forget about you or lose interest, resulting in a smaller percentage of them actually engaging when you do eventually launch.
- Don’t expect everyone on your list to be anxiously anticipating your launch. Most likely, only around 25–50% of the users on your list will actually engage.
- Not communicating regularly with your list will also result in a lack of interest. Be sure to invite users to be a part of building your product: keep them up to date on your launch date, send surveys, and collect feedback.
- To get the highest quality engagement and feedback, make sure your beta signups are part of your target market. It’s not very effective to have people on your list just for the sake of having a list. If they are the right people, it is more likely that they will be interested in and engaged with the product.
- Do your best to remove any and all roadblocks. Every difficulty your users face will result in fewer users engaging and providing feedback. Some things to watch out for: Do testers need an invite code? Do they need to install a profile on their iOS device so you can collect their UDID? Do they need to confirm their email (or phone number) prior to access? Do they need to install TestFlight? Do they need to wait days or months to get access?
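Working backward from the rough 25–50% engagement range mentioned above, you can estimate how large a signup list you need for a given number of active testers. A quick back-of-the-envelope sketch (the rates and target are illustrative assumptions, not measured figures):

```python
import math

def required_signups(target_active_testers, engagement_rate):
    """How many signups you need if only a fraction of the list actually engages."""
    return math.ceil(target_active_testers / engagement_rate)

# With a rough 25-50% engagement range, a goal of 100 active testers
# implies building a list of roughly 200-400 signups.
print(required_signups(100, 0.25))  # 400
print(required_signups(100, 0.50))  # 200
```

In other words, plan your recruiting around the testers you expect to show up, not the raw size of your list.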
Some good techniques to increase engagement during the beta testing process:
- Keep users engaged and give them a reason to come back. Communicate with them via email and app notifications.
- Make it extremely easy for people to test and give feedback. Don’t make people jump through hoops.
- Create a formal testing process with deadlines, surveys, incentives, and regular communications. Users should know what they are testing, what you need from them, how they can help, and what is in it for them.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
Why You Want to Use Longitudinal Studies for Your Product Testing
The longitudinal study is a powerful tool for testing your product and learning about your users. Here are some examples of why you might use longitudinal studies.
User testing tends to be single-serving research. Whether it’s remote, unmoderated testing or an in-person moderated session, user testing tends to span a short period and gather a single set of data from each participant—a “snapshot” of his or her experience with your product.
A longitudinal study, on the other hand, is user testing in which you collect data from the same participants multiple times over an extended period.
Longitudinal testing allows you to see how your users’ behaviors and attitudes change over time, and/or test out features and tasks that could not be performed naturally (or at all) in the single test session. It’s appropriate for both beta testing and post-release product testing.
Longitudinal studies are sometimes referred to as “Diary studies” as if the two terms are interchangeable. However, having your participants enter data into a diary is just one method of collecting data. You might use other methods instead of, or in addition to, a data diary.
In fact, you really have carte blanche to combine surveys, test data collection, and user instructions however you want. The flexibility of longitudinal studies is part of how they enable you to discover insights about your product and your users that you wouldn’t be able to otherwise.
Sound intriguing?
Below are some examples of why you might want to use longitudinal testing. The list may not include your team’s exact needs, but it should spark ideas about how you can use longitudinal studies to better understand your users and improve your product (or service or system or process).
Sound confusing?
Feel free to read more about longitudinal studies before or after you continue here.
Examples of why you’d want to use longitudinal studies
I don’t know you, but you might want to use longitudinal studies to help you to find out things like:
How users’ attitudes toward your product change over time
Longitudinal testing can show you how users’ attitudes about your product evolve over time. After you collect the data, you can map users’ journeys, and tie attitude changes to circumstances both inside and outside of the software application.
Example insight goals:
- Do users remain engaged with your app over time? If not, then where and why do your users become disenchanted?
- Is your mobile game still fun for users after they master the core game mechanics?
- How do task-completion failures and frustrations affect long-term engagement and enjoyment?
- How do users perceive your brand after significant interaction with your product and business?
Natural use and usage patterns over time
Over a longer period of time, users have a chance to use a product or service in ways that are more natural to them. As a result, you have a chance to see how users settle into usage patterns, how they discover and use features without prompting, etc.
Example insight goals:
- When and how often do users use your product over time? Do they incorporate it into their daily or weekly habits?
- Do internal and environmental prompts (content changes, internal and external notifications, etc.) have an impact on product usage?
- How often do users naturally use infrequently-used features? Are there usability problems exposed by infrequent use? Do users find these features on their own, without prompting?
- Why do users stop using a particular feature? (e.g., Do they forget about it? Do they find the function useless or frustrating?)
How users’ own data affects their behavior
In the limited time of a traditional test, it’s often necessary to use canned data to facilitate tests and evaluate success. With the time available in a longitudinal study, the user might be able to use their own data, which will be more meaningful to them than test data would.
Example insight goals:
- How do users behave differently when using their own data (versus using prepared test data)?
- How long does it take for users to add their own data to your social media app when they are self-directed?
- How do users’ preferences and activity change after they’ve added a “critical mass” of their own data?
How users handle long-term tasks
Some tasks take longer than can naturally occur within a single usability test session. For example, doing your taxes, or planning a vacation itinerary. In real life, users will start a task, leave, return, continue, leave again, etc.
Example insight goals:
- How well do users handle long-term tasks in your mobile app?
- Do your users want or need to break up tasks across multiple usage sessions (particularly in flows that were designed assuming completion in a single session)?
- How well do users handle disengaging from and reengaging with a long-term multi-session task? Does usability decrease when the users spend more time away in between sessions?
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
4 Reasons NOT to Use Polished Prototypes in Early Usability Testing
Your usability testing might be negatively affected by an overly polished prototype.
In our current age of prototyping tools, it’s cheaper and easier than ever to create beautiful, clickable prototypes for usability testing. It is now feasible to sit your very first usability test participants in front of a fully interactive prototype that looks and acts like a finished product. You may be tempted to do just that… but there are reasons why you shouldn’t.
Here are four ways finished-looking prototypes may be harmful to your early-project usability testing.
1) Beauty is skewing your usability testing feedback
If a highly aesthetic design is alcohol, then the aesthetic-usability effect is “beer goggles”. Or, as Lidwell, Holden, and Butler stated more intelligently in Universal Principles of Design:
The aesthetic-usability effect describes a phenomenon in which people perceive more-aesthetic designs as easier to use than less-aesthetic designs—whether they are or not.
An aesthetically pleasing design makes people feel more positively about your product and consequently have a higher tolerance for design issues. This psychological effect is great news for when you start selling your shiny final product, but may mask usability issues and feedback in the meantime.
Testing with a lower-fidelity prototype—which would presumably be less intoxicatingly beautiful than your final product—may help reduce the mismatch between how your users perform with the product and what they say about it.
2) Root causes of issues are harder to determine
If you jump right to a hi-fi prototype for your initial user testing, it becomes more difficult to determine what impact your surface visual design is having on usability.
By running usability tests on lower-fidelity prototypes first, you can establish a usability baseline. From there, you can more accurately determine where the final visual design is helping or harming usability. Did the final design draw more attention to important information that users were occasionally missing? Did the final design make the buttons look less like buttons to your users? The answers to such questions will be clearer if you have a lower-fi usability baseline to compare to.
3) Tighter-lipped users
Your finished-looking prototype may be causing your users to hold back. A long-held tenet of usability testing is to use a prototype that intentionally looks a little rough.
As simply stated in The Wiley Handbook of Human Computer Interaction, “users are less likely to provide useful feedback if the design already looks finished.”
Reasons for this include:
- politeness;
- an inclination to focus on the details instead of underlying structure when provided with a polished product;
- and a general sense of futility in providing feedback on something that is essentially done.
You can (and should) inform your testers that the product they are testing is unfinished and that their feedback will be helpful in improving it. But such instruction will not completely overcome users’ conscious and subconscious tendencies when faced with a fully-polished design.
4) You’re distracting from your mission
You need to get the most out of the usability testing you have available to you. In any test session, you have a targeted set of things you want to accomplish and a limited amount of time to do it.
For an effective test, you need users to be able to focus on the parts you actually want to test, not be distracted by elements that are superfluous at the time.
In early testing, you want to establish whether the skeleton of your design and the primary interactions are going to work. Extraneous visual design, text, and images have the potential to distract from the mission and become the subject of the feedback you receive from users.
If you get the skeleton working well first, you can more effectively test the skin later.
The upshot
When preparing your product for test, your goals should be different depending on what kind of testing you’re preparing for.
Going into beta testing, for example, you typically want your test product to be as complete and close to market-ready as you feasibly can make it.
When preparing for early usability testing, however, there are reasons why you don’t want the product to look quite so polished—even if it’s relatively easy to whip up high-fidelity, finished-looking prototypes.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
GDPR Compliance for Startups: 9 Reasons Not to Freak Out
There’s a lot to GDPR compliance for startups to manage. You may be behind schedule or just generally stressed about it. Here’s why you shouldn’t panic.
[ Note & disclaimer: Learn about BetaTesting’s GDPR readiness and updated Privacy Policy here. The views in this article are of the independent contributing writer and should not be taken as legal advice. With that said, did anyone ever accomplish anything worthwhile by freaking out?]
As of May 25, 2018, the EU’s General Data Protection Regulation (GDPR) is in effect, affecting businesses and organizations worldwide.
Hopefully you’ve already progressed through the Five Stages of GDPR Grief™ and have arrived at acceptance. But the emotions don’t end there. Chances are you’re behind on your GDPR compliance work and have added panic to the stress you were already feeling. (Arguably these could be two more stages of GDPR grief, but I’ve already trademarked “Five”, so they’re not.)
The following list is here to make you feel a little better about your GDPR compliance status.
9 reasons not to freak out about your current level of GDPR compliance
To be clear, you should definitely keep moving forward with your GDPR compliance work.
But in the meantime, try to find some comfort in this list.
1. It’s not just you
A significant percentage of companies aren’t yet fully compliant with GDPR. (The Wall Street Journal says 60–85% are not.) Many businesses aren’t even fully aware of how GDPR affects them.
You have lots of company when it comes to lack of preparedness.
2. GDPR is not trying to destroy us
The purpose of GDPR is protecting people’s private data. Its purpose is not to destroy economies, or to fine into oblivion companies that are sincerely trying to comply.
And there’s not really a deadline. GDPR isn’t like a nuclear bomb that goes off once and immediately decimates everyone who didn’t fully complete their bomb shelter in time.
May 25, 2018 is the start, not the end. It’s an ongoing process.
3. Regulators aren’t ready, either
The regulators who will be policing the GDPR rules aren’t ready yet either.
There is no single entity that enforces GDPR. Instead, it will be managed by a bunch of national and regional regulatory authorities. A recent Reuters report stated that “seventeen of 24 authorities who responded to a Reuters survey said they did not yet have the necessary funding, or would initially lack the powers, to fulfill their GDPR duties.”
4. There won’t be a lot of proactive investigating by regulators
In part because of their lack of readiness, authorities will be more reactive than proactive when it comes to identifying non-compliance.
From that same Reuters report: “Most respondents said they would react to complaints and investigate them on merit. A minority said they would proactively investigate whether companies were complying and sanction the most glaring violations.”
5. You’re one fish in a big ocean
There are tens of millions of businesses in the US and the EU alone. That’s not even counting charities and other organizations that may also be subject to GDPR.
Point is, there are tons of companies doing tons of business with EU residents. Yours isn’t likely to get more attention than any other.
6. Some of what you’ve heard is just fear mongering
Much of the reporting on GDPR is well-meaning but overly dramatic. Some groups, however, from security firms to sensationalist click-bait news sites, have a profit motive in scaring you about GDPR.
Don’t let those jerks freak you out.
7. There are other deterrents besides fines
Regulators aren’t likely to jump straight to monetary fines for every infraction.
UK Information Commissioner Elizabeth Denham says:
And while fines may be the sledgehammer in our toolbox, we have access to lots of other tools that are well-suited to the task at hand and just as effective.
[…] GDPR gives us a suite of sanctions to help organisations comply – warnings, reprimands, corrective orders. While these will not hit organisations in the pocket – their reputations will suffer a significant blow.
8. Actual fines will not be the maximum fine
That terrifying fine of 20 million euros or 4% of your annual global turnover (whichever is higher)? That’s the maximum possible fine under the law, reserved for the worst abuses, not the blanket cost for any non-compliance.
In practice, when fines do happen, they’ll be proportionate to the actual infraction. The guidelines for how regulators should impose administrative fines state that, all things considered, the fine should be “effective, proportionate and dissuasive” to the offender. That doesn’t mean put the offender out of business.
9. If you’re actually trying to comply, you’ll be in less trouble
If you ever run afoul of GDPR to the point of monetary fines, your prior efforts to comply with the law and take data privacy seriously will work in your favor.
Regulators are instructed to look at each case individually and consider the circumstances, including your past and present behavior. If you were intentionally flouting the rules, you’ll receive a bigger fine. If you’ve been earnest in your attempts at compliance and you cooperate with regulators, you’ll be treated with more leniency.
All your efforts are not for nothing. Stop freaking out and get back to work.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
-
Now is the Time to Try Adobe XD… and Make it Better
Adobe recently made its UX design and prototyping tool free to use. Now is a great time to help shape Adobe XD into the tool it needs to be.
As you may have already heard, Adobe announced a free version of Adobe XD, their UX design and prototyping tool for web and mobile applications.
The move to free is certainly a way for Adobe to gain market share in the splintered UI design / prototyping tools space. It also adds another entry point into Adobe’s Creative Cloud funnel, with potential for converting XD users into Creative Cloud subscribers down the road.
[Adobe’s announcement video on Youtube]
Still, Adobe’s shrewd business moves can also benefit you. In fact, right now is the perfect time for you to try out XD… and help Adobe eventually make it into the tool you need.
The free version of Adobe XD
The good news
The good news is that the new free version of Adobe XD—called the “Starter Plan”—is almost the same as the paid version of the tool. Other than a few usage restrictions, the full feature set of the paid version is available to you in the free version. For the foreseeable future, anyway.
The limitations of the Starter Plan are:
- Share only one active project at a time (vs. unlimited);
- 2 GB of cloud storage (vs. 100 GB);
- Only a subset of TypeKit fonts—the FAQ says over 280—is available to you (vs. the full library of TypeKit fonts)
That’s not bad for a free product. It’s comparable to InVision’s free plan, which gives full access to their tool but limits you to one active prototype project. (You have to delete the project if you want to start a new one.) To be clear, the Starter Plan of Adobe XD allows for unlimited design projects, but you can only share one at a time.
For what it is, Adobe XD is a worthwhile application. It combines designing and prototyping in one tool, and it’s quick and easy to swap between the two modes. XD is clean, built from scratch for its purpose, and doesn’t carry the baggage of Photoshop or Illustrator.
Adobe XD incorporates many de facto standard features—such as symbols and color swatches—that help designers create and maintain a large set of artboards. It also includes Repeat Grid, a not-so-standard but beloved tool that can really save you a bunch of time.
The bad news
The bad news is that Adobe XD is kind-of incomplete. There are things it just doesn’t do. And if any one of those things is something you need for your work, then Adobe XD can’t fully replace your current tools.
For example, I want the ability to prototype microinteractions, instead of being limited to full-screen transitions only. I’d at least like to be able to mark non-changing areas of the screen to be excluded from screen transitions. For example, if I have a fixed navigation area, I don’t want a new screen to slide in from the right with its own copy of the navigation bar; I want the navigation to stay put and the rest of the content to slide in. XD doesn’t currently give me any way to do that.
Competing UX design tool Sketch—which seems to be the primary inspiration for Adobe XD—supports third-party plugins that expand the native capabilities of that tool. For Adobe XD, however, there are no workarounds for what XD can’t do natively, in part because plugin support is one of the things it can’t do…
…yet. But it will soon.
Which brings me back to my original thesis.
You should try Adobe XD (and help them make it better)
Now that Adobe XD is free, you should definitely try it… and help make it into the tool you want it to be.
Adobe is clearly still working to expand and improve Adobe XD. And they seem to recognize that they must listen to their users to get Adobe XD where it needs to be.
The path to a better Adobe XD
1) Feature requests via forum
Adobe is listening. Users are encouraged to submit, vote, and comment on feature requests in the Adobe XD UserVoice forum. For example, at the time of this writing the biggest vote-getter is called “Fixed elements in scrolling artboards (header, nav bar, etc.)”.
Adobe forum admins will note when a requested feature has started development, to eventually make it into one of the…
2) Monthly updates
Adobe updates Adobe XD every month and discusses the product roadmap. For example, the May 2018 update announcement covered several requested features added to the tool that month, and also offered hope that my wish for microinteractions would be at least partly addressed in the near future:
Over the coming months you can expect to see significant progress in advanced prototyping and animation capabilities, new team collaboration features, and support for extending XD via plug-ins […].
3) Plugin support and development
So plugin support is coming soon, too. Developers are encouraged to join in on plugin development as soon as it’s ready. The newly announced $10 million Adobe Fund for Design will also support Adobe XD plugin developers, who can apply for a slice of the funds.
Get on it.
Learn about how BetaTesting can help your company launch better products with our beta testing platform and huge community of global testers.
