Point 1. Success Metric
Product metric – conversion to purchase, retention, cost reduction per purchase or service iteration, reduction of order time, average order value, etc., measured on an experimental sample or cohort. The metric is calculated per user. It has the greatest influence on the successful launch of your SaaS.
ROI – return on investment. You invested N rubles in a feature (development resources, third-party solutions, etc.) and got an increase in cost per user (Average Cost, AC); the investment is returned through growth of the product metric and an increase in average revenue per user (ARPU). This approach removes any questions about products that “move buttons.”
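As a back-of-the-envelope illustration (the function name and all numbers are hypothetical, not from the article), the ROI of a feature through AC and ARPU can be sketched like this:

```python
def feature_roi(dev_cost, users, arpu_before, arpu_after, ac_before, ac_after):
    """ROI of a feature: (incremental profit - investment) / investment.

    arpu_* : average revenue per user before/after the feature
    ac_*   : average cost per user before/after the feature
    All figures are for the same period and the same cohort.
    """
    profit_delta = users * ((arpu_after - ac_after) - (arpu_before - ac_before))
    return (profit_delta - dev_cost) / dev_cost

# Example: 10,000 users, ARPU grew 100 -> 110, AC grew 40 -> 42,
# and the feature cost 50,000 to build.
roi = feature_roi(50_000, 10_000, 100, 110, 40, 42)
print(f"{roi:.2f}")  # 0.60 -> each ruble invested returned 1.60
```

A negative result here is exactly the “write off and remove” signal discussed below.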
A similar but less accurate approach is the RICE method – ranking features by audience (reach), impact on a critical metric (impact), confidence in the decision (confidence), and implementation cost (effort). RICE is suitable for quickly evaluating a new backlog idea, but it is harder to tie directly to money.
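A minimal sketch of RICE scoring for ranking a backlog (the example ideas and numbers are made up):

```python
def rice_score(reach, impact, confidence, effort):
    """RICE = (Reach * Impact * Confidence) / Effort.

    reach: users affected per period; impact: relative scale, e.g. 0.25..3;
    confidence: 0..1; effort: person-months.
    """
    return reach * impact * confidence / effort

# Rank two hypothetical backlog ideas:
ideas = {
    "frequent addresses on main screen": rice_score(8_000, 2, 0.8, 3),
    "dark theme":                        rice_score(20_000, 0.5, 0.5, 2),
}
for name, score in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(name, round(score))
```

Note that the score is a unitless ranking number, not money – which is precisely the limitation mentioned above.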
If the metric does not rise, or falls, then the MVP has failed: the costs must be “written off” and the feature removed from the product. Do not linger and do not look for a way to fix everything – move on to the next hypothesis. If you keep parts of a product that bring it no obvious benefit, you will have to maintain that functionality in the future: spend time on regression testing and guarantee it a certain level of reliability. These are future costs that will never pay off.
If you use several metrics, the others should either be allowed to sag by no more than a specified value or remain unchanged. If you expect growth that will affect the economics of the feature, add that metric to the crucial metrics for measuring the MVP's success. But the more conditions, the less chance of success in the first version.
Problems with an MVP without metrics begin after launch. The search for what has grown, or for how to measure it, leads to “analytical paralysis” – the inability to calculate and justify further development of the feature. Or, even worse, to a “leap of faith”: “We can't count it, but we believe that in a year it will take off.”
For example, the critical metrics of a taxi-booking application are conversion to a trip, the number of trips per user per period, and retention. When we discuss an MVP of a feature – say, adding a list of frequent addresses to the main screen – we discuss the expected increase in conversion to a trip: at which stage of the funnel it will happen, in which audience, and how the average number of trips per user will grow.
Point 2. The most influential hypothesis and the most severe pain
To choose an MVP idea, ask yourself the following questions: is there a hypothesis that affects the critical metric more strongly? How could it be found and confirmed? And what is the biggest user problem whose solution will move the metric?
This stage is important for investing team resources properly. If a feature loses to others on the chosen metric, build the winning ones first.
An MVP can be complex and composite: a new version of public transport routing, a new way to order a taxi, the launch of an entire product in a new city or country, and most marketplaces. Several individual features are implemented in the MVP at once, like a “football team.” Technologically they may be loosely connected, but one part “defends” the existing user experience while the other “attacks” – forms the new experience and delivers the value.
I recommend the Kano model for choosing the “set of basic services.” The model decomposes features into the following categories: mandatory, attractive, satisfactory, and unimportant. The data is collected through a survey with questionnaires.
A mistake here can be expensive. The lack of expected, common features will annoy users, and they will not notice your improvement; extra features will make development more expensive. Define the market “core set” for the MVP composition, then build a questionnaire and obtain weights for the features.
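A simplified sketch of processing such questionnaires. The classification table here is abridged (the full Kano model also has “reverse” and “questionable” outcomes), and all names and answers are illustrative:

```python
from collections import Counter

# Each respondent answers a functional question ("how do you feel if the
# feature IS present?") and a dysfunctional one ("...if it is ABSENT?")
# on the scale: "like", "expect", "neutral", "tolerate", "dislike".
def kano_category(functional, dysfunctional):
    """Classify one answer pair with a simplified Kano evaluation table."""
    if functional == "like" and dysfunctional == "dislike":
        return "satisfactory"   # one-dimensional: more is better
    if functional == "like":
        return "attractive"     # delights, but absence is tolerated
    if dysfunctional == "dislike":
        return "mandatory"      # must-be: absence annoys
    return "unimportant"        # indifferent

def feature_weight(answers):
    """Majority category over a list of (functional, dysfunctional) pairs."""
    counts = Counter(kano_category(f, d) for f, d in answers)
    return counts.most_common(1)[0][0]

answers = [("like", "dislike"), ("like", "neutral"), ("like", "dislike")]
print(feature_weight(answers))  # satisfactory
```

The majority category per feature is the “weight” used to decide what goes into the core set.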
Point 3. Competitive environment
Before designing an MVP, determine what your competitors are doing. Look carefully at which products and solutions users currently employ to solve this problem, including offline. I strongly advise against copying competitors and their solutions without your own product development. The presence of a feature does not prove the product benefits from it: you do not know the inputs, you do not know the launch results, and you cannot rely on results announced at conferences. Only if you launched it yourself and saw the results in your analytics system can you repeat the experience. At this point, you need examples of solutions and an understanding of what your user is facing.
Make a table of competitors and mini-features for each user scenario. Also collect data on the quality of the solutions already on the market and used by users. An approximate measurement at this stage is better than none.
Point 4. Basic quality
I often encounter situations where nobody thought about basic quality, and the resulting low-quality product is called an MVP. How do you distinguish an awful product from an MVP?
The basic quality of an MVP is the likelihood that the user will be able to solve their problem with it. For example: the probability that the user will find their bus in the application; that they will see the nearest car-sharing car and be able to book it; that the car or bus will arrive at the time shown in the application. If there is not enough data to measure quality, collect this data in the first run, and only then move on to measuring product metrics.
If you are building an analog of competitors, you need to know and understand their quality and the cost of achieving that level. For example, when we launched the car-sharing layer in Transport, we needed to understand what coverage we had at the time of launch; what coverage direct and indirect competitors had on the market; which partners we needed and what effort was required for a competitive, higher-level product.
In my experience, in most cases it is enough to maintain about 80% of the quality metric for features used once a week or less; 90% and higher for more frequent cases; and very high, close to 99%, if the case is daily and lasts many hours – an application for drivers, or an application for call-center operators, where the shift lasts 8 hours and the product is used more than 20 days per month. With a quality metric of 60% or lower, you can launch only if you have a breakthrough, a wow reaction when testing the prototype, a market without a single competitor, or a market where everything is low quality.
If human health and life depend on your service, or the market is regulated, then the basic quality bar must be different. I would not expect good user metrics from an airline where 80% of flights succeed.
It is useful, though not mandatory, to have a quick way to reach the required quality in reserve, including through third-party technology. Talk to suppliers and collect their offers. For each launch, you need to determine the boundary of basic quality independently and experimentally, and at later stages understand, through experiments, how quality affects product metrics.
You can improve quality without changing technology by choosing a smaller region or niche. Or do a launch on a test group, notifying them in advance of low basic quality.
If none of the methods above worked, look for a way to improve quality, but be guided by ROI. Determine the level of quality sufficient to test the hypothesis and put these tasks in the backlog.
Point 5. Interface Design and Prototype
The product manager looks at the analytics, sees that users cannot find the functionality and do not understand it, and exclaims: “How unreasonable these users are – they can't even guess to go to the settings and drill down six levels to where the ‘Show layer’ toggle lives.”
Therefore, test prototypes on users. If you have a UX lab, go there with prototypes in InVision or Principle; if not, try the corridor, the street, the smoking room (smoking is harmful to your health), a cafe at lunchtime – anywhere there are users of your product. If you can quickly make fake or prepared data, do it, but the key word is quickly.
Now, during the pandemic, test prototypes via Skype or Zoom – both can share the screen. This method also suits remote products and remote work. You can find respondents on Facebook or freelance sites: for a small fee, people will spend an hour of their time and tell you what works and how. The sample may be skewed and unrepresentative, but even such a test is better than none.
What you need for a good interview:
- Use the same script for all interviews.
- Do not sell or impose your solution.
- Change the prototype if users “stumble” over your solution 3-4 times in a row.
- Finish testing and modifying the prototype when the minimal UI is enough to solve the problem.
At the same time, at the testing stage it is crucial to cut off the “weak” parts of complex functionality and get closer to the “sticks and ropes” level – minimal functionality. This is development cost that affects your ROI. If, at the prototype-testing stage, users who have the pain show no interest or struggle to use the solution, the probability of success drops. Yes, it is difficult to assemble a representative sample for prototype testing, but do not be “deaf” to the signals.
Be careful and budget time and resources to “stub out” incomplete scenarios. If some part of the product does not work or is still under development, put up a stub informing the user, so as not to flood support with “why doesn't this work” questions and not to provoke calls to PR. In general, user questions and appeals are a good sign: they speak of motivation and of a desire to solve real problems with your product. Think through a way to warn and compensate users if you are already selling the MVP for real money.
Point 6. Measurement
A very big mistake is not to measure metrics when launching an MVP, or to postpone measurement to later versions. You allow a “leap of faith” in the product and jump with the thought that you will “land well” in time. Without data validation, this is a very dangerous strategy.
To verify the hypothesis, you must measure metrics. Take care of the parameters and the analytics system in advance. Essential events must be specified and must land correctly in analytics, and sample calculations and the methods for obtaining them must be prepared.
The most accurate results come from an A/B test. But when there are very few users and collecting them takes a long time, you can compare cohorts in the new release against the previous one. Seasonality, market influence, and competitor activity can play against you in this case; an A/B test removes these effects better. In most cases, the outcome should be visible at the MVP level – you should get a statistically significant improvement.
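A minimal sketch of checking statistical significance of a conversion lift in an A/B test with a two-proportion z-test (all numbers are made up):

```python
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for conversion.

    conv_*: number of converted users; n_*: group size.
    Returns (z, p_value); a normal approximation, so groups should be large.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # p-value via the standard normal CDF expressed through erf
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Control: 500 of 10,000 users convert; variant with the feature: 590 of 10,000.
z, p = conversion_z_test(500, 10_000, 590, 10_000)
print(f"z={z:.2f}, p={p:.4f}")  # significant if p < 0.05
```

With small samples or rare conversions, an exact test (e.g. Fisher's) is safer than this approximation.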
If you hear phrases from the team or the leaders such as “it hasn't gotten worse,” “we didn't work in vain – we didn't spoil anything,” or “no, first we need to improve the quality, and then the rocket will fly” – stop. It should sound different: one metric grew by this much, the second by that much – overall, things got better.
Point 7. Marketing
I like the pairing of MVP and MMP (minimum marketable product). The difference is in the required level of quality and in the audience the acquisition targets. If the basic quality is below 80%, writing press releases and massively attracting an audience is not worth it. Focus on a sample that is necessary and sufficient for measurement. This matters more because, most likely, several more iterations await you before you can tell the world about your “breakthrough functionality.” I believe the version for mass attraction is the second or later version of the product, once you are searching for convergence of the CAC model.
How to distinguish them? “If you are not embarrassed by the first version of your product, you've launched too late,” Reid Hoffman tells us. So, while you are still “embarrassed,” keep promotion tightly targeted (contextual advertising, a banner for a small part of the audience within the product) – just enough to collect statistically significant launch data.
I have worked with products that have a broad audience: we formed a sufficient sample size and rolled the functionality out to a fraction of the total audience. We prepared answers to user questions for support in advance, and, after a positive MVP result, announced the feature and made it available to the broad audience.
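Before such a rollout, it helps to estimate how many users each group needs. A rough sketch under standard assumptions (alpha = 0.05 two-sided, power = 0.80; the baseline and lift numbers are illustrative):

```python
from math import ceil

def sample_size_per_group(p_base, mde, z_alpha=1.96, z_beta=0.84):
    """Approximate users per group to detect an absolute conversion lift.

    p_base: baseline conversion; mde: minimum detectable effect (absolute).
    z_alpha, z_beta: normal quantiles for alpha=0.05 (two-sided), power=0.80.
    """
    p2 = p_base + mde
    p_bar = (p_base + p2) / 2
    n = ((z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)) / mde ** 2
    return ceil(n)

# To detect a 1 p.p. lift over a 5% baseline conversion:
print(sample_size_per_group(0.05, 0.01))  # ~8150 users per group
```

The smaller the effect you want to catch, the larger the sample – which is why a “necessary and sufficient” rollout fraction is worth computing rather than guessing.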
Point 8. Development
After the MVP has proven itself, the loop restarts. Improvement now follows the path of the minimal significant improvement, with measurements at every stage.
Point 9. One more thing: how to sell an MVP?
You worked with the designer on the MVP, tested it on users, prepared the design, and now present everything to the team. The team replies that “we came here to change the world and didn't sign up for such nonsense,” or “this hack won't take off – it's all very crude; let's do it properly, and then it will be fine.” How do you avoid this?
Specialists like to do their job well or very well, but at the company's expense, and your task is to save the company's resources when testing product hypotheses. Find a compromise and show that a short-term, planned loss of quality with step-by-step verification will benefit everyone. Then the experts themselves will start producing high-quality solutions that are simpler to verify. Form expectations before the start, and report the results and the data received afterward.
An MVP is a minimum viable product. For the product's minimality, focus on one of the users' most critical problems, study competitors, and look for sufficiency already at the prototype stage. For the product's viability, compile product metrics and verify them. And do not be afraid of simple iterations – they have power. Improvements should stand on a “solid” foundation, and “empty” features or services should be closed first.