Why forecasting matters to startups… and how to do it

“How can something seemingly so permanent come about so quickly?”… my first thoughts as I drove through the Oregon high desert looking at perfectly conical dormant volcanos:

[Photo: a conical dormant volcano in the Oregon high desert]

Volcanos like this take 10,000-50,000 years to form – a millisecond in geological time – yet dominate the landscape and ecosystem around them.

The most successful tech companies are just like volcanos. Most investors would agree that many of the largest market-cap companies 15 years from now will be tech companies yet to be discovered. Fifteen years ago, this was true of Google, Facebook, LinkedIn, Twitter and Apple 2.0. Think about how much these companies have changed us in that short time.

Human beings are bad forecasters because the gravity of our minds collapses time to a point

As a VC, the volcano analogy is powerful to me. If you were a shaman who predicted the Mount Mazama explosion 7,000 years ago that created Crater Lake, you’d be pretty esteemed and have many goats. I am looking for shamans (disruptive entrepreneurs) and future volcanos (the next Google). Easy to say, much harder to actually forecast.

Human beings have a terrific handle on space but not time. We have biased and filtered views of the past and very fuzzy views of the future. We heavily value the now and recent past and heavily discount the far past and most of the future. In other words, our minds have a powerful gravity that collapses time to a point:

[Figure: time collapsing to a point]

Evolutionarily, this makes sense. When life expectancy for cavemen was 30 years, the far future didn’t matter at all, and meeting basic food and water needs now heavily overshadowed a few weeks out. The past was also hard to objectively weight in decision making when there was no writing, recording or video – it was only as good as our memory. So, we are not wired to be visionaries, disruptive entrepreneurs or VCs.

Forecasting is containing our biases and preparing for multiple paths from A to B – understanding risks and how to live with them

I started thinking about time and forecasting while reading Nate Silver’s The Signal and the Noise, and I borrow some concepts from Silver here. Foremost is the difference between a prediction and a forecast. A prediction is a belief – a point outcome that you induce from myriad data points and biases in your head. Predictions also tend not to have a “how” associated with them – how will it happen? For example, a crowdfunding startup I meet with says it will have 10% of the market in five years. They simply believe it or “hand wave” it.

A forecast is very different. A forecast may involve a point outcome with a “confidence” range about it or a set of possible outcomes with probabilities attached. A forecast is deduced from a buildup of data and assumptions. It is path dependent, and the path is described. As importantly, an attempt is made to minimize bias. An example here would be a local delivery business (let’s call it LicketySplit – Uber for local delivery) that proved the model in one city, has operational metrics from that city and applies them to a three-year scaling model for a 20-city rollout. They also include an upside, nominal and downside case. Sensitivities to assumptions are known. Note that I would differentiate forecasting from budgeting. Forecasting leads to a budget; it is the “why” and “how” behind a budget.

Also important is the difference between uncertainty and risk. Uncertainty is the unknown: “When we launch this product, we don’t know whether consumers will like it.” Risks are possible outcomes with probabilities and business impacts understood. An example narrative on risk is: “We did consumer research before launch, benchmarked against other product launches and think there is a 70% chance that women aged 30-40 will buy the product in high volumes – enough to build a very successful business. We also think there is a 35% chance that middle-aged men will also like the product, presenting a lot of upside. However, there is a ~30% chance that no one likes the product and we are unsuccessful. We have put a strong marketing plan in place to optimize these probabilities.”
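To make that distinction concrete, here is a minimal sketch of writing risk down as outcomes with probabilities and impacts. The percentages come from the example above; the revenue impacts are hypothetical placeholders, not real numbers.

```python
# Risk written down as outcomes with probabilities and assumed impacts.
# The probabilities come from the launch example above; the dollar
# impacts are hypothetical placeholders.

launch_outcomes = [
    # (outcome, probability, assumed annual revenue impact)
    ("Women 30-40 buy in high volume",    0.70, 5_000_000),
    ("Middle-aged men also buy (upside)", 0.35, 2_000_000),
    ("No one likes the product",          0.30, 0),
]

for outcome, p, impact in launch_outcomes:
    print(f"{outcome}: p = {p:.0%}, assumed impact = ${impact:,.0f}")

# A rough probability-weighted view. The outcomes overlap (they are not a
# clean partition), so treat this as illustrative rather than a strict
# expected value.
weighted = sum(p * impact for _, p, impact in launch_outcomes)
print(f"Probability-weighted revenue (rough): ${weighted:,.0f}")
```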

Here is a visual way to think about this “risk-based forecasting”:

[Figure: 2x2 matrix illustrating risk-based forecasting]

There is as much value in the forecasting process as in the result

Many people resist this type of forecasting: “The only thing we know about the model is that it will be wrong… too many assumptions and guesses.” That is always true of a point prediction but often untrue of a forecast with outcome probabilities, where the actual outcome will fall somewhere in the range. Regardless of where it falls, you are prepared. The first and last investment I made in a company with a team that resisted forecasting became a partial write-down within 9 months. The team was not mentally or operationally prepared for the downside state that materialized and kept burning cash without revenue materializing.

The process of forecasting is also helpful for understanding the sensitivity of your startup to factors you may not perfectly control. In a startup growth forecast, two key examples of this are sales cycle time and days payable. For some startups, increasing either by 25% can increase monthly burn by 50%. This is a “hypersensitivity”. Understanding hypersensitivities allows a defensive posture in the downside case while maintaining the ability to lean into the upside. Of course every startup is hypersensitive to revenue!
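As a rough illustration of what a hypersensitivity check might look like, here is a toy cash model that perturbs one assumption at a time by 25% and compares the funding the business would need. Both the numbers and the model structure are hypothetical placeholders; the point is the perturb-and-compare loop, not the magnitudes.

```python
import math

# Toy assumptions for a delivery startup. All values are hypothetical.
BASE = {
    "sales_cycle_days": 45,           # a rep closes roughly one SMB per cycle
    "days_receivable": 60,            # how long customers take to pay invoices
    "reps": 4,
    "orders_per_smb_per_month": 40,
    "price_per_order": 8.0,
    "cost_per_order": 5.5,
    "fixed_costs_per_month": 90_000,  # tech, G&A, rep salaries, etc.
}

def funding_need(a, months=18):
    """Deepest cumulative cash hole over the horizon = funding required."""
    cash, trough, active_smbs = 0.0, 0.0, 0.0
    receivables = {}  # month -> cash collected that month
    for m in range(months):
        active_smbs += a["reps"] * (30 / a["sales_cycle_days"])  # new signings
        orders = active_smbs * a["orders_per_smb_per_month"]
        # Cash out now: fixed costs plus the cost of making deliveries.
        cash -= a["fixed_costs_per_month"] + orders * a["cost_per_order"]
        # Revenue billed now is only collected days_receivable later.
        due = m + math.ceil(a["days_receivable"] / 30)
        receivables[due] = receivables.get(due, 0.0) + orders * a["price_per_order"]
        cash += receivables.pop(m, 0.0)
        trough = min(trough, cash)
    return -trough

base = funding_need(BASE)
print(f"Base funding need: ${base:,.0f}")
for key in ("sales_cycle_days", "days_receivable", "cost_per_order"):
    worse = dict(BASE, **{key: BASE[key] * 1.25})
    print(f"+25% {key}: funding need moves {funding_need(worse) / base - 1:+.0%}")
```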

As startups (and their markets) mature, they become easier to forecast because there is more data on internal and market metrics. But even early stage startups can be forecasted. There are several very early stage companies in the HPVP portfolio that hit or exceed their aggressive forecasts every month. Maybe this is because forecasts can be self-fulfilling too… another value in forecasting.

Risk-based forecasting in five steps (using LicketySplit)

1. Define the biggest risks (or opportunities) to your startup

There are lots of little risks in a business like LicketySplit: sales cycle risks, customer satisfaction/renewal risk, etc. But all of these are easily adjustable assumptions in a model that we can test and understand later. In this step, I’m talking about identifying really BIG risks – often binary challenges over which you have very little control. For LicketySplit, it might be that Uber itself enters the market with the same offering. It is very hard to know the probability of this happening, so the point is to define a possible outcome (and business approach to it) such that we can survive if it happens. With reading and snooping, you can probably estimate with some accuracy whether the probability is high (>50%) or low (<25%). If >50%, you might decide to shut down the business right now! Here we’ll assume the probability is low, around 25%.

On the flip side, LicketySplit is working on a really BIG opportunity to provide delivery service to the largest regional dry cleaning chain. The CEO handicaps it at 50%. So there are four “states of the world”, each with a rough probability as shown below. This is a form of scenario planning with scenarios numbered from 1 (best) to 4 (worst).

[Table: 2x2 scenario matrix with rough probabilities]
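Assuming, for simplicity, that the two big events are independent (my assumption, for illustration only), the four states of the world and their rough probabilities can be laid out like this:

```python
# The four "states of the world" for LicketySplit, assuming the two big
# events (Uber entering, landing the big customer) are independent.

p_uber_enters = 0.25    # the big risk, estimated as low
p_big_customer = 0.50   # the big opportunity, handicapped by the CEO at 50%

scenarios = {
    "1. No Uber, big customer (best)":         (1 - p_uber_enters) * p_big_customer,
    "2. No Uber, no big customer":             (1 - p_uber_enters) * (1 - p_big_customer),
    "3. Uber enters, big customer":            p_uber_enters * p_big_customer,
    "4. Uber enters, no big customer (worst)": p_uber_enters * (1 - p_big_customer),
}

for name, p in scenarios.items():
    print(f"{name}: ~{p:.0%}")
```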

(Note: In addition to forecasting for an existing startup, this type of scenario planning is helpful for identifying the next big thing in a market or industry where a current or future gap exists and a new startup is needed!)

2. Define the key metrics (assumptions) in the startup

What are the key metrics of your startup? Here are some example metrics for LicketySplit, which sells its service to SMBs via B2B sales reps and then monetizes each delivery its SMB customers order. (A sketch of writing these down as explicit assumptions follows the list.)

  • B2B Sales: SMBs/sales/month, sales cycle, rep ramp efficiency, etc
  • Revenue: Order/SMB/month, price per order, etc
  • Gross margin: Cost per order, capacity utilization, etc
  • Cash management: Days receivable, billing frequency, etc
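One simple way to make these metrics testable is to write each one down as an explicit assumption with a downside, base and upside value. A sketch, with purely illustrative numbers:

```python
# Hypothetical key metrics for LicketySplit, written as explicit assumptions
# with (downside, base, upside) values so they can be tested and revisited.

ASSUMPTIONS = {
    # B2B sales
    "smbs_signed_per_rep_per_month": (2,   3,   4),
    "sales_cycle_days":              (60,  45,  30),    # downside = slower sales
    # Revenue
    "orders_per_smb_per_month":      (25,  40,  55),
    "price_per_order":               (6.0, 8.0, 10.0),
    # Gross margin
    "cost_per_order":                (6.5, 5.5, 4.5),   # downside = higher cost
    # Cash management
    "days_receivable":               (75,  60,  45),    # downside = slower payers
}

for name, (down, base, up) in ASSUMPTIONS.items():
    print(f"{name}: downside={down}, base={base}, upside={up}")
```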

3. Build a model

Build a simple Excel model that reflects how your business works and grows over the next 12-18 months. The model is built on your key metrics. There are two ways to model a startup:

  • Top down: In a top down model you say, “the market is $X big; we start from zero and by year 3 capture Y% of it.” That defines revenue (or users for B2C), which then drives the costs associated with it – tech spend to deliver the product and sales spend for direct sales (B2B) or CAC (B2C). For startups, I don’t like top down models. A lot of startups’ markets are opaquely defined, and in most cases success comes from good general execution, not from competing with others for market share. Top down models always feel more like predictions than forecasts to me.
  • Bottom up: Instead of the top line being the driver of the rest of the model, the costs to acquire top line (sales people in B2B or CAC spend in B2C) drive top line growth based on all the assumptions outlined above. Other costs (tech, G&A, etc) flow from the top line and product development needs. If you find that the revenue line is not growing fast enough, hire more sales people or spend more on CAC… if you can afford it. A minimal sketch of such a bottom-up model follows this list.
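Here is a minimal bottom-up sketch in code (the Excel version would have the same structure, with months as columns). Sales capacity drives signings, signings drive orders and revenue, and costs follow; every input is a hypothetical placeholder.

```python
# A minimal bottom-up model: reps drive SMB signings, signings drive orders
# and revenue, and costs follow. All inputs are hypothetical placeholders.

def bottom_up_forecast(months=18,
                       reps=4,
                       smbs_signed_per_rep_per_month=3,
                       orders_per_smb_per_month=40,
                       price_per_order=8.0,
                       cost_per_order=5.5,
                       cost_per_rep_per_month=8_000,
                       other_fixed_per_month=60_000):
    rows, active_smbs = [], 0
    for m in range(1, months + 1):
        active_smbs += reps * smbs_signed_per_rep_per_month
        orders = active_smbs * orders_per_smb_per_month
        revenue = orders * price_per_order
        gross_margin = revenue - orders * cost_per_order
        opex = reps * cost_per_rep_per_month + other_fixed_per_month
        rows.append({"month": m, "smbs": active_smbs,
                     "revenue": revenue, "net_burn": opex - gross_margin})
    return rows

for row in bottom_up_forecast():
    print(f"M{row['month']:>2}: {row['smbs']:>4} SMBs, "
          f"revenue ${row['revenue']:>9,.0f}, net burn ${row['net_burn']:>9,.0f}")
```

If the revenue line grows too slowly, re-run it with more reps (or more CAC spend in a B2C version) and see whether the extra burn is affordable.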

4. Find evidence or benchmarks to support metric assumptions and ranges

For each metric (assumption), find data in your business or others’ to define a reasonable range of how the metric will actually perform in your startup. For LicketySplit, let’s say we have evidence from their first market that the B2B sales cycle is 45 days. Based on this, we’ll consider a range of 30 to 60 days in other cities for the No-Uber scenarios but 45 to 90 days for the Yes-Uber scenarios, given Uber will have lots of competing sales people out there muddying the message.

The more established your startup or industry is, the more you know your metrics and the narrower your ranges can be. If you have no data yet on a metric, look for outside benchmarks in similar companies/startups, online blogs or comprehensive industry resources like the Pacific Crest SaaS Survey.

5. Apply the model to your scenarios and check sensitivity

For LicketySplit, the model will greatly aid planning for Scenarios 1, 2 and 3. In the No-Uber scenarios, modeling ranges of delivery cost structure, payment terms and various B2B sales efficiency metrics will show the range of funding necessary to support the business… and which metrics cash is “hypersensitive” to so you can watch them closely.

For Scenario 3, sales efficiency will be the big issue with more competition in the market. Here you will test forecast performance with a 45-90 day sales cycle range instead of the 30-60 days in the No-Uber cases. The impact of halving sales efficiency will not be pretty, but fortunately LicketySplit will have a big customer already on board to help support the company.
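A sketch of what that scenario test might look like, using a toy revenue formula (one SMB closed per rep per sales cycle) and purely illustrative numbers:

```python
# Testing the forecast across scenario-specific sales-cycle ranges:
# 30-60 days in the No-Uber scenarios, 45-90 days with Uber in the market.
# Everything else here is a hypothetical placeholder.

SCENARIO_SALES_CYCLES = {
    "No-Uber (Scenarios 1 and 2)": (30, 45, 60),
    "Yes-Uber (Scenario 3)":       (45, 68, 90),
}

def month_18_revenue(sales_cycle_days, reps=4, orders_per_smb=40, price=8.0):
    """Monthly revenue at month 18 if each rep closes one SMB per sales cycle."""
    signings_per_month = reps * (30 / sales_cycle_days)
    active_smbs = signings_per_month * 18
    return active_smbs * orders_per_smb * price

for scenario, cycles in SCENARIO_SALES_CYCLES.items():
    low, base, high = sorted(month_18_revenue(c) for c in cycles)
    print(f"{scenario}: month-18 revenue ${low:,.0f} to ${high:,.0f} "
          f"(base ${base:,.0f})")
```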

The model is not very useful in planning for Scenario 4 (Yes-Uber, No-Big Customer). In this case, sales metrics will be tough, and there is no major customer to rely on. Here I might ignore the model and simply ask, “how long can we survive on the current fundraise without revenue while we pivot?”

This type of forecasting helps you ASK GOOD QUESTIONS and starts you on the path of answering them. How long can we survive if things beyond our control really suck? How much sooner do I need to raise capital if my sales people are half as efficient? How much dilution can I avoid (by raising less equity) if I have slightly better payment terms from my customers? Again, the process matters as much as the result.

Added 7/23/14: For another helpful view on startup forecasting, see Hunter Walk’s recent post HERE
