THE LEAN STARTUP
the following are my notes from [The Lean Startup] by Eric Ries. see also [Ries' blog] and [TheLeanStartup.com]
INTRO
- entrepreneurs are everywhere - startup is human institution designed to create new products and services under conditions of extreme uncertainty
- entrepreneurship is management
- validated learning - empirical testing of business hypotheses
- build-measure-learn - the core feedback loop; an MVP makes faster cycling/learning possible
- innovation accounting - metrics that matter for growth, learning
VISION
DEFINE
a comprehensive theory
- vision and concept
- product development
- marketing and sales
- scaling up
- partnerships and distribution
- structure and org design
- vision - rarely changes
- strategy - changes sometimes, as fundamental assumptions are tested
- product - changes often, toward optimization
LEARN
validated learning
- not an excuse for failures
- based on observed behavior, in live tests
- observe what customers do, not what they say they want
- test with real customers
- learning is the essential unit of progress for startups
EXPERIMENT
every product = an experiment, and vice versa, taken directly to customers
- more accurate data, observing real behavior
- interact with real customers, learn about their needs
- allow yourself to be surprised when customers behave in unexpected ways
break down vision into component parts
- leap-of-faith assumptions, the things we can't prove or know with facts
- two most important assumptions: value hypothesis and growth hypothesis
answer all four of these questions, not just the last one:
- do consumers recognize that they have the problem we're trying to solve?
- if there was a solution, would they buy it?
- would they buy it from us?
- can we build a solution for that problem?
STEER
LEAP
- strategy is based on assumptions, leaps of faith
- first challenge is to build an organization to test those key assumptions
- many assumptions are not exceptional, have some basis in fact or experience
- need to isolate key leaps of faith, about value/growth, not paper over them
- "analogs" show where business can work, "antilogs" show where it might not work
- walkman as analog for ipod (mobile music listening works), napster as antilog (people wanted digital music but not to pay for it)
- need to get out and learn the territory directly, see how people live/work/use/value
- exploration/observation leads to customer archetype, an essential guide for product dev
- customer archetype is a hypothesis, not a fact
- need to avoid "just do it" avoidance of research, and also analysis paralysis
- need to study the right things: the truth about key hypotheses
TEST
minimum viable product (MVP)
- helps start the process of learning quickly
- not necessarily the smallest product imaginable, should be fastest way to get through the first cycle of build-measure-learn
- purpose is to start learning, not achieve perfection
- unlike a "prototype," an MVP tests not just the product but also key business hypotheses
- target early adopters, target small groups
- the challenge is to resist the temptation to make it perfect; an MVP can even be non-automated
- dropbox started with a "video mvp" - essentially a demo video showing the product in action
- concierge mvp has humans taking care of processes ultimately to be automated
- MVPs allow cycling through multiple versions or different opportunities
- even low quality MVP can serve the building of high quality product
- sometimes low quality lets you discover that customers value the roughness or the workarounds, letting you avoid building the expensive things you thought you needed (craigslist)
- MVPs have risks: branding risk (can be launched under a different brand), legal risk (impact on patenting options), morale risk, and fear of competitors (who are rarely focused on stealing ideas from startups)
- need to prepare employees and investors for MVP failures; it helps to have commitments/agreements locked in for several rounds
- need a systematic approach to measuring progress, so as not to be overcome by failed experiments... innovation accounting
MEASURE
- startup's job is to (1) rigorously measure where it is right now, confronting the hard truths that assessment reveals, and then (2) devise experiments to learn how to move the real numbers closer to the ideal reflected in the business plan.
- innovation accounting guards against entrepreneurial optimism and against languishing in the land of the living dead, with a company that's not failing but not growing.
- three steps (learning milestones): (1) use an MVP to establish a baseline for growth and value (key rates), (2) attempt to tune the engine from the baseline toward the ideal - optimize the product and learn what does or doesn't drive the key metrics, (3) make the pivot-or-persevere decision at regular intervals
- use cohort analysis: study sales, activation, and conversion rates for monthly cohorts of new leads/customers... see how each month's cohort experiences the product as it changes, e.g. registration, logins, one use, multiple uses, paid, referred others (a sketch of cohorts and split tests follows this list)
- establish baseline - test riskiest, most important assumptions first
- avoid vanity metrics (gross revenues, total customers) - tie recent product changes to recent changes in the metrics - don't just optimize the product; learn which changes generate the desired customer behaviors
- split tests run two versions of a product/feature change against each other in two different (small) groups for a short time... pull the one that performs worse... gives immediate feedback and refines understanding of what customers want and don't want, based on observed behavior.
- kanban - do/doing/done plus a VALIDATED bucket; limit the work items/features in each bucket (e.g. three); a feature must be validated by split test before it can be moved off the board
- metrics should be
- actionable (clear cause and effect) - cohort analysis, split tests, kanban
- accessible (open shared data in clear understandable form, available to dev team)
- auditable (easy to track and calculate so no errors, spot check with real customers)
- innovation accounting => product prioritization decisions, customer targeting decisions (and who to listen to), courage to constantly test the grand vision/key assumptions
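a minimal sketch of cohort and split-test analysis; the event names, data shape, funnel stages, and variant labels below are illustrative assumptions, not anything prescribed by the book:

    # Cohort analysis + naive split-test readout for a registration -> paid funnel.
    # Data shape, funnel stages, and variant labels are invented for illustration.
    from collections import defaultdict

    FUNNEL = ["registered", "logged_in", "used_once", "used_repeatedly", "paid"]

    # (customer_id, signup_month, split-test variant, furthest funnel stage reached)
    customers = [
        ("c1", "2024-01", "A", "paid"),
        ("c2", "2024-01", "B", "used_once"),
        ("c3", "2024-02", "A", "used_repeatedly"),
        ("c4", "2024-02", "B", "registered"),
        ("c5", "2024-02", "A", "paid"),
    ]

    def funnel_rates(rows):
        """Fraction of customers in rows who reached each funnel stage (or beyond)."""
        depth = [FUNNEL.index(stage) for _, _, _, stage in rows]
        return {stage: round(sum(d >= i for d in depth) / len(rows), 2)
                for i, stage in enumerate(FUNNEL)}

    # cohort analysis: compare the funnel month by month as the product changes
    by_cohort = defaultdict(list)
    for row in customers:
        by_cohort[row[1]].append(row)
    for month in sorted(by_cohort):
        print("cohort", month, funnel_rates(by_cohort[month]))

    # split test: compare the same funnel for variant A vs. variant B
    for variant in ("A", "B"):
        print("variant", variant, funnel_rates([r for r in customers if r[2] == variant]))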
PIVOT (OR PERSEVERE)
- pivot is decision to change strategy, vs. product optimization decisions, based on test of key business assumptions (value, customer, growth engine, etc)
- schedule pivot-or-persevere meetings in advance; less than a few weeks apart is too often, more than a few months apart is not frequent enough
- innovation accounting leads to faster pivots
- startup's runway is not cash and burn rate, but number of strategic pivots it can still make, the number of opportunities it has to make a fundamental change to biz strategy
- pivots require courage: vanity metrics conceal the need to change and sap motivation; "launch it and see what happens" always succeeds - you always see something happen - but ambiguous results can take a long time to resolve; acknowledging the failure of part of the strategy is tough on morale and usually requires throwing work away.
- failure to pivot - getting caught up in optimization and vanity metrics, failing to see the plateau, failing to use learning milestones to understand what is (or isn't) driving growth - a company can still be growing but not making progress on strategic learning, esp. in the shift from early adopters to mainstream customers, who might need different product features
- catalog of pivots:
- zoom-in pivot - a single feature becomes the whole product
- zoom-out pivot - product becomes feature of larger product
- customer segment pivot - product solves a real problem for real customers, but not for the group we've been chasing/serving so far
- customer need pivot - sometimes in getting to know customers, we discover the need we're serving isn't that important, need to shift to another need
- platform pivot - move from mobile app, for instance, to other platforms, sometimes multiple times
- business architecture pivot - low volume/high profit to low margin/high volume, or b2b to b2c, changes length and nature of sales cycle
- value capture pivot - shifts to how the business monetizes value, how it gets paid
- engine of growth pivot - sticky, viral or paid
- channel pivot - how product is distributed, sold
- technology pivot - sometimes find different ways to solve the customer problem
- a pivot is a strategic hypothesis - to test fundamental business assumptions
ACCELERATE
key assumptions
- what products do customers want?
- how will our business grow?
- who is our customer?
- which customers should we listen to? (and which ignore)
BATCH
- small batches, even batches of one, more efficient production and faster learning
- toyota mastered small batches after wwii when couldn't afford large-batch equipment
- use "andon cord" to "stop the line" when defects noticed
- small batches move toward continuous deployment - (1) hardware is becoming software in devices, cars, etc., (2) fast production changes/product evolution, (3) 3D printing and rapid prototyping tools.
- small batches in education => School of One
- large batches result in lots of rework, slower feedback, higher inventories, slower adaptation
- PULL don't PUSH - lean inventory lets sales pull production through distribution channel, lean startup lets hypotheses that want/need testing pull work from development teams
GROW
sustainable growth: new customers come from the actions of past customers
four ways existing customers drive new customer growth:
- word of mouth
- side effect of use (fashion, hotmail invite in every msg footer, network effects)
- paid advertising/sales
- repeat use
can burn resources debating choices between:
- finding new customers
- serving existing customers better
- improving overall quality
- driving down costs
engines of growth
- sticky, viral, paid (ads, sales, mktg)
- designed to point to small set of metrics to focus on
sticky
- rely on long-term retention
- track the attrition or churn rate (the fraction of existing customers who fail to stay engaged) very carefully
- the rate of new customer acquisition must exceed the churn rate
- track also: rate of compounding = net % gain per period (acquisition rate minus churn rate), same effect as financial compounding => drives growth over time, independent of revenue, quality, etc. - customer counts and revenue per customer are not what drive growth here (toy sketch below)
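toy sketch of the sticky-engine arithmetic; the 6%/3% rates and the starting base are made-up numbers for illustration, not figures from the book:

    # Sticky engine: the customer base compounds at (acquisition rate - churn rate).
    # Rates and starting base are invented for illustration.
    customers = 1_000.0
    acquisition_rate = 0.06   # new customers per period, as a fraction of the base
    churn_rate = 0.03         # existing customers lost per period

    for period in range(1, 13):
        customers *= 1 + (acquisition_rate - churn_rate)   # ~3% net compounding
        print(f"period {period:2d}: {customers:,.0f} customers")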
viral
- not just word of mouth, but powered by USE of product: fashion, network, hotmail
- track the viral coefficient: how many new customers each new customer brings in. <1.0 flattens out, =1.0 gives a roughly linear upward trend, >1.0 is explosive, exponential growth (toy loop below)
- no reason to try to maximize revenue from customers, because that's not what's driving growth here
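back-of-the-envelope viral loop to show why the coefficient matters so much around 1.0; the batch size, loop count, and coefficients are illustrative assumptions:

    # Each batch of new customers recruits viral_coefficient * batch more customers.
    # Batch size, loop count, and coefficients are invented for illustration.
    def total_after_loops(initial_batch, viral_coefficient, loops=10):
        total, batch = initial_batch, initial_batch
        for _ in range(loops):
            batch *= viral_coefficient        # customers recruited by the previous batch
            total += batch
        return round(total)

    for k in (0.5, 0.9, 1.0, 1.1):
        print(f"viral coefficient {k}: {total_after_loops(100, k)} customers after 10 loops")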
paid
- cost of signing up a new customer, the cost per acquisition (CPA), governs this engine
- growth requires revenue > cost of acquiring customers
- to increase growth, increase rev per customer or decrease cost per acquisition
- cost includes ads, sales staff, other marketing
- CPA tends to get bid up across an industry until it approaches revenue per customer >>> need to get better at getting revenue out of each customer (toy sketch below)
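minimal sketch of the paid-engine feedback loop; all figures are invented for illustration and non-acquisition costs are ignored for simplicity:

    # Paid engine: revenue from each cohort is reinvested in acquiring the next one,
    # so the engine grows only while revenue per customer exceeds cost per acquisition.
    # Figures are invented for illustration; non-acquisition costs are ignored.
    revenue_per_customer = 40.0
    cost_per_acquisition = 25.0
    acquisition_budget = 1_000.0

    for period in range(1, 7):
        new_customers = acquisition_budget / cost_per_acquisition
        acquisition_budget = new_customers * revenue_per_customer   # reinvest cohort revenue
        print(f"period {period}: {new_customers:.0f} new customers")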
summary
- can have more than one engine active, but should focus on one as primary
- each engine's key metric tells you if you're close to marc andreessen's "product/market fit" that drives explosive growth
- all engines of growth eventually run out >>> need new startups inside biz, or new customer groups to reach with same product
ADAPT
an adaptive organization
- automatically adjusts its process and performance to current conditions... that conscious adaptation even produces its own orientation/training program for new hires
- speed needs regulation - can't trade quality for time, accepting defects to go fast now, will slow you down later
- as quality and learning increase, product complexity increases over time, too
- challenge: achieve scale and quality in just-in-time fashion
- adaptive processes create a natural feedback loop that forces you to slow down and prevent the kinds of problems that are currently causing waste - as those efforts pay off, you naturally speed up again
the five whys
- to make incremental investments and evolve processes gradually
- root of every technical issue is a human issue
- ask WHY five times... this is happening, why? and why that? and why that? ...
- consistently make incremental investments, proportional to the problem identified at each level of the inquiry
- make small investments/fixes for small problems and more if they recur
- natural speed regulator - more problems pull more investment in process
- use for any kind of failure - tech fault, business results miss expectations, unexpected customer behavior changes, etc.
- some say 5whys + small batches is enough to generate whole of lean startup method
- must avoid: "5 blames" targeting "bad people" rather than bad systems/process
- must include everyone connected with the problem; absentees become targets for blame
- executive mantra: "if a mistake happens, shame on us for making it so easy to make that mistake" >>> take systems-level view
- have people who made the mistake lead effort to prevent repeats
- turns up unpleasant realities within org, esp at beginning
- calls for investments that slow development (even though only temporarily)
- need authority present, supporting, insisting on good practice at difficult moments
- need mgr/exec buy-in and support - for more than one round - commit to ongoing use
- start small, with new issues, be specific: don't try to process old baggage, small avoids blame, specific makes need for addressing clear to all and gains attendance at meeting
- appoint a 5whys master: for each area where it's used, senior enough to make sure stuff gets done
- if too many issues, pick a subset/cluster of the whole
- keep meetings short and identify relatively simple changes at each level
- preface the process with an explanation of its purpose (adaptation, not blame) and an example from the book or elsewhere
- commit to 5whys as learning, iterative experience and learning curve for master and all participants
simplified version of 5 whys - when trust/mutual support are low
- ask teams to adopt these two rules: (1) be tolerant of all first-time mistakes, blame systems not people, and (2) (work together) to never allow same mistake to happen again
- simplified version not a long-term solution: invites debate about what constitutes the "same mistake repeating" and about individual vs. categories of mistakes
learning requires more than culture shift
- treat work as system
- deal with batch size and cycle time
- might require new tools and platform changes to support faster ways of working
INNOVATE
need to balance needs of existing customers with challenge of finding new ones, manage existing lines of business while exploring new models and opportunities
nurture disruptive innovation
- teams need scarce but secure resources - internal startup budgets need to be protected from budget raids and org politics
- independent development authority - teams need complete autonomy to develop and market new products, within the limits of their mandate - teams need to be pan-functional so that the need to get outside expertise/approvals doesn't slow the build-measure-learn cycle
- personal stake in outcomes - shouldn't be overly reliant on financial incentives, esp. in nonprofit/govt
creating platform for experimentation (esp for internal startups)
- protect parent from start-up not vice versa - experiments can threaten existing customer relationships, brand image, lines of business
- hiding innovation in black box, off-site, skunkworks creates suspicion long-term, not sustainable innovation and learning
- innovation sandbox - empowers innovation teams out in the open - contains the impact without constraining the methods/people
sandbox rules
- any team can create true split-test experiment that affects only the sandboxed parts of the product or service or only certain customer segments or territories
- one team must see the whole experiment through from end to end
- no experiment can run longer than a specified amount of time (usually a few weeks for simple feature experiments, longer for more disruptive innovations)
- no experiment can affect more than a specified number of customers (usually expressed as % of customer base)
- every experiment has to be evaluated on the basis of a single standard report of five to ten (no more) actionable metrics
- every team that works inside the sandbox and every product that is built must use the same metrics to evaluate success
- any team that creates an experiment must monitor the metrics and customer reactions (support calls, social media reaction, forum threads, etc.) while the experiment is in progress and abort it if something catastrophic happens.
more on sandboxing
- sandboxes should start small
- customers are real and long-term relationships are encouraged (unlike market and concept tests)
- cross functional teams and clear leader are free to build and market within the sandbox, required to report success/failures w/ actionable metrics (cohorts, splits, kanban, etc)
- true experiments easy to judge - top-level metrics move or they don't - teams learn immediately how customers react
- rest of org, even including potential detractors/saboteurs, required to learn about innovation acctg and learning milestones
- sandbox promotes rapid iteration - stable team, small batches, clear feedback
- internal or external >>> same sequence of accountability
- build an ideal model of disruption based on customer archetype(s)
- launch an MVP to establish baseline
- attempt to optimize toward ideal
- pivot or persevere
- as sandbox innovation teams succeed, need to integrate results into company portfolio
cultivating management portfolio
- four challenges/pieces of work in every company:
- innovation, r&d - creating disruption, sandboxing
- scale/growth - integrate into portfolio/strategy, PR, marketing, sales, biz dev implications, competition and imitation
- optimization - extend lines, add features, combat commodification, operations excellence, increase margins
- old news - manage op costs/legacy products, outsourcing, automating, cost reduction
- manage each differently, w/handoffs b/w crossfunctional teams, members self-select
- entrepreneur is job title, management role, leading innovation
sandbox evolution
- sandbox should expand over time, with experiments increasing in scope
- sandbox might start slow, but will succeed over time as long as teams get constant feedback, work in small batches, use actionable metrics, and are accountable for learning milestones
- sandbox eventually victim of own success, innovators become defenders of status quo, need to open new sandbox(es)/team(s)
- when lean startup becomes status quo, changes and challenges to it should be subject to same test/practice that it originated from - stay true to empirical principles, even when operating on lean startup itself
- lean startup not a defined blueprint, tactics, procedures - a framework for learning
- the shift from functional teams to cross-functional teams is big - things slow down at first as you move from optimizing individual functions to optimizing the whole validated-learning cycle - it feels worse before it feels better, because you're trading intangible problems in the old system for tangible problems in the new one - understanding and communicating the theory/practice is the antidote: set/reset expectations early and often!
EPILOGUE
- stop wasting time
- stop relying on untested assumptions and rules of thumb
- stop hiding what we don't know behind vanity metrics
- stop making routine work more important than creativity
- stop making mechanization (processes, tools, structures) more important than people
- stop making plans more important than agility
- stop wasting time