The (un)Common Logic Framework for Sustainable Growth

Sustainable growth is not a vision statement, it is a system of choices that compound. Most teams know the slogans, fewer can convert them into daily habits that raise revenue without eroding margins, morale, or customer trust. The gap lives in the details: which numbers matter, which constraints bite first, and which decisions should remain reversible. Over two decades building and advising product companies, I have watched the same pattern repeat. When growth sticks, leaders design for compounding, not headlines. They manage constraints instead of averages. They price for behavior, not vanity metrics. They teach the company to reason the same way on a boring Tuesday as they do at an offsite.


I call this the (un)Common Logic framework because its core practices sound obvious in a meeting, yet remain strangely rare in execution. The moves are logical, but uncommon in the pressure cooker of targets and runway. The point is not to be clever. The point is to be repeatable.

Why the name matters

(un)Common Logic is a reminder that a business is a network of contingent truths. Ideas that read well on a slide often ignore the messy edges that decide outcomes. Take lifetime value, a metric that tempts teams into heroic claims. Without clean retention measurement and a time-bounded payback rule, LTV becomes math cosplay. Or look at price increases that ignore procurement lead times and finance calendar locks, then miss the only windows when customers could have accepted change.

The framework forces logic all the way down to the level where a sales rep, a product manager, and a support agent see the same picture. If they can explain why a metric moved, what constraint is active, and which bet is reversible, growth starts to feel less like a gamble and more like a craft.

Principle 1: Define success in measurable, survivable terms

Growth that cannot survive a cash crunch is not growth, it is theater. Start with two explicit definitions: a north-star outcome and a survivability guardrail.

A good north star is the smallest composite metric that connects revenue to customer value. For a usage-based SaaS, it might be weekly active teams completing a meaningful action, multiplied by average paid units per team, multiplied by price per unit. It lets you ask real questions. If acquisition surges but weekly active teams per cohort fall, you are buying problems. If price per unit rises while paid units shrink, you may be taxing adoption.
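To make the composite concrete, here is a minimal sketch of that north-star calculation. The function name and all figures are illustrative, not from a real company; the point is that the three factors multiply, so a gain in one can mask decay in another.

```python
# Hypothetical composite north-star metric for a usage-based SaaS:
# weekly active teams completing a meaningful action, times average
# paid units per team, times price per unit.

def north_star(weekly_active_teams: int, paid_units_per_team: float,
               price_per_unit: float) -> float:
    return weekly_active_teams * paid_units_per_team * price_per_unit

# Price per unit rises, but active teams slip: the composite barely moves,
# which is exactly the tension the text warns about.
before = north_star(weekly_active_teams=400, paid_units_per_team=5.0, price_per_unit=20.0)
after = north_star(weekly_active_teams=380, paid_units_per_team=5.0, price_per_unit=22.0)
```

Decomposing a flat composite into its factors is what turns "growth stalled" into a specific question about adoption versus pricing.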

Survivability guardrails keep you in the game long enough for compounding to matter. For subscription businesses, I recommend a cash payback target by cohort rather than blended CAC payback. A reasonable starting point is 9 to 12 months for mid-market, 3 to 6 for SMB, stretching to 18 for enterprise if gross margins exceed 80 percent and churn risk is low. Set a hard stop on net burn relative to runway. When a team sees that a price discount extends cohort payback beyond the limit, they do not need permission to say no.
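The guardrail can be expressed as a one-line check. This is a simplified sketch under an assumed flat-revenue model (real cohorts churn and expand); the CAC, revenue, and margin numbers are invented for illustration.

```python
# Simplified payback guardrail: months until a customer's cumulative
# gross profit covers the cost of acquiring them.

def payback_months(cac: float, monthly_revenue: float, gross_margin: float) -> float:
    return cac / (monthly_revenue * gross_margin)

CAP = 12  # illustrative mid-market cap from the guardrail above, in months

list_price = payback_months(cac=9000, monthly_revenue=1000, gross_margin=0.8)  # 11.25 months
discounted = payback_months(cac=9000, monthly_revenue=800, gross_margin=0.8)   # ~14.1 months

# The 20 percent discount pushes payback past the cap, so the team
# does not need permission to say no.
assert list_price <= CAP < discounted
```

Encoding the rule this plainly is what lets a rep apply it mid-negotiation without escalating.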

Define your acceptable failure rate as well. If your experiment program requires 80 percent wins, you are not experimenting, you are confirming. Mature teams expect win rates around 10 to 30 percent, with outsized impact concentrated in a handful of ideas.

Principle 2: Build compounding loops, not campaigns

Campaigns spike. Loops stack. A loop converts effort into an asset that improves future performance without equivalent future cost. The most dependable loops connect acquisition, activation, and retention.

A practical loop might look like this: targeted content attracts qualified readers with a specific pain. Product onboarding reflects that pain by prefilling setup steps based on the referral context. Activated users see early value within the first session, which increases trial conversion. Satisfied users trigger a gentle in-product prompt that surfaces a relevant case study or invites a referral, which in turn refuels acquisition at lower incremental cost. The same content that brought them in now helps them advocate.

Numbers tell the story. A team I worked with grew from 1,200 to 3,500 trials per month in a year. The big lift did not come from buying clicks. We tuned activation from 27 percent to 42 percent by compressing time to first value from 3 days to 90 minutes. Trial-to-paid moved from 12 percent to 20 percent. Churn on the first paid cycle dropped from 8 percent to 5 percent after we added a single use-case checklist to onboarding. The acquisition budget barely changed, but revenue grew 2.1 times because the loop fed itself.

Beware false loops that burn human capital. A sales hero loop looks like this: heavy discounting to hit quarter end, followed by rushed onboarding, leading to support overload and weak adoption, leading to renewals that require further discounting. On paper, it is a loop. In reality, it compounds fatigue and kills pricing power.

Principle 3: Manage constraints, not averages

Averages hide the bottlenecks that govern throughput. If your demo-to-close rate averages 28 percent, the useful question is not how to move 28 to 30. The question is whether a single step throttles capacity. Maybe on-time proposals lag at 60 percent because legal review takes five days. Maybe you have enough leads, but opportunity creation stalls because one segment requires integrations that your PS team cannot staff.

I borrow from the theory of constraints and adapt it to growth operations. Identify the current bottleneck, subordinate other activities to it, elevate it with targeted fixes, then find the next bottleneck once the first moves. Do not spray improvements across the funnel without this discipline.
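The first step, identifying the current bottleneck, can be mechanized once you model each funnel stage's monthly capacity. The stage names and capacities below are hypothetical; throughput is governed by the minimum, which is the theory-of-constraints point in miniature.

```python
# Find the stage that governs system throughput. Improving any other
# stage changes nothing until this one is elevated.

def bottleneck(capacities: dict[str, int]) -> tuple[str, int]:
    stage = min(capacities, key=capacities.get)
    return stage, capacities[stage]

stages = {
    "leads_qualified": 500,
    "demos_booked": 300,
    "proposals_on_time": 120,  # throttled by a five-day legal review
    "deals_closed": 200,
}

stage, cap = bottleneck(stages)
# Raising demos_booked from 300 to 350 is wasted effort: the system still
# moves at 120 per month until the proposal constraint is elevated.
```

A living constraint map is essentially this dictionary, re-measured every few weeks as the bottleneck shifts.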

I once mapped a mid-market funnel that looked healthy on averages. Marketing hit pipeline targets, SDRs booked meetings, account executives closed at a decent clip. Yet revenue flatlined. Root cause analysis found a single constraint: security reviews took 21 days on average, and half the deals died in that limbo. We built a security portal with standard artifacts, created a pre-approved control map tied to SOC reports, and trained AEs to start the process at discovery. Review time fell to 8 days, close rates rose, and marketing spend finally translated into ARR. The fix did not require more top-of-funnel budget, only attention to the real constraint.

Constraints shift as you grow. Early-stage, the constraint is usually demand or activation. Mid-stage, it is often pricing clarity or sales cycle friction. Later, it might be partner enablement or procurement pathways. Teams that keep a living constraint map avoid the trap of polishing metrics that do not change outcomes.

Principle 4: Make bets reversible, and learn on a clock

Many growth decisions are two-way doors if you design them that way. Price anchoring tests, onboarding flows, subject lines, feature naming, landing page structure, even parts of packaging can be reversed without scarring the brand, as long as you set guardrails. Others are one-way doors, like entering a highly regulated vertical, signing exclusivity with a distributor, or sunsetting a core plan. Use speed where reversibility exists, and deliberation where it does not.

Learning suffers when time becomes a suggestion. Set a test cadence with enough throughput to find truth before the quarter ends. Weekly or biweekly experiment reviews keep momentum. Tie each test to a metric that resolves ambiguity. If the success criteria can be argued after the fact, the test was poorly designed.

I like a simple rule for experiment bandwidth: maintain at least three times as many ready-to-run tests as active slots. It reduces idle time when a test stalls. Track your cycle time from idea to decision, not just win rates. A team that ships ten tests with 20 percent wins but 7-day cycle time will often beat a team that ships four tests with 40 percent wins and 21-day cycles, because the faster group learns three times as much per month.
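The arithmetic behind that claim is worth making explicit. Counting every decided test as a learning event, win or lose, the faster team's advantage comes entirely from cycle time; the figures mirror the example above.

```python
# Decided tests per month per experiment slot, assuming a 30-day month.
# Every decision teaches something, whether the variant won or lost.

def decisions_per_month(cycle_time_days: float, slots: int = 1) -> float:
    return slots * 30 / cycle_time_days

fast = decisions_per_month(cycle_time_days=7)   # ~4.3 decisions per slot per month
slow = decisions_per_month(cycle_time_days=21)  # ~1.4 decisions per slot per month

# Three times the decisions per month, so even at half the win rate the
# fast team banks more wins AND rules out more dead ends.
assert abs(fast / slow - 3.0) < 1e-9
```

This is also why tracking cycle time from idea to decision matters more than tracking win rate alone.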

Principle 5: Price for value and behavior, not bravado

Pricing is not a number, it is a system that shapes who buys, how they adopt, and whether they stay. Good pricing absorbs real constraints like procurement thresholds, forecastability for finance, and the difference between value discovery and value capture.

Three practical moves change pricing outcomes:

    Anchor with tiers that map to actual workflow differences, not imagined segments. If your product supports three distinct jobs-to-be-done, create three tiers with aligned entitlements. Resist the urge to invent five tiers because competitors have them. Extra tiers confuse buyers and hide your economic engine.

    Align price meters with customer-perceived value. Usage meters work when customers readily link the meter to outcomes they care about, like messages sent for a communications API or seats for a collaboration tool. Meters that track obscure technical activity create billing anxiety and churn. If you must meter a proxy, bundle it with a clear capacity narrative, for example, project credits that tie to a familiar unit of work.

    Keep a path to expand without renegotiation. Expansion-friendly design reduces sales friction and protects CAC efficiency. Transparent add-ons, annual true-ups, and soft caps that trigger advisory notices build trust. A well-crafted 7 to 12 percent annual price rise tied to documented improvements often lands better than a chaotic two-year jump that resets procurement cycles.

Numbers again keep you honest. Healthy net revenue retention for mid-market SaaS often sits in the 110 to 130 percent range. If you need 140 percent NRR to make the model work, either your base price is too low, your acquisition is too expensive, or your product relies on unnatural expansion behaviors. Rather than forcing expansion with dark patterns, fix the value story and the meter.
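A quick sanity check on what those percentages mean in practice. The formula is the standard one; the dollar figures are made up to land inside the healthy band.

```python
# Net revenue retention over a period, as a fraction of starting ARR.
# 1.15 means the installed base grew 15 percent with zero new logos.

def nrr(start_arr: float, expansion: float, contraction: float, churn: float) -> float:
    return (start_arr + expansion - contraction - churn) / start_arr

healthy = nrr(start_arr=1_000_000, expansion=250_000, contraction=30_000, churn=70_000)
# 1.15, i.e. 115 percent: inside the 110 to 130 percent mid-market band
```

If the model only closes at 1.40, the fix belongs in base price, CAC, or the meter, not in squeezing expansion harder.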

Principle 6: Scale judgment with an operating rhythm

Even strong strategies dissolve without a cadence that scales judgment. I prefer a weekly rhythm with a few standing conversations, each with crisp inputs and decisions. Meetings do not create growth, but absent the right ones, entropy wins. Teams that rely on ad hoc heroics eventually find themselves in firefighting loops.

Here is the checklist I offer CEOs who want their calendar to teach the company how to think:

    Monday, 30 minutes: metric review against the north star and guardrails, with one narrative memo that explains the three biggest moves. No slide decks. If a metric is red, agree whether it is a constraint or a noise blip.

    Tuesday, 45 minutes: experiment council approves new tests, kills stalled ones, and assigns owners. Maintain the 3x ready backlog.

    Wednesday, 45 minutes: pipeline and pricing checkpoint, not a beatdown. Focus on proposal cycle time, discount discipline, and security or procurement blockers.

    Thursday, 60 minutes: product adoption review over cohorts, not blends. Identify friction in the first-session or first-week experience.

    Friday, 30 minutes: postmortem or pre-mortem on one critical initiative. Write it down. Institutional memory compounds like capital.


The cadence works because it creates predictable spaces where data meets judgment. You do not need a complicated BI stack to start. A shared doc with stable definitions beats a flashy dashboard with shifting filters. When definitions stabilize, you can translate into dashboards without re-litigating every number.

A field example: bending a mid-market SaaS curve

A few years back, a mid-market workflow tool sat at 9 million ARR with flat growth. CAC payback hovered around 16 months, churn on the first renewal stayed at 11 percent, and sales cycles drifted to 74 days. The board wanted expansion into enterprise, but the economics could not support the longer cycles.

We applied the (un)Common Logic framework in three waves across 120 days.

Wave one defined survivable success. The team set a 12-month cohort payback cap, with exceptions only for deals above 100k ARR that met strict margin and multi-year prepay terms. The north star combined weekly active teams completing a core workflow with paid units per team and price per unit. This reconciled product, sales, and finance.

Wave two attacked the active constraint. Discovery revealed that legal and security reviews delayed half of deals. The product team built a self-serve security pack with DPA templates, a control matrix mapped to SOC reports, and a sandbox for IT validation. We trained AEs to initiate the pack at the first meeting. Proposal turnarounds improved from 6 days to 2, security reviews fell from 19 to 9 days, and median cycle time fell to 54 days within six weeks.


Wave three tuned compounding loops. We cut the onboarding steps from 14 to 7 and introduced templates that mirrored the top three use cases, reducing time to first value from 2.4 days to under 2 hours. Activation rose from 31 percent to 48 percent. Trial-to-paid improved from 14 to 21 percent. We also reworked pricing, moving from a seat-only model to a blended model with seats plus usage credits, aligned with the value customers reported. This allowed gentle expansion as teams adopted more workflows without renegotiating contracts.

By month four, new logo ARR rose 38 percent quarter over quarter. CAC payback dropped to 11 months. First renewal churn fell to 7 percent. The company still wanted enterprise, but now it had mid-market unit economics that could subsidize longer cycles without starving the core.

None of this required heroics. It required picking the right constraint, designing for reversibility, and letting loops do the heavy lifting.

Edge cases: when slower is faster

Not every business should push the gas in the same way. A few patterns call for restraint.

Heavily regulated verticals punish rapid packaging changes. If your customers need internal validation from compliance or IT, frequent price or plan tweaks erode trust and prolong cycles. In those cases, batch changes to align with predictable budget and review windows, even if it slows nominal experiment cadence.

Network effects can create illusions of inevitability. Teams sometimes mistake community noise for durable retention. A social product that rides a trend can inflate DAU, then discover weak core loops once the cultural moment fades. The cure is brutal cohort analysis and a threshold for meaningful action that is harder to game than a login.

Hardware businesses, or software riding on specialized devices, face supply chain constraints that sabotage reversible bets. When a firmware update touches certification, it is not a two-way door. Here, simulation and staged rollout discipline matter more than speed. Cycle time is governed by the slowest validation step, so you must subordinate the rest of the system accordingly.

Deep enterprise solutions may need proof of value before value capture. If a Fortune 500 buyer treats your category as a multi-year transformation, your payback math should incorporate pilot-to-rollout pathways and executive sponsorship timelines. You can still run fast experiments on messaging and adoption aids, but pricing, contracting, and integration rhythms will resist weekly change.

Data, but only the useful kind

I like metrics that close the loop between action and cash. Three stand out.

Paid cohort payback, measured from the date costs are committed to the date cumulative gross profit from that cohort turns positive. It punishes sloppy CAC accounting and forces attention to gross margin.

Time to first value, defined clearly for your product. First value is not a congratulations screen, it is the first completed action that predicts the decision to buy or stay. For a payroll tool, it might be the first successful payroll run. For analytics, it might be the first dashboard saved and shared with more than one teammate. This number is the most sensitive leading indicator of conversion and early churn.

Proposal cycle time, measured from verbal intent to signed order form. It isolates downstream friction that marketing and top-of-funnel metrics cannot see, and it reveals whether legal, security, or procurement need system fixes.

Dashboards only help if they stop arguments. Write metric definitions as short paragraphs with examples and anti-examples. If a team reads a number and immediately asks which filters were on, the metric is not done. When you meet, lead with a short narrative memo that says what moved, what likely caused it, and what decision you need. Protect the memo from slide bloat. Slides tempt decoration.

Teams and incentives that support the system

The hardest part of (un)Common Logic is cultural. It asks for transparency that many incentive plans undermine. If sales earns more by discounting deep near quarter end, and support bears the renewal pain later, no amount of rhetoric will fix the loop. If product is rewarded for feature count, and marketing for lead count, the system floods itself with noise.

Tighten the link between incentives and compounding outcomes. For sales, put a portion of variable pay on proposal cycle time and discount discipline, not just bookings. For product, tie a part of evaluation to activation and cohort retention, not launch dates. For marketing, use qualified pipeline and trial-to-activation as co-equal goals with volume. For customer success, balance NRR with measurable adoption behaviors, so expansions are earned, not extracted.

Teach reversible versus one-way decisions in onboarding. New managers should know which changes they can ship with a small experiment, and which require a cross-functional design doc and pre-mortem. The goal is not to slow people, it is to speed them by clarifying lanes.

A second field note: the price change nobody noticed

A B2B tool serving finance teams wanted to raise prices after shipping two marquee features. The instinct was a headline increase at renewal. We resisted. Procurement policies at half their accounts capped auto-approval at a 10 percent rise. Anything beyond that triggered a 90-day review. We chose a quieter path.

We introduced a value-indexed tiering model where the new features lived, made migration a one-click in-app action, and set a soft cap on legacy plans that suggested advisory outreach once usage hit 80 percent of included capacity. Then we published a two-page economic note, not a hype release, showing how the features mapped to reduced manual hours and fewer audit exceptions.

Within three months, 28 percent of accounts had self-migrated to the new tier. The average effective price rise was 13 percent, with less discounting than historical deals. Churn did not budge. The quiet arithmetic of value beat the spectacle of a blunt price hike.

A compact comparison: good growth vs unsustainable growth

    Good growth compounds through loops and shrinks future effort per dollar. Unsustainable growth requires ever-increasing inputs to stand still.

    Good growth respects cash payback and margin guardrails. Unsustainable growth hides CAC in corners and celebrates vanity metrics.

    Good growth manages the current constraint and re-maps it as the system changes. Unsustainable growth optimizes averages and misses the bottleneck.

    Good growth uses reversible bets for speed and reserves ceremony for one-way doors. Unsustainable growth adds ceremony everywhere or nowhere.

    Good growth prices to match perceived value and forecastability. Unsustainable growth prices to hit a quarter, then inherits churn.


Getting started over 90 days

You do not need to rewire the company to benefit. In the first week, write down your north star and survivability guardrails in a single page. In two weeks, run a constraint discovery session that traces one won and one lost deal from first touch to cash, noting delays and their owners. In three weeks, rewrite your activation path to reduce time to first value by half, even if it feels too simple. In a month, inventory price meters against customer-perceived value and remove one source of billing anxiety. By day 60, your experiment council should be shipping weekly with clear stop rules. By day 90, review cohorts by paid month and adjust CAC to match the true payback. This is not busywork. Each move purchases compounding.

The habit that makes it stick

Leaders often ask for the silver bullet. There is none. There is a habit: defend a small set of truths, improve them on a schedule, and resist the urge to chase averages. The calendar carries culture more reliably than slogans. If your week protects the metric memo, the experiment council, the pipeline and pricing checkpoint, the adoption review, and one honest postmortem, people learn how to think the same way when you are not in the room.

The market changes. Competitors copy features. Channels saturate. What persists is a way of reasoning that turns chaos into a few clear moves. That habit is the heart of (un)Common Logic. It looks obvious on a whiteboard and feels rare when the pressure rises. Do it long enough, and the uncommon becomes your normal. Growth follows, not because you chased it, but because you built a system that earns it.