Every company wants to grow. Every company wants to succeed. Few do.
The best and worst companies in the world all have ambitious goals, but only the best few achieve them. Why is that? One answer is that the best companies have built underlying systems and principles that enable them to reach those goals.
In this post, I’ll walk through some of the principles that the growth team here at Ramp and I have practiced to achieve our unprecedented growth in the few years since the company’s inception.
For reference, Ramp launched its first product in February 2020. By February 2021, annualized run rate revenue hit $12 million. A year after that, we crossed $100 million. Now, at the end of 2023, we’re far beyond that number.
In the rest of this post, I’ll go over the main principles and learnings that have helped us scale growth.
But first, what does a “growth” team do? The specific activities of a growth team vary from company to company. Here at Ramp, for most of the time since we created the team, our north star has been to increase Ramp’s number of customers (and with it, potential sales revenue). Every company needs to build a great product or service, but it also needs to sell and distribute it. We on the growth team figure out how to continually improve that distribution, and our methods encompass everything from writing a simple email campaign, to building a free Chrome extension that drives engagement, to creating a data processing system that gives us unique insights into which companies would benefit most from Ramp’s product.
At Ramp, our growth engine for acquiring new customers is fueled by a culture rooted in first principles thinking. This is the approach of stripping down ambiguous problems to their most basic elements — the fundamental truths and constraints — and building solutions from there. It's about questioning every assumption and not being biased by surface-level factors. This is foundational to all the other principles here.
This thinking is what enables us to break down walls, try novel ideas, and set ambitious goals.
A basic example: at one point, we realized that the volume of email responses our sales reps were receiving had grown so large that their ability to triage and respond to the most promising prospects was slowing down, which was hurting performance.
The obvious solution? Hire more people. That’s not what we did, though. We started by thinking about the constraints and the ideal outcomes. After some thought, we came to the idea of a lightweight system that would act as a visual overlay in the sales reps’ email clients. It automatically classified and prioritized emails with AI, and it surfaced data points that helped reps better convert those prospects. This way, they could prioritize responding to the companies with the highest identified interest in Ramp and deprioritize ones that were not interested.
This internal platform-level improvement made our reps more efficient: each rep could handle an even larger volume of prospects, and they got better at turning interested contacts into actual sales. Absolute volume was up, and conversion rates were up. (Our reps are now probably some of the most productive in the country due to this and other things we’ve built.)
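To make the idea concrete, here’s a minimal sketch of what the triage logic looks like in spirit. Everything in it is hypothetical: the labels, the data shapes, and the keyword heuristics, which are just a toy stand-in for the actual AI classification step. Our production system is far more involved.

```python
from dataclasses import dataclass

# Hypothetical priority order a triage overlay might use (lower = show first).
PRIORITY = {"ready_to_buy": 0, "interested": 1, "question": 2, "not_interested": 3}

@dataclass
class InboundReply:
    prospect: str
    body: str
    label: str = "question"  # filled in by the classifier below

def classify(reply: InboundReply) -> str:
    """Toy keyword heuristic standing in for the real AI classification step."""
    text = reply.body.lower()
    if "not interested" in text or "unsubscribe" in text:
        return "not_interested"
    if "pricing" in text or "demo" in text:
        return "ready_to_buy"
    if "tell me more" in text or "interested" in text:
        return "interested"
    return "question"

def triage(replies: list[InboundReply]) -> list[InboundReply]:
    """Label every reply, then sort so reps see the hottest prospects first."""
    for reply in replies:
        reply.label = classify(reply)
    return sorted(replies, key=lambda r: PRIORITY[r.label])

inbox = [
    InboundReply("acme.com", "Please unsubscribe me."),
    InboundReply("globex.com", "Can you send pricing and book a demo?"),
    InboundReply("initech.com", "Looks cool, tell me more about the cards."),
]
for reply in triage(inbox):
    print(reply.label, reply.prospect)
```

The point isn’t the heuristics; it’s that a small layer of automated classification and sorting, dropped into the tools reps already use, changes what each rep can get through in a day.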
A recommendation for teams: always have your north star metric in mind, and use it as the baseline for prioritizing your estimated return on investment (of both time and dollars) across the different projects you could work on.
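In practice, that prioritization can start as back-of-the-envelope math. Here’s a sketch with made-up project names and numbers (none of these are real Ramp figures), where the north star is new customers:

```python
# Back-of-the-envelope prioritization: score each candidate project by its
# estimated impact on the north star metric per unit of effort invested.
# All names and numbers below are illustrative.
projects = [
    # (name, estimated new customers per quarter, engineer-weeks of effort)
    ("optimize signup funnel", 120, 2),
    ("launch new outbound channel", 400, 8),
    ("build referral program", 250, 6),
]

def roi(estimated_impact: float, effort_weeks: float) -> float:
    """Expected north star impact per engineer-week invested."""
    return estimated_impact / effort_weeks

for name, impact, effort in sorted(projects, key=lambda p: roi(p[1], p[2]), reverse=True):
    print(f"{name}: {roi(impact, effort):.0f} new customers per engineer-week")
```

The estimates will be rough, and that’s fine; the discipline of writing them down and comparing them against the same metric is what keeps the team honest.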
It can be too easy to think only analytically and keep optimizing what you can see in front of you. This is not ideal. We don’t want to keep working on things like conversion rate optimization ad infinitum.
It’s important to also take a step back, do a macro analysis from first principles, and realize that we can actually add tens of millions more dollars of sales pipeline in the longer term by focusing on an entirely new channel or idea rather than just continuing to optimize an existing channel.
At the same time, you shouldn’t assume that an existing channel is done growing. There have been many times when we’ve come up with new, impactful ideas that improved existing channels and processes by meaningful amounts.
We should and can be doing both: launching new speculative ideas and optimizing existing ones. Ultimately, it comes down to prioritization and first principles thinking, which are difficult but can be learned through practice.
Let’s talk about experimentation. Unlike normal product engineering work, where you might know the general requirements of the final product, system, or feature upfront, we often need to experimentally determine which tactics can work before building the bigger system. So we first estimate the return on investment for an idea. If it seems worthwhile, we launch it as a quick MVP (minimum viable product) experiment. We then monitor the results against some metric (such as how many new potential sales it brought in), and if the metric is significantly positive, we productionize the experiment into a full build so that it continues to bring in recurring new sales for the company.
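By “significantly positive,” I mean lift that clearly exceeds noise. As one illustration (not our exact decision process, and with made-up numbers), here’s how you might check whether an experiment variant’s conversion rate genuinely beats the baseline, using a standard two-proportion z-test:

```python
import math

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)           # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value

# Illustrative numbers: new email variant vs. existing baseline.
p = two_proportion_ztest(conv_a=90, n_a=1000, conv_b=60, n_b=1000)
print(f"p-value: {p:.4f}")  # a small p-value suggests real lift, not noise
```

A single p-value is never the whole story (effect size, cost, and durability matter too), but having some statistical bar stops you from productionizing noise.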
To learn what does and doesn’t work, though, you need to try many things. Here on the growth team, we expect more than two-thirds of our experiments to fail. If they all succeeded, of course, this would be too easy. It’s all about trying new things to get signal and more data points, especially when those experiments fail. A failed experiment is only a true failure if we learned nothing from it and have no follow-ups.
One way to set yourself up for that success: when you create a hypothesis for an experiment, be rigorous with it (for example, by reducing the number of confounding factors), so that even if the experiment fails, you don’t have to throw away the results.
We are not in the business of blindly launching experiments; we’re in the business of growing revenue. It all comes back to the north star and what is best for the long term.
For the initial MVP experiment, ideally you can build and launch it in a few hours and at most a couple of days. The key is speed while prioritizing the overall return on invested effort. For example, for the email triage system mentioned in the prior section above, we built that in a single day and launched it to one sales rep. After observing that their performance improved in the MVP, we then rolled it out to all reps and improved the functionality of the system to make it a lot more reliable as well.
Especially for any traditional engineering team thinking about doing growth, I think the really important thing to learn here is that at the MVP experiment stage, you need to be okay with not writing any code, or with just writing a quick and scrappy script. You don’t want to spend multiple days or weeks writing the code for a big system that turns out to produce no impact without validating it first.
At the same time, once you do want to productionize an experiment that has gone well, then your team needs to have the solid engineering capabilities to build robust, scalable, and observable systems that work.
MVP thinking and production quality thinking are therefore both essential. Creating repeatable growth requires being able to navigate between these two modes, while being excellent at both too.
Relatedly, to get true long-term velocity (and not just short-term speed), it’s also important to build tools and use code where needed, creating the leverage to run more and better experiments more easily.
But again, all of this comes down to what’s happening in your specific business and reasoning critically about what’s most relevant to your situation. What I’ve said here can’t be applied directly to every business without putting thought into it.
Furthermore, you need to dive into the weeds. Looking at only the highest level metrics is never enough.
Asking the right questions about why the data in a chart looks a certain way, or making your own cuts of the data, often uncovers broken systems or new opportunities. (This assumes, of course, that you can measure and view your data clearly in the first place.)
Whenever you look at a chart, you should be able to either see a problem, see a new idea, or have a question about the chart that will lead to one of the first two outcomes.
But, beyond just looking at the charts, the real next step is to go and dive into the individual data points, and see what was actually said by those potential customers in those email replies, form submissions, sales calls, etc.
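As a sketch of that workflow (with a hypothetical schema and illustrative rows, not real Ramp data): cut the aggregate numbers yourself, then read the raw replies behind the surprising cut.

```python
import pandas as pd

# Hypothetical funnel export: one row per prospect reply.
replies = pd.DataFrame({
    "segment": ["smb", "smb", "mid_market", "mid_market", "smb"],
    "converted": [True, False, False, True, False],
    "reply_text": [
        "Send over the contract details.",
        "Not interested, thanks.",
        "Who on your team can speak to integrations?",
        "Send the demo link.",
        "Does this work with our ERP?",
    ],
})

# First, make your own cut of the aggregates instead of trusting
# only the top-line chart.
print(replies.groupby("segment")["converted"].mean())

# Then dive into the individual data points behind a surprising cut:
# read what non-converting prospects actually said.
for text in replies.loc[~replies["converted"], "reply_text"]:
    print("-", text)
```

The individual replies are where you notice patterns no aggregate will show you, like a recurring integration question that points to a whole new experiment.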
And for the many cases where you don’t have data yet, use your creativity, intuition, and reasoning abilities to try an experiment that will give you new knowledge, no matter if the experiment outcome is positive or negative.
There have been many times when I or other team members spent hours going through an existing process or funnel and mapping it out, and found a totally new idea or a new audience to try that we’d never have seen from the high-level numbers alone.
As a tangible detail, to become more data-hungry, we’ve benefited from adopting modern data tooling (like dbt and Snowflake) that has allowed us to build our data and analytical systems in a scalable way.
This principle is the one closest to my heart, and very relevant to my team. I personally believe that most companies out there in the world are 100x under-optimized due to their rigid and constrained organizational structures and cultures.
In an engineering context, when people talk about “full-stack” engineers, they refer to an engineer who encompasses the whole stack of technologies needed, from frontend web development to backend development. This contrasts with an engineer who focuses on only frontend or only backend work. Specialization is powerful and necessary in many situations, generalization in others. I've found generalization to be essential to success on a growth team.
I translate the same full-stack terminology to any role. On my team especially, I prioritize having everyone, across the business operations, engineering, and sales members that make up our growth team, be truly full-stack. Business operations members should be able to write SQL queries, run Python scripts, and read technical documentation; at the same time, engineering members should be able to think of and estimate the impact of new experiments, write the product plans for them, and create the analytics dashboards needed to monitor them. Of course, there are limits and places where separation makes sense. We obviously don’t expect business operations members to write actual production code, but understanding the technical capabilities that exist helps you piece together new experiments and ways of doing things that you wouldn’t have the foundational knowledge for otherwise.
Having this kind of multi-talented team has led to many great moments. In one case, a business operations person found a bug in a data pipeline that was skewing metrics, which changed our understanding of how a certain channel was working. In another, an engineer who understood business impact and prioritization dropped two lower-value experiments in favor of a new idea of his that ended up bringing in multiple millions of dollars in revenue.
I think this is where more traditional growth marketing teams miss out. They do great work, but a more multi-disciplinary team would open up many more capabilities to them.
Being full-stack has made my team much more efficient. Every point of communication and dependence on someone else adds overhead, slows things down, reduces context, and drains momentum from the person the task is handed to. So don’t let yourself be blocked or slowed down. Just do it.
And going back to the point about long-term velocity, it’s also important to enable your partner teams and others. Doing everything yourself is not the point; that would be bad prioritization and bad delegation. The point is to not let small things slow you down and turn something that could have been finished today into a multi-day task.
The velocity you unlock from strong teams like this will beget more velocity, a better culture, and a more ambitious team. In the end, you fundamentally need people who want to get things done and can go and do those things without unnecessary overhead.
We have used all of these principles to create, at Ramp, one of the most sophisticated growth engines in the world of B2B businesses. The system we’ve built powers a major part of our growth. It has benefited from first principles thinking, from using data and engineering in novel ways, and from unique teams that accelerate velocity and launch experiments driving tens to hundreds of millions of dollars in sales pipeline and real revenue.
Of course, all this growth is possible because we have a real addressable market and a great product that solves real problems and makes businesses across the world far more efficient than they ever were before. You can’t sustainably grow something that isn’t solid.
As a last point: even here at Ramp, where we’ve built a really good growth team, we’re still not perfect, and we’re still getting better at applying these principles. But we have our foundations in place, and we’ll keep improving and learning. Ultimately, we’ll continue growing, both our business and ourselves.