The public cloud has been around for more than two decades. AWS launched its first services in 2002.
Yes, folks, we’ve been running in the cloud for 23 years! It took us all some time to embrace the new reality, and once we did, to grapple with the complexity it brought to the table. Managing cloud spend was a skill we all needed to learn, and - just like the cloud itself - it has evolved over the years.
With time, this skill has grown into a discipline of its own that we now call FinOps. As a fairly young discipline, it’s still not very well understood. This post presents the phases of FinOps evolution as they typically unfold within an organization - which, as it happens, closely mirrors how the discipline has developed across the industry as a whole. At the end, I’ll also try to predict where FinOps is heading.
After all - what’s more exciting than trying to predict the future? But let’s start from the beginning.
Observational FinOps - the Infancy
Pay as you go - the financial promise of the cloud was simple. It relieved us of the grim uncertainty of capacity planning. But a new burden landed on us - we needed to track how much we consume. When eating at a buffet, only your self-restraint can protect you from overeating.
In the beginning, cloud providers weren’t as good at exposing cost and usage data as they are today, so much of it we had to measure ourselves or with third-party tools.
The initial FinOps is mostly about accessing and collecting that data, which is why I call it “observational”. At this stage, even having a clear picture of how much you consume and how much you pay for it (preferably before getting unpleasantly surprised by the cloud bill) means being ahead of the crowd.
All this is made even more complex by the fact that every cloud provider has its own format for reporting cost and usage data. Only now is this starting to change, thanks to FOCUS - the FinOps Open Cost and Usage Specification - an open-source technical specification for cloud billing data that defines clear requirements for cloud vendors to produce uniform cost and usage datasets.
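To make this concrete, here is a minimal sketch of what working with FOCUS-style billing data looks like. The column names (`ServiceName`, `BilledCost`) follow the FOCUS specification, but the sample rows and amounts are made up for illustration:

```python
import csv
from collections import defaultdict
from io import StringIO

# Simplified FOCUS-style billing rows; the column names follow the
# FOCUS spec, but the data itself is invented for this example.
FOCUS_CSV = """ServiceName,BilledCost,BillingCurrency
Compute,120.50,USD
Storage,30.25,USD
Compute,80.00,USD
Networking,15.10,USD
"""

def cost_by_service(csv_text):
    """Aggregate billed cost per service from FOCUS-style rows."""
    totals = defaultdict(float)
    for row in csv.DictReader(StringIO(csv_text)):
        totals[row["ServiceName"]] += float(row["BilledCost"])
    return dict(totals)

print(cost_by_service(FOCUS_CSV))
# Compute: 200.50, Storage: 30.25, Networking: 15.10
```

Because FOCUS normalizes column names across vendors, the same few lines of aggregation can run against exports from any compliant provider - exactly the uniformity the spec was created for.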
And of course, this is where it all starts - observational FinOps is a necessary but not sufficient component of every organization’s FinOps strategy.
Analytical FinOps - the Childhood
Of course, just collecting the data is not enough.
Even with the modern, granular cost reporting provided by cloud providers, it’s often hard to understand where the money goes. Each managed service we pay for bundles the costs of underlying compute, network, and storage resources.
The cost of data transfer depends on the geographical locations of the source and the destination. Some resources stay idle but still contribute to the cost.
Analyzing the data and extracting meaning from it is the vital step toward actual optimization.
This is how we find potential waste and come up with action items for driving efficiency, performance, and cost improvement. This is also what allows us to detect anomalies and eventually define automated guardrails.
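As a toy illustration of the anomaly-detection side of analytical FinOps, here is a simple trailing-window z-score check over daily spend. The numbers and the threshold are invented; real anomaly detection would account for seasonality and trends:

```python
from statistics import mean, stdev

def detect_cost_anomalies(daily_costs, window=7, threshold=3.0):
    """Flag days whose cost deviates more than `threshold` standard
    deviations from the trailing `window`-day mean.
    A naive sketch, not a production anomaly detector."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        history = daily_costs[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(daily_costs[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Daily spend in dollars; day 8 contains a suspicious spike.
costs = [100, 102, 98, 101, 99, 103, 100, 101, 250, 99]
print(detect_cost_anomalies(costs))  # -> [8]
```

Even this crude check would have caught the classic “someone left a GPU cluster running over the weekend” scenario before the invoice did.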
Attributional FinOps - The Adolescence
The moment we start analyzing the financial data coming from our infrastructure, we realize not all of our services were created equal. Yes - each component in our information systems incurs cost, but it (hopefully) also generates value. Sometimes, calculating that value is a much harder task than evaluating its cost of operation. For example: how do you calculate the value of development and testing environments? Or the value of backup and restore systems?
But one thing is clear - to manage the costs of infrastructure, we need to be able to attribute the often undifferentiated cost of resources to specific services.
This starts out with foundational practices such as resource tagging, but can quickly get more complicated. For example, how do you attribute the cost of traffic going through an ingress load balancer if it gets distributed to multiple services? How do you attribute the cost of running a specific container in a Kubernetes cluster?
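One common answer to the shared load balancer question is proportional attribution: split the shared cost by each service’s share of some usage metric, such as request counts. A minimal sketch, with hypothetical numbers:

```python
def attribute_shared_cost(total_cost, usage_by_service):
    """Split a shared resource's cost proportionally to each service's
    usage (here: request counts). Rounded to cents for readability."""
    total_usage = sum(usage_by_service.values())
    return {
        svc: round(total_cost * usage / total_usage, 2)
        for svc, usage in usage_by_service.items()
    }

# Hypothetical: one ingress load balancer, three services behind it.
lb_cost = 90.00  # monthly cost of the shared load balancer
requests = {"checkout": 500_000, "search": 300_000, "catalog": 200_000}
print(attribute_shared_cost(lb_cost, requests))
# checkout: 45.0, search: 27.0, catalog: 18.0
```

Which metric to split by (requests, bytes transferred, connection time) is itself a FinOps decision - different choices shift cost between teams, so it needs to be agreed on explicitly.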
Attributional FinOps is a very important phase - because that’s where we come full circle and bring the financial data back to the engineers, allowing them to evaluate the impact of their components’ reliability and performance on the overall system cost - thus starting to close the FinOps feedback loop.
Applied FinOps - the Early Adulthood
Just as with DevOps, actual FinOps benefits lie in creating and maintaining balancing feedback loops in order to achieve the desired level of reliability and performance without compromising the speed of delivery. All that while staying within the budget. Once we’ve collected and analyzed the data, it’s time to apply changes in order to achieve our financial goals. Applied FinOps includes the following practices:
- Smart use of CUDs (Committed Use Discounts)
- Spot/Preemptible instance utilization
- Right-sizing
- Data Tiering
- Waste identification and elimination
- Outsourcing or insourcing (whichever is more cost-effective)
Of course, applying these practices requires a deep understanding of the system we’re applying them to.
A very critical system may run at very low 95th-percentile utilization (over-provisioned for safety), while another component may show very high utilization simply because of a logic bug.
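To show how that 95th percentile feeds into right-sizing, here is a naive sketch that recommends a CPU request from observed usage plus a headroom factor. The sample values and the 1.3 headroom are invented; real right-sizing must also weigh burst patterns, criticality, and limits:

```python
import math

def rightsize_cpu(samples_millicores, headroom=1.3):
    """Recommend a CPU request: 95th percentile of observed usage
    times a headroom factor. A deliberately naive sketch."""
    ordered = sorted(samples_millicores)
    idx = max(0, math.ceil(0.95 * len(ordered)) - 1)
    p95 = ordered[idx]
    return round(p95 * headroom)

# Hypothetical usage samples in millicores; one short spike to 900.
usage = [120, 125, 130, 135, 138, 140, 142, 145, 148, 150,
         132, 128, 137, 141, 144, 147, 133, 129, 136, 900]
print(rightsize_cpu(usage))  # -> 195, the spike barely moves the p95
```

Note how the single 900-millicore spike is ignored by the percentile - which is precisely why blindly right-sizing to p95 can starve a workload whose rare bursts are actually business-critical. That is the “deep understanding of the system” part.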
Moreover, in most organizations, this FinOps application is done as an afterthought, in a reactive fashion. What we really want to do is integrate FinOps into the design of our systems.
Architectural/Design FinOps - the Adulthood
And that’s where we come full circle. You see - a long time ago, before the age of the cloud and seemingly unlimited resources, the cost of running a system - the amount of resources it consumes - used to be one of the important design considerations. Then, the public cloud and abundant startup credits made us carefree. And FinOps was needed to get us back on track.
Truly evolved FinOps requires us to go back to the drawing board in order to design our systems with cost in mind. This is a process that relies on feedback from all of the previously described FinOps practices to identify bottlenecks and the most costly parts of the system and redesign while balancing cost, reliability, and performance.
This can start with rewriting the most resource-intensive parts of our code in a compiled language like C or Rust, continue with smart use of queueing and caching, and evolve into re-evaluating our autoscaling strategies. As a side note, autoscaling and microservices are among the most costly architectural approaches out there. While they may make a lot of sense in highly complex, heavy-load systems (if designed correctly), in simpler systems they may easily become a burden and a source of waste.
Understanding the level of complexity required by our system and the financial trade-offs in its design is what architectural FinOps is all about.
Automated FinOps - the Maturity

All the phases of FinOps evolution until now can be seen as the steps of the so-called FinOps Feedback Loop. They don’t have to be sequential, but they definitely feed into one another. Maintaining such a feedback loop throughout our engineering process is time-consuming and requires unwavering discipline.
To put it simply - it’s hard and, yes, often boring. After all, who likes bean counting? And with the growing complexity and diversity of components in the systems we manage, it gets increasingly difficult to understand what to do or to analyze the impact of our actions on the financial bottom line.
And like in any other case where the algorithm is known but costly, the solution is automation!
FinOps automation is about codifying the analysis and application of FinOps knowledge so that it occurs continuously throughout the software delivery lifecycle. Many existing automation techniques, such as automated backups or auto-scaling, become FinOps practices when we add the cost consideration into the mix. At its essence, automated FinOps is about continuous evaluation and automated balancing of the inherently conflicting concerns of performance, reliability, and cost.
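As a toy illustration of what “codifying” such a policy can mean, here is a sketch of a budget guardrail that flags the workloads most worth optimizing when projected spend exceeds budget. All names, numbers, and the `reducible_fraction` field are invented; real tools balance cost against reliability and performance, not cost alone:

```python
def evaluate_guardrail(workloads, monthly_budget):
    """Toy guardrail: if projected spend exceeds the budget, flag
    workloads (largest first) until their estimated reducible cost
    covers the overshoot. A sketch, not a real policy engine."""
    total = sum(w["monthly_cost"] for w in workloads)
    if total <= monthly_budget:
        return []  # within budget, nothing to flag
    overshoot = total - monthly_budget
    flagged = []
    for w in sorted(workloads, key=lambda w: w["monthly_cost"], reverse=True):
        flagged.append(w["name"])
        overshoot -= w["monthly_cost"] * w.get("reducible_fraction", 0.3)
        if overshoot <= 0:
            break
    return flagged

# Hypothetical workloads with estimated monthly cost and how much of it
# we believe optimization could recover.
workloads = [
    {"name": "api", "monthly_cost": 400, "reducible_fraction": 0.2},
    {"name": "batch", "monthly_cost": 900, "reducible_fraction": 0.5},
    {"name": "web", "monthly_cost": 300, "reducible_fraction": 0.1},
]
print(evaluate_guardrail(workloads, 1400))  # -> ['batch']
```

The point is not the arithmetic but the shape: once a policy like this is code, it can run on every deploy instead of in a quarterly spreadsheet review.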
A good example of a tool integrating automated FinOps is Karpenter - the modern auto-scaling tool for Kubernetes that takes into account the financial implications of auto-scaling. And of course, it’s what we do at PerfectScale - where we allow the user to continuously apply automated optimization of Kubernetes workloads based on a predefined optimization strategy and policies, per workload, namespace, or cluster.
And if you ask me, this is the only way to do FinOps in 2025. Managing costs and resource allocations with spreadsheet calculations doesn’t scale and eventually leads to burnout. Without automation, FinOps will either be a never-ending catch-up game or will keep introducing rigid guardrails that drive engineers crazy and become a hurdle to innovation.
The Future - Integrated FinOps
All right! We’ve covered all of the phases of modern-day FinOps, from observational to automated. So, what do we see looking into the future? Well, of course, we will see automation evolve and AI/ML capabilities augment existing FinOps observability and analysis.
But I think the real shift we already see starting to happen is the integration of FinOps practices - from observation to automation - into the platforms we’re running our software on.
That’s how we make FinOps accessible and proactive, that’s how we build FinOps automation we can rely on to continuously optimize our infrastructure in full alignment with our business goals.
That’s what we are currently helping our customers to achieve.
And what stage of FinOps evolution are you at? Are you still counting beans, or are you already automating your FinOps?
