The growing complexity and interconnectedness of cloud-native and Kubernetes environments have made them increasingly burdensome to manage. Static configurations for container sizing, scaling thresholds and node type selection in Kubernetes clash with the dynamic nature of resource consumption and demand.
This means that engineers struggle to avoid under- and overprovisioning cloud resources by manually adjusting them to constantly shifting needs. The result? Millions of dollars are wasted on idle resources, or application performance is crippled during peak demand.
ScaleOps, a Tel Aviv-based startup, is on a mission to automate the management of cloud environments.
“Experienced engineers spend hours trying to predict demand, running load tests and tweaking configuration files for every single container. It's impossible to manage this at scale,” said Yodar Shafrir, ScaleOps' co-founder and CEO. “We realized there's a huge need for a context-aware platform that can optimize these constantly changing environments automatically, adapting to changes in demand in real-time.”
In a recent step toward meeting the need Shafrir described, ScaleOps announced $21.5 million in funding for its fully automated cloud-native resource orchestration platform. The company aims to let organizations focus on their core business objectives while dramatically reducing cloud costs. To date, ScaleOps has attracted a customer base of companies that use the platform to fully automate their production environments, achieving around 80% cloud cost savings and delivering better-running applications.
The fully automated platform continuously optimizes and manages cloud-native resources during runtime. It installs in two minutes on any cloud provider, on-premises or in air-gapped environments.
Additionally, ScaleOps ensures application scaling matches real-time demand. Instead of static allocations, it allocates resources dynamically, automatically rightsizing containers based on application needs. The platform also ensures every container runs on the most suitable node type, significantly cutting cloud costs.
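To make the idea of rightsizing concrete, here is a minimal, hypothetical sketch of the general technique: sizing a container's resource request from a percentile of its recent observed usage plus a safety margin. This is an illustration of the concept only, not ScaleOps' actual algorithm; the function name, percentile and headroom values are assumptions.

```python
# Hypothetical sketch of percentile-based container rightsizing --
# not ScaleOps' actual method. Given recent CPU usage samples
# (in millicores), recommend a resource request with headroom.

def recommend_request(usage_samples, percentile=95, headroom=1.15):
    """Recommend a CPU request from observed usage.

    Takes the given percentile of recent usage and adds a safety
    margin, so the container is neither starved at peaks nor
    overprovisioned for its typical load.
    """
    if not usage_samples:
        raise ValueError("need at least one usage sample")
    ordered = sorted(usage_samples)
    # Index of the requested percentile (nearest-rank method).
    rank = max(0, int(round(percentile / 100 * len(ordered))) - 1)
    return round(ordered[rank] * headroom)

# A container that mostly idles around 100m but spikes to 400m:
samples = [90, 110, 100, 95, 400, 105, 98, 102, 100, 97]
print(recommend_request(samples))
```

In practice a platform would feed recommendations like this back into each container's resource requests continuously, rather than leaving engineers to tune static values by hand.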
"The only way to free engineers from ongoing, repetitive configurations and allow them to focus on what truly matters is by completely automating resource management down to the smallest building block: the single container," said Shafrir. "By employing AI, the ScaleOps platform is context-aware and autonomously handles resource management for engineers, lowering infrastructure costs and delivering better performance."
The Seed and Series A funding rounds were led by Lightspeed Venture Partners, NFX and Glilot Capital Partners.
Edited by Alex Passett