Edge Delta rakes in $63M for its distributed approach to data observability
Databases are growing at an exponential rate these days, and so when it comes to real-time data observability, organizations are often fighting a losing battle if they try to run analytics or any observability process in a centralized way. Today a company called Edge Delta that’s built an observability platform from a different perspective — based on edge computing — is announcing a round of funding to double down on its business. The startup has raised $63 million, money that it will be using both to expand how it integrates with different services — it already supports some 50 technologies — and to expand its business overall.
Quiet Capital is leading the round, with BAM Elevate, Earlybird Digital East, Geodesic Capital, Kin Ventures, strategic backer ServiceNow and previous backers Menlo Ventures, MaC Venture Capital, and Amity Ventures also participating. The round follows on from a $15 million Series A less than a year ago, in June of 2021.
Edge Delta aims its tools at DevOps, site-reliability engineering and security teams, groups that analyze logs, metrics, events, traces and other large troves of data, often in real time, to do their work. Modern architectures make this complicated, with information potentially distributed across Kubernetes containers, Lambda functions, ECS, EC2 and more. Typical analytics services are built around sending data to the cloud and analyzing it centrally, but that approach becomes untenable as data troves grow, especially when the aim is real-time analytics.
Edge Delta is working in a space that already has a number of significant players, including the likes of Splunk, New Relic and Datadog. In fact, Splunk's former CTO led Menlo's first investment into Edge Delta, which says something about how the company's different approach is received by peers in the space. (But Ozan Unlu, the founder and CEO of Edge Delta, was quick to tell me that he doesn't see his startup as a direct competitor to Splunk, as is sometimes implied: "No, we partner with Splunk to get the most out of it!" he exclaimed.)
Typically, as we've pointed out before, observability services use agents that sit and run on a customer's machine, compress the data, encrypt it and then send it on to its final destination. Edge Delta has built an agent that begins the analysis at the local level, including letting organizations run machine learning modules on those nodes, producing results that are specific to that database and typically returned faster.
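The idea of pushing a first pass of analysis down to the node can be sketched roughly as follows. This is a minimal, hypothetical illustration: the function names, log format and summary shape are assumptions for the example, not Edge Delta's actual agent or API.

```python
import json
import re
from collections import Counter

# Hypothetical edge-side agent step: summarize raw log lines locally
# and forward only a compact aggregate upstream, rather than shipping
# every line to a central service for analysis.

LEVEL_RE = re.compile(r"\b(DEBUG|INFO|WARN|ERROR)\b")

def summarize_locally(log_lines):
    """Count log levels on this node; return a small summary payload."""
    counts = Counter()
    for line in log_lines:
        match = LEVEL_RE.search(line)
        counts[match.group(1) if match else "UNKNOWN"] += 1
    return {"total": sum(counts.values()), "by_level": dict(counts)}

logs = [
    "2022-06-01T10:00:01 INFO request served in 12ms",
    "2022-06-01T10:00:02 ERROR upstream timeout",
    "2022-06-01T10:00:03 INFO request served in 9ms",
]

# Only this small JSON summary leaves the node, not the raw logs.
payload = json.dumps(summarize_locally(logs))
print(payload)
```

The point of the sketch is the shape of the data flow: what crosses the network is a few bytes of aggregate per node, which is what makes real-time analysis tractable at scale.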
“Our special sauce is in this distributed mesh network of agents,” Unlu said. “It makes us much more unique.”
Edge Delta provides a second layer of observability and analysis that combines analytics from across a system after those local graphs have been built. The bottom line, the company believes, is that the results are faster and more accurate, and put less strain on an organization's resources overall.
The company works with big companies that need to handle large amounts of data, typically across hybrid environments spanning containers and clouds, in real time. Customers include Super League Gaming, the AI-based screening startup Fama Technologies, Panasonic, WebScale, T-Mobile, VMware, and more. Unlu said that using Edge Delta for observability can cut the mean time to resolve critical issues "from hours and days to minutes to solve a production outage."
In that regard, it's also touching on a very important theme that continues to grow in enterprise: the increasing role of automation in handling some of the bigger tasks in DevOps, security and site reliability, so that engineers have more time to focus on the parts of the job that only they can do.
It also means that these teams can now analyze all of their data rather than just parts of it, because they are doing it at the edge. (Using the other approach, there is just too much to upload all of it.) Because it’s automated in real time, Unlu said, “we are not forcing anyone to try to predict the future. When you distribute queries upstream, you are no longer forced to neglect parts of your data. It pains me to think about customers neglecting portions of their data because of financial [or operational] limitations.”
That approach and its traction so far are compelling enough that investors are knocking.