120ms vs 1.6s: VAST Data Draws the Line
The data stack is starting to show its seams, and the people closest to it can feel the tension before the dashboards even load. What used to pass as “modern” now stalls under real-time pressure, and the gap between ingestion and insight is no longer a technical inconvenience. It is becoming a business liability.
For years, the lakehouse has been sold as the great unifier. Store everything cheap, query it later, stitch the rest together overnight and call it modern. It worked, until the world stopped waiting. AI systems do not batch their curiosity. Markets do not pause for compaction jobs. And suddenly the gap between “data stored” and “data usable” feels less like engineering debt and more like strategic drag.
That is the pressure sitting underneath March 24, 2026, when VAST Data hosts its live session, Beyond the Data Lake: Benchmarking the VAST DataBase for Real-Time Lakehousing. Not a keynote circus. Not a product parade. A benchmark conversation, which is a polite way of saying someone is finally willing to put numbers where opinions have been living rent-free.
The backdrop matters. Coming out of VAST Forward 2026 in Salt Lake City, where VAST AI OS 5.5 stepped forward with native SQL analytics, Kubernetes integration, and a deeper push into unified data infrastructure, this webinar reads like the follow-through. Less vision, more proof. Less promise, more pressure test.
At the center of that tension is a single claim VAST has already put into the wild. On a 10 billion row table, point queries clocking in at roughly 120 milliseconds, compared to 1.6 seconds on an Apache Iceberg-based approach. Same problem space, very different tempo. In a world chasing real time, that delta is not academic. That is the difference between reacting and explaining why you did not.
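For readers who have not run a benchmark like this, a point query is a single-row lookup by key, and the headline number is simply the wall-clock time of that lookup. The sketch below is illustrative only: it uses SQLite as a stand-in engine and a toy table, not VAST's actual harness or the 10 billion row dataset, but it shows the shape of the measurement being argued over.

```python
import sqlite3
import time

# Illustrative stand-in: a small in-memory SQLite table with a primary
# key index, playing the role of the much larger tables in the benchmark.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    ((i, f"row-{i}") for i in range(100_000)),
)

def point_query_latency_ms(key: int) -> float:
    """Time one indexed single-row lookup, in milliseconds."""
    start = time.perf_counter()
    row = conn.execute(
        "SELECT payload FROM events WHERE id = ?", (key,)
    ).fetchone()
    assert row is not None  # a point query hits exactly one row
    return (time.perf_counter() - start) * 1000

# Measure a few lookups and report the median, the way latency
# claims are usually summarized.
latencies = [point_query_latency_ms(k) for k in (42, 9_999, 73_214)]
print(f"median point-query latency: {sorted(latencies)[1]:.3f} ms")
```

The 120 ms versus 1.6 s framing is this measurement, repeated at scale: same query shape, different engine underneath.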
Jason Russler, Technical Director, Alliances at VAST Data, sits close to the core of this narrative as the author of the company’s transactional lakehousing analysis. He is not positioned as a mascot for the idea. He is the one who wrote the math behind it. And while no formal speaker sheet is public, his fingerprints are already on the argument being made.
Jeff Denworth, Co-Founder at VAST Data, has been carrying the broader thesis across stages like SC25 and beyond, consistently pushing the idea that the architecture itself is the constraint. Not the tools layered on top. Not the queries. The foundation.
What VAST is really challenging here is the comfort zone. Apache Iceberg and the modern lakehouse ecosystem solved governance and scale, but they still lean on a rhythm that assumes time is available. VAST is arguing that time is the one resource no longer negotiable.
So this session becomes less about a database and more about a question. If your system cannot handle updates, queries, and decisions in the same breath, is it still infrastructure, or just well-organized delay?
Because the next wave of builders is not choosing tools based on elegance. They are choosing based on latency, concurrency, and whether the system blinks when reality speeds up. And March 24, 2026 feels like one of those moments where the room, even a virtual one, quietly decides what it is willing to tolerate next.