ElastixAI Raises $18M in Seed Funding for FPGA-Based AI Inference Platform
ElastixAI just walked out of stealth with $18M in seed funding and a message that lands like a clean right hook to the GPU status quo. Seattle stays swinging.
First, respect where it is due. Congratulations to Mohammad Rastegari, Co-founder and CEO, and Saman Naderiparizi, Co-founder and CTO, along with co-founder Mahyar Najibi. Builders who have been here before. Xnor.ai alumni. Apple operators. Machine learning minds forged in real production, not slide decks. When people who have already sold one AI company for around $200M decide to run it back, you pay attention.
The $18M seed round was announced on February 24, 2026, and comes on top of the roughly $16M raised in 2025 in a round led by FUSE, with participation from Catapult Ventures, Tyche Partners, Liquid 2 Ventures, and DNX Ventures. That brings publicly disclosed funding to about $34M. Capital with conviction. Investors who understand that infrastructure is where the real leverage lives.
ElastixAI is not building another model. They are building the engine room. A software platform that converts off-the-shelf FPGA-based servers into high-efficiency AI supercomputers for generative AI. Large language models included. The claim is bold and specific. Up to 50x lower total cost of ownership for LLM inference compared to legacy GPU workflows. Up to 80% reduction in power consumption.
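For a sense of scale, here is a minimal back-of-the-envelope sketch of what those headline figures would imply. The baseline numbers ($2.00/hr GPU cost, 700 W draw) are purely hypothetical assumptions for illustration, not ElastixAI disclosures:

```python
# Back-of-the-envelope math on the claimed figures.
# Baseline GPU numbers below are hypothetical, for illustration only.

GPU_COST_PER_HOUR = 2.00   # hypothetical GPU inference cost, $/hr
GPU_POWER_WATTS = 700      # hypothetical GPU board power draw, W

TCO_REDUCTION = 50         # "up to 50x lower TCO" claim
POWER_REDUCTION = 0.80     # "up to 80% reduction in power" claim

fpga_cost_per_hour = GPU_COST_PER_HOUR / TCO_REDUCTION
fpga_power_watts = GPU_POWER_WATTS * (1 - POWER_REDUCTION)

print(f"Implied FPGA cost:  ${fpga_cost_per_hour:.2f}/hr")  # $0.04/hr
print(f"Implied FPGA power: {fpga_power_watts:.0f} W")      # 140 W
```

Even at the "up to" ceiling, the point of the exercise is clear: if the curve bends anywhere near that far, inference stops being the dominant line item.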
The secret sauce sits in proprietary post-training optimizations and tight ML hardware co-design. Instead of waiting years for custom silicon to catch up to model innovation, ElastixAI uses reconfigurable FPGAs to keep pace. Models evolve. Hardware adapts. Economics improve. The name fits. Elastic is not just branding. It is architecture.
And here is the strategic subtext. Generative AI is not constrained by imagination. It is constrained by inference cost and energy. If you can materially bend that curve, you do not just compete. You redefine who can afford to deploy intelligence at scale.
This is aimed squarely at enterprise partners, data center operators, and AI model providers who feel the burn of GPU-heavy stacks. A drop-in replacement for many legacy inference workflows. Less power. Lower TCO. Same ambition.
ElastixAI is betting that the next chapter of AI will not be won by the biggest model alone, but by the smartest infrastructure underneath it. And with Mohammad Rastegari and Saman Naderiparizi steering the build, that bet does not feel theoretical. It feels engineered.