Jellyfish’s AI Engineering Trends Report Shows the AI Coding Wars Are Already Operational

Jellyfish’s March 2026 AI Engineering Trends report analyzed 37M pull requests across 1,000+ companies, revealing how AI coding adoption is reshaping software engineering operations.

Software engineering has entered its casino era. Chips flying across the table. CTOs pacing like bond traders after espresso number 6. One company claims AI cut delivery timelines in half. Another swears autonomous agents are already handling production work while the humans sit back pretending they planned it all along.

Then Jellyfish walked in with receipts. The Boston-based software engineering intelligence company released its March 2026 AI Engineering Trends benchmark, initially built on engineering telemetry tied to more than 700 companies, 200,000 engineers, and 20M pull requests. Subsequent benchmark updates expanded the dataset to more than 1,000 companies and 37M PRs. That scale matters because the AI coding conversation has suffered from a credibility problem. Too many demos. Too many LinkedIn philosophers pretending autocomplete is artificial general intelligence.

Jellyfish brought operational data instead of vibes. The benchmark found that more than half of the companies in the study now use AI coding tools consistently. Top-quartile adopters generated roughly 2x the pull request throughput of lower adopters over a 3-month period. Autonomous agent activity is still in its early stages, but the trajectory is climbing fast enough to make engineering leaders rethink workforce structure, review pipelines, and software governance. This is no longer a debate about whether AI can write code. The market has already moved beyond that question. The real question now is whether AI can materially change how software organizations operate, scale, and compete.

What Happened

Jellyfish published the March 2026 edition of its AI Engineering Trends benchmark as part of a broader push into AI impact measurement for enterprise engineering teams. The benchmark aggregates telemetry across engineering workflows, AI coding tools, pull request activity, and software delivery systems. According to Jellyfish, the report evaluates AI adoption patterns across tools including GitHub Copilot, Cursor, Claude, Gemini Code Assist, Amazon Q Developer, Windsurf, and Augment.

Nicholas Arcolano, Ph.D., Head of Research at Jellyfish, emerged as the central public voice behind the report. That distinction matters because the rollout was notably research-led rather than founder-led. In a startup ecosystem addicted to executive theater, Jellyfish positioned operational data as the lead character. Arcolano stated that AI coding tools have become “the default option for engineering teams” and pointed to measurable correlations between deeper AI integration and higher delivery throughput. The company framed the benchmark as continuously updated rather than static. Early benchmark snapshots referenced 20M pull requests and 700+ companies, while later updates expanded the dataset to 37M PRs and more than 1,000 companies, signaling that Jellyfish intends to operate the benchmark as an evolving industry measurement layer rather than a one-time marketing asset.

Why Jellyfish’s AI Benchmark Matters

Software engineering leaders are dealing with a problem that feels strangely similar to financial markets during the early Bloomberg Terminal era: massive amounts of information, very little context. Teams are deploying AI coding assistants at high speed while executives struggle to determine whether productivity gains are real, temporary, inflated, or quietly destructive. Jellyfish is trying to become the system that measures the difference.

The company positions itself as a software engineering intelligence and AI impact platform. In practical terms, Jellyfish wants to own the operational dashboard enterprises use to evaluate AI adoption, engineering throughput, delivery performance, and emerging agentic workflows. That positioning matters because the AI coding market is quickly becoming crowded with tool vendors selling acceleration while few companies are focused on independent measurement infrastructure. The benchmark’s operational metrics go beyond simplistic code-generation claims. Jellyfish tracks AI adoption rates, pull request throughput, autonomous agent contribution, token consumption, revert rates, and engineering workflow activity, creating a much more nuanced picture of software development economics.
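For a sense of what those metrics look like in practice, here is a minimal sketch of how throughput and revert-rate indicators could be computed from raw pull request telemetry. The PullRequest record, its field names, and the ai_assisted flag are hypothetical illustrations for this article, not Jellyfish’s actual schema or API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical PR telemetry record. Field names are illustrative only,
# not Jellyfish's actual data model.
@dataclass
class PullRequest:
    author: str
    merged_on: date
    ai_assisted: bool  # e.g., inferred from coding-tool telemetry
    reverted: bool     # later undone by a revert PR

def throughput_and_revert_rate(prs: list[PullRequest]) -> dict[str, float]:
    """Compare merged-PR volume and revert rate for AI-assisted vs. other work."""
    metrics: dict[str, float] = {}
    for label, group in (
        ("ai_assisted", [p for p in prs if p.ai_assisted]),
        ("other", [p for p in prs if not p.ai_assisted]),
    ):
        merged = len(group)
        reverted = sum(p.reverted for p in group)  # True counts as 1
        metrics[f"{label}_merged"] = merged
        metrics[f"{label}_revert_rate"] = reverted / merged if merged else 0.0
    return metrics

# Toy example: two AI-assisted PRs (one later reverted) and one unassisted PR.
sample = [
    PullRequest("ana", date(2026, 3, 2), ai_assisted=True, reverted=False),
    PullRequest("ben", date(2026, 3, 3), ai_assisted=True, reverted=True),
    PullRequest("cara", date(2026, 3, 4), ai_assisted=False, reverted=False),
]
print(throughput_and_revert_rate(sample))
```

The point is not the arithmetic. It is that pairing volume metrics like PR throughput with quality metrics like revert rate is what separates operational measurement from simple code-generation counting.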

Customer Signals From Box and TaskRabbit

The strongest validation in the report came from external operators rather than internal executives. Julia Gan, Sr. Director of Technical Program Management and Engineering Chief of Staff at Box, described using Jellyfish to gain visibility into AI adoption and engineering impact across development teams. Tom Osowski, Engineering Manager of Partnerships at TaskRabbit, cited measurable operational improvements after integrating Augment with Jellyfish telemetry.

According to TaskRabbit, the integration contributed to a 50% reduction in issue cycle time, along with a doubling of both deployment rates and epics resolved per month. That combination of workflow telemetry and customer proof points gives Jellyfish stronger positioning than many of the AI analytics vendors currently flooding the market. A surprising number of AI infrastructure companies still operate like magicians protecting tricks. Jellyfish operates more like a market analyst publishing indicators. Different psychology. Different business strategy.

The Bigger Market Shift Behind AI Coding Adoption

The broader software engineering market is entering an uncomfortable transition phase. For nearly 2 years, the AI coding conversation revolved around novelty, demo videos, prompt engineering threads, and junior developers posting screenshots of generated snake games while venture capitalists declared the end of software engineering as a profession every Thursday afternoon. Meanwhile, large engineering organizations started operationalizing AI quietly.

That quiet operationalization is what makes Jellyfish’s benchmark strategically important. The report signals that enterprise adoption has already moved beyond experimentation and into measurable workflow integration. The next phase will likely revolve around agent orchestration, governance, workflow automation, review systems, and cost management. The benchmark already hints at that future. Jellyfish reported accelerating autonomous agent activity among advanced adopters, suggesting software teams are beginning to move from AI-assisted development toward partially autonomous engineering workflows.

That changes organizational structure. It changes management. It changes hiring. It probably changes what junior engineering roles look like over the next 5 years. Somewhere inside those 37M pull requests sits the tension hanging over the entire software industry: what happens when AI systems stop functioning like assistants and start functioning like labor? That conversation is approaching much faster than most executives publicly admit.

Frequently Asked Questions

What is Jellyfish?

Jellyfish is a Boston-based software engineering intelligence company that provides operational analytics and AI impact measurement for engineering organizations.

What is the AI Engineering Trends report?

The AI Engineering Trends report is Jellyfish’s benchmark study analyzing AI coding adoption, engineering throughput, and software delivery metrics across enterprise engineering teams.

How large was Jellyfish’s dataset?

The March 2026 benchmark initially analyzed data tied to 700+ companies, 200,000 engineers, and 20M pull requests. Later updates expanded the dataset to more than 1,000 companies and 37M PRs.

Who led the research behind the benchmark?

Nicholas Arcolano, Ph.D., Head of Research at Jellyfish, served as the primary research spokesperson for the benchmark.

Which companies were referenced in the report?

The benchmark and related announcement referenced companies including Box, TaskRabbit, DraftKings, Keller Williams, and Blue Yonder.

What did Jellyfish find about AI coding adoption?

Jellyfish found that more than half of companies in the study consistently use AI coding tools and that top AI adopters achieved roughly 2x pull request throughput compared to lower adopters.