
AI Is Changing What Customers Need From a Database. MongoDB 8.3 Is Built for It

May 7, 2026 ・ 3 min read

Today, we announced at .local London that MongoDB 8.3 is built for the speed AI demands—and our customers can't afford to wait.

The data layer has to move at AI speed

The old contract between databases and the applications on top of them was simple: databases improve slowly, and architectures evolve around them. AI has changed that contract.

The workloads our customers are shipping today—agents retrieving at sub-100ms, retry storms hitting in milliseconds, multi-region deployments that can't trade compliance for latency—were edge cases 18 months ago. Now they're the baseline.

MongoDB 8.3, generally available today, is our fourth significant release in 19 months. These releases compound: customers running on 8.0 have seen 36% faster reads and 59% higher update throughput, and 8.3 adds another 35% to write throughput, 45% to read throughput, and 15% to ACID transaction throughput over 8.0, all without changing a line of application code.

Enterprises like Adobe, running the most demanding AI in production, have made the requirements clear: sub-100ms retrieval, sub-second context updates, zero downtime. That's what MongoDB Atlas is built for.

That's the commitment: when the data platform keeps pace, our customers can focus on shipping.

MongoDB 8.3 brings 35% more writes, 45% more reads, and 15% more ACID transactions.

Run anywhere. Stay secure.

Where you run your agents isn't just an infrastructure decision anymore. It's a critical compliance and security decision as well. Most platforms force a trade-off between global reach and the control you need; with 130 regions across AWS, Google Cloud, and Microsoft Azure, Atlas doesn't make you compromise. Atlas can even run a single cluster that spans multiple providers simultaneously.
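To make the multi-cloud claim concrete, a cluster spanning providers can be declared with the community mongodbatlas Terraform provider. The sketch below is illustrative only: the resource and field names follow the provider's `mongodbatlas_advanced_cluster` schema, but the project ID, cluster name, regions, and instance sizes are placeholder assumptions; check the provider documentation for your version before using it.

```terraform
# Hedged sketch: one replica set with electable nodes on two clouds.
# Values here (name, regions, tier) are illustrative, not prescriptive.
resource "mongodbatlas_advanced_cluster" "multicloud" {
  project_id   = var.project_id      # your Atlas project ID
  name         = "multicloud-demo"   # placeholder cluster name
  cluster_type = "REPLICASET"

  replication_specs {
    # Two electable nodes on AWS (highest priority region)...
    region_configs {
      provider_name = "AWS"
      region_name   = "US_EAST_1"
      priority      = 7
      electable_specs {
        instance_size = "M10"
        node_count    = 2
      }
    }
    # ...and one on Google Cloud, for an odd-sized voting majority.
    region_configs {
      provider_name = "GCP"
      region_name   = "CENTRAL_US"
      priority      = 6
      electable_specs {
        instance_size = "M10"
        node_count    = 1
      }
    }
  }
}
```

Note the electable node counts sum to three, keeping an odd number of voting members across the two providers.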

Avalara and Iron Mountain both took the cloud-agnostic path, modernizing on Atlas so they could meet their customers wherever they ran. The deployment shape changes. The data layer underneath doesn't.

What's shifted in the last year is the pressure on both ends. Customers want retrieval and embedding capabilities closer to their users, in more places, on more clouds. They also want more authority over the residency of their data. Those two demands used to be in tension. They don't have to be.

Cross-region connectivity for AWS PrivateLink, generally available today, is the clearest example. Traffic between Atlas clusters in different AWS regions stays on the AWS private backbone, with no public internet exposure. Security and compliance teams get the guarantees they need. Engineering teams design around fewer edge cases. Nobody has to make a trade-off.

Built to keep pace

Every capability in this post addresses friction that technical leaders have been engineering around for years. They solve different problems, but share one objective: eliminating the infrastructure trade-offs that slow AI down on its way to production.

The AI workloads our customers will run 18 months from now will look different from the ones they run today. That's not a risk. That's the point. Four significant releases in 19 months isn't a marketing number. It's a signal of how seriously we take the current pace of change, and of our commitment to staying ahead of it for our 65,200+ customers.

Getting agents to retrieve the right information, accurately and at speed, is where embeddings and memory come in. Pablo Stern covers that in his blog, The Bottleneck in Enterprise AI Isn't the Model. It's the Data.
