Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 6 min | 1001 words

I’m convinced that in hell, there is a special place dedicated to making engineers fix flaky tests.

Not broken tests. Not tests covering a real bug. Flaky tests. Tests that pass 999 times out of 1000 and fail on the 1,000th run for no reason you can explain with a clean conscience.

If you've ever shipped a reasonably complex distributed system, you know exactly what I'm talking about. RavenDB has, at last count, over 32,000 tests that are run continuously on our CI infrastructure. I just checked, and in the past month, we’ve had hundreds of full test runs.

That is actually a problem for our scenario, because with that many tests and that many runs, the law of large numbers starts to apply. Assuming we have tests that have 99.999% reliability, that means that 1 out of every 100,000 test runs may fail. We run tens of millions of those tests in a month.

In a given week, something between ten and twenty of those tests will fail. Given the number of test runs, that is a good number in percentage terms. But each such failure means that we have to investigate it.
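
As a back-of-envelope check, here is that math made concrete (the run count and reliability figures are just the round numbers above, not measured values):

```python
# Back-of-envelope: how much flaky-test noise do the numbers above imply?
# (Illustrative round figures, not measured values.)
runs_per_month = 10_000_000       # "tens of millions" of individual test executions
reliability = 0.99999             # 99.999% -> 1 spurious failure per 100,000 runs

failures_per_month = runs_per_month * (1 - reliability)
failures_per_week = failures_per_month / 4.35

print(f"~{failures_per_month:.0f} spurious failures per month, "
      f"~{failures_per_week:.0f} per week")
# -> ~100 per month, ~23 per week: the same ballpark as the ten-to-twenty
#    weekly failures mentioned above, and every one needs a human to triage it.
```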

Those test failures are expensive. Every ticket is a developer staring at logs, trying to figure out whether this is a genuine bug in the product, a bug in the test itself, or something broken in the environment. In almost all cases, the problem is with the test itself, but we have to investigate.

A test that consistently fails is easy to fix. A test that occasionally fails is the worst.

With a flaky test, you don't just fix something and move on. You spend two days isolating it. Reproducing it. Building a mental model of a race condition that only manifests under specific timing, load, and cosmic alignment.

The tests that do this are almost always the integration tests. The ones that test complex distributed behavior across many parts of the system simultaneously. By definition, they are also the hardest to reason about.

The fact that, in most cases, those test failures add nothing to the product (i.e., they didn’t actually discover a real bug) is just crushed glass on top of the sewer smoothie. You spend a lot of time trying to find and fix the issue, and there is no real value except that the test now consistently passes.

We have a script that runs weekly, collects all test failures, and dumps them into our issue tracker. This is routine maintenance hygiene, to make sure we stay in good shape.

I was looking at the issue tracker when the script ran, and the entire screen lit up with new issues.

Just looking at that list of new annoyances was enough to ruin my mood.

And then, without much deliberate planning, I did something dumb and impulsive: I copy-pasted all of those fresh issues into Claude and told it to fix them. Then I went and did other things. I had very low expectations about this, but there was not much to lose.

A few hours later, I got a notification about a pull request. To be honest, I expected Claude to mark the flaky tests as skipped, or remove the assertions to make them pass.

I got an actual pull request, with real fixes, to my shock. Some of them were fixes applied to test logic. Some were actually fixes in the underlying code.

And then there was this one that stopped me cold. Claude had identified that in one of our test cases, we were waiting on the wrong resource. Not wrong in an obvious way — wrong in the kind of way that works perfectly 99.9998% of the time and silently fails 0.0002% of the time.

The (test) code looked right. We were waiting for something to happen; we just happened to wait on the wrong thing, and usually the value we asserted on was already set by the time we were done waiting.
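
To make that failure mode concrete, here is a hypothetical sketch of the bug shape (illustrative only, not the actual RavenDB test): the test waits on the wrong signal, and it almost always gets away with it.

```python
import threading


class Indexing:
    """Toy stand-in for a component that stores a document and then, asynchronously,
    updates a statistics counter."""

    def __init__(self):
        self.document_stored = threading.Event()
        self.stats_updated = threading.Event()
        self.count = 0

    def process(self, doc):
        def work():
            self.document_stored.set()   # signal: the document has been stored
            # ... unrelated bookkeeping happens here, taking a variable amount of time ...
            self.count += 1
            self.stats_updated.set()     # signal: the value the test asserts on is visible

        threading.Thread(target=work).start()


def test_count_is_updated():
    indexing = Indexing()
    indexing.process({"id": "users/1"})

    # The flaky version: wait on the *store* signal, then assert on the stats.
    # The counter has almost always been bumped by the time the wait returns,
    # so this passes the overwhelming majority of runs and fails only on unlucky timing.
    indexing.document_stored.wait(timeout=30)
    assert indexing.count == 1

    # The fix is a one-liner: wait on the thing you actually assert on.
    # indexing.stats_updated.wait(timeout=30)
```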

Claude found it. In one pass. For the price of a subscription I was already paying. For reference, that single “let me throw Claude at it” decision probably saved enough engineering time to cover the cost of Claude for the entire team for that month.

Let me be precise about what happened and what didn't. Claude did not fix everything. Some of the "fixes" it produced were pretty bad, surface-level patches that didn't address the real cause, or things that were legitimately out of scope.

You still need an engineer reviewing the output. And you still need judgment.

But it got things fixed, quickly, without needing two days to context-switch into the problem space. And the things it did fix well, it fixed really well.

The work it compressed would have realistically taken one developer a week or two to grind through — and that's assuming you could get a developer to focus on it for that long in the first place. Flaky test investigation is the kind of work that quietly kills team morale.

Engineers start dreading CI. They start treating red builds as background noise. That's how quality degrades silently. Leaving aside new features or higher velocity, being able to offload the most annoying parts of the job to a machine is… wow.

Based on this experience, we're building it into our actual workflow as an integral part of how we handle test maintenance. Failures are collected and routed to Claude, which takes a first pass at triage and repair. Then we create an issue in the bug tracker with either an actual fix or a summary of Claude’s findings.
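
A rough sketch of that pipeline, under the assumption that the agent is driven through the Claude Code CLI in non-interactive mode (the failure collection and issue tracker APIs here are made-up placeholders, not our internal tooling):

```python
import subprocess


def triage_flaky_tests(failures, issue_tracker, repo_path):
    """Hypothetical outline: collect test failures, let the agent take a first pass at
    triage and repair, then file an issue seeded with its output for human review."""
    for failure in failures:
        prompt = (
            f"This test is flaky: {failure.test_name}\n"
            f"Recent failure output:\n{failure.log}\n"
            "Decide whether the problem is in the test, the product code, or the "
            "environment, and propose a fix or a summary of your findings."
        )

        # `claude -p` runs Claude Code non-interactively and prints its answer.
        result = subprocess.run(
            ["claude", "-p", prompt],
            cwd=repo_path, capture_output=True, text=True,
        )

        # A human still reviews everything; the agent's output only seeds the issue.
        issue_tracker.create_issue(
            title=f"Flaky test: {failure.test_name}",
            body=result.stdout,
            labels=["flaky-test", "agent-triaged"],
        )
```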

By the time a human reviews this, significant progress has already been made.

It doesn't replace the engineer. But it means the engineer is doing the interesting part of the work: judgment, review, architectural reasoning. Skipping the part that requires staring at race condition logs until your vision blurs.

This isn’t the most exciting aspect of using a coding agent, I’m aware. But it may be one of the best aspects in terms of quality of life.

time to read 6 min | 1198 words

No, the title is not a mistake, nor did I use my time travel pass to give you insights from the future. Bear with me for a moment while I explain my thinking.

From individual contributor to oversight role

I started writing RavenDB in a spare bedroom, which turned into an office. The project grew from a spark in my head that wouldn’t let me sleep into a major project in very short order.

Today, I want to talk about a pretty important stage that happened during that growth phase. Somewhere between having five and ten full-time developers working on RavenDB, I lost the ability to keep track of every single line of code that was going into the project.

I had been the primary developer for years at this point, I wrote the majority of the code, and I was the person making all the key decisions in the project. And then, gradually, I… wasn't that guy anymore.

There were too many moving parts, too many developers, too many decisions happening in parallel for me to have my hands on all of it. That was the whole point of growing the team, dividing the tasks among the team members, and getting good people to do things so I didn’t have to do it all myself.

What I didn't expect was how much it would bother me. Moving from being the primary developer to a supervisory role didn’t mean that I lost the ability to write code. In fact, in many cases, I could “see” what the solution for each issue should be.

I just didn’t have the time to do that, nor the capacity to sit with every single developer on every single issue and craft the right way to solve it. I'd hand a feature to a developer knowing that the way they were going to handle it would not be mine.

That doesn’t mean it would be wrong, but it wouldn’t be the same. It might need a review cycle or two to get to the right level for the product, or they wouldn’t consider how it fits into the grand scheme of things, etc.

And let’s not talk about the time estimates I got. I’m willing to assume that my personal timing estimates are highly subjective and influenced by my deep familiarity with the codebase.

But still. Multiple days for something that felt like it should be a two-hour job was hard to sit with.

I carried around a background level of frustration for quite some time. It killed me that the pace of development wasn’t up to what I wanted it to be. “If I could just have the time to sit and write this”, I kept thinking, “we would be done by the end of the week.”

There was progress, to be clear, but nothing was moving fast enough. Everywhere I looked, we had stalled.

And then something happened. It didn’t happen all at once, but in the space of a month or two, features started to land. Each team had been heads-down on something for quite a while, and by some coincidence of timing, they all finished around the same time.

Suddenly, we moved from “we have nothing to ship” to “we can’t have so many new features all at once”. I realized that I would be able to ship any single one of those features faster, for sure. I could do two new features, maybe even three, in that same time frame. That would require heads-down coding for the entire duration, of course.

Reading that last paragraph again, I have to admit that I may be letting some hubris color my perception 🤷😏.

I wouldn’t be able to deliver the sheer quantity of features that the team was able to deliver.

What had felt like months of stagnation turned out to be parallelism in action.

Yes, some of the code wasn't the same code that I would write. And some of the architectural decisions weren't the ones I'd have made. That didn’t make them wrong, mind. And those developers were working on things I was not working on. And the sum total of what got built was something I could never have done solo.

Treating coding agents as junior developers?

I think about that experience constantly now, because I'm living a version of it again, except the new team member is Claude. Working with AI coding agents today feels remarkably like working with a junior developer who is also a savant.

They've read everything. They know an enormous amount. They can produce working code quickly and confidently across a staggering range of domains. And yet they're also genuinely ignorant in ways that will surprise you: missing context, misreading intent, optimizing for the wrong thing, occasionally producing something that is confidently and completely broken.

This is not a criticism. This is just what it's like. And I've dealt with this before. There are clear parallels between mentoring junior engineers and looking at the output from an AI agent.

There is an assumption that you need to get perfect output from a coding agent. But you are not likely to get perfect output from a human developer. Even experienced developers benefit greatly from reviews, guidance, etc. Junior developers need more of that, of course, but they can still bring value, even if their output goes through several iterations.

For coding agents to bring real value, you need to consider them in the same light.

The shift that happened with my developer team is the same shift that's happening now with AI agents.

Instead of writing every line yourself, you start spending time on the bigger picture: here's the overall direction, here's the architectural constraint, here's what done looks like. Then you review the outputs.

Talking to a coding agent isn’t that different from discussing a feature with a dev and reviewing their code days later, except that the agent delivers its output in the time it takes to get coffee.

The fact that this cycle is done in a short amount of time means that you still have all the knowledge in your head. You can catch drift before it becomes technical debt.

The cost of going in the wrong direction is greatly reduced, which means that you can be far more radical about how you approach these tasks.

Unnatural impulses as a developer

I wonder if a lot of developers are facing challenges in this area specifically because they don’t have the managerial experience needed for this new aspect of the work.

I have been writing code with Claude recently. And the short feedback cycle means that I’m loving it. I'm not abdicating the technical judgment, mind. I'm applying it differently.

I'm writing the high-level design, not the implementation. I'm doing the review, not the first draft. And I'm being honest with myself that the output, while it isn’t always what I would write, is covering ground I simply would not have covered otherwise.

I have been managing developers for a long time now, so this feels quite natural. I also remember how difficult that transition was for me at the time.

For those who want to better understand how they can get the most value from coding agents, you are probably better off looking into project management theory rather than optimizing your agents.md file.

time to read 4 min | 650 words

One of our team leads has been working on a major feature using Claude Code. He's been at it for a few days and is nearly done. To put that in context: this feature would normally represent about a month of a senior developer's time.

He did the backend work himself — working with Claude to build it out, applying his knowledge of how the system should behave, reviewing, adjusting, and iterating. When I asked him about the frontend, he said: "I'm going to let Matt’s Claude handle that."

Context: Matt is the frontend team lead.

Note the interesting phrasing. He didn't say "I'll do the UI later" or "Claude’ll handle the UI." He deferred to the frontend lead who has the domain expertise to drive that part.

That's not a throwaway comment. That's an important statement about how work should be divided in the age of AI agents.

Here's the thing: I've told Claude to build a UI for a feature, pointed it at the codebase, and it figured out how the frontend is structured, what patterns we use, and generated something I could work with. It wasn’t a sketch or a wireframe diagram, it was actually usable.

I got a functional UI from Claude in less time than it would take to write up the issue describing what I want.

That UI was enough for me to explore the feature, do a small demo, etc. I’m not a frontend guy, and I didn’t even look at the code, but I assume that the output probably matched the rest of our frontend code.

We won’t be using the UI Claude generated for me, though. The gap in polish between what I got and what a real frontend developer produces is enormous. I got something I could play with, but it was very evident that it wasn’t something that had received real attention.

For the time being, it was more than sufficient. The problem is that even leaning heavily on AI, the investment of time for me to do it right would be significant. I'd need to understand our frontend architecture, our conventions, our component library, how state flows, and what our designers expect. All of that would take real time, even with an AI doing most of the code generation.

That is leaving aside the frontend concerns I don’t even know exist, the things I wouldn’t realize I need to handle. I wouldn’t know what to ask the AI about, even if it could do the right thing given the right prompt.

Contrast that with the frontend team. They know the architecture of the frontend, of course, and they know how things should slot together and what concerns they should address. They know when Claude's suggestion is on the right track and when it's going to create a mess three layers down. Effectively, they know the magic incantation that the agent needs in order to do the right thing.

What does this say about AI usage in general? Given two people with the same access to a smart coding agent like Claude or Codex, both performing the same task, their domain knowledge will lead to very different results. In other words, it means that Claude and its equivalents are tools. And the wielder of the tool has a huge impact on the end result.

The role of expertise hasn't diminished. It's shifted. The expert is no longer the person who can produce the artifact. They're the person who can direct the production of the artifact correctly and efficiently. That's a different skill profile, but it's no less valuable and the leverage is higher.

We're still figuring out what this means structurally. But the instinct to say "that's not my domain, let the person who knows it handle the AI that does it" is correct. Domain knowledge determines the quality of the output, even when the AI is doing all the typing.

time to read 6 min | 1146 words

You read the story a hundred times: “I told Codex (or Claude, or Antigravity, etc.) to build me a full app to run my business, and 30 minutes later, it’s done”. These types of stories usually celebrate the new ecosystem and the ability to build complex systems without having to dive into the details.

The benchmarks celebrate "one-shotting" entire applications, as if that's the relevant metric. I think this is the wrong framing entirely. Mostly because I care very little about disposable software, stuff that you stop using after a few days or a week. I work on projects whose lifetime is measured in decades.

AI agent-driven development isn't about the ability to use a one-shot prompt to generate a full-blown app that matches exactly what the user wants. That is a nice trick, but nothing more, because after you generate the application, you need to maintain it, add features (and ensure stability over time), fix bugs, and adjust what you have.

The process of using AI agents to build long-lived applications is distinctly different from what I see people bandying about. I want to dedicate this post to discussing some aspects of using AI agents to accelerate development in long-lived software projects.

Code quality only matters in the long run

The key difference between one-off work and long-lived systems is that we don’t care about code quality at all for the one-off stuff. It's a throwaway artifact. Run it, get your answer, move on. I am usually not even going to look at the code that was generated; I certainly don’t care how it is structured.

If I need to make any changes, or have to come back to it in six months, it is usually easier to just regenerate the whole thing from scratch rather than trying to maintain or evolve it.

When you're talking about an application that will live for a decade or more - or worse, an existing application with decades of accumulated effort baked into it - what happens then? The calculus changes completely. How do you even begin to bring AI into that kind of system?

It turns out that proper software architecture becomes more relevant, not less.

Software architecture as context management for AI

Think about what good software architecture actually gives you: components, layers, clear boundaries, and well-defined responsibilities. The traditional justification is that this lets you make small, careful, targeted changes. You know where to go, and you can change one thing. You slowly evolve things over time. Your changes don't break ten others because not everything is intermingled.

Now think about how an AI operates on a codebase. It works within a context window. That constraint isn't unique to AI; people have it too. There is only so much you can keep in your head, and proper architecture means that you are separating concerns so you can work with just the relevant details in mind.

When your architecture is clean, the AI can focus on exactly the right piece of the system. When it isn't, you're either feeding the AI irrelevant noise or hiding the context it actually needs from it.

Good architecture, it turns out, is also a good AI interface. And the reason this works is the same as for people: it reduces the cognitive load you have to carry while understanding and modifying the system. For AI, we just call it the context window. For people, it is cognitive load. Different term, same concept.
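
As a toy illustration of that point (hypothetical names, not RavenDB's actual structure): a narrow, well-named boundary lets you hand an agent, or a new developer, a single interface and its implementation instead of the whole codebase.

```python
from abc import ABC, abstractmethod


class ReplicationTransport(ABC):
    """A narrow boundary: everything about shipping document batches to another node
    lives behind this interface. An agent asked to 'add retry with backoff to outgoing
    replication' needs only this file in its context, not storage, indexing, or config."""

    @abstractmethod
    def send_batch(self, destination: str, documents: list[dict]) -> None: ...


class TcpReplicationTransport(ReplicationTransport):
    def send_batch(self, destination: str, documents: list[dict]) -> None:
        # Wire-protocol details live here, and only here. In a "ball of mud" design,
        # the same change would drag the rest of the system into the context window,
        # for the AI and for the human reviewer alike.
        ...
```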

Beyond the mechanical benefits, good architecture gives you two things that I think are underappreciated in this conversation.

The first is structural comprehension. You don't need to have every line of a large codebase in your head. But you do need a genuine mental model of how data flows, how components relate, and where things live. That's only possible if the architecture actually reflects the system's intent.

When using AI to generate code, you need to have a proper understanding of the flow of the system. That allows you to look at a pull request and understand the changes, their intent, and how they fit into the greater whole. Without that, you can't meaningfully review the code. You're just rubber-stamping diffs you don't have a hope of understanding.

The second is that the work has shifted. We're moving from "how do I write this code?" to "how do I review all of this code?". Nobody is going to meaningfully maintain 30,000 lines a day of dense AI code. At that point, the codebase has escaped human comprehension, and you've lost the game. This isn’t your project anymore, and sooner or later, you’ll face the Big Decision.

Turtles all the way down

I hear the proposed solution constantly: "I have an agent that writes the code, an agent that tests it, an agent that reviews the reviews, and so on." This is, I think, genuinely insane for anything that matters.

We already have evidence from the field that this doesn’t work. Amazon has had production failures from AI-generated code produced through exactly these kinds of layered-AI pipelines. Microsoft's aggressive approach to AI integration has shown what happens when AI-generated code enters production with minimal meaningful human oversight.

In both of those cases, the “proper oversight” was also provided by AI. And the end result wasn’t encouraging for this pattern of behavior. For critical systems that carry real consequences, "AI supervising AI" is not a thing.

AI works when you treat it as a tool in your hands, not as an autonomous system you've delegated to. An engineer who understands architecture and can look at a diff and say "this is right" or "this is wrong, and here's why" is much more capable with AI than without it.

An engineer who has offloaded comprehension to the machine is flying blind; worse, they are flying very fast directly into a cliff wall.

What should you do about it?

When we treat AI agents as a tool, it turns out that not all that much needs to change. The current processes you have in place (CI/CD, testing, review cycles, etc.) are all about being able to generate trust in the new code being written. Whether a human wrote it or a GPU did is less interesting.

At the same time, we have decades of experience building big systems. We know that a Big Ball of Mud isn’t sustainable. We know that proper architecture means breaking the system into digestible chunks. Yes, with AI you can throw everything together, and it will sort of work for a surprisingly long time. Until it doesn’t.

With a proper architecture, the scope you need to keep track of is inherently limited. That allows you to evolve the system over time and make changes that are inherently limited in scope (and thus reviewable, actionable, etc.).

“The more things change, the more they stay the same.” It is a nice saying, but it also carries a fundamental truth. Using AI doesn’t absolve us from the realities on the ground, after all.

time to read 5 min | 813 words

Like everyone else, we have been using AI in various forms for a while now, from asking ChatGPT to write a function to asking it to explain an error, then graduating to running it on our code in the IDE, and finally to full-blown independent coding assistants.

Recently, we shifted into a much higher gear, rolling it out across most of the teams at RavenDB. I want to talk specifically about what that looks like in practice in real production software.

RavenDB is a mature codebase, with about 18 years of history behind it. The core team is a few dozen developers working on this full-time. We also care very deeply about correctness, performance, and maintainability.

With all the noise about Claude, Codex, and their ilk recently, we decided to run some experiments to see how we can leverage them to help us build RavenDB.

The numbers that got my attention

We started with features that were relatively self-contained — ambitious enough to be real work, but isolated enough that an AI agent could take them end-to-end without stepping on core aspects of RavenDB.

The first one was estimated at about a month of work for a senior developer. We completed it in two days. To be fair, a significant portion of that time was spent learning how to work effectively with Claude as an agent: the ropes, the right discipline, and the workflows, not just the task itself.

The second was estimated at roughly three months for an initial version. It was delivered in about a week. And we didn't just hit the target — we significantly exceeded the planned feature set.

In terms of efficiency, we are talking about a proper leap beyond what we could previously expect.

This isn't vibe coding

I want to be direct about something: this is not "prompt it and ship it." There is a discipline required here. The AI can move very fast, explore a lot of ground, and generate code that looks right, but isn’t. Code ownership and engineering responsibility don't go away; they become much more demanding.

I personally sat and read 30,000 lines of code. I had to understand what was there, push back on decisions, redirect the approach, and enforce the standards that RavenDB has built up over many years.

Those 30,000 lines of code didn’t appear out of thin air. They were the final result of a lot of planning, back and forth with the agent, and incremental steps in the right direction (along with plenty in the wrong one).

To be fair, 30,000 lines of code sounds like a lot, right? About 60% of that is actually tests, and about half of the remaining code is boilerplate infrastructure that we need to have, but isn’t really interesting.

The juicy parts are only around 5,000 lines or so.

In many respects, this isn’t prompt-and-go but feels a lot more like a pair programming session on steroids.

What AI agents give you is the ability to explore the problem space cheaply and quickly. After we had something built, I had a different idea about how to go about implementing it. So I asked it to do that, and it gave me something that I could actually explore.

Being able to evaluate multiple different approaches to a solution is crazy valuable. It is transformative for architectural decisions.

Having said that, using a coding agent to handle all the boilerplate stuff meant that I was able to focus on the “fun parts”, the pieces that actually add the most value, not everything else that I need to do to get to that part.

What this means going forward

AI agents are going to amplify your existing engineering culture, for better or worse.

A lot of the cost of writing good software is going to move from actually writing code to reviewing it. For many people, the act of writing the code was also the part where they thought about it most deeply.

Now the thinking part moves either upfront, at the planning phase, or to the end, when you look at the pull request. In the past, reading a pull request, you could reasonably expect to see code that had already been reasoned about and properly tamed.

Now, in some cases, this is the first time that a human is actually going to properly walk through the whole thing. To ensure proper quality, you also need to shift a lot of your focus to that part.

The bottleneck for good software is going to be the review cycle, the architectural approach, and an experienced team that can actually evaluate the output and ensure consistent high quality.

Without that, you can certainly generate code quickly, but speed alone is a losing proposition. You’ll go very fast, directly into a painful collision with a wall.

We are still settling down and trying to properly understand the best approach to take, but I have to say that this experiment was a major success.
