I ran into this post by John Rush and found it really interesting, mostly because I so vehemently disagree with it. Here are the points in John's thesis that I want to address:

1. The Open Source movement is going to end because AI can rewrite any OSS repo into new code and commercially redistribute it as its own.

2. Companies are going to use AI to generate their non-core software as a marketing effort (Cloudflare rebuilt Next.js in a week).

Can AI rewrite an OSS repo into new code? Let’s dig into this a little bit.

AI models today do a great job of translating code from one language to another. We have good evidence that this is actually a pretty useful scenario, such as the recent translation of the Ladybird JS engine to Rust.

At RavenDB, we have been using that to manage our client APIs (written in multiple languages & platforms). It has been a great help with that.

But that is fundamentally the same as the Java to C# converter that shipped with Visual Studio 2005. That is 2005, not 2025, mind you. The link above is to the Wayback Machine because the original link itself is lost to history.

AI models do a much better job here, but they aren’t bringing something new to the table in this context.

Claude C Compiler

Now, let’s talk about using the model to replicate a project from scratch. And here we have a bunch of examples. There is the Claude C Compiler, an impressive feat of engineering that can compile the Linux kernel.

Except… it is a proof of concept that you wouldn't want to use. It produces code that is significantly slower than GCC's, and its output is not something that you can trust. Nor is it in any shape to be a long-term project that you would maintain over the years.

For a young project, being slower than the best-of-breed alternative is not a bad thing. You’ve shown that your project works; now you can work on optimization.

For an AI project, on the other hand, you are in a pretty bad place. The key issue here is long-term maintainability. There is a great breakdown of the Claude C Compiler from the creator of Clang that I highly recommend reading.

The amount of work it would require to turn it into actual production-level code is enormous. I think that it would be fair to say that the overall cost of building a production-level compiler with AI would be in the same ballpark as writing one directly.

Many of the issues in the Claude C Compiler are not bugs that you can “just fix”. They are deep architectural issues that require a very different approach.

Leaving that aside, let’s talk about the actual use case. The Linux kernel’s relationship with its compiler is not a trivial one. Compiler bugs and behaviors are routine issues that developers run into and need to work on.

See the occasional “discussion” on undefined behavior optimizations by the compiler for surprisingly straightforward code.
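
To make this concrete, here is a classic illustration (my own example, not one from the kernel discussions): compilers are allowed to assume that signed integer overflow never happens, so a perfectly reasonable-looking overflow check can be optimized away entirely.

#include <limits.h>
#include <stdio.h>

// Undefined behavior: if x == INT_MAX, then x + 1 overflows a signed int.
// Because signed overflow is UB, GCC and Clang at -O2 are allowed to fold
// this entire check down to "return 0", silently removing the guard.
int will_overflow(int x)
{
    return x + 1 < x;
}

int main(void)
{
    printf("%d\n", will_overflow(INT_MAX)); // may print 0 at -O2
    return 0;
}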

Cloudflare’s vinext

So Cloudflare rebuilt Next.js in a week using AI. That is pretty impressive, but it is also a lie. They might have done some work in a week, but what came out of it is not something that is ready. Cloudflare itself is directly calling this highly experimental (very rightly so).

They also have several customers using it in production already. That is awesome news, except that within literal days of this announcement, multiple critical vulnerabilities were found in this project.

A new project having vulnerabilities is not unexpected. But some of those vulnerabilities were literal copies of (fixed) vulnerabilities in the original Next.js project.

The issue here is the pace of change and the impact. If it takes an agent a week to build a project and then you throw that into production, how much real testing has been done on it? How much is that code worth?

John stated that this vinext project for Cloudflare was a marketing effort. I have to note that they had to pay bug bounties as a result and exposed their customers to higher levels of risk. I don’t consider that a plus. There is also now the ongoing maintenance cost to deal with, of course.

The key here is that a line of code is not something that you look at in isolation. You need to look at its totality. Its history, usage, provenance, etc. A line of code in a project that has been battle-tested in production is far more valuable than a freshly generated one.

I’ll refer again to the awesome “Things You Should Never Do” from Spolsky. That is over 25 years old and is still excellent advice, even in the age of AI-generated code.

NanoClaw’s approach

You’ve probably heard about Clawdbot ⇒ Moltbot ⇒ OpenClaw, a way to plug AI directly into everything and give your CISO a heart attack. That is an interesting story, but from a technical perspective, I want to focus on what it does.

A key part of what made OpenClaw successful was the number of integrations it has. You can connect it to Telegram, WhatsApp, Discord, and more. You can plug it into your Gmail, Notes, GitHub, etc.

It has about half a million lines of code (TypeScript), which were mostly generated by AI as well.

To contrast that, we have NanoClaw with ~500 lines of code. Not a typo, it is roughly a thousand times smaller than OpenClaw. The key difference between these two projects is that NanoClaw rebuilds itself on the fly.

If you want to integrate with Telegram, for example, NanoClaw will use the AI model to add the Telegram integration. In this case, it will use pre-existing code and use the model as a weird plugin system. But it also has the ability to generate new code for integrations it doesn’t already have. See here for more details.

On the one hand, that is a pretty neat way to reduce the overall code in the project. On the other hand, it means that each user of NanoClaw will have their own bespoke system.

Contrasting the OpenClaw and NanoClaw approaches, we have an interesting problem. Both of those systems are primarily built with AI, but NanoClaw is likely going to show a lot more variance in what is actually running on your system.

For example, if I want to use Signal as a communication channel, OpenClaw has that built in. You can integrate Signal into NanoClaw as well, but it will generate code (using the model) for this integration separately for each user who needs it.

A bespoke solution for each user may sound like a nice idea, but it just means that each NanoClaw is its own special snowflake. Just thinking about supporting something like that across many users gives me the shivers.

For example, OpenClaw had an agent takeover vulnerability (reported literally yesterday) that would allow a simple website visit to completely own the agent (with all that this implies). OpenClaw’s design means that it can be fixed in a single location.

NanoClaw’s design, on the other hand, means that for each user, there is a slightly different implementation, which may or may not be vulnerable. And there is no really good way to actually fix this.

Summary

The idea that you can just throw AI at a problem and have it generate code that you can then deploy to production is an attractive one. It is also by no means a new one.

The notion of CASE tools used to be the way to go about it. The book Application Development Without Programmers was published in 1982, for example. The world has changed since then, but we are still trying to get rid of programmers.

Generating code quickly is easy these days, but that just shifts the burden. The cost of verifying code has become a lot more pronounced. Note that I didn’t say expensive. It used to be the case that writing the code and verifying it were almost the same task. You wrote the code and thus had a human verifying that it made sense. Then there are the other review steps in a proper software lifecycle.

When we can drop 15,000 lines of code in a few minutes of prompting, the entire story changes. The value of a line of code on its own approaches zero. The value of a reviewed line of code, on the other hand, hasn’t changed.

A line of code from a battle-tested, mature project is infinitely more valuable than a newly generated one, regardless of how quickly it was produced. The cost of generating code approaches zero, sure.

But newly generated code isn’t useful. In order for me to actually make use of that, I need to verify it and ensure that I can trust it. More importantly, I need to know that I can build on top of it.

I don’t see a lot of people paying attention to the concept of long-term maintainability for projects. But that is key. Otherwise, you are signing up upfront to be a legacy system that no one understands or can properly operate.

Production-grade software isn’t a prompt away, I’m afraid to say. There are still all the other hurdles that you have to go through to actually mature a project to be able to go all the way to production and evolve over time without exploding costs & complexities.


In 2008, the movie Eagle Eye came out. I remember watching it at the time and absolutely loving it. It is an action movie, so enjoying it once is the sole criterion that I have. Surprisingly, I have gotten flashbacks to this movie repeatedly in the past few weeks.

I think it is safe to talk about “spoilers” for a movie that is old enough to drive. The core idea in the movie is that an AI wants to perform a certain action, but is prevented from doing so. It then comes up with a pretty convoluted approach to bypassing those limits. I’m intentionally vague here, because the movie is actually good and you should watch it.

The key here, which is the reason that I remember an 18-year-old movie, is that we are actually seeing this behavior today with AI agents. It is an entirely relatable phenomenon to see an agent running into an obstacle and then trying to bypass it using crazier and crazier techniques.

The movie aged particularly well in this regard, because what was a plot device in there is a daily occurrence in our lives now. For reference, see this Tweet.


I am working a bit with sparse files, and I need to output the list of holes in my file.

To my great surprise, I found that my file had more holes than I put into it. This probably deserves a bit of explanation.

If you know what sparse files are, feel free to skip this explanation:

A sparse file reduces disk space usage by storing only the non-zero data blocks. Zero-filled regions (“holes”) are recorded as file system metadata only.

The file still has the same “size”, but we don’t need to dedicate actual disk space for ranges that are filled with zeros; we can just remember that there are zeros there. This is a natural consequence of the fact that files aren’t actually composed of linear space on disk.

Filesystems grow files using extents (contiguous disk chunks). A file initially gets a single extent (e.g., 1MB). Fast I/O is maintained as sequential data fills this contiguous block. Once the extent is full, the filesystem allocates a new, separate extent (which will most likely not reside next to the previous one). The file’s logical size grows continuously, but physical allocation occurs in discrete bursts as new extents are dynamically added.

If you are old enough to remember running defrag, that was essentially what it did: it ensured that the whole file was a single contiguous allocation on disk. Because of this, it is very simple for a file system to just record holes, and the only file system you’ll find in common use today that doesn’t support them is FAT.
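
To see the disk savings directly, here is a minimal sketch (the file name is mine, just for illustration): we extend a file to 1GB without writing a single byte, then compare the logical size (st_size) with the disk space actually allocated (st_blocks, counted in 512-byte units).

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("hole-demo.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0)
        return 1;

    // ftruncate() extends the file without allocating any disk blocks,
    // so the entire 1GB range is a single hole.
    ftruncate(fd, 1024LL * 1024 * 1024);

    struct stat st;
    fstat(fd, &st);

    // st_size is the logical size; st_blocks counts the 512-byte units
    // actually allocated on disk (close to zero here).
    printf("logical: %lld bytes, on disk: %lld bytes\n",
           (long long)st.st_size, (long long)st.st_blocks * 512);

    close(fd);
    return 0;
}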

At any rate, I had a problem. My file has more holes than expected, and that is not a good thing. This is the sort of thing that calls for a “Stop, investigate, blog” reaction. Hence, this post.

Let’s see a small example that demonstrates this:


#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    const off_t file_size = 1024LL * 1024 * 1024; // 1GB

    int fd = open("test-sparse-file.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    // Reserve the full 1GB of disk space upfront.
    if (fallocate(fd, 0, 0, file_size) < 0) {
        perror("fallocate");
        return 1;
    }

    // Walk the file, alternating between SEEK_HOLE and SEEK_DATA,
    // and print every hole we find.
    off_t offset = 0;
    while (offset < file_size) {
        off_t hole_start = lseek(fd, offset, SEEK_HOLE);
        if (hole_start < 0 || hole_start >= file_size)
            break;

        // SEEK_DATA fails with ENXIO when there is no more data,
        // meaning the hole extends to the end of the file.
        off_t hole_end = lseek(fd, hole_start, SEEK_DATA);
        if (hole_end < 0)
            hole_end = file_size;

        printf("Start: %.2f MB, End: %.2f MB\n",
               hole_start / (1024.0 * 1024.0),
               hole_end / (1024.0 * 1024.0));

        offset = hole_end;
    }

    close(fd);
    return 0;
}

If you run this code, you’ll see this surprising result:


Start: 0.00 MB, End: 1024.00 MB

In other words, even though we just used fallocate() to reserve the disk space, as far as lseek() is concerned, the file is just one big hole. What is going on here?

Let’s dig a little deeper, using filefrag:


$ filefrag -b1048576 -v test-sparse-file.dat 
Filesystem type is: ef53
File size of test-sparse-file.dat is 1073741824 (1024 blocks of 1048576 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..      23:     165608..    165631:     24:             unwritten
   1:       24..     151:     165376..    165503:    128:     165632: unwritten
   2:      152..     279:     165248..    165375:    128:     165504: unwritten
   3:      280..     407:     165120..    165247:    128:     165376: unwritten
   4:      408..     535:     164992..    165119:    128:     165248: unwritten
   5:      536..     663:     164864..    164991:    128:     165120: unwritten
   6:      664..     791:     164736..    164863:    128:     164992: unwritten
   7:      792..     919:     164608..    164735:    128:     164864: unwritten
   8:      920..    1023:     164480..    164583:    104:     164736: last,unwritten,eof
test-sparse-file.dat: 9 extents found

You can see that the file is made of 9 separate extents. The first one is 24MB in size, then 7 extents that are 128MB each, and the final one is 104MB.

Amusingly enough, the physical layout of the file is in reverse order to the logical layout of the file. That is just the allocation pattern of the file system, since there is no relation between the two.

Now, let’s try to figure out what is going on here. Do you see the flags on those extents? It says unwritten. That means this is physical space that was allocated to the file, but the file system is aware that it never wrote to that space. Therefore, that space must be zero.

In other words, conceptually, this unwritten space is no different from a sparse region in the file. In both cases, the file system can just hand me a block of zeros when I try to access it.

The question is, why is the file system behaving in this manner? And the answer is that this is an optimization. Instead of reading the data (which we know to be zeros) from the disk, we can just hand it over to the application directly. That saves on I/O, which is quite nice.

Consider the typical scenario of allocating a file and then writing to it. Without this optimization, we would literally double the amount of I/O we have to do.

It turns out that this optimization also applies to Windows and Mac, but the reason I ran into it on Linux is that I used lseek(SEEK_HOLE), which considers the unwritten portions to be sparse holes as well. This makes sense: if I want to copy data and I am aware of sparse regions, I should treat the unwritten portions as holes too.

You can use ioctl(FS_IOC_FIEMAP) to inspect the actual file extents (this is what filefrag does) if you actually care about the difference.
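
Here is a minimal sketch of doing that on Linux, assuming the test file from the example above (real code would loop until it sees FIEMAP_EXTENT_LAST instead of hoping that 32 extents are enough):

#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <linux/fiemap.h>

int main(void)
{
    int fd = open("test-sparse-file.dat", O_RDONLY);
    if (fd < 0)
        return 1;

    // Make room for up to 32 extents after the fiemap header.
    struct fiemap *fm = calloc(1,
        sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent));
    fm->fm_start = 0;
    fm->fm_length = FIEMAP_MAX_OFFSET; // map the whole file
    fm->fm_extent_count = 32;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0)
        return 1;

    for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
        struct fiemap_extent *e = &fm->fm_extents[i];
        // FIEMAP_EXTENT_UNWRITTEN marks space that is allocated on disk
        // but never written to - exactly what lseek(SEEK_HOLE) reports
        // as one big hole.
        printf("logical %llu..%llu%s\n",
               (unsigned long long)e->fe_logical,
               (unsigned long long)(e->fe_logical + e->fe_length),
               (e->fe_flags & FIEMAP_EXTENT_UNWRITTEN) ? " (unwritten)" : "");
    }

    free(fm);
    close(fd);
    return 0;
}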


I needed to export all the messages from one of our Slack channels. Slack has a way of exporting everything, but nothing that could easily just give me all the messages in a single channel.

There are tools like slackdump or Slack apps that I could use, and I tried, but I got lost trying to make them work. In frustration, I opened VS Code and wrote:

I want a simple node.js that accepts a channel name from Slack and export all the messages in the channel to a CSV file

The output was a single script and instructions on how I should register to get the right token. It literally took me less time to ask for the script than to try to figure out how to use the “proper” tools for this.

The ability to do these sorts of one-off things is exhilarating.

Keep in mind: this isn’t generally applicable if you need something that would actually work over time. See my other post for details on that.
