Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 2 min | 267 words

This happened a few minutes ago. I got a call from an unknown number. It was my wife’s work number, and she was calling to ask me an urgent question, it seemed:

 

“Can you tell me how to compress a PDF file?” she asked.

For the next part, it might be better if I paint you the whole picture. Imagine bullet time, where everything slows down, and I start to analyze the question and my possible answer. The following thoughts run through my mind during that time.

  • PDF files are already compressed by default.
  • Pretty sure that the file format is already using compression.
  • You could strip unneeded elements from the file; removing embedded fonts is one example, I think.
  • If there are images, you can probably downscale or re-sample them to reduce their size.
  • What about just running this through Zip?
  • Where did this question come from?

That took about two seconds in real time. The decision tree for any possible answer here grew exponentially. I had to make a call.

“No, that isn’t easily possible,” I answered.

I got some more details as well.

“This is for uploading a document to the XYZ system, it only accepts up to 4MB files, but this PDF is 5.5MB. I guess I can just scan this document as two separate pages instead of one, right?”

A workaround had been found, and a detailed dive into lossless vs. lossy compression and file format choices avoided. I agreed that this was probably the best option and finished my coffee, pondering the ethical dilemma of answering the actual question versus the intended question.

time to read 6 min | 1163 words

Today I had to look into a customer whose RavenDB instance was burning through a lot of I/O. The process is somewhat ingrained in me by this point, but I thought that it would make for a good blog post, so I’ll have something to refer to next time.

Here is what this looks like from the point of view of the disk:

[graph: disk activity showing high read throughput in MB/sec alongside a large number of write operations]

We are seeing a lot of reads in terms of MB/sec and a lot of write operations (but far less in terms of bandwidth). Those are the external details; can we figure out more? Of course.

We start our investigation by running:

sudo iotop -ao

This command gives us accumulated I/O totals, showing only the threads that have actually performed I/O. One of the important things RavenDB does is to tag its threads with the tasks they are assigned. Here is a sample output:

  TID  PRIO  USER     DISK READ DISK WRITE>  SWAPIN      IO    COMMAND
 2012 be/4 ravendb    1748.00 K    143.81 M  0.00 %  0.96 % Raven.Server -c /ravendb/config/settings.json [Follower thread]
 9533 be/4 ravendb     189.92 M     86.07 M  0.00 %  0.60 % Raven.Server -c /ravendb/config/settings.json [Indexing of Use]
 1905 be/4 ravendb     162.73 M     72.23 M  0.00 %  0.39 % Raven.Server -c /ravendb/config/settings.json [Indexing of Use]
 1986 be/4 ravendb     154.52 M     71.71 M  0.00 %  0.38 % Raven.Server -c /ravendb/config/settings.json [Indexing of Use]
 9687 be/4 ravendb     185.57 M     70.34 M  0.00 %  0.59 % Raven.Server -c /ravendb/config/settings.json [Indexing of Car]
 1827 be/4 ravendb     172.60 M     65.25 M  0.00 %  0.69 % Raven.Server -c /ravendb/config/settings.json ['Southsand']

In this case, we see the top 6 threads in terms of I/O (for writes). We can see that we have a lot of indexing and document writes. That said, thread names in Linux are limited to 15 characters, so we probably need to give better names to them.
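
As an aside, tagging threads so they show up nicely in iotop is easy to do yourself. RavenDB is a .NET server, but here is a minimal sketch of the same idea in Go, assuming Linux and the golang.org/x/sys/unix package (the thread name is just an example):

package main

import (
	"runtime"
	"unsafe"

	"golang.org/x/sys/unix"
)

// nameThread pins the calling goroutine to its OS thread and tags the
// thread, so tools like iotop and /proc/<pid>/task/<tid>/comm can show
// which task is responsible for the I/O.
func nameThread(name string) error {
	runtime.LockOSThread()
	// Linux silently truncates thread names to 15 bytes.
	buf := append([]byte(name), 0)
	return unix.Prctl(unix.PR_SET_NAME, uintptr(unsafe.Pointer(&buf[0])), 0, 0, 0)
}

func main() {
	if err := nameThread("Indexing of Use"); err != nil {
		panic(err)
	}
	// ... do the I/O heavy work for this task on this thread ...
}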

That is only part of the picture, though. Let’s look at the cost in terms of reads:

  TID  PRIO  USER    DISK READ>  DISK WRITE  SWAPIN      IO    COMMAND
11191 be/4 ravendb       2.09 G     31.75 M  0.00 %  7.58 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11494 be/4 ravendb    1353.39 M     14.54 M  0.00 % 19.62 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11496 be/4 ravendb    1265.96 M      4.97 M  0.00 % 16.56 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11211 be/4 ravendb    1120.19 M     42.66 M  0.00 %  2.83 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11371 be/4 ravendb    1114.50 M     35.25 M  0.00 %  5.00 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11001 be/4 ravendb    1102.55 M     43.35 M  0.00 %  3.12 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]
11340 be/4 ravendb     825.43 M     26.77 M  0.00 %  4.85 % Raven.Server -c /ravendb/config/settings.json [.NET ThreadPool]

That is a lot more complicated, however: now we don’t know what task each thread is running, only that something is reading a lot of data.

We have the thread id, so we can make use of that to see what it is doing:

sudo strace -p 11191 -c

This command tracks statistics on the system calls that are issued by the specified thread. I’ll typically let it run for 10 – 30 seconds and then hit Ctrl+C, giving me:

% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 90.90    3.868694         681      5681        82 futex
  8.28    0.352247           9     41035           sched_yield
  0.79    0.033589        1292        26           pwrite64
  0.03    0.001246          52        24         1 recvfrom
  0.01    0.000285         285         1           restart_syscall
  0.00    0.000000           0         2           madvise
  0.00    0.000000           0         2           sendmsg
------ ----------- ----------- --------- --------- ----------------
100.00    4.256061                 46771        83 total

I’m mostly interested in the pwrite64 system call here. RavenDB uses mmap() for most of its data access, so the reads are harder to observe at this level, but we can still get a lot of information from the output. Now I’m going to run the following command:

sudo strace -p 11191 -e trace=pwrite64

This will give us a trace of all the pwrite64() system calls from that thread, looking like this:

pwrite64(315, "\365\275"..., 4113, 51080761896) = 4113
pwrite64(315, "\344\371"..., 4113, 51080893512) = 4113

There is an strace option (-y) that can be used to show the file paths for system calls, but I forgot to use it. No worries, I can do:

sudo ls -lh /proc/11191/fd/315

Which will give me the details on this file:

lrwx------ 1 root root 64 Aug  7 09:21 /proc/11191/fd/315 -> /ravendb/data/Databases/Southsand/PeriodicBackupTemp/2022-08-07-03-30-00.ravendb-encrypted-full-backup.in-progress

And that tells me everything that I need to know. The reason we have high I/O is that we are generating a backup file. That explains why we are seeing a lot of reads (since we need to read in order to generate the backup).
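
If you find yourself resolving file descriptors like this often, it is trivial to script. Here is a minimal sketch in Go (the pid is hardcoded as a placeholder):

package main

import (
	"fmt"
	"os"
)

func main() {
	const pid = 11191 // the thread/process we are investigating
	dir := fmt.Sprintf("/proc/%d/fd", pid)
	entries, err := os.ReadDir(dir)
	if err != nil {
		panic(err)
	}
	for _, e := range entries {
		// every entry here is a symlink from the fd number to the real path
		if target, err := os.Readlink(dir + "/" + e.Name()); err == nil {
			fmt.Printf("fd %s -> %s\n", e.Name(), target)
		}
	}
}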

The entire process is mostly about figuring out exactly what is going on, and RavenDB is very careful about leaving as many breadcrumbs as possible to make it easy to follow.

time to read 2 min | 234 words

I ran into a task that I needed to do in Go: given a PFX file, I needed to get a tls.X509KeyPair from it. However, Go doesn’t have support for PFX. RavenDB makes extensive use of PFX in general, so that made things hard for us. I looked into all sorts of options, but I couldn’t find any way to manage it properly. The nearest find was the pkcs12 package, but it only supports some DER-based formats and cannot handle common PFX files. That was a problem.

Luckily, I know how to use OpenSSL. But while there are countless examples of using OpenSSL to convert PFX to PEM and the other way around, all of them assume that you are working from the command line, which isn’t what we want. It took me a bit of time, but I cobbled together one-off code that does the work. The code has a strange shape, I’m aware, because I wrote it to interface with Go, but it does the job.

Now, from Go, I can run the following:
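
What follows is a rough sketch of the shape of that code, not the exact snippet: cgo calling into OpenSSL to re-encode the PFX as PEM, then handing the PEM pair to the standard library. The names and error messages here are illustrative:

package pfx

/*
#cgo LDFLAGS: -lcrypto
#include <stdlib.h>
#include <openssl/bio.h>
#include <openssl/evp.h>
#include <openssl/x509.h>
#include <openssl/pem.h>
#include <openssl/pkcs12.h>
*/
import "C"

import (
	"crypto/tls"
	"errors"
	"unsafe"
)

// LoadPFX parses a PFX/PKCS#12 blob via OpenSSL and returns a tls.Certificate.
func LoadPFX(pfxData []byte, password string) (tls.Certificate, error) {
	if len(pfxData) == 0 {
		return tls.Certificate{}, errors.New("empty PFX input")
	}
	in := C.BIO_new_mem_buf(unsafe.Pointer(&pfxData[0]), C.int(len(pfxData)))
	if in == nil {
		return tls.Certificate{}, errors.New("BIO_new_mem_buf failed")
	}
	defer C.BIO_free(in)

	p12 := C.d2i_PKCS12_bio(in, nil)
	if p12 == nil {
		return tls.Certificate{}, errors.New("not a valid PKCS#12 blob")
	}
	defer C.PKCS12_free(p12)

	pass := C.CString(password)
	defer C.free(unsafe.Pointer(pass))

	var key *C.EVP_PKEY
	var cert *C.X509
	if C.PKCS12_parse(p12, pass, &key, &cert, nil) != 1 {
		return tls.Certificate{}, errors.New("PKCS12_parse failed (bad password?)")
	}
	defer C.EVP_PKEY_free(key)
	defer C.X509_free(cert)

	// Re-encode both halves as PEM into in-memory BIOs.
	keyOut := C.BIO_new(C.BIO_s_mem())
	certOut := C.BIO_new(C.BIO_s_mem())
	defer C.BIO_free(keyOut)
	defer C.BIO_free(certOut)
	if C.PEM_write_bio_PrivateKey(keyOut, key, nil, nil, 0, nil, nil) != 1 ||
		C.PEM_write_bio_X509(certOut, cert) != 1 {
		return tls.Certificate{}, errors.New("PEM encoding failed")
	}

	return tls.X509KeyPair(readBIO(certOut), readBIO(keyOut))
}

// readBIO drains a memory BIO; a fixed buffer is good enough for a sketch.
func readBIO(b *C.BIO) []byte {
	buf := make([]byte, 64*1024)
	n := C.BIO_read(b, unsafe.Pointer(&buf[0]), C.int(len(buf)))
	if n <= 0 {
		return nil
	}
	return buf[:n]
}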

As you can see, most of the code is there to manage error handling. But you can now convert a PFX to PEM and then pass it to tls.X509KeyPair easily.

That said, this seems just utterly ridiculous to me. There has got to be a better way to do that, surely.

time to read 3 min | 563 words

Following a phone screen, we typically ask candidates to complete some coding tasks. The idea is that we want to see their code, and asking a candidate to program during an interview… does not go well. Some years ago, I had a candidate who was provided with a machine, an IDE, and an internet connection, and who walked out after failing for 30 minutes to reverse a string. Given that his CV said he had 8 years of experience, I consider myself very lucky.

Back to the candidate that prompted this post. He sent us answers to the coding tasks, in Node.js and C++. Okay, weird flex, but I can manage. I don’t actually care what language a candidate knows, especially for the junior positions.

Given that we are hiring for junior positions, we’ll usually get solutions that bend the question restrictions. For example, they would do a linear scan of a file even when they were asked not to. For the most part, we can ignore those details and focus on what the candidate is showing us. Sometimes we ask them to fix a particular issue, but usually we’ll just get them to the interview and ask them about their code there.

I like asking candidates about their code, because I presume that they spent some time thinking about it and can discuss the topic in some detail. At one memorable interview, a candidate told me: “I didn’t write this code, I have no idea what is going on here.” I triple-checked that this was indeed the code they sent, and followed up by sending the candidate home, sans offer. Usually, though, we can talk with the candidate about what drove them to certain decisions, what impact a particular constraint would have on their code, etc.

In this case, however, the code was bad enough that I called it off. I sent the candidate a notification about the issues we found in their code, detailing the 20+ critical failures we found within a few minutes of looking at it.

The breaking point for me was that the tasks did not actually work. In fact, they couldn’t work. I’m not sure if they compiled (I didn’t check), but they certainly were never even eyeballed.

For example, we asked the candidate to build a server that would translate messages to Morse code and cause the server speaker to beep in Morse code. Nothing particularly fancy, I think. But we got a very particular implementation. For example, here is the relevant code that plays the Morse code:

[screenshot: the candidate’s code for playing the Morse code over the speaker]

The Node.js version that I’m using doesn’t come with the relevant machine learning model to make that actually happen, I’m afraid.

The real killer for me was this part:

[screenshot: the candidate’s code, which assigns a new value to a function parameter and expects the caller to see the change]

You might want to read that code a few times.

They pass a variable to a function, set it to a new value and expect to see that new value outside. Basically, they wanted to use an out parameter here, which isn’t valid in JavaScript.
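
To illustrate the pattern, here is the same mistake sketched in Go (the semantics are the same as in JavaScript: reassigning a parameter only changes the local copy):

package main

import "fmt"

// setMessage reassigns its parameter. The argument is a local binding;
// the caller never sees the change.
func setMessage(msg string) {
	msg = "... --- ..." // updates only the local copy
}

func main() {
	message := "SOS"
	setMessage(message)
	fmt.Println(message) // still "SOS" - there are no out parameters here
}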

That is the kind of fairly fundamental issue in understanding the flow of code in a program. And that is something that would never have worked.

I’m okay with getting suboptimal solutions. I’m not fine with solutions that have never actually been looked at.

time to read 8 min | 1540 words

As I’m writing this, there seems to be a huge amount of furor over the US elections. I don’t want to get into that particular mess, but I had a few idle thoughts about how to fix one of the root issues here: the fact that the system of voting we use is based on pen & paper and was designed when “computer” was a job title for a practical mathematician.

A large part of the problem is that we have to wait for a particular date, and there are polls, which seem to be used for information and misinformation at the same time. This means that people are basically voting with insufficient information. It would be much more straightforward if the whole system were transparent and iterative.

Note: This is a blog post I’m writing to get this idea out of my head. I did very little research and I’m aware that there is probably a wide body of proposals in the area, which I didn’t look at.

The typical manner I have seen suggested for solving the election problem is to ensure the following properties:

  • Verifying that my vote was counted properly.
  • Verifying that the total votes were counted fairly.
  • (maybe) Making it hard / impossible to show a 3rd party whom I voted for.

The last one is obviously pretty hard to do, but is important to prevent issues with pressuring people to vote for particular candidates.

I don’t have the cryptographic chops to come up with such a system, nor do I think that it would be particularly advisable. I would rather come up with a different approach entirely. In my eyes, we can have a system with the following:

  • The government issues each voter a token (coin) that they can spend. That will probably make you think of blockchains, but I don’t think that is a good idea here. If we are talking about technical details, let’s say that the government issues a certificate to each voter (who generates their own private key, obviously).
  • The voter can then give that voting coin to a 3rd party, for example, by signing a vote for a particular candidate using their certificate.
  • These coins can then be “spent” during the election. The winner of the election is the candidate that got more than 50% of the total coins spent.

As it currently stands, this is a pretty horrible system. To start with, it is painfully obvious who voted for whom. I think that a transparent voting record might be a good idea in general, but there is a multitude of problems with that. So, like many ideas that are great in theory, we shouldn’t allow it.

This is usually where [complex cryptography] comes into play, but I think that a much better option would be to introduce the notion of brokers into the mix. What do I mean?

While you could spend your coin directly on a particular candidate, you could also spend it at a broker. That broker is then delegated to use your vote. You may use a party-affiliated broker or a neutral one. Imagine a two-party system where you have the Olive party and the Opal party. I’m using obscure colors here to try to reduce any meaning people will read into the color choices. For what it’s worth, red & blue as party colors have opposite meanings in the States and in Israel, which is confusing.

Let’s take two scenarios into consideration:

  • A voter spends their coin on the Olive Political Action Committee, which is known to support Olive candidates. In this case, you can clearly track who they want to vote for. Note that they aren’t voting directly for a candidate, because they want to delegate their vote to a trusted party to manage it.
  • A voter spends their coin on Private Broker #435. While they do that, they instruct the broker to pass their vote to the Olive PAC broker, or a particular candidate, etc.

The idea is that the private broker is taking votes from enough people that, while it is possible to know that you went through a particular broker, it isn’t possible to know whom you voted for. The broker itself obviously knows, but that is similar to tracking the behavior of a voting booth, which would also let you discover who voted for whom. I think that it is possible to set things up so the broker itself won’t be able to make this association; a secure sum protocol would probably work. A key point for me is that this is an issue internal to a particular broker, not relevant to the system as a whole.
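
To make the moving parts concrete, here is a toy sketch in Go. This is not a protocol and there is no cryptography; all the names are made up. It only shows coins, brokers, delegation, revocation and re-voting as plain data:

package main

import "fmt"

// Election is a toy model: each voter holds one revocable coin and
// points it at a candidate or a broker; brokers point onward.
type Election struct {
	candidates map[string]bool   // the set of candidates
	voters     []string          // everyone holding exactly one coin
	spentOn    map[string]string // voter or broker -> where the coin points
	revoked    map[string]bool   // revoked certificates are simply ignored
}

// Count follows every coin through its delegation chain to a candidate.
func (e *Election) Count() map[string]int {
	votes := map[string]int{}
	for _, voter := range e.voters {
		if e.revoked[voter] {
			continue
		}
		cur, seen := voter, map[string]bool{}
		for !e.candidates[cur] {
			if seen[cur] {
				cur = "" // delegation cycle: the coin stays unspent
				break
			}
			seen[cur] = true
			next, ok := e.spentOn[cur]
			if !ok {
				cur = "" // parked at a broker with no onward target
				break
			}
			cur = next
		}
		if cur != "" {
			votes[cur]++
		}
	}
	return votes
}

func main() {
	e := &Election{
		candidates: map[string]bool{"Olive": true, "Opal": true},
		voters:     []string{"voter-1", "voter-2", "voter-3"},
		spentOn: map[string]string{
			"voter-1":     "Olive",       // a direct vote
			"voter-2":     "Olive PAC",   // via a party-affiliated broker
			"voter-3":     "Broker #435", // via a neutral broker
			"Olive PAC":   "Olive",
			"Broker #435": "Opal",
		},
		revoked: map[string]bool{},
	}
	fmt.Println(e.Count()) // map[Olive:2 Opal:1]

	// Changing your vote is just re-pointing your coin:
	e.spentOn["voter-3"] = "Olive"
	fmt.Println(e.Count()) // map[Olive:3]
}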

So far, I don’t think that I’m doing too much, but the idea here is that I want to take things further. Instead of stopping the system there, we allow voters to change their vote. In other words, instead of having to vote blindly, we can look at the results and adjust them.

In the April 2019 Israeli election, over 330 thousand votes were discarded because the parties they were cast for didn’t reach the minimum threshold. That was mostly because the polls were wrong; I don’t think that people would have voted for those parties if they had known they were on the verge. Being able to look at that and then adjust the votes would allow all those votes to be counted.

Taking this further, once we have the system of brokers and electronic votes in place, there is no reason to hold an election once every N years. Instead, we can say that the coins are literal political capital. In order to remain in office, the elected officials must keep holding over 50% of the spent coins. It would probably be advisable to actually count these on a weekly / bi-weekly basis, but counting at short intervals means that there is going to be a lot more accountability.

Speaking from Israel’s point of view, there have sadly been numerous cases where candidates campaigned on A, then did the exact opposite once elected. There are even a couple of famous sayings about it:

  • We promised, but didn’t promise to fulfil.
  • What you see from here you can’t see from there.

Note that this is likely to result in a more populist system, since the elected officials are going to pay attention to the electorate on an ongoing basis, rather than just around election time. I can think of a few ways to handle that. For example, once seated, for a period of time, you’d need more than 50% of the coins to get an elected official out of office.

A few more details:

  • For places like the States, where you vote for multiple things at the same time (local, state, federal house & senate, president), you’ll get multiple coins to spend, and you can freely spend them in different locations. Or, via a broker, designate that they are to be spent in particular directions.
  • A large part of the idea is that a voter can withdraw their coin from a broker or candidate at any time.
  • Losing the key isn’t a big deal. You go to the government, revoke your previous certificate and get a new one. The final computation will simply ignore any revoked coins.
  • The previous point is also important for dealing with people who have died or moved. It is trivial to ensure that the dead don’t vote in this scheme, and you can verify that a single person doesn’t have coins from multiple different locations.

The implications of such a system are interesting, in my eyes. The idea is that you delegate your vote to someone you trust, directly or indirectly. That can be a candidate, but it will most likely be a group. The usual term in the States is PAC, I believe. The point is that you then have active oversight of the elected officials by the electorate.

Over seven years ago I wrote a post about what I most dislike in the current election system. You are forced to vote on your top issue, but you usually have a far more complex system of values that you have to balance. For example, let’s say that my priorities are:

  • National security
  • Fiscal policy
  • Healthcare
  • Access to violet flowers

I may have to balance between a candidate that wants to ban violet flowers but proposes to lower taxes and a candidate that wants to raise taxes but legalize violet flowers. Which one do I choose? If I can shift my vote as needed, I can make that determination at the right political time. During the budget month, my vote goes to the $PreferredFiscalPolicy candidate, and then, if they don’t follow my preferences on violet flowers, I can shift it.

This will have the benefit of killing the polling industry, since you can just look at where the political capital is. And it will allow the electorate to have far greater control over the government. I assume that elected officials will then pay very careful attention to how much political capital they have to spend, and act accordingly.

I wonder if I should file this post under science fiction, because that might make a good background for world building. What do you think of this system? And what do you think the effects of it would be?

time to read 4 min | 618 words

Exactly 9 years ago, Hibernating Rhinos had a major breakthrough: we moved to our own offices for the first time. Before that, I was mostly working from a home office or at clients’ locations. Well, I say we, but I mean I. At the time, the change mostly involved me having to put on some shoes and go out of the house to work alone in a big, empty office. The rest of the team at the time was completely remote.

I got the office because I needed to. Some people can manage a proper work / life balance while working from home; I find it very hard. I’m the kind of person who would get up at 2 AM to get something to drink, see a new mail notification on the monitor, and work until 8 AM. Having a separate office was hugely beneficial for me. The other reason was that it allowed me to hire more people locally. The first real employee I had was hired within three months of moving to the new office.

That first office was great, but small. Just 5 rooms, about 120 m² (1,300 ft²). We stayed there until we got to about 12 people, at which point we really didn’t have enough room to swing a cat (to be fair, we didn’t have an office cat, nor a really good reason to want to swing one). We moved offices in 2015, from the center of the industrial zone of the city to the periphery of the business zone. The new offices were 250 m² (2,700 ft²) and gave us a lot of room to expand. They also had two major advantages: it was nice to be able to walk downstairs and get to pretty much anywhere we needed, and we no longer had to deal with having a garage next door.

When we moved to the 2nd office, it felt like we had a huge amount of room, but it filled up quite quickly. It was certain that we would outgrow the new place in short order, so we started looking for a permanent home that would suffice for the next 10 years or so. We got one, smack in the center of the business zone of the city. Next door to city hall, actually. Well, I say “got one.” What we actually got was a piece of paper and a hole in the ground. Before we could move into the new offices, they had to be built first.

We stayed in the second office space for 3 years, but we ran out of room before the new offices were ready. So we moved for the third time, to a shared working space (like WeWork). We planned on being there for a short while, but it ended up being over a year. The plus side: we were able to expand much more easily. We hired quite a few people this year and were able to simply add more offices as we grew. The downside was that it was very much not our office, so we really wanted to move.

This week, however, we are finally moving. The new offices have more than enough space, 415 m² (4,500 ft²), for the next five to ten years of growth. They cover two floors in a brand new location, centrally located and beautifully done. I’m not posting any pictures because the vast majority of our own team haven’t seen it yet (we have an unveiling party tomorrow), but I’m super happy that we got to this point and just had to share it on the blog.

time to read 2 min | 339 words

[illustration: a big red button]

I used the term “Big Red Sales Button” in a previous post, and got a question about it. You can see an illustration of it above.

The Big Red Sales Button (BRSB from now on) is a metaphor used to discuss how sales can impact an organization. It is common for the sales team to run into new customer requirements. Some of them are presented as absolute requirements (they usually aren’t).

I have found that the typical response of the salesperson at this point is to reply “of course we can do that”, go back to the office, hit the BRSB, and notify the dev team that they have $tooShortTimeFrame to implement said feature.

In one very memorable case, I remember going over contract details and trying to figure out what we needed to do there. Right there, in a minimum seven-figure contract, was a clause that spelled out the core functionality of the system and the set of features required for it to be accepted.

Most of it was a pretty normal business application, nothing too strange. But section 5.1.3.c was interesting. In it, in dense legalese, there was a requirement to solve the traveling salesman problem. To be fair, it wasn’t actually that problem; it was a scheduling problem, and I used the traveling salesman as the name for it because that is easier than trying to explain NP-complete issues to a layman.

I’ll spoil the ending of this post and reveal that I did not solve an NP-complete problem. I cheated like hell and actually solved the issue they had (if you limit the solution space, you drastically reduce the cost of a solution).

Sometimes, the BRSB is used for a good purpose: if you have something that can help close a major sale, and it isn’t outrageous to implement, for example. But in many cases, it is open to abuse.

time to read 2 min | 330 words

I spent the last couple of days at the O’Reilly Architecture Conference and the HIMSS (Healthcare Information and Management Systems Society) Conference. During that time, I had the chance to listen to quite a few technical marketing spiels.

Some of them were technically very impressive, but missed the target by a planet or two. I came up with a really nice analogy for how such presentations do a great disservice to their purpose.

Consider the following:

This non-steroidal drug, clinically tested and FDA approved, will cease the production of prostaglandins and has a significant antiplatelet effect. It’s available in tablet and syrup forms and is suitable for IVs. May cause diarrhea and/or vomiting.

This is factual (at least as much as I could make it). I assume that if you are a medical professional, you might be able to work out possible uses for this drug. But the most important thing is missing from this description: what does this do?

This is Ibuprofen, and you take it to ease your headache (among many other uses). It can also help you avoid blood clots.

I intentionally chose this example because it is a very obvious one (and I had just come back from hearing way too much medical material). You begin by telling me how this will ease the pain. In many ways, I consider technical marketing to come down to the following:

  • Whether this product can actually ease the pain.
  • Whether this customer actually experiences the pain.

For example, if you are promising a faster-than-light bullet train to Mars, that is going to cast some… doubt on your claims. On the other hand, it doesn’t matter to me if you can cut my commute time in half when I can get to work without leaving my house.

If the customer experiences the pain and believes that you can actually help there, you are most of the way there. All that is left is just negotiating, barrier removal, etc.

time to read 3 min | 512 words

I’m going to feel like an old man for this post, but if you were born after 1995, it is likely that you have no idea what I’m talking about here, crazy as that sounds to me.

Before there was a phone in every pocket, there were landlines. A landline is like today’s phone, but much larger; you could only do voice calls, and if you wanted to screen your calls, you needed to buy another appliance. If you watch the first few seasons of Friends, you’ll see how important a detail that can be. If you were out of the house or office and needed to place a call, you could use something called a public phone booth, or a pay phone.

Sadly, the easiest way I can convey what this was is to invoke the Tardis: a small booth in which you had a public-access phone. Because phone calls used to cost a lot, these phones had a slot to drop some coins or tokens into to pay for the call.

As a child, I didn’t have a wallet and still needed to occasionally make calls. Being stuck without cash at hand wasn’t such a strange thing, so there was another way to place the call. You could reverse the charge: instead of the person placing the call paying for it, you could call collect. In that case, the person answering the call would be paying for it. Naturally, since money is involved, you need the other party to accept the charge before actually charging them.

At some point in time, you called a special number and told the operator what number you wanted to call collect. The operator would ring that number and ask for permission to connect the call and charge the receiver. I think that the rate for a collect call was significantly higher than for a normal call, so you wouldn’t normally do that.

As part of the system’s automation, the phone company replaced the manual operator collect call with an automated system. You would record a short message, which would be played to the other party. If they wanted to accept the call (and the charge), they could press 1 on the phone, or disconnect to avoid the charge.

As a kid, I quickly learned that instead of telling the other party who was calling and why (so they would accept the call), I could just tell them what my phone number was. That way, they would write down the number, refuse the call, and then call me back, avoiding the collect toll charge.

I remember that at some point the phone company made the collect-call hello message really short, but I got around that by speaking really fast (or sometimes by making two separate calls). I remember having to practice saying the phone number a few times to get it done within the time limit.

time to read 2 min | 396 words

I just had a discussion with a colleague about a fix to some non-trivial code. The question was what comments should go into the code to explain what was going on. If you care to know, this related to the prefetching strategy that RavenDB uses to reduce the amount of I/O required (especially on slow disks). The details don’t actually matter. The problem is that there are multiple relatively complex issues there, from managing I/O to thread safety in the critical code path (using dirty reads intentionally), etc.

The problem is that while the code is complex, it is a fairly straightforward progression from the kind of code we usually write in performance-sensitive sections. The fear was that by over-commenting the code, we would get ourselves into a situation where the code looks too malleable, too inviting to change. This is the kind of code that sits in the perf-critical section; you change it after fasting for a day or two (with strong encouragement to meditate on little- vs. big-endian and why half-endian is so rare).

In other words, in practice, you change it when you have a reason to, and you back up that change with a battery of performance tests: anything from the usual benchmarks to running production loads on various machines to poring over system traces.

Given the amount of effort that is expected from any change to this code, I consider it a good idea for people who read it to understand that there is a hurdle that must be jumped before it is modified. Thus, we decided to skip some of the comments on the reasoning behind the overall design. The most important comment in this code is there to explain a particular choice of value, and the reasoning that must be applied when that value is changed.
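
To give a sense of what such a comment looks like, here is a sketch (the constant, its value, and the reasoning are all made up for illustration):

// prefetchBatch controls how many pages we ask the OS to read ahead of
// the transaction. The value below is illustrative only.
//
// Do not change this without re-running the full I/O benchmark suite on
// both local NVMe and network-attached disks. Lower values caused extra
// seeks on slow disks; higher values polluted the page cache and hurt
// query latency far more than they helped throughput.
const prefetchBatch = 32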

What about the complexity of the prefetching in general? That isn’t documented in the code, because comments scattered throughout would make it very hard to grok. It is detailed in the architecture guide that goes over these details.

For myself, I find it really awesome to go over a codebase and figure out the reasoning that lies behind the code. But when I have people working on my projects? It is better to give them a hand than a riddle.
