Ayende @ Rahien

It's a girl

Memorable code

public class Program
{
    static List<Thread> list = new List<Thread>();
    private static void Main(string[] args)
    {
        var lines = File.ReadAllLines(args[0]);

        foreach (var line in lines)
        {
            var t = new Thread(Upsert)
            {
                Priority = ThreadPriority.Highest,
                IsBackground = true
            };
            list.Add(t);
            t.Start(line);
        }

        foreach (var thread in list)
        {
            thread.Join();
        }

    }

    private static void Upsert(object o)
    {
        var args = o.ToString().Split(',');
        try
        {
            using(var con = new SqlConnection(Environment.CommandLine.Split(' ')[1]))
            {
                var cmd = new SqlCommand
                {
                    Connection = con, 
                    CommandText = "INSERT INTO Accounts VALUES(@p1, @p2, @p3, @p4,@p5)"
                };

                for (var index = 0; index < args.Length; index++)
                {
                    cmd.Parameters.AddWithValue(@"@p" + (index + 1), args[index]);
                }

                try
                {
                    cmd.ExecuteNonQuery();
                }
                catch (SqlException e)
                {
                    if(e.Number == 2627 )
                    {
                        cmd.CommandText = "UPDATE Accounts SET Name = @p2, Email = @p3, Active = @p4, Birthday = @p5 WHERE ID = @p1";
                        cmd.ExecuteNonQuery();
                    }
                }
            }
        }
        catch (SqlException e)
        {
            if(e.Number == 1205)
            {
                var t = new Thread(Upsert)
                {
                    Priority = ThreadPriority.Highest,
                    IsBackground = true
                };
                list.Add(t);
                t.Start(o);
            }
        }
    }
}
Tags:

Published at

Originally posted at

Comments (54)

Entity Framework, The Five Years Plan, And Building Software on Future Features

When Entity Framework came out, there was a lot of excitement, and a lot of people picked it up. I was fairly confused by that, because I didn’t really understand why. One of the major reasons that people kept citing was: “it might have problems now, but we are doing this to get an early start for what comes down the road”.

Indeed, at the time, Microsoft had some really interesting plans for Entity Framework and EDM:

Long-term we are working to build EDM awareness into a variety of other Microsoft products so that if you have an Entity Data Model, you should be able to automatically create REST-oriented web services over that model (ADO.Net Data Services aka Astoria), write reports against that model (Reporting Services), synchronize data between a server and an offline client store where the data is moved atomically as entities even if those entities draw from multiple database tables on the server, create workflows from entity-aware building blocks, etc. etc.

I emphasized some parts of that, because I think it is really interesting to look back at those statements in hindsight. We are about 3 years after the fact, and we can see that most of those promised projects actually came about. None of them actually uses the EDM, however. At the time, there was a lot of talk and planning about That One Model and how you would define it once and use it across all Microsoft products. I even recall one DotNetRocks show discussing how SQL Server was probably going to move from the relational model to the EDM model, as part of a company-wide effort to go to a Model Based architecture, etc.

This is important specifically because of the ending statement of the blog post.

So the differentiator is not that the EF supports more flexible mapping than nHibernate or something like that, it's that the EF is not just an ORM--it's the first step in a much larger vision of an entity-aware data platform.

What actually happened is that the landscape of database tooling in 2011 is drastically different than it was in 2008. The needs, requirements and usage scenarios are changing with respect to the cloud, NoSQL, sharding and more. One of the oft-repeated phrases about Entity Framework at the time was that it is not an OR/M, it is so much more than that. Go and read the recent posts on the EF Design blog. You will see a lot of stuff about Entity Framework as an OR/M. You’ll see nothing at all about the “much larger vision of an entity-aware data platform”.

That isn’t actually surprising; many of us in the community called out the impracticalities of such a vision at the time.

The point of this post isn’t to pick on Entity Framework (in hindsight, a lot of the furor about it seems overblown, actually), but to talk about something that is quite important.

It is very easy to talk about what you are going to do in the future; there is no actual commitment there. You can plan however you like, but the further out in time you go, the less likely those plans are to happen. Not because whoever made those plans lied, but simply because circumstances change. And when that happens, plans either change or become irrelevant.

The other aspect of this is that you should very rarely base your own decisions on what someone else says they are planning to do that far down the road. Especially if it means that you are going to adopt a currently inferior product just so you will be familiar with it when it becomes great (part 23.13.B, section 12A in the Grand Plan). You should base your decisions on the current and upcoming stuff, not on stuff that is so far in the future that the entire industry is going to change twice before the due date.

Sure, you probably want to keep an eye on what is going on and what the future plans are, but it isn’t really a good idea to base your decisions on that. I mean, if you were listening to the 2008 PDC, you would have bet the farm on Oslo…


Streamlining RavenDB, Part II - Why are RavenDB Lazy Requests in the Advanced section?

I really like the notion of reducing the number of remote calls, so why did I stick the Lazy Requests feature of RavenDB in the session.Advanced section of the API and not put it in the center, directly off the session?

The answer is that I expect Lazy Requests to be a very powerful feature, but at the same time, it isn’t an important enough feature to justify increasing the surface area of the session. One of the main goals with RavenDB is simplicity and power; the simple stuff should be simple. We actually consider it a bug if you can’t pick up RavenDB and start using it in ten minutes or less.
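To make the placement concrete, here is roughly what using the feature looks like. This is a sketch against the 1.0-era client API; the User and Order types and the store variable are illustrative, not from the post:

```csharp
using (var session = store.OpenSession())
{
    // Nothing goes over the wire yet; both loads are just registered
    // as pending lazy operations behind session.Advanced.
    var lazyUser = session.Advanced.Lazily.Load<User>("users/1");
    var lazyOrder = session.Advanced.Lazily.Load<Order>("orders/1");

    // Touching the first value sends all pending lazy operations
    // to the server as a single round trip.
    var user = lazyUser.Value;
    var order = lazyOrder.Value; // already materialized, no extra request
}
```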

That does not mean that we don’t add powerful features, but we are careful to ensure that those features won’t contaminate the Getting Started scenario.

Another consideration is that as powerful as Lazy Requests are, the common best practice for RavenDB already reduces the number of requests drastically, so we mostly need them for occasional use, not common usage. Once we figured out that, in many cases, using Lazy Requests is a rare thing, the decision about where to put it became much simpler. In other words, it doesn’t really matter if you are making two queries vs. one. It matters a lot more if you are doing 30.

One of the more interesting aspects of designing RavenDB is actually the exposed API. We are working hard to make sure that this API is as simple and predictable as possible. I am more than willing to give users options to solve specific problems, but it is important to consider that at its core, RavenDB is a database, and as such, what people mostly care about is CRUD. And that is why the session interface is the way it is: you get to do CRUD right off the bat, and if you want more knobs to turn and handles to crank, you go behind the session.Advanced door and get all of the features that you could imagine.

Another aspect of that is the API suggestions from users for all sorts of stuff, from SaveChangesAndWaitForIndexing to DeleteAll, etc.

Those things are useful, sure. But they can be implemented as extension methods, and they wouldn’t be useful for the general case. The thing I am trying to avoid is what happened to the Rhino Commons Repository<T>, which got so many features to handle one-off use cases that it became really quite hard to use for the common case.
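The extension method route is straightforward. A minimal sketch, using a deliberately simplified session interface of my own invention (the real RavenDB session is much richer):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A toy stand-in for the session interface, purely for illustration.
public interface ISimpleSession
{
    IEnumerable<string> AllIds();
    void Delete(string id);
}

// One-off helpers live here, so the core interface stays small.
public static class SessionExtensions
{
    public static int DeleteAll(this ISimpleSession session)
    {
        // Snapshot the ids first, since Delete mutates the underlying store.
        var ids = session.AllIds().ToList();
        foreach (var id in ids)
            session.Delete(id);
        return ids.Count;
    }
}
```

Because `DeleteAll` is an extension method, callers still write `session.DeleteAll()`, but the session interface itself never grows.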


Dynamic resolution rules joy

In the following code, what do you believe the output should be?

class Program
{
    static void Main(string[] args)
    {
        dynamic stupid = new Stupid{Age = 3};

        Console.WriteLine(stupid.Age);
    }
}

public class Stupid : DynamicObject
{
    public int Age { get; set; }

    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = 1;
        return true;
    }
}

Argh!

The default is to use the actual type members, and only then fall back to the dynamic behavior, whereas I would expect it to check the dynamic behavior first and fall back to the actual members only if it can’t find anything dynamic.

Quite annoying.
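If you actually want the dynamic path taken, the member must not be statically declared on the type at all. A sketch (the class name here is mine):

```csharp
using System;
using System.Dynamic;

public class LessStupid : DynamicObject
{
    // No real Age property here, so the binder has nothing static
    // to bind to and falls through to TryGetMember.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        result = 1;
        return true;
    }
}

class Demo
{
    static void Main()
    {
        dynamic d = new LessStupid();
        Console.WriteLine(d.Age); // prints 1
    }
}
```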


Your ctor says that your code is headache inducing: Explanation

I was pointed to this codebase, as a good candidate for review. As usual, I have no contact with the project owners, and I am merely using the code as a good way to discuss architecture and patterns.

It starts with this class:

image_thumb

Okay, this codebase is going to have the following problems:

  • Complex and complicated
  • Hard to maintain
  • Hard to test
  • Probably contains a lot of code “best practices” that are going to cause a lot of pain

And I said that this is the case without looking at any part of the code except for this constructor. How can I be so certain of that?

Put simply, with 9 dependencies, and especially with those kinds of dependencies, I can pretty much guarantee that this class already violates the Single Responsibility Principle. It is just doing too much; it is too complex and too fragile.

I shudder to think what is involved in testing something like that. Now, to be fair, I looked at the rest of the codebase, and it seems like I caught it in a state of flux, with a lot of stuff still not implemented.

Nevertheless… this is a recipe for disaster, and I should know; I have gone ctor-happy more than once, and I learned from it.

And here is the obligatory self-reference:

image

And yes, this does give me a headache, too.


Your ctor says that your code is headache inducing: Introduction

I was pointed to this codebase, as a good candidate for review. As usual, I have no contact with the project owners, and I am merely using the code as a good way to discuss architecture and patterns.

It starts with this class:

image

Stop right here!

Okay, this codebase is going to have the following problems:

  • Complex and complicated
  • Hard to maintain
  • Hard to test
  • Probably contains a lot of code “best practices” that are going to cause a lot of pain

Tomorrow, I’ll discuss why I had that reaction in detail, then dive into the actual codebase and see if I am right, or just have to wipe a lot of egg off my face.


Gilad Shalit is back

I don’t usually do posts about current events, but this one is huge.

To celebrate the event, you can use the following coupon code: SLT-45K2D4692G to get a 19.41% discount (Gilad was captive for 1,941 days) on all our profilers:

Hell, even our commercial support for NHibernate is participating.

Please note that any political comment to this post that I don’t agree with will be deleted.

System Users and Application Users

This is a question that comes up relatively often in the RavenDB mailing list. How do I handle multiple users with RavenDB? Does it support multiple users? Does it support the Membership Provider?

Those questions usually confuse a very key concept regarding users. Whose users are they?

In particular, we need to make a distinction between System Users and Application Users. Despite using the same term for both, there is actually very little connection between the two.

Here is an example of a System User:

<connectionStrings>
    <add name="RavenDB" connectionString="Url=http://scotty.ravendb.net;user=beam;password=up"/>
</connectionStrings>

As you can probably surmise, this is a connection string, and the user is ‘beam’. This user is a System User: if you call the Ops Team and ask them why the password expired, they can help you.

A System User controls access to external resources, and usually you have very few of those. They control things like what parts of the disk you can write to, what databases you can connect to, etc. For the most part, they aren’t in your control; you don’t manage them, and neither does your application.

In contrast to that, here is a great example of an Application User:

image

An Application User is unique to its application. It is usually manifested as a document (or a database row) and doesn’t have any existence beyond that. If you called the Twitter Ops Team and told them that the RavenDB account password needs resetting, they would be pissed that you were wasting their time.

This distinction is important, because it implies a lot about how we use those two different types of users.

System Users are used… well, for the system. Application Users are the actual users using the system. Very rarely are they one and the same. Usually our applications use service accounts, and any security checks for what an Application User can do are implemented as part of the business logic, not by setting ACLs.

Don’t confuse the two, despite the common name.

And coming back all the way to the original question. RavenDB comes with the notion of System Users via Windows Auth and OAuth, and it helps with Application Users using the Authorization Bundle. But you really don’t want to use the membership API, regardless of the underlying storage.


Challenge: Minimum number of round trips

We are working on creating a better experience for RavenDB & Sharding, and that led us to the following piece of code:

shardedSession.Load<Post>("posts/1234", "post/3214", "posts/1232", "posts/1238", "posts/1232");

And the following Shard function:

public static string[] GetAppropriateUrls(string id)
{
    switch (id.Last())
    {
        case '4':
            return new[] { "http://srv-4", "http://srv-backup-4" };

        case '2':
            return new[] { "http://srv-2" };

        case '8':
            return new[] { "http://srv-backup-4" };

        default:
            throw new InvalidOperationException();
    }
}

Write a function that would make the minimum number of queries to load all of the posts from all of the servers.
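For readers who don’t want to work it out themselves, here is one possible shape of an answer: treat it as a set cover problem and greedily pick, at each step, the server that can satisfy the most ids still outstanding. A sketch, not guaranteed minimal in every pathological case, though it is minimal for the sample above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class ShardBatcher
{
    // Returns one batched request per server: url -> ids to load from it.
    public static Dictionary<string, List<string>> PlanRequests(
        IEnumerable<string> ids, Func<string, string[]> getAppropriateUrls)
    {
        var remaining = new HashSet<string>(ids);
        var candidates = remaining.ToDictionary(id => id, getAppropriateUrls);
        var plan = new Dictionary<string, List<string>>();

        while (remaining.Count > 0)
        {
            // Pick the server that covers the most ids we still need.
            var best = candidates
                .Where(kvp => remaining.Contains(kvp.Key))
                .SelectMany(kvp => kvp.Value.Select(url => new { url, id = kvp.Key }))
                .GroupBy(x => x.url)
                .OrderByDescending(g => g.Count())
                .First();

            plan[best.Key] = best.Select(x => x.id).ToList();
            foreach (var x in best)
                remaining.Remove(x.id);
        }
        return plan;
    }
}
```

For the sample ids, srv-backup-4 can serve the three ids ending in ‘4’ and ‘8’, and srv-2 serves the rest, so the plan comes out to two round trips.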


Reminder, RavenDB and NHibernate courses are upcoming in New York

This is a reminder that the RavenDB and NHibernate courses are coming to New York at the end of this month:

There are 2 places left for the RavenDB course and 1 for the NHibernate course, so if you want to register, hurry up or miss out.


Peeking behind the curtains: The new RavenDB Management Studio

We did a fair amount of work on this, and I think that it is much nicer. Take a look at the home page:

image

And collections:

image

The document edit UI was also improved:

image

Those are just cosmetic changes, so far, matching the new look and feel of the upcoming RavenDB site and freshening things up.

We also implemented a bunch of new features. Here is the new index edit page, which gives you access to the full power of RavenDB indexing system.

image

Let us see something that is significantly cooler: we implemented IntelliSense for querying:

image

image

Now you can write queries much more easily, because this will actually look at the database data and suggest the right items to you.

For operational support, we have import/export directly in the management studio:

image

And staying with operations, we expose the recent log entries directly in the UI, making it very easy to figure out what is actually going on:

image

Well, what do you think?


What is wrong with this code?

Calling the Compare method will (sometimes) crash your entire application in a very rude fashion. Why?

static bool Compare(byte[] b1, byte[] b2)
{
    IntPtr retval = memcmp(b1, b2, new IntPtr(b1.Length));
    return retval == IntPtr.Zero;
}

[DllImport("msvcrt.dll")]
static extern IntPtr memcmp(byte[] b1, byte[] b2, IntPtr count);

In fact, there are actually two different problems that I can see here. The easy one would consistently crash; the hard one would usually pass, but sometimes crash you in apparently random ways.
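Without spoiling the answer: if all you need is byte-array equality, a purely managed comparison sidesteps the P/Invoke hazards entirely. A simple sketch (for very large arrays you might want something vectorized, but correctness comes first):

```csharp
using System;

public static class SafeCompare
{
    // Managed, bounds-checked byte-array equality; no native code involved.
    public static bool Compare(byte[] b1, byte[] b2)
    {
        if (ReferenceEquals(b1, b2))
            return true;
        if (b1 == null || b2 == null || b1.Length != b2.Length)
            return false;
        for (var i = 0; i < b1.Length; i++)
            if (b1[i] != b2[i])
                return false;
        return true;
    }
}
```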


Full Text Search takes you only so far

A few weeks ago I had a really interesting engagement with a customer. They were using RavenDB to do some interesting searches, and eventually they hit a wall with what they were trying to do.

For simplicity’s sake, we will say that the customer wanted to allow users to search for books. The scenario is something like this (totally different domain, obviously), and the client isn’t Amazon; they are just a good place to get the screenshot from:

image

Sure, the suggest feature is really nice, but what the customer really cared about is being able to search on the whole set of options.

In their field, people usually write the book name using one of the following formats:

  • Author First Name, Author Last Name – Book Title, Year
  • Year, Book Title, Author Last Name
  • Author Last Name, Book Title, Year

And a bunch of other options.

Also, they want to offer a free text search option.

Also, it had to be fast. They already had an existing system that worked, but had unacceptably high latency for most queries and had… issues under load. The first approach they tried was just moving to RavenDB, enabling full text search and seeing what it got them. It got them something, but not nearly enough.

When I started looking at the problem, I had several recommendations; none of them had much of anything to do with full text search. They were mostly around just being smarter in understanding the user.

To start with, given that most of the information was in one of a small number of formats, there was really no reason not to build a parser for that information. When you actually know what fields you are looking for, you can provide much better results for the user than if you are just doing a brute force full text search.
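The parsing itself can be quite modest. As an illustration of the idea, here is a sketch that handles just one of the formats listed above; the BookQuery shape and the class name are my own, hypothetical, and a real parser would cover all the known formats:

```csharp
using System;
using System.Text.RegularExpressions;

// Result of parsing a search term into structured fields (illustrative).
public class BookQuery
{
    public string Author, Title;
    public int Year;
    public bool Success;
}

public static class BooksQueryParser
{
    // Matches the "Author Last Name, Book Title, Year" format.
    static readonly Regex AuthorTitleYear =
        new Regex(@"^\s*(?<author>[^,]+),\s*(?<title>[^,]+),\s*(?<year>\d{4})\s*$");

    public static BookQuery Parse(string input)
    {
        var m = AuthorTitleYear.Match(input ?? "");
        if (!m.Success)
            return new BookQuery { Success = false }; // caller falls back to full text search

        return new BookQuery
        {
            Success = true,
            Author = m.Groups["author"].Value.Trim(),
            Title = m.Groups["title"].Value.Trim(),
            Year = int.Parse(m.Groups["year"].Value)
        };
    }
}
```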

So, instead of issuing a query like this:

RavenSession.Query<Books_FullText.Result, Books_FullText>()
   .Search(x=> x.Result, searchTermFromUser)
   .ToList();

Which can work, but can’t really take advantage of your knowledge of the domain and the users, you will do something like this:

var parseResult = new BooksQueryParser(Context).Parse(searchTermFromUser);
if( parseResult.Success )
{
  var q = RavenSession.Query<Books_FullText.Result, Books_FullText>();
  parseResult.ApplyOn(q);
  // would do things like
  // q.Search(x=>x.Title , "the lost fleet");
  // q.Search(x=>x.Author, "jack campbell");
  return q.ToList();
}
else // fall back, do a full text search, because there isn't anything else to do
{
  return RavenSession.Query<Books_FullText.Result, Books_FullText>()
   .Search(x=> x.Result, searchTermFromUser)
   .ToList();
}

RavenDB can’t do that for you. It can provide awesome full text support, but if you guide it in this manner, it will be tremendously more helpful.

The next stage is to actually learn from your users. Whenever your users make a search, you are going to record it. In fact, you are going to track the entire interaction. It will end up looking something like this:

{ // searchInteractions/4833424
  "User": "users/93432",
  "Terms": [
    "the last feetl",
    "the lost fast",
    "the lost fleet"
  ],
  "FollowedTo": "books/40273498723"
}

In this case, the sample data shows typos, but in the customer scenario, those would be the user trying different ways to format the actual valid search, to find something that the system recognizes.

What is important is that if you can’t find a search result with high enough ranking (for example, if you failed to parse the search terms), you can now do several fairly intelligent things.

You can search for similar searches made by other users; there is a high likelihood that the same search term was tried before, and that the user then corrected their typos / formatting errors and found what they wanted. The next user who runs into this can benefit from that experience. You can also suggest “did you mean ?” to the user when you can’t find a good result for the search query.

Note that the interaction always ends when the user has selected an appropriate result. This is the user’s way of telling you, “this is what I meant”, and you should learn from it.

In all, I don’t think that either suggestion is truly ground-breaking, but together they can result in a huge leap for the usability of the search feature. And for that particular client, the search feature is Major.


Idle musing while commuting: The ownership index

While driving to work today, I started wondering what pieces of code are owned by someone, and how you can detect it. Owned means that they are the only one who can touch that code, whether it is by policy or simply because they are the only one with the skills / ability to do so.

I wonder if you can use the source control history to figure it out. Something like:

  • Find all files changes within the last year
  • Remove all files whose changes are over a period of less than two weeks (that usually indicates a completed feature).
  • Remove all the files that are modified by more than 2 people.
  • Show the result and the associated names.

That might be a good way to indicate a risky location, some place that only very few people can touch and modify.

I started to think about how to do this in Git, but I got lost. Anyone want to try and take that up?
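For what it’s worth, here is a rough sketch of the author-count part as a shell function over git; the two-week-window filter is left out, so treat this as a starting point rather than a finished tool:

```shell
# ownership_report: list files changed in the last year that were touched
# by at most two distinct authors (run inside a git repository).
ownership_report() {
    git log --since="1 year ago" --name-only --pretty=format: |
    sort -u |
    while read -r file; do
        [ -n "$file" ] || continue
        # Distinct authors who touched this file in the last year.
        authors=$(git log --since="1 year ago" --format='%an' -- "$file" | sort -u)
        count=$(printf '%s\n' "$authors" | wc -l)
        if [ "$count" -le 2 ]; then
            printf '%s: %s\n' "$file" "$(printf '%s' "$authors" | tr '\n' ',')"
        fi
    done
}
```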


Negative hiring decisions, Part II

Another case where a candidate completed a task at home and sent it to me, which resulted in a negative hiring decision, is this:

protected void Button1_Click(object sender, EventArgs e)
{
    string connectionString = @"Data Source=OFFICE7-PC\SQLEXPRESS;Integrated Security=True";
    string sqlQuery = "Select UserName From  [Users].[dbo].[UsersInfo] Where UserName = ' " + TextBox1.Text + "' and Password = ' " + TextBox2.Text+"'";
    
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(sqlQuery, connection);
        connection.Open();
        SqlDataReader reader = command.ExecuteReader();
        try
        {
            while (reader.Read())
            {
           
            }
        }
        finally
        {
            if (reader.HasRows)
            {
                reader.Close();
                Response.Redirect(string.Format("WebForm2.aspx?UserName={0}&Password={1}", TextBox1.Text, TextBox2.Text));
            }
            else
            {
                reader.Close();
                Label1.Text = "Wrong user or password";
            }
        }
    }
}

The straw that really broke the camel’s back in this case was the naming of WebForm2. I could sort of figure out the rest, but not bothering to give a real name to the page was over the top.

Spiking RavenDB Management Studio: Design decisions

We got a lot of compliments on the RavenDB management UI, which is only fair; we invested a lot of time and money in it. We also got a fair amount of bug reports about it.

That is fair as well; it has got its problems. The major one among them is that what I thought we wanted and what we actually need are two very different things. I wanted something that could do a lot, would be very dynamic, support extensions and plugins, do… you get the idea.

In practice, we never use any of those things, and supporting them cost us a lot in terms of complexity and performance. So I decided to spike a complete reboot of the RavenDB Management Studio UI.

The goals are to create an architecture with the following properties:

  • Stupid, stupid, stupid! I don’t do UI very well, so the architecture should be there to support me (and others) when we want to modify / play with things.
  • Blind monkey should be able to contribute code!
  • Big Refresh button on the screen that would refresh everything on the screen.
  • No caching of views (currently a big problem for us)

The basic idea is that we want to make it very easy for users to contribute code / fixes. That can only happen if we are actually working hard to make sure that this is the case. I am going to post more about the actual design decisions relating to this.

This means that we need strong conventions support. It means that the architecture should be obvious enough that non-expert users can get into it very rapidly.


Is Node.cs a cure for cancer?

This is mainly a tongue-in-cheek post, in reply to this guy. I decided to take his scenario and try it using my Node.cs “framework”. Here is the code:

 

public class Fibonaci : AbstractAsyncHandler
{
    protected override Task ProcessRequestAsync(HttpContext context)
    {
        return Task.Factory.StartNew(() =>
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write(Fibonacci(40).ToString());
        });
    }

    private static int Fibonacci(int n)
    {
        if (n < 2)
            return 1;
        
        return Fibonacci(n - 2) + Fibonacci(n - 1);
    }
}

We start by just measuring how long it takes to serve a single request:

$ time curl http://localhost/Fibonaci.ashx
165580141
real    0m2.763s
user    0m0.000s
sys     0m0.031s

That is 2.7 seconds for a highly compute bound operation. Now, let us see what happens when we use Apache Benchmark to test things a little further:

ab.exe -n 10 -c 5 http://localhost/Fibonaci.ashx

(Make a total of ten requests, maximum of 5 concurrent ones)

And this gives us:

Requests per second:    0.91 [#/sec] (mean)
Time per request:       5502.314 [ms] (mean)
Time per request:       1100.463 [ms] (mean, across all concurrent requests)

Not bad, considering the best that node.js (on a different machine and hardware configuration) was able to do was 0.17 requests per second.

Just for fun, I decided to try it with a hundred requests, 25 of them concurrent.

Requests per second:    0.97 [#/sec] (mean)
Time per request:       25901.481 [ms] (mean)
Time per request:       1036.059 [ms] (mean, across all concurrent requests)

Not bad at all.


RavenOverflow - building a StackOverflow clone with RavenDB

Another in our series of videos of customer interactions. This time, we take a look at the challenges involved in building a RavenDB powered Stack Overflow site. I think you’ll find it very interesting.

Building StackOverflow clone with RavenDB

In this webcast Ayende works with Justin on modeling a StackOverflow website clone with RavenDB. The topics discussed in this 1hr video are:

  • Map/Reduce indexes
  • Modeling tags
  • Root aggregates
    • Metadata
    • Active tags
    • Facets
    • The suggest feature
  • Performance:
    • Built-in caching
    • Lazy loading
    • Aggressive caching
  • RavenDB profiler

The new RavenDB beta docs: http://docs.ravendb.net

The project we are talking about: https://github.com/PureKrome/RavenOverflow


RavenDB Webinar #2–Modeling

The feedback that we got about the webinar was amazing, and it is highly likely that we will keep doing them over the long term. In the meantime, we just scheduled a new webinar for 2 days from now (October 6th, 2011).

This time we tried to make sure that it would be at an hour comfortable for people from the States as well.

You can register here: https://www2.gotomeeting.com/register/905183322

Note: There are only 100 available spots, and who gets into the webinar will be decided during the actual webinar, not by who registered first.

And yes, we will make every attempt to remember to hit the record button this time.


Full Throttle at TekPub

Rob Conery pinged me a few weeks back, wanting to do another TekPub episode. So I show up, very early in the morning, and he starts throwing curve balls at me:

The website is down and we aren’t accepting orders, DO SOMETHING ABOUT THIS NOW!

And that was in the first minute or so!

Wait a minute! I thought we were doing a show about…

Oh, this is the Full Throttle episode…

Basically, Rob is just piling on requirements and I am trying to play catch-up. It is just over an hour, but I think it should be interesting to watch. I really liked Rob’s post about the show. But you can also skip this and go directly to the actual show.

Tags:

Published at

Originally posted at

Comments (5)

Hiring Questions–The phone book–responding to commentary

Wow, this post got a lot more attention than I thought it would. Most of it was along the same lines, so I’ll answer it here.

Anyone suggesting SQLite, Excel, Access, Esent, Embedded RavenDB, Munin, Embedded FireBird, MS SQL CE, DB4O or anything like it – that isn’t the purpose of the question. I am not trying to figure out if the candidate knows about embedded databases.

Performance isn’t much of a concern; we expect up to several thousand entries per phonebook, but not beyond that, and speed should be in the human response time range (hundreds of milliseconds).

I rejected JSON / XML file formats because I wanted to make the task harder than just using the built-in Linq API and serializing to a file. 

Out of this question, I want actual code that I can try out, not just some high-level design. I estimate that if you know what you are doing, this should take less than half an hour. In high school, I think it took me about two hours, and that was in unmanaged land.

Some people questioned what the purpose of this question is, under what scenarios it is valid, etc.

Put simply, it is valid because it tests a wide range of a candidate’s abilities. I don’t feel that I need to go into world-building to set up a scenario for an interview question.

Mike McG put it beautifully:

He spells out the requirements for a basic database engine with indexing. It's a multifaceted problem that can expose a lot about a candidate in their solution. Are concerns separated logically? How is performance addressed against disk I/O? How is code correctness validated (e.g. testing)? To submit a project that just wraps another database is disingenuous. He's looking for people that can solve problems head-on, not just pass the buck.

Exactly. More to the point, it forces a candidate to actually do a fairly complex task that can still be done in a short amount of time. It shows me how they think, and whether they have any idea how computers actually work. If you can’t complete this task, you don’t understand basic file IO. That means that you might be a great front-end developer, but I need someone who can do more than that.
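To give a feel for the expected scale of an answer, here is the rough shape of a minimal solution. The names and the tab-separated file format are mine, purely illustrative; a real submission would also need deletes, better durability, and tests:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

// A bare-bones file-backed phone book: an append-only log on disk,
// with a sorted in-memory index rebuilt on load. Linear prefix scan
// is fine at the "several thousand entries" scale in the question.
public class PhoneBook
{
    private readonly string path;
    private readonly SortedDictionary<string, string> byName =
        new SortedDictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    public PhoneBook(string path)
    {
        this.path = path;
        if (File.Exists(path))
            foreach (var line in File.ReadAllLines(path))
            {
                var parts = line.Split('\t');
                if (parts.Length == 2)
                    byName[parts[0]] = parts[1]; // later entries win
            }
    }

    public void Add(string name, string number)
    {
        byName[name] = number;
        File.AppendAllText(path, name + "\t" + number + Environment.NewLine);
    }

    public IEnumerable<KeyValuePair<string, string>> StartingWith(string prefix)
    {
        return byName.Where(kvp =>
            kvp.Key.StartsWith(prefix, StringComparison.OrdinalIgnoreCase));
    }
}
```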
