Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969


Is Node.cs a cure for cancer?


This is mainly a tongue-in-cheek post, in reply to this guy. I decided to take his scenario and try it using my Node.cs “framework”. Here is the code:

 

public class Fibonaci : AbstractAsyncHandler
{
    protected override Task ProcessRequestAsync(HttpContext context)
    {
        return Task.Factory.StartNew(() =>
        {
            context.Response.ContentType = "text/plain";
            context.Response.Write(Fibonacci(40).ToString());
        });
    }

    private static int Fibonacci(int n)
    {
        if (n < 2)
            return 1;
        
        return Fibonacci(n - 2) + Fibonacci(n - 1);
    }
}

We start by just measuring how long it takes to serve a single request:

$ time curl http://localhost/Fibonaci.ashx
165580141
real    0m2.763s
user    0m0.000s
sys     0m0.031s

That is 2.7 seconds for a highly compute bound operation. Now, let us see what happens when we use Apache Benchmark to test things a little further:

ab.exe -n 10 -c 5 http://localhost/Fibonaci.ashx

(Make a total of ten requests, maximum of 5 concurrent ones)

And this gives us:

Requests per second:    0.91 [#/sec] (mean)
Time per request:       5502.314 [ms] (mean)
Time per request:       1100.463 [ms] (mean, across all concurrent requests)

Not bad, considering that the best node.js could manage (on a different machine and hardware configuration) was 0.17 requests per second.

Just for fun, I decided to try it with a hundred requests, 25 of them concurrent.

Requests per second:    0.97 [#/sec] (mean)
Time per request:       25901.481 [ms] (mean)
Time per request:       1036.059 [ms] (mean, across all concurrent requests)

Not bad at all.


Comments

Ferret Chere

Call me naive, but isn't this completely pointless/meaningless when your benchmarks are being run on a completely different machine than the "node.js is cancer" writer's?

Merouane Atig

It seems to me that you arrive at the same conclusion as he does in his new post (http://teddziuba.com/2011/10/straight-talk-on-event-loops.html): Use threads!

Khalid Abuhakmeh

That "Node.js is Cancer" article is hilarious. I understand what he's saying, but the Node.js community is still fledgling. He pretends that the people behind UNIX got it right the first time, which I'm sure they didn't. Any issue that Node.js has will probably be solved in time as that community grows. Let's not forget that Node.js is just a tool in your developer toolbox. It isn't meant to be the silver bullet for all your werewolf-killing needs.

Still... very funny. If you haven't read it, it's worth reading.

Alexei K

That whole thing was so hilarious. The responses to the original troll article are even worse than the original troll. Here is a distilled summary, for those who missed it:

http://www.unlimitednovelty.com/2011/10/nodejs-has-jumped-shark.html

Also, I've learned a new word: "roflscale". It is now my mission to find a reason to use it in the workplace.

Nican

This is hilarious.

https://github.com/glenjamin/node-fib

So far, I have counted NodeJS, Python, PHP, C#, Ruby, and Haskell versions of the Fibonacci server.

Uriel Katz

The problem is that people don't understand why they need node.js and why it works very well for certain cases (file upload, chatting). Using event loops without threads for a CPU-intensive workload is stupid and mostly useless.

BUT if your workload is mostly IO bound (like 99.999% of webapps are), then using event-loop-based IO is very good and necessary in some cases, like chat or file serving/uploading with many concurrent users.

At work (Binfire.com) I designed (in Python with gevent) a file upload/download service that proxies cloud files, and it could handle a DDoS attack from 200 IPs (on top of normal traffic) trying to download a 200MB file (the case of many concurrent long-lived connections) while using 45MB of memory.

If it were written with 1 thread per client, it would use more memory (a 1MB thread stack instead of a 4KB microthread stack) and more CPU (real context switches instead of low-overhead getcontext/setcontext).

So the lesson here is quite old: use the right tool for the job!

tobi

I understand neither Ted's post nor yours. Both of you take the worst possible scenario for node.js and use that to justify ... something.

The reason why node.js is total crap is different: Your code gets pulled inside out. You cannot even add logging later on without needing to refactor the whole call tree. Logging needs to do IO, so it needs to be async.

node.js is a very special purpose lib for chats and file transfers like Uriel said. I would just use an async actionmethod/httphandler with asp.net for that. It is really embarrassingly useless for production.

Demis Bellot

Ted's troll post is annoying; he focuses on known event loop design limitations and uses them to discredit the entire technology.

Here's the node.js response post showing how to handle high CPU-load tasks like video encoding in node.js: http://blog.nodejs.org/2011/10/04/an-easy-way-to-build-scalable-network-programs/

Rafal

Thanks for linking Ted's blog, it's great.

Felice Pollano

Just for the citation: Fibonacci has two 'c' ;)


Comments have been closed on this topic.
