How to get REALLY fast benchmarks
I was very excited when I tried this code; it produced amazing performance:
private static void LoadDataFor(string searchPattern)
{
    foreach (var file in Directory.GetFiles("Docs", searchPattern).OrderBy(x => x))
    {
        var sp = Stopwatch.StartNew();
        var httpWebRequest = (HttpWebRequest)WebRequest.Create("http://localhost:8080/bulk_docs");
        httpWebRequest.Method = "POST";
        using (var requestStream = httpWebRequest.GetRequestStream())
        {
            var readAllBytes = File.ReadAllBytes(file);
            requestStream.Write(readAllBytes, 0, readAllBytes.Length);
        }
        Console.WriteLine("{0} - {1}", Path.GetFileName(file), sp.Elapsed);
    }
}
Can you figure out what the fatal flaw in this code is?
Comments
Uhm... perhaps you're not flushing the request stream, and thus you're only measuring how long it takes to write the data into the stream's internal buffer?
What @Marco said, plus you're not checking the response so you don't know if the request was successful.
Forgot to specify HttpWebRequest.ContentLength before writing to the request stream?
msdn.microsoft.com/.../...quest.contentlength.aspx
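A sketch of what that suggestion would look like, reusing the variable names from the code in the post:

    var readAllBytes = File.ReadAllBytes(file);
    httpWebRequest.ContentLength = readAllBytes.Length; // set before opening the request stream
    using (var requestStream = httpWebRequest.GetRequestStream())
    {
        requestStream.Write(readAllBytes, 0, readAllBytes.Length);
    }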
Uh oh... nope, sorry, made a mistake. The request stream is within a using block, so you're actually flushing it! It's Sunday morning here in Italy, and my brain started in Safe Mode today, LOL ;-)
Another try (my last comment was flagged as spam): It might be that the server doesn't get the request until you invoke GetResponse due to buffering.
Not related to the performance issue: Stopping the Stopwatch might be a good idea, or directly encapsulating the whole measurement. I did something similar here:
http://tinyurl.com/39op9j7
If you're trying to benchmark the overall performance of loading all the documents in the directory, then I'm guessing that the 'var sp = Stopwatch.StartNew();' should be outside the foreach loop.
Otherwise the elapsed time is going to get reset on each iteration.
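A sketch of that arrangement (timing the whole run rather than each individual request):

    var sp = Stopwatch.StartNew();
    foreach (var file in Directory.GetFiles("Docs", searchPattern).OrderBy(x => x))
    {
        // ... build and send the request for this file ...
    }
    sp.Stop();
    Console.WriteLine("Total: {0}", sp.Elapsed);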
Marco was right: you need to call GetResponse to perform the upload. Otherwise, nothing is going to be uploaded.
Should this httpWebRequest be created in the loop?
As people pointed out, GetResponse is what actually fires the HTTP request.
@Leonard - you should not reuse HttpWebRequest instances. Consecutive calls to GetResponse() will return a cached result of the first call, without re-issuing an HTTP request.
Btw, I'd use WebClient as long as you don't need to fiddle too much with the returned response in terms of headers, certificates, etc.
Haven't tested it much (would definitely only work for sync, for instance), but:
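A rough sketch of the WebClient approach (a reconstruction, not the commenter's original snippet; it assumes the same bulk_docs endpoint as the post):

    using (var client = new WebClient())
    {
        var data = File.ReadAllBytes(file);
        // UploadData sends the POST and blocks until the server replies,
        // so the elapsed time includes the actual round-trip.
        client.UploadData("http://localhost:8080/bulk_docs", "POST", data);
    }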
You initialized your variable inside the foreach loop, so technically you are measuring only the last request and none of the file access or any of the other requests. The Stopwatch should be started before the loop:
var sp = Stopwatch.StartNew();
foreach (var file in Directory.GetFiles("Docs", searchPattern).OrderBy(x => x))
You never hit Stop on the Stopwatch.
The HTTP request is not submitted until the GetResponse() method of the HttpWebRequest class is invoked. The code in the blog post writes the data to the local stream and discards it without initiating the actual web request.
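A sketch of the missing piece, placed after the using block that writes the request stream (error handling omitted):

    using (var response = (HttpWebResponse)httpWebRequest.GetResponse())
    {
        // GetResponse() is what actually sends the request and waits for the
        // server; disposing the response releases the connection for reuse.
        Console.WriteLine("{0} - {1} ({2})", Path.GetFileName(file), sp.Elapsed, response.StatusCode);
    }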
The statement "Directory.GetFiles("Docs", searchPattern).OrderBy(x=>x)"
By default, the GetFiles would usually return the files within the alphabetical order. Also, "GetFiles" is a blocking operation waiting for the entire array to be built up.
Therefore, the Stopwatch would only start measuring after the more expensive operations began. This would also only measure the very last cycle in the foreach loop... not the overall response time of the ENTIRE method.
You must call the Stream.Close method to close the stream and release the connection for reuse. Failure to close the stream causes your application to run out of connections.
@Justin I think what you said above is correct, but when using Directory.GetFiles the order of the returned file names is not guaranteed, so using OrderBy is necessary.
You have to either consume or close the ResponseStream; the very least that you have to do is call GetResponse().Close(). The "fatal flaw" is that you are not closing the underlying HTTP connection: at some point you are going to run out of available connections and be blocked waiting for an HTTP connection to time out. Testing, and even "production" code, will work given a small number of files.
I'd say it's the missing requestStream.Close within the loop. My guess would be that the Close operation would block the thread until the upload was complete, so without it (aside from the risk of running out of connections) your timer wouldn't reflect the actual upload performance. Hence, a false report of smashing performance.
I would use DirectoryInfo instead of Directory to get the file listing. It does fewer security checks; combined with FileInfo.OpenText, it should speed things up when processing a significant number of files...
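A minimal sketch of that suggestion:

    var docs = new DirectoryInfo("Docs");
    foreach (var file in docs.GetFiles(searchPattern).OrderBy(x => x.Name))
    {
        // file is a FileInfo here, so its full path and metadata come with it
        var readAllBytes = File.ReadAllBytes(file.FullName);
        // ... post the bytes as before ...
    }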
Would you care to post the solution?
Magesh,
KAE figured that out :-)