I started the day at 5:30 AM, because we had to go to the venue and get everything ready. It is now 1:30 AM the next day, and I am still stoked.
I took this picture at around 8 AM, when people started to queue up to go into the conference:
For that matter, you can see a few other photos from the conference here. I'll do a retrospective on the conference when I have time to breathe. But for now, I can report that the RavenDB Hackathon was quite a success; we have a cool new feature mostly implemented. You can see it here: https://github.com/ayende/ravendb/tree/duco
And yes, I'll talk about that in the future as well. But right now, I'm going to have a db capable of managing 100 billion records tomorrow, live on stage, so I probably need to get some sleep…
Rob Ashton is a great developer. We invited him to Hibernating Rhinos as part of his Big World Tour. I had the chance to work with him on RavenDB in the past; I really liked working with him, and I liked the output even better. So we prepared some stuff for him to do.
This is the status of those issues midway through the second day.
And yes, I am giddy.
A while ago I had to promise myself that I wouldn't travel so much (translated: all the time), which is probably why I won't be at this event.
There are some quite interesting talks scheduled there. In particular, I would note:
CQRS presented by Greg Young – I don’t always agree with Greg, but it is always a pleasure to learn from him.
Embracing Async Programming presented by Chris Tavares – Because you really need to learn all about how to use async properly and effectively.
Modern Architecture presented by Ted Neward – Ted is a great guy, and he usually has a good way of looking at problems, generally from a wildly different angle than I would take.
I am going to be in the Professional .NET 2012 conference in Vienna, Austria next month, and I think we can plan for a special surprise.
Sep 10 is going to be the cut-off date for new features in the next release of RavenDB (feature freeze), and in my talk, I am going to take you on a tour of not only what RavenDB is, but all the goodies that you can expect from the next version.
See you there…
This post is in reply to Hadi’s post. Please go ahead and read it.
Done? Great, so let me try to respond, this time, from the point of view of someone who regularly asks for patches / pull requests.
To make things more interesting, the project that I am talking about now is RavenDB which is both Open Source and commercial. Hadi says:
Numerous times I’ve seen reactions from OSS developers, contributors or merely a simple passer by, responding to a complaint with: submit a patch or well if you can do better, write your own framework. In other words, put up or shut up.
Hadi then goes on to explain exactly why this is a high barrier for most users.
- You need to familiarize yourself with the codebase.
- You need to understand the source control system that is used and how to send a patch / pull request.
And I would fully agree with Hadi that those are stumbling blocks. I can't speak for other people, but in our case, that is intentional.
Nitpicker corner here: I am speaking explicitly and only about features here. Bugs get fixed by us (unless the user has already submitted a fix as well).
Put simply, there is an issue of priorities here. We have a certain direction in which we want to take the project, and in many cases, users want things that are out of scope for us for the foreseeable future. Our options then become:
- Sorry, ain’t going to happen.
- Sure, we will push aside all the work that we intended to do to do your thing.
- No problem, we added that to the queue; expect it in 6 – 9 months, if we still consider it important then.
None of which is an acceptable answer from our point of view.
Case in point: facets support in RavenDB was something that was requested a few times. We never did it because it was out of scope for our plan (RavenDB is a database server, not a search server), and we weren't really sure how complex it would be or how to implement it. Basically, this was an expensive feature that wasn't in the major feature set that we wanted. The answer that we gave people was "send me a pull request for that".
To be clear, this is basically an opportunity to affect the direction of the project in a way you consider important. What ended up happening is that Matt Warren took up the task and created an initial implementation, which was then subject to intense refactoring and finally got into the product. You can see the entire conversation about this here. The major difference along the way is that Matt did all the research for this feature, and he had working code. From there, the balance changed. It was no longer an issue of expensive research and figuring out how to do it; it was an issue of having working code and refactoring it so it matched the rest of the RavenDB codebase. That wasn't expensive, and we got a new feature in.
Here is another story, a case where I flat out didn’t think it was possible. About two years ago Rob Ashton had a feature suggestion (ad hoc queries with RavenDB). Frankly, I thought that this was simply not possible, and after a bit of back and forth, I told Rob:
Let me rephrase that.
Dream up the API from the client side to do this.
Rob went away for a few hours, and then came back with a working code sample. I had to pick my jaw off the floor using both hands. That feature got a lot of priority right away, and is a feature that I routinely brag about when talking about RavenDB.
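For those who never saw it, this is the feature that became dynamic (ad hoc) queries: you just query, without defining an index up front, and the server takes care of the rest. A rough sketch of the shape (my illustration here, not Rob's original sample; store is assumed to be an initialized IDocumentStore):

    using (var session = store.OpenSession())
    {
        // No index is specified anywhere; the server creates
        // a temporary index on the fly to satisfy the query.
        var adults = session.Query<User>()
            .Where(x => x.Age > 18)
            .ToList();
    }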
But let me come back again to the common case: a user requests something that isn't in the project plan. Now, remember, requests are cheap. From the point of view of the user, it doesn't cost anything to request a feature. From the point of view of the project, it can cost a lot. There is research, implementation, debugging, backward compatibility, testing and continuous support associated with just about any feature you care to name.
And our options whenever a user makes a request that is outside the project plan are:
- Sorry, ain’t going to happen.
- Sure, we will push aside all the work that we intended to do to do your thing.
- No problem, we added that to the queue; expect it in 6 – 9 months, if we still consider it important then.
Or, we can also say:
- We don’t have the resources to currently do that, but we would gladly accept a pull request to do so.
At that point, the user is faced with a choice. He can either say:
- Oh, well, it isn’t important to me.
- Oh, it is important to me, so I had better do it myself.
In other words, it shifts the prioritization to the user, based on how important that feature is to them.
We recently got a feature request to support something like this:
session.Query<User>()
    .Where(x => searchInput.Name != null && x.User == searchInput.Name)
    .ToArray();
I'll spare you the details of just how complex it is to implement something like that (especially when the condition can also be something like searchInput.Age > 18). But the simple workaround for it is:
var q = session.Query<User>();
if (searchInput.Name != null)
    q = q.Where(x => x.User == searchInput.Name);
q.ToArray();
Supporting the first form is complex, while there is a simple workaround that the user can apply (and I like the second option from the point of view of readability as well).
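In fact, if a user runs into this a lot, the conditional composition can be wrapped up once and reused. A minimal sketch (WhereIf is my own name for it, not something that ships with the RavenDB client):

    using System;
    using System.Linq;
    using System.Linq.Expressions;

    public static class QueryableExtensions
    {
        // Applies the predicate only when the condition holds, keeping
        // the null check on the client instead of inside the LINQ provider.
        public static IQueryable<T> WhereIf<T>(this IQueryable<T> source,
            bool condition, Expression<Func<T, bool>> predicate)
        {
            return condition ? source.Where(predicate) : source;
        }
    }

Which turns the workaround into a one liner: session.Query<User>().WhereIf(searchInput.Name != null, x => x.User == searchInput.Name).ToArray();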
That sort of thing gets a "pull request for this feature would be appreciated" reply, because the alternative is to slam the door in the user's face.
The following is quite annoying. I am trying to make an authenticated request to a web server. In this instance, we are talking about communication from one IIS application to another IIS application (the second application is RavenDB hosted inside IIS).
The part that drives me CRAZY is that I am getting this error when I am trying to make an authenticated request, but using the exact same code and credentials from another machine, it works just fine.
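For context, the request code itself is nothing special, just the usual WebRequest pattern, roughly like this (a sketch with a hypothetical URL, not the actual code):

    using System.Net;

    var request = (HttpWebRequest)WebRequest.Create(
        "http://raven-server:8080/docs/users/1");
    // Flow the identity of the calling process to the server; remotely,
    // that identity is the IIS application pool account, not mine.
    request.Credentials = CredentialCache.DefaultNetworkCredentials;
    using (var response = (HttpWebResponse)request.GetResponse())
    {
        // read and process the response...
    }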
It took me a while to figure out the most important part: when running locally, I was running under my own account; when running remotely, the application ran under the IIS account. The really annoying part was that even when running on IIS locally, it still worked locally and failed remotely.
It took me even longer to figure out that the local IIS was configured to run under my own account as well.
Once that discovery was made, I was able to figure out what was wrong and implement a workaround (run the remote IIS site under a custom user account). I know that the actual problem is something relating to permissions on the certificate store, but I have no idea how, or, for that matter, which certificate.
This is plain old HTTP auth; no SSL, client certs, etc. I am assuming that this is raised because we are using Windows Auth, but I am not sure what is going on here with regard to which certificate I should grant to which user, or even how to do that.
I am going to be at Dallas Days of .NET in March next year. You can use the following link to get a discount if you order now: http://jointechies.eventbrite.com/?discount=OrenEini
This is going to be an interesting event, because there is one track in which I am going to be doing every other talk for 2 days. This is going to give me a wide enough scope to cover just about every topic that I am interested in, including some time to go in depth into several topics that I usually only have the chance to skim.
Originally posted at 11/3/2010
I am going to have an open session on the 18th Nov, discussing all the details and considerations that go into how I design, build, and create software.
You are welcome to come, it should be interesting.
This post is a reply for this post, you probably want to read that one first.
Basically, the problem is pretty simple: it is the chicken & egg problem. There is a set of problems where it doesn't matter; Rhino Mocks is a good example, where it doesn't really matter how many users the framework has. But there are projects where it really does matter.
A package management tool is almost the definition of the chicken & egg problem. Having a tool coming from Microsoft pretty much solves this, because you get a fried chicken pre-prepared.
If you look at other projects, you can see that the result has been interesting.
- Unity / MEF didn’t have a big impact on the OSS containers.
- ASP.Net MVC pretty much killed a lot of the interest in MonoRail.
- Entity Framework had no impact on NHibernate.
In NHibernate's case, I think it is mostly because it had already moved beyond the chicken & egg problem. In MonoRail's case, there wasn't enough of a difference on the outside, and most people bet on the MS solution. For Unity / MEF, there wasn't any push to use something else, because you didn't really take a dependency on it.
In short, it depends :-)
There are some projects that really need critical mass to succeed. And for those projects, having Microsoft get behind them and push is going to make all the difference in the world.
And no, I don’t really see anything wrong with that.
I just had to respond to this post, in which Davy Brion talks about the Ruby community. He had the following to say:
When i asked them about interesting resources to follow as a newbie Rubyist, they all gladly shared their suggestions. When i thanked them for it, they all replied stating that i should feel free to contact them if i had any more questions about whatever Ruby related. Seriously, can you imagine the few .NET heroes that we have responding to questions through email from people they don’t even know like that? I can’t. Hell, i know most of them don’t respond like that. The few that do are still trying to earn their MVP award or are too worried about renewing their MVP status.
Ignoring the MVP dig, allow me to explain exactly what is going on.
In the last 48 hours:
Those are all cold requests, from people I have never met, and all to my private email. Note that in most cases, there is a dedicated mailing list for the topic in question.
For that matter, the last two days have been decidedly quiet on the NHibernate front, so this represents a more realistic sample of what is going on:
And those are in addition to the business, private, mailing list and other email that I handle.
Putting it simply, there is too much traffic for me to welcome most cold questions with anything more than a pointer to the appropriate mailing list. This isn't about being rude or uncaring; it is about actually being able to do any work at all.
I’ll be speaking in JAOO 2010, giving an Introduction to RavenDB in the NoSQL track. This is going to be the first time that I am going to show RavenDB in a major conference, and I am just a tiny bit nervous. This is going to be interesting, because I am going to present to people who are experienced in NoSQL solutions.
In addition to my talk (obviously the highlight of the entire conference :-) ), there are other sessions that I really want to be at:
Rx: curing your asynchronous programming blues - Erik Meijer. This is something that has been popping into my sights for a while, but never long enough for me to sit down and really study it. So I think I'll take a shortcut through this session :-)
Lessons Learned in Large HTTP-Centric Systems – Jim Webber. There are two reasons to go to one of Jim's talks. The first is the content; the second is the actual presentation style. Take a look at some of his talks to see what I mean.
Building a Pet Store that will Survive Cyber - Cameron Purdy. This presentation interests me mostly because I don't believe that what is suggested can be delivered (virtually unlimited scaling in a generic fashion), so it will be very interesting to see what is going on there.
Where to put data - Michael T. Nygard. I usually learn new things from Michael, so I’m looking forward to seeing what he has to say about this.
And the conference gods have actually managed to set things up so I’ll be able to be in all of those sessions, and not be busy giving a parallel session.
I'll be doing another run of my NHibernate course on the 26th of April. We are talking about an intensive 3-day NHibernate workshop, where you will learn how to use NHibernate efficiently in your applications, saving time and effort when communicating with database storage. The last course I gave was quite a success, even if I say so myself.
If you can't make the 26th of April, I'll be giving the same course on the 17th of May, again in London.
And, in between, we have Progressive.NET as well, on the 12th - 14th of May; it was awesome last year, and I think it is going to be even more fun this year.
Along with the NHibernate course that I’ll be giving in London next month, I’ll be doing a free session about lessons learned from building the NHibernate Profiler.
I am going to talk about architecture, internal design (including showing off the code), distributed team, release per commit, making technical decisions based on business concerns, building real world application infrastructure, etc.
This is a free event, but the number of places is limited, so please register in advance.
Roy Osherove has a few tweets about commercial tools vs. free ones in the .NET space. I’ll let his tweets serve as the background story for this post:
The backdrop is that Roy seems to be frustrated with the lack of adoption of what he considers to be better tools when there are free tools that deal with the same problem, even if they are inferior to the commercial ones. The example that he uses is Final Builder vs. NAnt/Rake.
As someone who writes both commercial and free tools, I am obviously very interested in both sides of the argument. I am going to accept, for the purpose of the argument, that commercial tool X does more than free tool Y that deals with the same problem. Now, let us see what the motivations are for picking either one of them.
With a free tool, you can (usually) download it and start playing around with it immediately. With commercial products, you need to pay (usually after the trial is over), which means that in most companies, you need to justify yourself to someone, get approval, and generally deal with things that you would rather not do. In other words, the barrier to entry is significantly higher for commercial products. I actually did the math a while ago, and the conclusion was that good commercial products usually pay for themselves in a short amount of time.
But, when you have a free tool in the same space, the question becomes more complex. Roy seems to think that if the commercial product does more than the free one, you should prefer it. My approach is slightly different. I think that if the commercial product solves a pain point or removes friction that you encounter with the free product, you should get it.
Let us go back to Final Builder vs. NAnt. Let us say that it is going to take me 2 hours to set up a build using Final Builder and 8 hours to set up the same build using NAnt. It seems obvious that Final Builder is the better choice, right? But if I have to spend 4 hours to justify buying Final Builder, the numbers are drastically different. And that is a conservative estimate.
Worse, let us say that I am an open minded guy who has used NAnt in the past. I know that it would take ~8 hours to set up the build using NAnt, and I am pretty sure that I can find a better tool to do the work. However, doing a proper evaluation of all the build tools out there is going to take three weeks. Can I really justify that to my client?
As the author of a commercial product, it is my duty to make sure that people are aware that I am going to fix their pain points. If I have a product that is significantly better than a free product, but isn’t significantly better at reducing pain, I am not going to succeed. The target in the product design (and later in the product marketing) is to identify and resolve pain points for the user.
Another point that I want to bring up is the importance of professional networks in bringing information to us. No one can really keep track of all the things that are going on in the industry, and I have come to rely more & more on the opinions of the people in my social network to evaluate and consider alternatives in areas that aren't causing me acute pain. That allows me to stay on top of things and learn what is going on at an "executive brief" level, and to concentrate on the things that are acute to me, knowing that other people running into other problems will explore other areas and bring their results to my attention.
I might be speaking at QCon London, but I am not sure what about.
The requirements are:
This track intends to showcase some of the practitioners, tools and technologies to provide an awareness of something other than the Microsoft mantra for software development on .NET
Each talk should show at least one thing that is new or unusual for the masses on .NET to know or use and compare it to the status quo. It should provide some in depth examples or code around that comparison. In the cases where the speaker is the author of an OSS product also give a broader rationale and explanation of the tool and when it is best used.
My main issue is that there are too many topics to talk about, and I thought that I might put it on the blog and see what people are interested in.
The very first draft of Impleo (my CMS) was based on sound design principles. It had good separation between the different parts (it actually had 4 or 5 projects). At some point I took a look at the code and couldn't find the application itself. There was a lot of infrastructure, but nothing that I could actually point to and say: "This is the application logic".
I decided to take a different approach, created a new WebForms project, and started everything from scratch. I ported some of the code from the original solution, but mostly we just built new stuff. I stubbornly decided that I was going to apply YAGNI here, so I just scrapped everything that I usually do and started a typical web forms project: no IoC, no testing, nothing.
As expected, we initially had really good velocity, and if I had to hold my nose a bit during some parts, I was willing to go with that.
The next stage with Impleo, however, is to turn it from just a content management system into a website framework. The difference between the two is significant: while a CMS is just about content, a website framework allows you to add custom pages and application behavior. There is an added difficulty in that I want to use Impleo for multiple sites, doing different things.
That turns it from just a custom application into a platform, and the roles there are quite different. Just going with the approach that I had used so far would have been disastrous for that. I tried, and spiked some things, but none of them made me happy. I decided that I really needed to stop, and since I had a working version, I could "refactor" it into a better shape. I put refactor in double quotes intentionally.
My method consisted of creating a new MVC project, adding some integration tests, and then porting everything from the WebForms project to the MVC project. Overall, it was a pretty simple, if tedious, process.
The most difficult thing along the way was that I was doing this while flying, and for some stupid reason, I could either charge the laptop battery or use the laptop, so I had to make frequent stops to recharge.
If you head out to http://hibernatingrhinos.com/, you will see that I finally had the time to set up the corporate site. It is still very early, and I have a lot of content to add there, but it is a start.
Impleo, the CMS running the site, doesn't have any web based interface; instead, it is built explicitly to take advantage of Windows Live Writer and similar tools. The "interface" for editing the site is the MetaWeblog API. This means that in order to edit the site, there isn't any Wiki syntax to learn, or XML files to edit, or anything of this sort.
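To give a sense of how small that surface area is, MetaWeblog is just a handful of XML-RPC methods (newPost, editPost, getPost, and so on). A rough sketch of what a client binding could look like using the xml-rpc.net library (the endpoint URL and type names here are mine, for illustration only):

    using CookComputing.XmlRpc;

    public struct PostContent
    {
        public string title;
        public string description; // the HTML body that WLW produces
    }

    [XmlRpcUrl("http://example.com/metaweblog")] // hypothetical endpoint
    public interface IMetaWeblog : IXmlRpcProxy
    {
        [XmlRpcMethod("metaWeblog.newPost")]
        string NewPost(string blogid, string username, string password,
            PostContent post, bool publish);
    }

    // var proxy = XmlRpcProxyGen.Create<IMetaWeblog>();
    // var postId = proxy.NewPost("site", "user", "pass", post, true);

Windows Live Writer speaks this protocol natively, which is why no custom editing UI is needed.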
You have a powerful editor at your fingertips, one that properly handles things like adding images and other content. This turns the whole experience around. I usually find writing documentation boring, but I am used to writing in WLW; it is fairly natural to do, and it removes all the pain from the equation.
One of the things that I am trying to do with it is to set up a proper documentation repository for all my open source projects. This isn't something new, and it is something that most projects have a hard time doing. I strongly believe in making things simple, in reducing friction. What I hope to do is to be able to accept documentation contributions from the community for the OSS projects.
I think that having a full fledged rich text editor in your hands is a game changer, compared to the usual way OSS projects handle documentation. Take a look at what is needed to make this work; it should take three minutes to get started, with no learning curve and no "how do they do this".
So here is the deal, if you would like to contribute documentation (which can be anything that would help users with the projects), I just made things much easier for you. Please contact me directly and I’ll send you the credentials to be able to edit the site.
Thanks in advance for your support.
Just finished doing this presentation. I think it went very well, although I planned to do a 45 minute session + 15 minutes of questions, but I ended up hitting the session time limit without covering everything that I wanted.
You can get the source code that I have shown in the presentation here: http://github.com/ayende/Advanced.NHibernate
You can find the PDF of the presentation here: http://ayende.com/presentations.aspx
Billy Newport is talking about Redis, showing some of the special APIs that Redis offers.
- Redis gives us first class List/Set operations, simplifying many tasks involving collections. It is also easy to get into big problems afterward.
- Can do 100,000 operations per second.
- Redis encourages a column oriented view; you use things like:
R.set("user:123@firstname", "billy") R.set("user:123@surname", "newport") R.set("uid:bewport", 123)
Ayende’s comment: I really don’t like that. No transactions or consistency, and this requires lots of remote calls.
- Bugs in your code can corrupt the entire data store, causing severe issues in development.
- There is a sample Twitter-like implementation, and the code is pretty interesting; it is a work-on-write implementation.
- List/set operations are problems. What happens when you have a big set? Case in point, Ashton has 4 million followers; work-on-write doesn't work in this case.
- 100,000 operations per second doesn't mean much when a routine scenario results in millions of operations.
- This is basically the usual SELECT N+1 issue.
- Async approach is required, processing large operations in chunks.
- Changing the way we work: instead of getting the data and working on it, send the code to the data store and execute it there (execute near the data).
- Ayende's note: That is still dangerous. What happens if you send a piece of code to the data store and it hangs?
- Usual problems with column oriented stores: no reporting, and you need export tools.
- Maybe use closures as a way to send the code to the server?
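To make "execute near the data" concrete: Redis eventually grew exactly this capability in the form of server-side Lua scripting (the EVAL command), which runs a script atomically inside the server, right next to the data. A sketch using the StackExchange.Redis client (key and value names made up):

    using StackExchange.Redis;

    var redis = ConnectionMultiplexer.Connect("localhost");
    var db = redis.GetDatabase();

    // Push onto a timeline and trim it in one atomic server-side step,
    // instead of pulling a huge list over the wire to work on it locally.
    const string script = @"
        redis.call('LPUSH', KEYS[1], ARGV[1])
        redis.call('LTRIM', KEYS[1], 0, 999)
        return redis.call('LLEN', KEYS[1])";

    var length = (long)db.ScriptEvaluate(script,
        new RedisKey[] { "timeline:123" },
        new RedisValue[] { "tweet:456" });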
I need to think about this a bit more; I have some ideas based on this presentation that I would really like to explore.
I have a tremendous amount of respect for Michael Feathers, so it is a no brainer to see his presentation.
Michael is talking about why Global Variables are not evil. We already have global state in the application; removing it is bad/impossible. Avoiding global variables leads to very deep argument passing chains, where something needs an object and it is passed through dozens of objects that just pass it down. We already have the notions of how to test systems using globals (Singletons). He also talks about Repository Hubs & Factory Hubs, which provide the scope for the usage of a global variable.
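A tiny illustration of the argument passing chain he means (my example, not his):

    using System;

    public interface IClock { DateTime Now { get; } }

    public class Report
    {
        // Only WriteFooter actually needs the clock, but every layer
        // above it has to accept it and pass it along.
        public void Generate(IClock clock) { RenderSections(clock); }
        private void RenderSections(IClock clock) { WriteFooter(clock); }
        private void WriteFooter(IClock clock) { Console.WriteLine(clock.Now); }
    }

A global (or singleton) clock would make all of the intermediate parameters disappear, at the cost of hidden coupling, which is the trade-off being discussed here.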
- Refactor toward explicit seams, do not rely on accidental seams, make them explicit.
- Test Setup == Coupling, excessive setup == excessive coupling.
- Slow tests indicate insufficient granularity of coupling <- I am not sure that I agree with that; see my previous posts about testing for why.
- It is often easier to mock outward interfaces than inward interfaces (try to avoid mocking stuff that returns data)
- One of the hardest things in legacy code is making a change and not knowing what it affects. Functional programming makes it easier, because of immutability.
- Seams in functional languages are harder. You parameterize functions in order to get those seams.
- TUF – Test Unfriendly Feature – IO, database, long computation
- TUC – Test Unfriendly Construct – static method, ctor, singleton
- Never Hide a TUF within a TUC (see the sketch after this list)
- No Lie principle – Code should never lie to you. Ways that code can lie:
- Dynamically replacing code in the source
- Addition isn’t a problem
- System behavior should be “what I see in the code + something else”, never “what I see minus something else”
- Weaving & aspects
- Impact on inheritance
- The Fallacy of Restricted Languages
- You want to rewrite if the architecture itself is bad; if you have issues making changes rapidly, it is time to refactor the rough edges out.
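To make the TUF / TUC rule concrete, here is a minimal example of my own (not Michael's code). File IO (a TUF) is buried inside a constructor (a TUC), and then pulled out behind an explicit seam:

    using System;
    using System.IO;

    // TUF hidden inside a TUC: every test that news this up hits the disk.
    public class ReportGenerator
    {
        private readonly string template;

        public ReportGenerator()
        {
            template = File.ReadAllText(@"templates\report.txt");
        }
    }

    // Explicit seam: production passes in the file read, tests pass a fake.
    public class SeamedReportGenerator
    {
        private readonly string template;

        public SeamedReportGenerator(Func<string> templateSource)
        {
            template = templateSource();
        }
    }

    // Production: new SeamedReportGenerator(() => File.ReadAllText(@"templates\report.txt"));
    // Tests:      new SeamedReportGenerator(() => "a fake template");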
This is just something that came up recently on a mailing list; we were talking about copyright, ownership and such. The topic of who owns the code you write on your own time (and on your own machines) came up.
The opinion of some people was that the employer may own the code even under those circumstances. It seems that this isn't usually part of the law (that depends on where you are, of course), but it is part of standard employment contract templates.
When I started looking for a job, I insisted on taking the employment contract home and going over it with:
- a calm mind
- another set of eyes to go over it
I had one case of not properly reading what I was signing, with bad consequences; I have learned since then.
There is no such thing as a standard contract; you can always negotiate.
For that matter, I rejected an offer from one place after verbal agreements that we had reached didn't make it into the contract (twice!). I decided that if they were trying to effectively cheat me when I wasn't even working for them yet, I had better things to do than to put my head into that sickbed.
Some of the things that I have found in employment contracts are of the sort that would make your hair curl. Non compete agreements that basically say that you are not allowed to do any work (for anyone) for 2 years after you stop working for the company. Ownership of anything you do (be it software artifacts, a book about flowers, and quite possibly any children you have during your employment term).
Some of them are unenforceable in court, but you would be in a much better position if you didn't have to deal with annoying sections in a contract that you signed in the first place.
My usual approach to reading contracts is to debug them, assuming that the other side is nefarious, evil, double dealing and likes kicking puppies before breakfast. Most places will go with the "try and you shall succeed" method for contracts: if you sign without complaints, they are good; if you object to something, they can amend the contract to be more reasonable. It isn't that they are nefarious, or that they even plan to act according to the contract. But it is best if they don't have any leverage on you.
An interesting point that I ran into is that it is often useful to be bold when negotiating a contract. I deleted the non compete clause from my employment contract when I reviewed it, and required a lot of clarifications about which of my work amounts to the company's property. I followed the same logic as they did, "try and you shall succeed": if they didn't care about that, I was good.
We ended up with a 1 year limitation for clients that they sent me to, and an agreement that any software work that I do on the company's time or using their equipment belongs to the company, which I considered reasonable.
Not reading the contract is a crime. Once you have read it, be very careful in deciding what is acceptable and what isn't. And if you have already signed a contract, make sure that you know what is in it.
I think that it is better to forget to blog about an event than to forget to show up for the event, so I am improving.
I am currently in Stockholm, Sweden, for the Progressive.NET event that starts tomorrow. I am going to do an Intro to NHibernate and two runs of my Advanced NHibernate workshop.
Should be fun :-)