Ayende @ Rahien

It's a girl

End of year discount for all our products

To celebrate the new year, we are offering a 21% discount on all our products. It is available to the first 33 customers who use the coupon code: 0x21-celebrate-new-year

In previous years, we offered a similar number of uses for the coupon code, and they ran out fast, so hurry up. This offer is valid for:

Happy Holidays and a great new year.

On a personal note, this marks the full release of all our product lines, and it took an incredible amount of work. I'm very pleased that we have been able to get the new version out there and in your hands, and to have you start making use of the features that we have been working on for so long.

Published at

Originally posted at

NHibernate & Entity Framework Profiler 3.0

It may not get enough attention, but we have been working on the profilers as well during the past few months.

TLDR; You can get the next generation of NHibernate Profiler and Entity Framework Profiler now, lots of goodies to look at!

I’m sure that a lot of people would be thrilled to hear that we dropped Silverlight in favor of going back to a WPF UI. The idea was that we would be able to deploy anywhere, including in production. But Silverlight just made things harder all around, and customers didn’t like the production profiling mode.

Production Profiling

We have changed how we profile in production. You can now make the following call in your code:

NHibernateProfiler.InitializeForProduction(port, password);

And then connect to your production system:

image

At which point you can profile what is going on in your production system safely and easily. The traffic between your production server and the profiler is SSL encrypted.

NHibernate 4.x and Entity Framework vNext support

The profilers now support the latest versions of NHibernate and Entity Framework. That includes profiling async operations, better support for modern apps, and more.

New SQL Paging Syntax

We now properly support the SQL Server paging syntax:

select * from Users
order by Name
offset 0 /* @p0 */ rows fetch next 250 /* @p1 */ rows only

This is great for NHibernate users, who finally can have a sane paging syntax as well as beautiful queries in the profiler.
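For readers who want to see how a page number and page size map onto that clause, here is a small sketch; `BuildPagedSql` is a hypothetical helper invented for this example, not an NHibernate or profiler API:

```csharp
// Hypothetical helper: maps a zero-based page number and a page size onto
// SQL Server's offset/fetch paging clause.
public static class PagingSql
{
    public static string BuildPagedSql(string baseQuery, string orderBy, int page, int pageSize)
    {
        // offset/fetch requires an order by clause to be deterministic
        return $"{baseQuery} order by {orderBy} " +
               $"offset {page * pageSize} rows fetch next {pageSize} rows only";
    }
}
```

So page 0 with a size of 250 produces exactly the query shown above, and page 2 with a size of 50 would produce `offset 100 rows fetch next 50 rows only`.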

At a glance view

A lot of the time, you don’t want the profiler to be front and center, you want to just run it and have it there to glance at once in a while. The new compact view gives you just that:

image

You can park it at some point in your screen and work normally, glancing to see if it found anything. This is much less distracting than the full profiler for normal operations.

Scopes and groups

When we started working on the profilers, we followed the “one session per request” rule, and that was pretty good. But a lot of people, especially Entity Framework users, are using multiple sessions or data contexts in a single request, and they still want the ability to see all the operations in a request at a glance. We now allow you to group things, like this:

image

By default, we use the current request to group things, but we also give you the ability to define your own scopes. So if you are profiling an NServiceBus application, you can set the scope to your message handling, either by setting ProfilerIntegration.GetCurrentScopeName or by explicitly calling ProfilerIntegration.StartScope whenever you want.
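To make the grouping idea concrete, here is a toy sketch of scope-based grouping. This is an illustration of the concept only; the names `ScopeGrouping`, `StartScope` and `RecordStatement` are invented for this example and are not the profiler's implementation:

```csharp
using System;
using System.Collections.Generic;

// Toy model of scope-based grouping: statements recorded while a scope is
// open are grouped under that scope's name; everything else falls into a
// default group (the profiler would use the current request by default).
public static class ScopeGrouping
{
    private static string currentScope = "default";
    public static readonly Dictionary<string, List<string>> Groups = new();

    public static IDisposable StartScope(string name)
    {
        var previous = currentScope;
        currentScope = name;
        return new ScopeEnd(() => currentScope = previous);
    }

    public static void RecordStatement(string sql)
    {
        if (!Groups.TryGetValue(currentScope, out var list))
            Groups[currentScope] = list = new List<string>();
        list.Add(sql);
    }

    private sealed class ScopeEnd : IDisposable
    {
        private readonly Action onDispose;
        public ScopeEnd(Action onDispose) => this.onDispose = onDispose;
        public void Dispose() => onDispose();
    }
}
```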

Customized profiling

You can now surface troublesome issues directly from your code. If you have an issue with a query, you can mark it for attention using CustomQueryReporting.ReportError(), which will flag it in the UI for further investigation.

You can also just mark interesting pieces in the UI without an error, like so:

using (var db = new Entities(conStr))
{
    var post1 = db.Posts.FirstOrDefault(); // not starred

    using (ProfilerIntegration.StarStatements("Blue"))
    {
        var post2 = db.Posts.FirstOrDefault(); // starred in blue
    }

    var post3 = db.Posts.FirstOrDefault(); // not starred

    ProfilerIntegration.StarStatements();
    var post4 = db.Posts.FirstOrDefault(); // starred with the default color
    ProfilerIntegration.StarStatementsClear();

    var post5 = db.Posts.FirstOrDefault(); // not starred
}

Which will result in:

image

Disabling profiler from configuration

You can now disable the profiler by setting:

<add key="HibernatingRhinos.Profiler.Appender.NHibernate" value="Disabled" />

This will avoid initializing the profiler, obviously. The intent is that you can set up production profiling, disable it by default, and enable it selectively when you need to actually figure things out.
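The gating logic amounts to a simple check. The sketch below is an assumption about how such a gate could look; the real appender reads the appSetting itself, and `ProfilerGate` is invented for illustration:

```csharp
using System;

// Sketch of the gating logic (assumption, not the appender's actual code):
// only initialize the profiler when the appSetting is not "Disabled".
public static class ProfilerGate
{
    public static bool ShouldInitialize(string appSettingValue)
    {
        // value comes from the HibernatingRhinos.Profiler.Appender.NHibernate key;
        // a missing key (null) means the profiler initializes as usual
        return !string.Equals(appSettingValue, "Disabled", StringComparison.OrdinalIgnoreCase);
    }
}
```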

Odds & ends

We moved to WebActivatorEx from the deprecated WebActivator, added an XML file for the appender, and fixed a whole bunch of small bugs, the most important among them being:

image

Linq to SQL, Hibernate and LLBLGen Profilers, RIP

You might have noticed that I talked only about the NHibernate and Entity Framework Profilers. The sales for the rest weren’t what we hoped they would be, and we are no longer going to sell them.

Go get them, there is a new release discount

You can get the NHibernate Profiler and Entity Framework Profiler at a 15% discount for the next two weeks.


Working on the Uber Profiler 3.0

One of the things that tends to get lost is the fact that Hibernating Rhinos is doing more than just working on RavenDB. The tagline we like to use is Easier Data, and the main goal we have is to provide users with better ways to access, monitor and manage their data.

We have started planning the next version of the Uber Profiler, which means the Entity Framework Profiler, NHibernate Profiler, etc. Obviously, we’ll offer support for EF 7.0 and NHibernate 4.0, and we have gathered quite a big dataset of common errors people make when using an OR/M, which we intend to turn into useful guidance whenever our users run into such an issue.

But I realized that I was so busy working on RavenDB that I forgot to even mention the work we’ve been doing elsewhere. And I also wanted to solicit feedback about the kind of features you’ll want to see in the 3.0 version of the profilers.

Special Offer: 29% discount for all our products

Well, it is nearly the 29th of May, and that means that I have been married for two years.

To celebrate that, I am offering a 29% discount on all our products (RavenDB, NHibernate Profiler, Entity Framework Profiler, etc).

All you have to do is purchase any of our products using the following coupon code:

2nd anniversary

This offer is valid until the end of the month, so hurry up.

Goodbye, 2012: Our end of year discount starts now!

Well, as the year draws to a close, it is that time again; I got older, apparently. Yesterday marked my 31st trip around the sun.

To celebrate, I decided to give the first 31 people a 31% discount for all of our products.

This offer applies to:

This also applies to our support & consulting services.

All you have to do is to use the following coupon code: goodbye-2012

Enjoy the end of the year, and happy holidays.

Production Cloud Profiling With Uber Prof

With Uber Prof 2.0 (NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, LLBLGen Profiler) we are going to bring you a new era of goodness.

In 1.0, we gave you a LOT of goodness for the development stage of building your application, but now we are able to take it a few steps further. Uber Prof 2.0 supports production profiling, which means that you can run it in production and see what is going on in your application now!

To make things even more interesting, we have also done a lot of work to make sure that this works on the cloud as well. For example, go ahead and look at this site: http://efprof.cloudapp.net/

This is a live application that doesn’t really do anything special, I’ll admit. But the kicker is when you go to this URL: http://efprof.cloudapp.net/profiler/profiler.html

image

This is EF Prof, running on the cloud, and giving you the results that you want, live. You can read all about this feature and how to enable it here, but I am sure that you can see the implications.

Uber Prof V2.0 is now in Public Beta

Well, we have worked quite a bit on this, and Uber Prof (NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, etc.) version 2.0 is now out for public beta.

We made a lot of improvements, including performance, stability and responsiveness, but probably the most important thing from the user’s perspective is that we now support running the profiler in production, and even on the cloud.

We will have the full listing of all the new goodies up on the company site soon, including detailed instructions on how to enable production profiling and on cloud profiling, but I just couldn’t wait to break the news to you.

In fact, along with V2.0 of the profilers, we have a brand new site for our company, which you can check here: http://hibernatingrhinos.com/.

To celebrate the fact that we are going on beta, we also offer a 20% discount for the duration of the beta.

Nitpicker corner: please remember that this is a beta; there are bound to be problems, and we will fix them as soon as we can.

Über Profiler v2.0–Private Beta is open

Well, it took a while, but the next version of the NHibernate Profiler is ready to go to a private beta.

We go to private beta before the product is done, because we want to get as much early feedback as we can.

We have 10 spots for NHibernate Profiler v2.0 and 10 spots for Entity Framework Profiler v2.0.

If you are interested, please send me an email about this.


Gilad Shalit is back

I don’t usually do posts about current events, but this one is huge.

To celebrate the event, you can use the following coupon code: SLT-45K2D4692G to get 19.41% discount (Gilad was captive for 1,941 days) for all our profilers:

Hell, even our commercial support for NHibernate is participating.

Please note that any political comment to this post that I don’t agree with will be deleted.

What is next for the profilers?

We have been working on the profilers (NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, LLBLGen Profiler and Hibernate Profiler) for close to three years now. And we have always been running as 1.x, so we have never had a major release (although we release continually; we currently have close to 900 drops of the 1.x version).

The question now becomes, what is going to happen in the next version of the profiler?

  • Production Profiling, the ability to setup your application so that you can connect to your production application and see what is going on right now.
  • Error Analysis, the ability to provide you with additional insight and common solution to recurring problems.
  • Global Query Analysis, the ability to take all of your queries, look at their query plans and show you potential issues.

Those are the big ones; we have a few others, and a surprise in store.

What would you want to see in the next version of the profiler?

NHibernate Profiler update: Client Profile & NHibernate 3.x updates

We just finished a pretty major change to how the NHibernate Profiler interacts with NHibernate. That change was mostly driven out of the desire to fully support running under the client profile and to allow us to support the new logging infrastructure in NHibernate 3.x.

The good news: this is done. You can now use NH Prof from the client profile, and you don’t need to do anything to make it work with NHibernate 3.x.

The slightly bad news is that if you were relying on the log4net configuration to configure NH Prof, there is a breaking change that affects you. Basically, you need to update your configuration. You can find the full details on how to do this in the documentation.

New Profiler Feature: Avoid Writes from Multiple Sessions In The Same Request

Because I keep getting asked, this feature is available for the following profilers:

This new feature detects a very interesting bad practice: writing to the database from multiple sessions in the same web request.

For example, consider the following code:

public void SaveAccount(Account account)
{
    using(var session = sessionFactory.OpenSession())
    using(session.BeginTransaction())
    {
        session.SaveOrUpdate(account);
        session.Transaction.Commit();
    }
}

public Account GetAccount(int id)
{
    using(var session = sessionFactory.OpenSession())
    {
        return session.Get<Account>(id);
    }
}

It is bad for several reasons; micro-managing the session is just one of them, but the worst part is yet to come…

public void MakePayment(int fromAccount, int toAccount, decimal amount)
{
    var from = Dao.GetAccount(fromAccount);
    var to = Dao.GetAccount(toAccount);
    from.Total -= amount;
    to.Total += amount;
    Dao.SaveAccount(from);
    Dao.SaveAccount(to);
}

Do you see the error here? There are actually several, let me count them:

  • We are using 4 different connections to the database in a single method.
  • We don’t have transactional safety!!!!

Think about it, if the server crashed between the fifth and sixth lines of this method, where would we be?

We would be in that wonderful land where money disappears into thin air, and we stare at that lovely lawsuit folder and then jump from a high window into a stormy sea.
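To see why a single commit point fixes this, here is a toy, in-memory illustration. No NHibernate is involved and `TransferDemo` is invented for the example: two independent writes can crash in between and lose money, while staging both changes behind one commit keeps the total intact:

```csharp
using System;
using System.Collections.Generic;

// Toy illustration: two independent commits can crash in between and lose
// money; applying both changes at a single commit point cannot.
public class TransferDemo
{
    public Dictionary<int, decimal> Accounts = new() { [1] = 100m, [2] = 100m };

    // The anti-pattern: each write is its own "transaction".
    public void TransferUnsafe(int from, int to, decimal amount, bool crashBetweenWrites)
    {
        Accounts[from] -= amount;                      // first commit
        if (crashBetweenWrites) throw new Exception("server crashed");
        Accounts[to] += amount;                        // second commit never runs
    }

    // The fix: stage both changes, apply them atomically.
    public void TransferSafe(int from, int to, decimal amount, bool crashBeforeCommit)
    {
        var staged = new Dictionary<int, decimal>(Accounts);
        staged[from] -= amount;
        staged[to] += amount;
        if (crashBeforeCommit) throw new Exception("server crashed");
        Accounts = staged;                             // single commit point
    }
}
```

With NHibernate, the same idea means doing the whole transfer in one session and one transaction, instead of a session per DAO call.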

Or, of course, you could use the profiler, which will tell you that you are doing something which should be avoided:

image

Isn’t that better than swimming with the sharks?

New Uber Prof Feature: Too Many Database Calls In The Same Request

Recently, we added a way to track alerts across all the sessions in a request. This alert detects whenever you are making too many database calls in the same request.

But wait, don’t we already have that?

Yes, we do, but that was limited to the scope of a single session. There is a very large set of codebases where the usage of OR/Ms is… suboptimal (in other words, they could benefit the most from the profiler’s ability to detect issues and suggest solutions), but because of the way they are structured, those issues weren’t previously detected.

What is the difference between a session and a request?

Note: I am using NHibernate terms here, but naturally this feature is shared among all the profilers.

A session is the NHibernate session (or the data/object context in linq to sql / entity framework), and the request is the HTTP request or the WCF operation. If you had code such as the following:

public T GetEntity<T>(int id)
{
    using (var session = sessionFactory.OpenSession())
    {
        return session.Get<T>(id);
    }
}

This code is bad: it micro-manages the session, it uses too many connections to the database, it… well, you get the point. The real problem is code that calls it:

public IEnumerable<Friends> GetFriends(int[] friends)
{
    var results = new List<Friends>();
    foreach(var id in friends)
        results.Add(GetEntity<Friends>(id));

    return results;
}

The code above would look like the following in the profiler:

Image1

As you can see, each call is in a separate session, and previously, we wouldn’t have been able to detect that you have too many calls (because each call is a separate session).

Now, however, we will alert the user with a “too many database calls in the same request” alert.

Image2
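Conceptually, the cross-session check just sums statements across every session opened in the request. A toy sketch of that idea follows; the names and the threshold are made up for illustration and are not the profiler's code:

```csharp
using System.Collections.Generic;

// Toy version of the cross-session check: total up the statements issued by
// every session in one request, and alert when the total crosses a cut-off.
public static class RequestAlert
{
    public const int Threshold = 25; // hypothetical cut-off

    public static bool TooManyDatabaseCalls(IEnumerable<int> statementsPerSession)
    {
        var total = 0;
        foreach (var count in statementsPerSession)
            total += count;
        return total > Threshold;
    }
}
```

The point is that 100 sessions issuing one statement each trips the alert, even though no single session looks suspicious on its own.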

New Uber Prof Concept: Cross Session Alerts

We have recently been doing some work on Uber Prof, mostly in the sense of a code review, and I wanted to demonstrate how easy it was to add a new feature. The problem is that we couldn’t really think of a nice feature to add that we didn’t already have.

Then we started thinking about features that weren’t there, and that nothing in Uber Prof enabled, and we reached the conclusion that one limitation we have right now is the inability to analyze your application’s behavior beyond the session level. But there is actually a whole set of bad practices that show up when you are using multiple sessions.

That led to the creation of a new concept: the Cross Session Alert. Unlike the alerts we had so far, these alerts look at the data stream with a much broader scope, and they can analyze and detect issues that we previously couldn’t.

I am going to be posting extensively about some of the new features in just a bit, but in the meantime, why don’t you tell me what sort of features you think this new concept enables?

And just a reminder, my architecture is based around Concepts & Features.

Uber Prof New Features: A better query plan

Originally posted at 1/7/2011

Because I keep getting asked, this feature is available for the following profilers:

This feature is actually two separate ones. The first is the profiler detecting what is the most expensive part of the query plan and making it instantly visible. As you can see, in this fairly complex query, it is this select statement that is the hot spot.

image

Another interesting feature that only crops up whenever we are dealing with complex query plans is that the query plan can get big. And by that I mean really big. Too big for a single screen.

Therefore, we added zooming capabilities as well as the mini map that you see in the top right corner.

Uber Prof New Features: Go To Session from alert

Originally posted at 1/7/2011

This is another oft-requested feature that we just implemented. The new feature is available for the full suite of Uber Profilers:

You can see the new feature below:

image

I think it is cute, and was surprisingly easy to do.

Uber Prof has recently passed the stage where it is mostly implemented using itself, so I just had to wire a few things together, and then I spent most of the time making sure that things aligned correctly in the UI.

NH Prof New Feature: Alert on bad ‘like’ query

Originally posted at 12/3/2010

One of the things that is coming to NH Prof is more smarts in the analysis part. We now intend to create a lot more alerts and guidance. One of the new features already there as part of this strategy is detecting bad ‘like’ queries.

For example, let us take a look at this:

image

This is generally not a good idea, because that sort of query cannot use an index, and requires the database to perform a table scan, which can be pretty slow.

Here is how it looks from the query perspective:

image

And NH Prof (and all the other profilers) will now detect this and warn you about it.

image

In fact, it will even detect queries like this:

image
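The core of the detection is recognizing a leading wildcard in the like pattern: a pattern that starts with `%` (or `_`) cannot be served by a regular index, so the database has to scan. A minimal sketch of such a rule, as an assumption about how the check could look rather than the profiler's actual code:

```csharp
// Sketch of the detection rule (invented for illustration): a like pattern
// with a leading wildcard defeats index seeks and forces a table scan.
public static class LikeAnalysis
{
    public static bool ForcesTableScan(string likePattern)
    {
        // '%' matches any run of characters, '_' matches any single character;
        // either one at the start means the index's sort order cannot help.
        return likePattern.StartsWith("%") || likePattern.StartsWith("_");
    }
}
```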


Feature selection strategies for NH Prof

Originally posted at 12/3/2010

I recently had a discussion about how I select features for NH Prof. The simple answer is that I started with features that would appeal to me. My dirty little secret is that the only reason NH Prof even exists is that I wanted it so much and no one else had built it already.

But while that lasted for a good while, I eventually got to the point where NH Prof does everything that I need it to do. So, what next… ?

Feature selection is a complex topic, and it is usually performed in the dark, because you have to guess at what people are using. A while ago I set up NH Prof so I can get usage reports (they are fully anonymous, and were covered on this blog previously). Those usage reports come in very handy when I need to understand how people are using NH Prof. Think of it as a user study, but without the cost, and without the artificial environment.

Here are the (real) numbers for NH Prof:

Action % What it means
Selection 62.76% Selecting a statement
Session-Statements 20.58% Looking at a particular session statements
Recent-Statements 8.67% The recent statements (default view)
Unique-Queries 2.73% The unique queries report
Listening-Toggle 1.10% Stop / Start listening to connections
Session-Usage 0.91% Showing the session usage tab for a session
Session-Entities 0.54% Looking at the loaded entities in a session
Query-Execute 0.50% Show the results of a query
Connections-Edit 0.38% Editing a connection string
Queries-By-Method 0.34% The queries by method report
Queries-By-Url 0.27% The queries by URL report
Overall-Usage 0.25% The overall usage report
Show-Settings 0.23% Show settings
Aggregate-Sessions 0.21% Selecting more than 1 session
Reports-Queries-Expensive 0.16% The expensive queries report
Session-Remove 0.13% Remove a session
Queries-By-Isolation-Level 0.08% The queries by isolation level report
File-Load 0.04% Load a saved session
File-Save 0.03% Save a session
Html-Export 0.02% Exporting to HTML
Sessions-Diff 0.01% Diffing two sessions
Sort-By-ShortSql 0.01% Sort by SQL
Session-Rename 0.01% Rename a session
Sort-By-Duration 0.01% Sort by duration
Sort-By-RowCount > 0.00% Sort by row count
GoToSession > 0.00% Go from report to statement’s session
Sort-By-AvgDuration > 0.00% Sort by duration (in reports)
Production-Connect > 0.00% (Not publicly available) Connect to production server
Sort-By-QueryCount > 0.00% Sort by query count (in reports)
Sort-By-Alerts > 0.00% Sort by alerts (for statements)
Sort-By-Count > 0.00% Sort by row count

There is nothing really earth-shattering here. By far, people are using NH Prof as a tool to show them the SQL. Note how most of the other features are used much more rarely. This doesn’t mean that they are not valuable, but it does show that a feature that isn’t immediately available on the “show me the SQL” usage path is going to be used very rarely.
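To make “by far” concrete, the three actions on the “show me the SQL” path (Selection, Session-Statements and Recent-Statements) add up to roughly 92% of all recorded usage:

```csharp
// Summing the top three rows of the usage table above.
public static class UsageShare
{
    public static double TopThreeShare()
    {
        return 62.76 + 20.58 + 8.67; // Selection + Session-Statements + Recent-Statements
    }
}
```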

There is another aspect to feature selection: will this feature increase my software sales?

Some features are Must Have, your users won’t buy the product without them. Some features are Nice To Have, but have no impact on the sale/no sale. Some features are very effective in driving sales.

In general, there is a balancing act between how complex a feature is, how often people will use it, and how useful it will be in marketing.

I learned quickly that having better analysis (alerts) is a good competitive advantage, which is why I optimized the hell out of that development process. In contrast, things like reports are much less interesting, because once you have the Must Have ones, adding more doesn’t seem to be an effective way of going about things.

And then, of course, there are the features whose absence annoys me…


The most amazing demo ever

I gave a talk yesterday at the Melbourne ALT.Net group, the topic of which was How Ayende Builds Software. This isn’t the first time that I have given this talk, and I thought that talking in the abstract is only so useful.

So I decided to demonstrate, live, how I get stuff done as quickly as I do. One of the most influential stories that I ever read was The Man Who Was Too Lazy to Fail, by Robert A. Heinlein. He does a much better job of explaining the reasoning, but essentially, it comes down to finding anything that you do more than once and removing all friction from it.

In the talk, I decided to demonstrate, live, how this is done. I asked people to give me an idea for a new feature for NH Prof. After some back and forth, we settled on warning when you issue a query using a ‘like’ that forces the database to use a table scan. I then proceeded to implement the scenario showing what I wanted:

public class EndsWith : INHibernateScenario
{
    public void Execute(ISessionFactory factory)
    {
        using(var s = factory.OpenSession())
        {
            s.CreateCriteria<Blog>()
                .Add(Restrictions.Like("Title", "%ayende"))
                .List();
        }
    }
}

I implemented the feature itself, and tried it out live. This showed off some aspects of the actual development process, like being able to execute just the piece of code that I want, because individual scenarios can be run easily.

We even did some debugging because it didn’t work the first time. Then I wrote the test for it:

public class WillDetectQueriesUsingEndsWith : IntegrationTestBase
{
    [Fact]
    public void WillIssueWarning()
    {
        ExecuteScenarioInDifferentAppDomain<EndsWith>();

        var statementModel = model.RecentStatements.Statements.First();

        Assert.True(
            statementModel.Alerts.Any(x => x.HelpTopic == AlertInformation.EndsWithWillForceTableScan.HelpTopic)
            );
    }
}

So far, so good, I have been doing stuff like that for a while now, live.

But it is the next step that I think shocked most people, because I then committed the changes and let the CI process take care of things. By the time I showed the people in the room that the new build was publicly available, it had already been downloaded.

Now, just to give you some idea, that wasn’t the point of the talk. I did a whole talk on a different topic, and the whole process from “I need an idea” to “users are using the newly deployed feature” took something on the order of 15 minutes, and that includes debugging to fix a problem.

What is Uber Prof’s competitive advantage?

Originally posted at 11/25/2010

In a recent post, I discussed the notion of competitive advantage and how you should play around them. In this post, I am going to focus on Uber Prof. Just to clarify, when I am talking about Uber Prof, I am talking about NHibernate Profiler, Entity Framework Profiler, Linq to SQL Profiler, Hibernate Profiler and LLBLGen Profiler. Uber Prof is just a handle for me to use to talk about each of those.

So, what is the major competitive advantage that I see in the Uber Prof line of products?

Put very simply, they focus very heavily on the developer’s point of view.

Other profilers will give you the SQL that is being executed, but Uber Prof will show you the SQL and:

  • Format that SQL in a way that makes it easy to read.
  • Group the SQL statements into sessions, which lets the developer look at what is going on within a natural boundary.
  • Associate each query with the exact line of code that executed it.
  • Provide the developer with guidance about improving their code.

There is other stuff, of course, but those are the core features that make Uber Prof what it is.

Profiler new features: Data binding alerts

The following features apply to NHProf, EFProf, L2SProf.

In general, it is strongly discouraged to data bind directly to an IQueryable. Mostly, that is because data binding may actually iterate over the IQueryable several times, resulting in multiple queries being generated from something that could have been done purely in memory. Worse, it is actually pretty common for data binding to trigger lazy loading, and lazy loading from data binding almost always results in a SELECT N+1. The profiler can now detect and warn you about such mistakes preemptively. More than that, the profiler can also detect queries that are being generated from the views in an ASP.Net MVC application, another bad practice that I don’t like.
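The multiple-enumeration problem is easy to demonstrate with a toy enumerable that counts how many times it is iterated; a real IQueryable would issue a fresh database round-trip on each pass. `CountingQuery` is invented for illustration:

```csharp
using System.Collections;
using System.Collections.Generic;

// Toy query that counts its enumerations -- each enumeration of a real
// IQueryable would be a fresh query against the database.
public class CountingQuery : IEnumerable<int>
{
    public int Executions;

    public IEnumerator<int> GetEnumerator()
    {
        Executions++; // a real query would hit the database here
        yield return 1;
        yield return 2;
    }

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}
```

Bind such a query to two controls (or to a count plus a grid) and it executes twice; materialize it to a list first and it executes once.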

You can find more information about each warning here:

WPF detection:

image

 

image

WinForms detections:

image

image

Web applications:

image

image

Profiler new features, Sept Edition

The following features apply to NHProf, EFProf, HProf, L2SProf.

The first feature is something that was frequently requested, but we kept deferring. Not because it was hard, but because it was tedious and we had cooler features to implement: Sorting.

image

Yep. Plain old sorting for all the grids in the application.

Not an exciting feature, I’ll admit, but an important one.

The feature that gets me excited is Go To Session. Let us take the Expensive Queries report as a great example of this feature:

image

As you can see, we have a very expensive query. Let us ignore the reason it is expensive, and assume that we aren’t sure about that.

The problem with the reports feature in the profiler is that while it exposes a lot of information (expensive queries, most common queries, etc.), it loses the context of where a query is running. That is why you can, in any of the reports, right click on a statement and go directly to the session it originated from:

image

image

We bring the context back to the intelligence that we provide.

What happens if we have a statement that appears in several sessions?

image

You can select each session that this statement appears in, getting back the context of the statement and finding out a lot more about it.

I am very happy about this feature, because I think it closes a circle with regard to the reports. The reports allow you to pull out a lot of data across your entire application, and the Go To Session feature allows you to connect the interesting pieces of data back to the originating session, showing you where and why a statement was issued.

Estimates suck, especially when I pay for them

I recently got an estimate for a feature that I wanted to add to NH Prof. It was for two separate features, actually, but they were closely related.

That estimate was for 32 hours.

And it caused me a great deal of indigestion. The problem was, quite simply, that even granting the usual padding of estimates (which I expect), that estimate was off, way off. I knew what would be required for this feature, and it shouldn’t have been anywhere near complicated enough to require 4 days of full time work. In fact, I estimated that it would take me a maximum of 6 hours, and probably 3 hours, to get it done.

Now, to be fair, I know the codebase (well, actually that isn’t true, a lot of the code for NH Prof was written by Rob & Christopher, and after a few days of working with them, I stopped looking at the code, there wasn’t any need to do so). And I am well aware that most people consider me to be an above average developer.

I wouldn’t have batted an eye for an estimate of 8 – 14 hours, probably. Part of the reason that I have other people working on the code base is that even though I can do certain things faster, I can only do so many things, after all.

But a time estimate that was 5 – 10 times as large as my own was too annoying. I decided that this was a feature I was going to do on my own. And I decided that I wanted to do it on the clock.

The result is here:

image

This is actually the total time over three separate sittings, but the timing is close enough to what I thought it would be.

This includes everything, implementing the feature, unit testing it, wiring it up in the UI, etc.

The only thing remaining is to add the UI work for the other profilers (Entity Framework, Linq to SQL, Hibernate and the upcoming LLBLGen Profiler). Doing this now…

image

And we are done.

I have more features that I want to implement, but in general, if I pushed those changes out, they would be a new release that customers can use immediately.

Nitpicker corner: No, I am not being ripped off. And no, the people making the estimates aren't incompetent. To be perfectly honest, looking at the work that they did do and the time marked against it, they are good, and they deliver in a reasonable time frame. What I think is going on is that their estimates are highly conservative, because they don't want to get into a bind with "oh, we ran into a problem with XYZ and overran the time for the feature by 150%".

That also leads to a different problem: when you pay by the hour, you really want estimates that are more or less reasonably tied to your payments. But estimating in hours has too much granularity to be really effective (a single bug can easily consume hours, totally throwing off the estimate, and it doesn't even have to be a complex one).