Ayende @ Rahien

Oren Eini, aka Ayende Rahien, CEO of Hibernating Rhinos LTD, which develops RavenDB, a NoSQL Open Source Document Database.

time to read 3 min | 584 words

So, I talked a bit about the architecture and the actual feature, but let us see how I actually built and implemented it.

I want to point out that this is the actual code that goes into the product. And it is one of the more complex processors, because of the possible state changes.

public class UnboundedResultSetStatementProcessor : IStatementProcessor
{
	public void BeforeAttachingToSession(SessionInformation sessionInformation, 
		FormattedStatement statement)
	{
	}

	public void AfterAttachingToSession(SessionInformation sessionInformation, 
		FormattedStatement statement, OnNewAction newAction)
	{
		if(statement.CountOfRows!=null)
		{
			CheckStatementForUnboundedResultSet(statement, newAction);
			return;
		}
		bool addedAction = false;
		statement.ValuesRefreshed += () =>
		{
			if(addedAction)
				return;
			addedAction = CheckStatementForUnboundedResultSet(statement, newAction);
		};
	}

	public bool CheckStatementForUnboundedResultSet(FormattedStatement statement,
		 OnNewAction newAction)
	{
		if (statement.CountOfRows == null)
			return false;

		// we are discounting statements returning 1 or 0 results because
		// those are likely to be queries on either PK or unique values
		if (statement.CountOfRows < 2)
			return false;

		// we don't check for select statement here, because only selects have row count
		var limitKeywords = new[] { "top", "limit", "offset" };
		foreach (var limitKeyword in limitKeywords)
		{
			//why doesn't the CLR have Contains() that takes StringComparison ??
			if (statement.RawSql.IndexOf(limitKeyword, StringComparison.InvariantCultureIgnoreCase) != -1)
				return true;
		}

		newAction(new ActionInformation
		{
			Severity = Severity.Suggestion,
			Title = "Unbounded result set"
		});
		return true;
	}

	public void ProcessTransactionStatement(TransactionMessageBase tx)
	{
	}
}
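As an aside, the Contains() complaint in the comment above is easy to work around with an extension method. Here is a minimal sketch (the name is mine, not the product's; later versions of .NET eventually added string.Contains(string, StringComparison) to the BCL):

```csharp
using System;

// A tiny extension method filling the gap the comment mentions.
// Note: this is an illustrative sketch, not code from NH Prof itself.
public static class StringExtensions
{
	public static bool ContainsIgnoringCase(this string source, string value)
	{
		return source.IndexOf(value, StringComparison.InvariantCultureIgnoreCase) != -1;
	}
}
```

With that in place, the keyword loop collapses to a single `limitKeywords.Any(k => statement.RawSql.ContainsIgnoringCase(k))` expression.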

And now the test:

[TestFixture]
public class Ticket_51_UnboundedResultSet : IntegrationTestBase
{
	[Test]
	public void Will_issue_alert_for_unbounded_result_sets()
	{
		ExecuteScenarioInDifferentAppDomain<LoadPostsUsingCriteriaQuery>();

		var statement = observer.Model.RecentStatements.Statements
			.OfType<StatementModel>()
			.First();
		Assert.AreEqual(1, statement.Actions.Count);
		Assert.AreEqual("Unbounded result set", statement.Actions[0].Title);
	}
}

And, just for fun, the scenario that we are testing:

public class LoadPostsUsingCriteriaQuery : IScenario
{
    public void Execute(ISessionFactory factory)
    {
        using (var session = factory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.CreateCriteria(typeof(Post))
                .List();

            tx.Commit();
        }
    }
}

And that is it. That is all you have to do to implement a new feature. This makes building the application much easier, because at each point in time we only have to deal with one thing. It is the aggregation of everything put together that is actually of value.

Also, notice that I have heavily optimized my workflow for tests and scenarios. I can write just what I want to happen, without caring about how it actually happens. Optimizing the ease of testing is another architectural concern that I consider very important. If we don't deal with it, the tests become a PITA to write, so they either wouldn't get written, or we would get tests that are hard to read.

Also, notice that this is a full integration test: we execute the entire backend, and we test the actual view model that the UI is going to display. I could have tested this using standard unit tests, but in this case I chose to see how everything works from start to finish.

time to read 2 min | 352 words

The back end in NH Prof is responsible for intercepting NHibernate's events, making sense of all the mess, applying best practice suggestions, and forwarding the results to the front end for display.

It is also a good example of how I apply the Open/Closed Principle at the architecture level. With NH Prof, there are multiple extension points that I can use to add new features.

Here is a schematic of how things work:

[image: schematic of the backend architecture]

Not shown here is the NHibernate listener (of which, of course, I have several), which publishes events to the bus. Those events are first handled by the low level message handlers, which publish new events on the bus.

Those are interesting only in the sense that they translate the low level details into events with semantics that we can use in the app itself. Most of those events, as you probably guessed, end up in the model building part, which is responsible for turning a set of unstructured events into a coherent structure.

Along with the model building, we have another extension point: best practices analysis. Those are implemented as a set of classes that we plug into the model building part. If we want to add a new best practice, we create a new class, register it, and we are done.
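In rough terms, that plug-in pattern looks something like the following sketch. The names are illustrative (the checkin below mentions a BusFacility, but its internals are not shown in this post):

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the analyzer plug-in pattern; not the actual NH Prof code.
public interface IStatementProcessor
{
	void AfterAttachingToSession(object session, object statement);
}

public class BusFacility
{
	private readonly List<IStatementProcessor> processors = new List<IStatementProcessor>();

	// Registering the class is the only wiring a new best practice needs.
	public void Register(IStatementProcessor processor)
	{
		processors.Add(processor);
	}

	// The model building step hands each statement to every registered processor.
	public void OnStatement(object session, object statement)
	{
		foreach (var processor in processors)
			processor.AfterAttachingToSession(session, statement);
	}
}

// A trivial processor, just to show the shape of a plug-in.
public class CountingProcessor : IStatementProcessor
{
	public int Seen;

	public void AfterAttachingToSession(object session, object statement)
	{
		Seen++;
	}
}
```

The point of the pattern is that the bus never changes when a new analyzer is added, which is exactly the Open/Closed Principle at work.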

Here is the checkin for implementing the unbounded result set (which is ticket #51):

[image: checkin diff for ticket #51]

We added a new class (and a test for it) and registered it in the BusFacility; in this case I also had to fix a bug in the tested scenario, which loaded the wrong item.

I'll post more details about the actual implementation of Unbounded Result Set soon, but I wanted to talk about the architecture that enables this. Because we structured the whole thing around a common core, anything that fits the core (and most things do) doesn't require any special effort. Add a new behavior, and you are done.

time to read 4 min | 620 words

I am getting a lot of requests to explore the actual innards of NH Prof. I find it surprising, because I didn't think that people would actually be interested in that aspect of the tool.

But since interest was expressed, I'll do my best to satisfy the curiosity. The first topic to discuss is the integration test architecture. One of the things the profiler does is capture data from a remote process, and I wanted my integration tests to cover that scenario, which exposes me to things like synchronization issues and cross process communication, and (not incidentally) allows me to test scenarios that look just like real code.

The integration tests project for NH Prof is not a dll, it is an executable. When you execute it, you can ask it to run a specific scenario (specified using command line parameters).
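The dispatch inside that executable can be sketched roughly like this (hypothetical; the real NH Prof entry point is not shown in this post, and the names below are invented for illustration):

```csharp
using System;

// Hypothetical sketch of resolving a scenario from a command line argument.
public interface IScenario
{
	void Execute(object sessionFactory); // the real interface takes an ISessionFactory
}

public class DemoScenario : IScenario
{
	public void Execute(object sessionFactory) { }
}

public static class ScenarioRunner
{
	// Resolve a scenario class by the simple name passed on the command line,
	// e.g. "IntegrationTests.exe DemoScenario".
	public static IScenario CreateByName(string scenarioName)
	{
		foreach (var type in typeof(ScenarioRunner).Assembly.GetTypes())
		{
			if (type.Name == scenarioName &&
				typeof(IScenario).IsAssignableFrom(type) &&
				!type.IsAbstract && !type.IsInterface)
			{
				return (IScenario)Activator.CreateInstance(type);
			}
		}
		throw new ArgumentException("Unknown scenario: " + scenarioName);
	}
}
```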

Let us take an example of such a scenario:

public class SelectBlogByIdUsingCriteria : IScenario
{
	public void Execute(ISessionFactory factory)
	{
		using (var session = factory.OpenSession())
		using (var tx = session.BeginTransaction())
		{
			session.CreateCriteria(typeof(Blog))
				.Add(Restrictions.IdEq(1))
				.List();

			tx.Commit();
		}
	}
}

And now that I have the scenario, we can write a test that uses it. Here is the first test that I wrote using this approach:

[TestFixture]
public class CanGetDataFromSeparateProcess : IntegrationTestBase
{
	[Test]
	public void SelectBlogById()
	{
		ExecuteScenarioInDifferentProcessWithDefaultConfig<SelectBlogByIdUsingCriteria>();

		var sessionModel = observer.Model.Sessions.First();
		StatementModel selectBlogById = (StatementModel)
			sessionModel.Statements.Where(x => x is StatementModel).First();
		const string expected = @"SELECT this_.Id as Id\d_\d_,
this_.Title as Title\d_\d_,
this_.Subtitle as Subtitle\d_\d_,
this_.AllowsComments as AllowsCo\d_\d_\d_,
this_.CreatedAt as CreatedAt\d_\d_
FROM Blogs this_
WHERE this_.Id = 1
";
		Assert.IsTrue(Regex.IsMatch(selectBlogById.Text, expected), selectBlogById.Text);
	}
}

There are a couple of interesting things going on here. First, we can see the IntegrationTestBase, which has methods such as ExecuteScenarioInDifferentProcessWithDefaultConfig (the config controls the communication mechanism).

Then, all I need to do is assert on the actual model that was built as a result of executing the scenario.

Executing the scenario involves running the integration test project as an executable and passing it the scenario to execute. This tests cross process communication, exposes threading issues, and in general means that I have a realistic view of what is going to happen in the actual application.
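Under the hood, a helper like ExecuteScenarioInDifferentProcessWithDefaultConfig boils down to launching the integration test executable and waiting for it to finish. A rough sketch, with assumed names (the real helper's code is not shown here):

```csharp
using System;
using System.Diagnostics;

// Hypothetical sketch of launching the integration test exe with a scenario name.
public static class ScenarioLauncher
{
	public static void RunScenarioInSeparateProcess(string integrationTestExe, string scenarioName)
	{
		var startInfo = new ProcessStartInfo
		{
			FileName = integrationTestExe,
			Arguments = scenarioName, // the scenario to run, passed on the command line
			UseShellExecute = false
		};

		using (var process = Process.Start(startInfo))
		{
			process.WaitForExit();
			if (process.ExitCode != 0)
				throw new InvalidOperationException("Scenario failed: " + scenarioName);
		}
	}
}
```

The test then asserts against the model that the back end built from the events the launched process sent over the communication channel.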
