Ayende @ Rahien

Hi!
My name is Oren Eini
Founder of Hibernating Rhinos LTD and RavenDB.
You can reach me by phone or email:

ayende@ayende.com

+972 52-548-6969


Posts: 6,317 | Comments: 46,923


Answer: What does this code do?

time to read 2 min | 225 words

This one was a real shock: the code seems so simple, but it does something very different from what I would expect.

The question is why?

As it turns out, we are missing one character here:

[image]

Notice the lack of the comma? Let us see how the compiler is treating this code by breaking it apart into steps, shall we?

First, let us break it into two statements:

Note that we moved the name line into the first statement, since there isn’t a comma there. But what is it actually doing? This looks very strange, but that is just because we have the dictionary initializer here. If we drop it, we get:

[image]

And that makes a lot more sense. If we break it all down, we have:

And that explains it all. It makes perfect sense, and it is a very nasty trap. We ran into it accidentally in production code, and it was nearly impossible to figure out what was going on or why it happened.
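The screenshots did not survive in this archive, but the class of bug described here can be reproduced with a hypothetical snippet (the names below are mine, not the original post's). With the comma missing, the second `[...] = ...` line is parsed as an indexer on the previous value, not as a new dictionary entry:

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static void Main()
    {
        var defaults = new Dictionary<string, string>();

        // Intended: two separate entries in `settings`. But without a comma
        // after the first line, the second line parses as an *indexer* on
        // `defaults`, i.e.:  ["name"] = (defaults["port"] = "8080")
        var settings = new Dictionary<string, string>
        {
            ["name"] = defaults
            ["port"] = "8080"
        };

        Console.WriteLine(settings.Count);    // 1, not 2
        Console.WriteLine(settings["name"]);  // 8080
        Console.WriteLine(defaults["port"]);  // 8080 - defaults was silently mutated
    }
}
```

Broken into two statements, the initializer above is equivalent to `defaults["port"] = "8080";` followed by `settings["name"] = "8080";` — one entry where two were intended, plus a surprise write to another dictionary.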

Answer: Modifying execution approaches

time to read 12 min | 2372 words

In RavenDB, we had this piece of code:

        internal T[] LoadInternal<T>(string[] ids, string[] includes)
        {
            if(ids.Length == 0)
                return new T[0];

            IncrementRequestCount();
            Debug.WriteLine(string.Format("Bulk loading ids [{0}] from {1}", string.Join(", ", ids), StoreIdentifier));
            MultiLoadResult multiLoadResult;
            JsonDocument[] includeResults;
            JsonDocument[] results;
#if !SILVERLIGHT
            var sp = Stopwatch.StartNew();
#else
            var startTime = DateTime.Now;
#endif
            bool firstRequest = true;
            do
            {
                IDisposable disposable = null;
                if (firstRequest == false) // if this is a repeated request, we mustn't use the cached result, but have to re-query the server
                    disposable = DatabaseCommands.DisableAllCaching();
                using (disposable)
                    multiLoadResult = DatabaseCommands.Get(ids, includes);

                firstRequest = false;
                includeResults = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Includes).ToArray();
                results = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Results).ToArray();
            } while (
                AllowNonAuthoritiveInformation == false &&
                results.Any(x => x.NonAuthoritiveInformation ?? false) &&
#if !SILVERLIGHT
                sp.Elapsed < NonAuthoritiveInformationTimeout
#else 
                (DateTime.Now - startTime) < NonAuthoritiveInformationTimeout
#endif
                );

            foreach (var include in includeResults)
            {
                TrackEntity<object>(include);
            }

            return results
                .Select(TrackEntity<T>)
                .ToArray();
        }

And we needed to take this same piece of code and execute it in:

  • Async fashion
  • As part of a batch of queries (sending multiple requests to RavenDB in a single HTTP call).

Everything else is the same, but in each case the marked line (the actual call to the server) is completely different.

I chose to address this by doing a Method Object refactoring. I created a new class, moved all the local variables to fields, and moved each part of the method into its own method. I also explicitly gave up control of execution, deferring that to whoever is calling us. We ended up with this:

    public class MultiLoadOperation
    {
        private static readonly Logger log = LogManager.GetCurrentClassLogger();

        private readonly InMemoryDocumentSessionOperations sessionOperations;
        private readonly Func<IDisposable> disableAllCaching;
        private string[] ids;
        private string[] includes;
        bool firstRequest = true;
        IDisposable disposable = null;
        JsonDocument[] results;
        JsonDocument[] includeResults;
                
#if !SILVERLIGHT
        private Stopwatch sp;
#else
        private    DateTime startTime;
#endif

        public MultiLoadOperation(InMemoryDocumentSessionOperations sessionOperations, 
            Func<IDisposable> disableAllCaching,
            string[] ids, string[] includes)
        {
            this.sessionOperations = sessionOperations;
            this.disableAllCaching = disableAllCaching;
            this.ids = ids;
            this.includes = includes;
        
            sessionOperations.IncrementRequestCount();
            log.Debug("Bulk loading ids [{0}] from {1}", string.Join(", ", ids), sessionOperations.StoreIdentifier);

#if !SILVERLIGHT
            sp = Stopwatch.StartNew();
#else
            startTime = DateTime.Now;
#endif
        }

        public IDisposable EnterMultiLoadContext()
        {
            if (firstRequest == false) // if this is a repeated request, we mustn't use the cached result, but have to re-query the server
                disposable = disableAllCaching();
            return disposable;
        }

        public bool SetResult(MultiLoadResult multiLoadResult)
        {
            firstRequest = false;
            includeResults = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Includes).ToArray();
            results = SerializationHelper.RavenJObjectsToJsonDocuments(multiLoadResult.Results).ToArray();

            return    sessionOperations.AllowNonAuthoritiveInformation == false &&
                    results.Any(x => x.NonAuthoritiveInformation ?? false) &&
#if !SILVERLIGHT
                    sp.Elapsed < sessionOperations.NonAuthoritiveInformationTimeout
#else 
                    (DateTime.Now - startTime) < sessionOperations.NonAuthoritiveInformationTimeout
#endif
                ;
        }

        public T[] Complete<T>()
        {
            foreach (var include in includeResults)
            {
                sessionOperations.TrackEntity<object>(include);
            }

            return results
                .Select(sessionOperations.TrackEntity<T>)
                .ToArray();
        }
    }

Note that this class doesn’t contain two very important things:

  • The actual call to the database; we gave up control of that.
  • The execution order of the methods; we don’t control that either.

That was ugly, and I decided that since I have to write another implementation as well, I might as well do the right thing and have a shared implementation. The key was to extract everything away except for the call to get the actual value. So I did just that, and we got a new class that does all of the functionality above, except controlling where and how the actual call to the server is made.

Now, for the sync version, we have this code:

internal T[] LoadInternal<T>(string[] ids, string[] includes)
{
    if(ids.Length == 0)
        return new T[0];

    var multiLoadOperation = new MultiLoadOperation(this, DatabaseCommands.DisableAllCaching, ids, includes);
    MultiLoadResult multiLoadResult;
    do
    {
        using(multiLoadOperation.EnterMultiLoadContext())
        {
            multiLoadResult = DatabaseCommands.Get(ids, includes);
        }
    } while (multiLoadOperation.SetResult(multiLoadResult));

    return multiLoadOperation.Complete<T>();
}

This isn’t the most trivial of methods, I’ll admit, but it is ever so much better than the alternative, especially since now the async version looks like:

/// <summary>
/// Begins the async multi load operation
/// </summary>
public Task<T[]> LoadAsyncInternal<T>(string[] ids, string[] includes)
{
    var multiLoadOperation = new MultiLoadOperation(this,AsyncDatabaseCommands.DisableAllCaching, ids, includes);
    return LoadAsyncInternal<T>(ids, includes, multiLoadOperation);
}

private Task<T[]> LoadAsyncInternal<T>(string[] ids, string[] includes, MultiLoadOperation multiLoadOperation)
{
    using (multiLoadOperation.EnterMultiLoadContext())
    {
        return AsyncDatabaseCommands.MultiGetAsync(ids, includes)
            .ContinueWith(t =>
            {
                if (multiLoadOperation.SetResult(t.Result) == false)
                    return Task.Factory.StartNew(() => multiLoadOperation.Complete<T>());
                return LoadAsyncInternal<T>(ids, includes, multiLoadOperation);
            })
            .Unwrap();
    }
}

Again, it isn’t trivial, but at least the core stuff, the actual logic that isn’t related to how we execute the code, is shared.
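For comparison, async/await (which did not exist when this was written) collapses the recursive ContinueWith chain back into the same do/while shape as the sync version. A toy sketch with stand-in types — the MultiLoadOperation and server call here are fakes standing in for the real ones:

```csharp
using System;
using System.Threading.Tasks;

class MultiLoadResult { }

// Fake operation: pretends the first result is non-authoritative, forcing one retry.
class MultiLoadOperation
{
    private int attempts;
    public IDisposable EnterMultiLoadContext() => null;           // caching toggle elided
    public bool SetResult(MultiLoadResult result) => ++attempts < 2;
    public string[] Complete() => new[] { "doc-1" };
}

class Demo
{
    static Task<MultiLoadResult> MultiGetAsync() =>               // fake server call
        Task.FromResult(new MultiLoadResult());

    // The recursive ContinueWith chain becomes the same do/while as the sync version:
    public static async Task<string[]> LoadAsyncInternal(MultiLoadOperation op)
    {
        MultiLoadResult result;
        do
        {
            using (op.EnterMultiLoadContext())
            {
                result = await MultiGetAsync();
            }
        } while (op.SetResult(result));

        return op.Complete();
    }

    static void Main()
    {
        var docs = LoadAsyncInternal(new MultiLoadOperation()).Result;
        Console.WriteLine(docs.Length);  // 1
    }
}
```

The only line that differs from the sync version is the awaited server call, which is exactly the property the refactoring was after.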

Answer: Stopping the leaks

time to read 6 min | 1068 words

Originally posted at 4/19/2011

Yesterday I posted the following challenge:

Given the following API, can you think of a way that would prevent memory leaks?

public interface IBufferPool
{
    byte[] TakeBuffer(int size);
    void ReturnBuffer(byte[] buffer);
}

The problem with having something like this is that forgetting to return the buffer is going to cause a memory leak. Instead, I would like the application to stop if a buffer is leaked. Leaked here means that no one is referencing the buffer, but it was never returned to the pool.

What I would really like is that when running in debug mode, leaking a buffer would stop the entire application and tell me:

  • That a buffer was leaked.
  • What was the stack trace that allocated that buffer.

Let us take a look at how we are going to implement this, shall we? I am going to defer the actual implementation of the buffer pool to System.ServiceModel.Channels.BufferManager and focus on providing the anti-leak features. The result is that this code:

IBufferPool pool = new BufferPool(1024*512, 1024);

var buffer = pool.TakeBuffer(512);
GC.WaitForPendingFinalizers(); // nothing here

pool.ReturnBuffer(buffer);
buffer = null;
GC.WaitForPendingFinalizers(); // nothing here, we released the memory properly

pool.TakeBuffer(512); // take and discard a buffer without returning to the pool
GC.WaitForPendingFinalizers(); // failure!

Will result in the following error:

Unhandled Exception: System.InvalidOperationException: A buffer was leaked. Initial allocation:
   at ConsoleApplication1.BufferPool.BufferTracker.TrackAllocation() in IBufferPool.cs:line 22
   at ConsoleApplication1.BufferPool.TakeBuffer(Int32 size) in IBufferPool.cs:line 60
   at ConsoleApplication1.Program.Main(String[] args) in Program.cs:line 21

And now for the implementation:

public class BufferPool : IBufferPool
{
    public class BufferTracker
    {
        private StackTrace stackTrace;

        public void TrackAllocation()
        {
            stackTrace = new StackTrace(true);
            GC.ReRegisterForFinalize(this);
        }

        public void Discard()
        {
            stackTrace = null;
            GC.SuppressFinalize(this);
        }

        ~BufferTracker()
        {
            if (stackTrace == null)
                return;

            throw new InvalidOperationException(
                "A buffer was leaked. Initial allocation:" + Environment.NewLine + stackTrace
                );
        }
    }

    private readonly BufferManager bufferManager;
    private ConditionalWeakTable<byte[], BufferTracker> trackLeakedBuffers = new ConditionalWeakTable<byte[], BufferTracker>();

    public BufferPool(long maxBufferPoolSize, int maxBufferSize)
    {
        bufferManager = BufferManager.CreateBufferManager(maxBufferPoolSize, maxBufferSize);
    }

    public void Dispose()
    {
        bufferManager.Clear();
        // note that disposing the pool before returning all of the buffers will cause a crash
    }

    public byte[] TakeBuffer(int size)
    {
        var buffer = bufferManager.TakeBuffer(size);
        trackLeakedBuffers.GetOrCreateValue(buffer).TrackAllocation();
        return buffer;
    }

    public void ReturnBuffer(byte[] buffer)
    {
        BufferTracker value;
        if(trackLeakedBuffers.TryGetValue(buffer, out value))
        {
            value.Discard();
        }
        bufferManager.ReturnBuffer(buffer);
    }
}

As you can see, utilizing ConditionalWeakTable is quite powerful, since it allows us to support a lot of really advanced scenarios in a fairly simple way.
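The reason this works without pinning buffers in memory is ConditionalWeakTable's defining property: an entry does not keep its key alive. A minimal standalone sketch of that behavior:

```csharp
using System;
using System.Runtime.CompilerServices;

class Demo
{
    // Associates extra data with arbitrary objects without extending their lifetime.
    static readonly ConditionalWeakTable<object, string> Metadata =
        new ConditionalWeakTable<object, string>();

    static void Main()
    {
        var buffer = new byte[16];
        Metadata.Add(buffer, "allocated by Demo.Main");

        string info;
        if (Metadata.TryGetValue(buffer, out info))
            Console.WriteLine(info);  // allocated by Demo.Main

        // Once `buffer` becomes unreachable, the entry (key and value alike)
        // becomes collectible too - the table itself can never cause a leak.
    }
}
```

That is what lets the BufferTracker's finalizer serve as the leak detector: the tracker only survives as long as the leaked buffer does.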

Answer: This code should never hit production

time to read 4 min | 769 words

Originally posted at 12/15/2010

Yesterday I asked what is wrong with the following code:

public ISet<string> GetTerms(string index, string field)
{
    if(field == null) throw new ArgumentNullException("field");
    if(index == null) throw new ArgumentNullException("index");
    
    var result = new HashSet<string>();
    var currentIndexSearcher = database.IndexStorage.GetCurrentIndexSearcher(index);
    IndexSearcher searcher;
    using(currentIndexSearcher.Use(out searcher))
    {
        var termEnum = searcher.GetIndexReader().Terms(new Term(field));
        while (field.Equals(termEnum.Term().Field()))
        {
           result.Add(termEnum.Term().Text());

            if (termEnum.Next() == false)
                break;
        }
    }

    return result;
}

The answer to that is quite simple: this code doesn’t have any paging. What this means is that if we execute this piece of code on a field with a very high number of unique items (such as, for example, email addresses), we would return all the results in one shot. That is, if we can actually fit all of them in memory. Anything that can run over a potentially unbounded result set should have paging as part of its basic API.

This is not optional.

Here is the correct piece of code:

public ISet<string> GetTerms(string index, string field, string fromValue, int pageSize)
{
    if(field == null) throw new ArgumentNullException("field");
    if(index == null) throw new ArgumentNullException("index");
    
    var result = new HashSet<string>();
    var currentIndexSearcher = database.IndexStorage.GetCurrentIndexSearcher(index);
    IndexSearcher searcher;
    using(currentIndexSearcher.Use(out searcher))
    {
        var termEnum = searcher.GetIndexReader().Terms(new Term(field, fromValue ?? string.Empty));
        if (string.IsNullOrEmpty(fromValue) == false)// need to skip this value
        {
            while(fromValue.Equals(termEnum.Term().Text()))
            {
                if (termEnum.Next() == false)
                    return result;
            }
        }
        while (field.Equals(termEnum.Term().Field()))
        {
            result.Add(termEnum.Term().Text());

            if (result.Count >= pageSize)
                break;

            if (termEnum.Next() == false)
                break;
        }
    }

    return result;
}

And that is quite efficient, even for searching large data sets.

For bonus points, the calling code ensures that pageSize cannot be too big :-)

Answer: Your own ThreadLocal

time to read 4 min | 616 words

Originally posted at 12/15/2010

Well, the problem in our last answer was that we didn’t protect ourselves from multi-threaded access to the slots variable. Here is the code with this fixed:

public class CloseableThreadLocal
{
    [ThreadStatic]
    public static Dictionary<object, object> slots;

    private readonly object holder = new object();
    private Dictionary<object, object> capturedSlots;

    private Dictionary<object, object> Slots
    {
        get
        {
            if (slots == null)
                slots = new Dictionary<object, object>();
            capturedSlots = slots;
            return slots;
        }
    }


    public /*protected internal*/ virtual Object InitialValue()
    {
        return null;
    }

    public virtual Object Get()
    {
        object val;

        lock (Slots)
        {
            if (Slots.TryGetValue(holder, out val))
            {
                return val;
            }
        }
        val = InitialValue();
        Set(val);
        return val;
    }

    public virtual void Set(object val)
    {
        lock (Slots)
        {
            Slots[holder] = val;
        }
    }

    public virtual void Close()
    {
        GC.SuppressFinalize(this);
        if (capturedSlots != null)
            capturedSlots.Remove(this);
    }

    ~CloseableThreadLocal()
    {
        if (capturedSlots == null)
            return;
        lock (capturedSlots)
            capturedSlots.Remove(holder);
    }
}

Is this it? Are there still issues that we need to handle?

Answer: Debugging a resource leak

time to read 3 min | 461 words

As it turns out, there are a LOT of issues with this code:

public class QueueActions : IDisposable
{
    UnmanagedDatabaseConnection database;
    public string Name { get; private set; }

    public QueueActions( UnmanagedDatabaseConnectionFactory factory)
    {
         database = factory.Create();
         database.Open(()=> Name = database.ReadName());
    }

   // assume proper GC finalizer impl

    public void Dispose()
    {
          database.Dispose();
    }
}

And the code using this:

using(var factory = CreateFactory())
{
   ThreadPool.QueueUserWorkItem(()=>
   {
          using(var actions = new QueueActions(factory))
          {
               actions.Send("abc");     
          }
    });
}

To begin with, what happens if we close the factory between the first and second lines of the QueueActions constructor?

We already have an unmanaged resource, but when we try to open it, we are going to get an exception. Since the exception is thrown from the constructor, the usual using logic will NOT be invoked, and the connection will not be disposed.

Furthermore, and this is the reason for this blog post: Dispose itself can also fail.

Here is the actual stack trace that caused this blog post:

Microsoft.Isam.Esent.Interop.EsentErrorException: Error TermInProgress (JET_errTermInProgress, Termination in progress)
at Microsoft.Isam.Esent.Interop.Api.Check(Int32 err) in Api.cs: line 1492
at Microsoft.Isam.Esent.Interop.Api.JetCloseTable(JET_SESID sesid, JET_TABLEID tableid) in Api.cs: line 372
at Microsoft.Isam.Esent.Interop.Table.ReleaseResource() in D:\Work\esent\EsentInterop\Table.cs: line 97
at Microsoft.Isam.Esent.Interop.EsentResource.Dispose() in EsentResource.cs: line 63
at Rhino.Queues.Storage.AbstractActions.Dispose() in AbstractActions.cs: line 146 
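One way to plug the constructor hole described above is to make the constructor clean up after itself when Open fails, since no one else will ever get the chance to call Dispose. A runnable sketch with simplified stand-in types (the connection type and the failure here are fakes for illustration):

```csharp
using System;

// Stand-in for the unmanaged connection; Open always fails, as if the
// factory was closed between Create and Open.
class UnmanagedDatabaseConnection : IDisposable
{
    public bool Disposed;
    public void Open(Action onOpened) { throw new InvalidOperationException("factory closed"); }
    public string ReadName() { return "queue"; }
    public void Dispose() { Disposed = true; }
}

class QueueActions : IDisposable
{
    private readonly UnmanagedDatabaseConnection database;
    public string Name { get; private set; }

    public QueueActions(UnmanagedDatabaseConnection database)
    {
        this.database = database;
        try
        {
            database.Open(() => Name = database.ReadName());
        }
        catch
        {
            // The ctor threw, so no using block will ever dispose us;
            // release what we already own before rethrowing.
            database.Dispose();
            throw;
        }
    }

    public void Dispose() { database.Dispose(); }
}

class Demo
{
    static void Main()
    {
        var connection = new UnmanagedDatabaseConnection();
        try { new QueueActions(connection); }
        catch (InvalidOperationException) { }
        Console.WriteLine(connection.Disposed);  // True
    }
}
```

This handles the first issue only; the second one, Dispose itself throwing (as in the Esent stack trace), still needs its own policy at the call site.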

Answer: The lazy loaded inheritance many to one association OR/M conundrum

time to read 4 min | 710 words

Update: It appears that I am wrong, and NHibernate can support this functionality by eagerly loading the association at load time. You can do that by specifying lazy="false" (and optionally, outer-join="true") on the many to one association.

Yesterday I presented an interesting problem that pops up with any OR/M that supports inheritance and lazy loading.

Let us say that we have the following entity model:

[image: the entity model]

Backed by the following data model:

[image: the data model]

As you can see, we map the Animal hierarchy to the Animals table, and we have a polymorphic association between an Animal Lover and his/her animal. Where does the problem start?

Well, let us say that we want to load the animal lover. We do that using the following SQL:

SELECT Name,
	 Animal,
	 Id
FROM AnimalLover
WHERE Id = 1 /* @p0 */

And now we have an animal lover instance:

var animalLover = GetAnimalLoverById(1);
var isDog = animalLover.Animal is Dog;
var isCat = animalLover.Animal is Cat;

Can you guess what would be the result of this code?

The answer is that both isDog and isCat would be… false.

But how is that?

To answer that question, let us take a look at the SQL that was used to load the animal lover, and at a typical example of hydrating entities. I am using Davy’s DAL here to show off the problem, because the code is simple and it demonstrates that the problem is not unique to a particular OR/M, but shared among all of them (Davy’s DAL doesn’t even support inheritance, for example).

private void SetReferenceProperties<TEntity>(
	TableInfo tableInfo, 
	TEntity entity, 
	IDictionary<string, object> values)
{
	foreach (var referenceInfo in tableInfo.References)
	{
		if (referenceInfo.PropertyInfo.CanWrite == false)
			continue;
		
		object foreignKeyValue = values[referenceInfo.Name];

		if (foreignKeyValue is DBNull)
		{
			referenceInfo.PropertyInfo.SetValue(entity, null, null);
			continue;
		}

		var referencedEntity = sessionLevelCache.TryToFind(
			referenceInfo.ReferenceType, foreignKeyValue);
			
		if(referencedEntity == null)
			referencedEntity = CreateProxy(tableInfo, referenceInfo, foreignKeyValue);
								   
		referenceInfo.PropertyInfo.SetValue(entity, referencedEntity, null);
	}
}

Take a look at what the code is doing: we are currently processing the Animal property on the AnimalLover class, and we try to find an Animal that was loaded with a primary key matching the value of the Animal column in the AnimalLovers table.

When we can’t find it, we have to create a lazy loading proxy for the referenced entity. And here is where the conundrum kicks in. When we have inheritance, we have a real problem. What is the type of the referenced entity?

From the model, we know that it must be a derivation of Animal of some sort, and we have its PK, but we have no way of knowing which without going to the database for it.

So what are we going to do? Because we don’t have enough information to create a lazy loading proxy of the appropriate type, we actually generate a lazy loading proxy of the type that we do know about, Animal.

But what about when it is being loaded?

Well, that is where the lack of #become in .NET becomes painful: we already have an instance, and we can’t change its type. And we can’t replace the reference on the AnimalLover, because someone might have grabbed a reference to the animal before the lazy load.

The way to handle it is by turning the lazy loading proxy into a real one. We load a new instance that represents the entity, now with the correct type, since we query the DB to find out what it is (along with the rest of the entity’s data).

And the lazy loading proxy that we originally used is now loaded, and any call made on it will be forwarded to the new instance that was loaded.

animalLover.Animal stays an AnimalProxy, and cannot be cast to a Dog or a Cat, even if the actual row it is pointing to is a Dog or a Cat.
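The type-system side of this can be shown in a few lines, with hypothetical minimal types: the proxy must be generated as a subclass of the statically known type, Animal, so no cast to Dog or Cat can ever succeed:

```csharp
using System;

class Animal { }
class Dog : Animal { }
class Cat : Animal { }

// The OR/M can only generate the proxy from the statically known type:
class AnimalProxy : Animal
{
    // a real proxy would forward every member to the lazily loaded instance
}

class Demo
{
    static void Main()
    {
        Animal animal = new AnimalProxy();   // what animalLover.Animal returns
        Console.WriteLine(animal is Dog);    // False
        Console.WriteLine(animal is Cat);    // False - regardless of the actual row
    }
}
```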

Answer: Don't stop with the first DSL abstraction

time to read 5 min | 801 words

The problem as it was stated was of rules that looked like this:

upon bounced_check or refused_credit:
	if customer.TotalPurchases > 10000: # preferred
		ask_authorization_for_more_credit
	else:
		call_the cops 

upon new_order:
	if customer.TotalPurchases > 10000: # preferred
		apply_discount 5.percent

upon order_shipped:
	send_marketing_stuff unless customer.RequestedNoSpam 

I don't like it, and the reason isn't just that we can introduce IsPreferred.

I don't like it because the abstraction facilities here are poor. We have basically introduced events and business rules, maybe with a sprinkling of a domain model, but nothing really meaningful. Such systems will die under their own weight in any situation of significant complexity (in other words, in all real world situations).

Let us consider the problem in reverse, shall we? We have various conditions and actions upon which we can act. But the logic is scattered all over the place, making it hard to read, modify, understand and work with. When such a system forms the lifeblood of the business, the business usually adapts, and starts to talk in the terms of the system. However, it tends to lose the ability to think about things in ways that would be more meaningful.

I listened today to a business person trying to explain a point he wanted to make. It took him several tries to explain the business problem because he was focused on the technical one. The system has a corrupting effect on the language. I call this the Babel Syndrome, the reverse of DDD's ubiquitous language.

Let us see if we can get a higher level of meaning out of the above DSL, shall we? First, we restate our problem: instead of dealing with events and conditions for responding to those events, we deal with business responses for scenarios. It doesn't sound like much of a difference, but in actuality, there is a big difference between the two.

The most important of those differences is the change from handling the events to handling a business scenario in a given context. In other words, instead of asking what we should do when a check is bounced, we need to ask a totally different question. "When the customer is preferred, what should the response be for bounced check?"

This is anything but a minor change in the way we think about the language and how we operate on it. Let us see the DSL script, after which we can discuss how it affects us. These are the contents of the default.boo file:

upon order_shipped:
	send_marketing_stuff unless customer.RequestedNoSpam

upon bounced_check or refused_credit:
	call_the cops

This will be executed for all orders, like before. Now, let us look at preferred_customer.boo, and what concepts it expresses.

when customer.TotalPurchases > 10000 # preferred

upon new_order:
	apply_discount 5.percent

upon bounced_check or refused_credit:
	ask_authorization_for_more_credit

And now we are getting to see some of the more interesting parts of the difference. We are now talking in terms of a business scenario. When we have a preferred customer, and something happen, how should we respond?

This change is a well known refactoring: conditional to polymorphism. In other words, we just created the strategy pattern with a DSL. The difference here is that the script has an active role in deciding whether it can deal with the scenario or not (in other words, a chain of responsibility).

When we need to handle some business scenario, we are going to execute all the scripts, with default.boo being the last one to run. If any of the scripts accepts the scenario as valid and has a specific action to take, it has the option to do so.

Enough about the implementation, let us go back to the concepts. We can now talk to the business people in a way that is far more concise and natural. Instead of having to focus on all permutations of a possible event, we can now talk about a specific scenario and how we handle the business event in that context. Not only is this more readable, it is far easier to actually define such things as the meaning of a preferred customer. I can open the DSL and actually read it.

Similar approaches are very useful when you recognize that the code is asking to be given a more explicit shape than just generic rules. Don't let your DSL be whatever you started with. Find and actively extract higher level meanings whenever it is possible.

A deeper examination of this DSL, and how to build and use it, is likely to make up most of chapter 13, as a real world example of a complex DSL. What do you think?

Given this approach, how would you design an offer management DSL?

Answer: How many tests?

time to read 2 min | 365 words

Two days ago I asked how many tests this method needs:

///<summary> 
///Get the latest published webcast 
///</summary>
public Webcast GetLatest();

Here is what I came up with:

[TestFixture]
public class WebcastRepositoryTest : DatabaseTestFixtureBase
{
	private IWebcastRepository webcastRepository;

	[TestFixtureSetUp]
	public void TestFixtureSetup()
	{
		IntializeNHibernateAndIoC(PersistenceFramework.ActiveRecord, 
			"windsor.boo", MappingInfo.FromAssemblyContaining<Webcast>());
	}

	[SetUp]
	public void Setup()
	{
		CurrentContext.CreateUnitOfWork();
		webcastRepository = IoC.Resolve<IWebcastRepository>();
	}

	[TearDown]
	public void Teardown()
	{
		CurrentContext.DisposeUnitOfWork();
	}

	[Test]
	public void Can_save_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		Assert.AreNotEqual(0, webcast.Id);
	}

	[Test]
	public void Can_load_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		UnitOfWork.CurrentSession.Evict(webcast);

		var webcast2 = webcastRepository.Get(webcast.Id);
		Assert.AreEqual(webcast.Id, webcast2.Id);
		Assert.AreEqual("test", webcast2.Name);
		Assert.IsNull(webcast2.PublishDate);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_not_consider_any_that_is_not_published()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));

		Assert.IsNull(webcastRepository.GetLatest());
	}

	[Test]
	public void When_asking_for_latest_webcast_will_get_published_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-1) };
		With.Transaction(() => webcastRepository.Save(webcast2));

		Assert.AreEqual(webcast2.Id, webcastRepository.GetLatest().Id);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_get_the_latest_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-2) };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-1) };
		With.Transaction(() => webcastRepository.Save(webcast2));

		Assert.AreEqual(webcast2.Id, webcastRepository.GetLatest().Id);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_not_consider_webcasts_published_in_the_future()
	{
		var webcast = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-2) };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(2) };
		With.Transaction(() => webcastRepository.Save(webcast2));
		Assert.AreEqual(webcast.Id, webcastRepository.GetLatest().Id);
	}
}

And the implementation:

public class WebcastRepository : RepositoryDecorator<Webcast>, IWebcastRepository
{
	public WebcastRepository(IRepository<Webcast> repository)
	{
		Inner = repository;
	}

	public Webcast GetLatest()
	{
		var publishedWebcastsByDateDesc =
			from webcast in Webcasts
			where webcast.PublishDate != null && webcast.PublishDate < SystemTime.Now()
			orderby webcast.PublishDate descending 
			select webcast;

		return publishedWebcastsByDateDesc.FirstOrDefault();
	}

	private static IOrderedQueryable<Webcast> Webcasts
	{
		get { return UnitOfWork.CurrentSession.Linq<Webcast>(); }
	}
}

I think it is pretty sweet.
