Voron & time series data: Getting real data outputs

time to read 8 min | 1438 words

So far, we have just put data in and pulled it out, and we have had a pretty good track record doing so. However, what do we do with the data now that we have it?

As you might expect, we need to read it out, usually by specific date ranges. The interesting thing is that we are usually not interested in just a single channel; we care about multiple channels. And for fun, those channels might or might not be synchronized. An example of synchronized channels might be the current speed and the current engine temperature in a car; they generally share the exact same timestamps. An example of out-of-sync channels is a sensor on a rooftop measuring rainfall and another sensor in the sewer measuring water flow rates. (Again, thanks to Dan for helping me with the domain.)

This is interesting, because it presents quite a few problems:

  • We need to merge different streams into a unified view.
  • We need to handle both matching and non-matching sequences.
  • We need to handle erroneous data. What happens when we have two readings for the same time from the same sensor? Yes, that shouldn’t happen, but it does.

I solved this with the following API:

public class RangeEntry
{
    public DateTime Timestamp;
    public double?[] Values;
}

IEnumerable<RangeEntry> results = dts.ScanRanges(DateTime.MinValue, DateTime.MaxValue, new[] { "6febe146-e893-4f64-89f8-527f2dbaae9b", "707dcb42-c551-4f1a-9203-e4b0852516cf", "74d5bee8-9a7b-4d4e-bd85-5f92dfc22edb", "7ae29feb-6178-4930-bc38-a90adf99cfd3", });

This API gives me the results in time order, with each value at the same position as the id requested for it. A position holds null if there is no value at that time for that particular sensor channel.
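To make the shape of the output concrete, here is a hypothetical sketch of consuming such results. The rows are hand-built sample data standing in for ScanRanges output, not real sensor readings:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

public class RangeEntry
{
    public DateTime Timestamp;
    public double?[] Values;
}

public class ConsumeDemo
{
    // Render a merged row as a CSV line; a null slot means that channel
    // had no reading at that timestamp, and becomes an empty cell.
    public static string ToCsvLine(RangeEntry entry)
    {
        var cells = new List<string> { entry.Timestamp.ToString("s") };
        foreach (var v in entry.Values)
            cells.Add(v.HasValue ? v.Value.ToString(CultureInfo.InvariantCulture) : "");
        return string.Join(",", cells);
    }

    public static void Main()
    {
        // Hand-built sample rows for two channels.
        var results = new[]
        {
            new RangeEntry { Timestamp = new DateTime(2015, 1, 1, 10, 0, 0), Values = new double?[] { 42.5, null } },
            new RangeEntry { Timestamp = new DateTime(2015, 1, 1, 10, 0, 5), Values = new double?[] { 43.1, 98.2 } },
        };
        foreach (var entry in results)
            Console.WriteLine(ToCsvLine(entry));
        // Prints:
        // 2015-01-01T10:00:00,42.5,
        // 2015-01-01T10:00:05,43.1,98.2
    }
}
```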

The actual implementation relies on this method:

IEnumerable<Entry> ScanRange(DateTime start, DateTime end, string id)

All this does is provide all the entries in a particular date range, for a particular channel. Let us see how we implement multi-channel scanning on top of this:

private class PendingEnumerator
{
    public IEnumerator<Entry> Enumerator;
    public int Index;
}

private class PendingEnumerators
{
    private readonly SortedDictionary<DateTime, List<PendingEnumerator>> _values =
        new SortedDictionary<DateTime, List<PendingEnumerator>>();

    public void Enqueue(PendingEnumerator entry)
    {
        List<PendingEnumerator> list;
        var dateTime = entry.Enumerator.Current.Timestamp;
        if (_values.TryGetValue(dateTime, out list) == false)
            _values.Add(dateTime, list = new List<PendingEnumerator>());
        list.Add(entry);
    }

    public bool IsEmpty { get { return _values.Count == 0; } }

    public List<PendingEnumerator> Dequeue()
    {
        if (_values.Count == 0)
            return new List<PendingEnumerator>();

        // Take (and remove) the earliest timestamp and all the enumerators
        // currently positioned on it.
        var kvp = _values.First();
        _values.Remove(kvp.Key);
        return kvp.Value;
    }
}

public IEnumerable<RangeEntry> ScanRanges(DateTime start, DateTime end, string[] ids)
{
    if (ids == null || ids.Length == 0)
        yield break;

    var pending = new PendingEnumerators();
    for (int i = 0; i < ids.Length; i++)
    {
        var enumerator = ScanRange(start, end, ids[i]).GetEnumerator();
        if (enumerator.MoveNext() == false)
            continue; // empty channel, nothing to merge
        pending.Enqueue(new PendingEnumerator
        {
            Enumerator = enumerator,
            Index = i
        });
    }

    var result = new RangeEntry
    {
        Values = new double?[ids.Length]
    };
    while (pending.IsEmpty == false)
    {
        var entries = pending.Dequeue();
        if (entries.Count == 0)
            continue;

        // Reset the slots so channels without a value at this time show null.
        Array.Clear(result.Values, 0, result.Values.Length);
        foreach (var entry in entries)
        {
            var current = entry.Enumerator.Current;
            result.Timestamp = current.Timestamp;
            result.Values[entry.Index] = current.Value;
        }
        yield return result;

        // Advance only the enumerators we consumed and put them back in line.
        foreach (var entry in entries)
        {
            if (entry.Enumerator.MoveNext())
                pending.Enqueue(entry);
        }
    }
}

We are getting a single entry from each channel into the pending enumerators. Then, we collate all the entries that share the same time into a single entry.

We use the Index property to track the actual expected index of the entry in the output. And we handle duplicate times in the same channel by outputting multiple entries.
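To see the duplicate behavior concretely, here is a self-contained sketch of the same collation idea over in-memory lists instead of Voron-backed channels. The names and sample data are my own; only the merge logic mirrors the code above:

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;

public class Reading { public DateTime Timestamp; public double Value; }

public class MergeDemo
{
    // Each channel cursor is keyed by its current timestamp in a sorted
    // dictionary, so the smallest timestamp is always processed next.
    public static List<string> Merge(List<Reading>[] channels)
    {
        var pending = new SortedDictionary<DateTime, List<int>>();
        var cursors = new IEnumerator<Reading>[channels.Length];
        for (int i = 0; i < channels.Length; i++)
        {
            cursors[i] = channels[i].GetEnumerator();
            if (cursors[i].MoveNext())
                Enqueue(pending, cursors[i].Current.Timestamp, i);
        }

        var rows = new List<string>();
        var values = new double?[channels.Length];
        while (pending.Count > 0)
        {
            var kvp = pending.First();
            pending.Remove(kvp.Key);
            Array.Clear(values, 0, values.Length);
            foreach (var idx in kvp.Value)
                values[idx] = cursors[idx].Current.Value;
            rows.Add(kvp.Key.ToString("HH:mm:ss") + " -> " + string.Join(",",
                values.Select(v => v.HasValue ? v.Value.ToString(CultureInfo.InvariantCulture) : "null")));
            // Advance only the cursors we consumed; a duplicate timestamp in
            // the same channel simply re-enters the queue and yields a new row.
            foreach (var idx in kvp.Value)
                if (cursors[idx].MoveNext())
                    Enqueue(pending, cursors[idx].Current.Timestamp, idx);
        }
        return rows;
    }

    static void Enqueue(SortedDictionary<DateTime, List<int>> pending, DateTime t, int idx)
    {
        List<int> list;
        if (pending.TryGetValue(t, out list) == false)
            pending.Add(t, list = new List<int>());
        list.Add(idx);
    }

    public static void Main()
    {
        var t0 = new DateTime(2015, 1, 1, 10, 0, 0);
        var channels = new[]
        {
            // channel 0 has two readings for the same second (t0 + 5s)
            new List<Reading>
            {
                new Reading { Timestamp = t0,               Value = 1.0 },
                new Reading { Timestamp = t0.AddSeconds(5), Value = 2.0 },
                new Reading { Timestamp = t0.AddSeconds(5), Value = 2.5 },
            },
            new List<Reading>
            {
                new Reading { Timestamp = t0.AddSeconds(5), Value = 9.0 },
            },
        };
        foreach (var row in Merge(channels))
            Console.WriteLine(row);
        // Prints:
        // 10:00:00 -> 1,null
        // 10:00:05 -> 2,9
        // 10:00:05 -> 2.5,null
    }
}
```

Note how the duplicate reading in channel 0 produces a second output row for 10:00:05, exactly as described above.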

Testing this on my 1.1 million records data set, we can get 185 thousand records back in 0.15 seconds.