Ayende @ Rahien

It's a girl

NHibernate on .Net 2.0: Part II

My first attempt at using generic collections in NHibernate was to just make NHibernate grok my custom collection. I took a look at the code, and it seemed rather straightforward to do so. I posted to the developer list and got the following response from Sergey, the lead developer:

  1. Have your collection class implement an interface (class MyCollection: IMyCollection)
  2. Create NHibernate-specific wrapper class (PersistentMyCollection : PersistentCollection, IMyCollection) which functions as a proxy for the real collection. All methods from IMyCollection that PersistentMyCollection implements have a Read() or Write() at the beginning, and then delegate to the real collection. PersistentMyCollection should also implement some abstract methods from PersistentCollection, these mostly have to do with snapshots and loading process.
  3. Create MyCollectionType which will derive from PersistentCollectionType and implement methods like Wrap (wrapping MyCollection in PersistentMyCollection), Instantiate, Add and Clear, and the rest.
  4. Modify the binder to accept your collection type, and that should be it.

It's straightforward but a long process, and it involves some deep knowledge of the way NHibernate works. Luckily I managed to get most of it pre-baked by transforming the existing Set collections to use generics. I started to run into problems when I had to dynamically construct generic types, but this article sums it up pretty well. With the correct info I could get it to work.
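For what it's worth, the reflection dance itself is short once you know about MakeGenericType. A minimal sketch (the List<> example is mine, not the article's):

// requires: using System; using System.Collections.Generic;
// Construct List<string> at runtime from the open generic type.
Type open = typeof(List<>);
Type closed = open.MakeGenericType(typeof(string));
object list = Activator.CreateInstance(closed);
Console.WriteLine(list.GetType()); // prints System.Collections.Generic.List`1[System.String]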

I decided to wait with that for now, and try a less invasive approach. I'm currently trying to get it to work using Property Accessors. I'll post my results when I'm done.

Good quotes

These quotes really cracked me up; I have to find an excuse to use them on someone sometime.

I think my favorites are:

  •  "A modest little person, with much to be modest about." --Winston Churchill
  •  "I've just learned about his illness. Let's hope it's nothing trivial." --Irvin S. Cobb

NHibernate on .Net 2.0: Part I

I've been developing in .Net 2.0 [without ReSharper* :-( ], and I'm using NHibernate and Active Record for the data access layer. There were no problems with regard to their usage, but there are things that I don't like in the interface that they give you. The problem is that you get an untyped collection. This was sort of OK in 1.0/1.1, but it's a real eyesore when developing in 2.0, where I want everything to be a generic type-safe collection.

Another issue with NHibernate is that the syncing between the two sides of a relationship is not automatic. That is to say, the following code is essentially a no-op:

Blog blog = (Blog)session.Load(typeof(Blog), 1);
blog.Posts.Add(new Post("Brilliant Post By Ayende"));
session.Flush();

That's because the relationship is maintained on the other side of the association (think about the way it's all laid out in the tables, and you'll understand why). This is the bare minimum to get it to work:

Post post = new Post("Brilliant Post By Ayende");
Blog blog = (Blog)session.Load(typeof(Blog), 1);
post.Blog = blog;
session.Save(post);
session.Flush();

But the above code has a serious shortcoming: it doesn't create the association in the blog, so you would need to re-load the blog from the database to see the change. This is undesirable, naturally, so you can do:

Post post = new Post("Brilliant Post By Ayende");
Blog blog = (Blog)session.Load(typeof(Blog), 1);
post.Blog = blog;
blog.Posts.Add(post);
session.Save(post);
session.Flush();

Or create an AddPost() method that will do both operations in one call. This is something that bothers me; I like the natural way of doing blog.Posts.Add(post) to create associations. It's what a developer comes to expect. The problem is that you can't add the code to do it to the Add() method on the list, since the list is something that NHibernate supplies to you.
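Here is roughly what I mean; a minimal sketch, where the Blog/Post classes just mirror the snippets above:

public class Blog
{
   private IList posts = new ArrayList(); // requires using System.Collections;

   public IList Posts { get { return posts; } }

   // Keep both sides of the association in sync in one call.
   public void AddPost(Post post)
   {
      post.Blog = this; // the side that NHibernate actually persists
      posts.Add(post);  // the in-memory side, so no reload is needed
   }
}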

In the next installment I'll describe how I'm going to solve this problem.

* I did try, but it's far from stable yet.

Code Coverage and Exceptions

One of the more annoying things about code coverage in .Net is that it doesn't consider the line where an exception was thrown to be visited.

public void SomeMethod()
{
   throw new Exception(); //Will appear as unvisited to the code coverage
}

Both NCover and VS 2005 exhibit this behavior, and I assume it has to do with the way they work, via the profiling API. Can anyone say what the reason for this is?


Visual Studio 2005: Unfit for Testing

I'm not sure who is at fault here, me or something in VS. I didn't get any training, or read any books, before moving to VS 2005, and I am certainly struggling with it. I had a comfortable set of tools in VS 2003, which I knew and loved. VS 2005 has (on the surface) superior tools, but I keep running into stupid things that make it impossible for me to use them.

I've used VS's testing capabilities for less than a single day, and I can tell you right now that whoever wrote them left gaping holes in the implementation. Comparing VS 2005 Unit Testing (from Microsoft, supposedly well researched and well tested) to the open source unit testing frameworks (which don't have nearly the same resources as Microsoft), the open source side wins hands down.

Oh, the VS 2005 UI is much slicker, and you need to combine a lot of tools to get the same results as what VS 2005 is offering, but the problem is that VS 2005 unit testing just doesn't work!

You can't use the abstract test fixture pattern! They have known about it since January, but don't seem willing to change it! Another request in April got a "By Design" status and was postponed to the next release. They also suggest duplicating your code in order to cover this scenario.

I don't know about Microsoft, but when I have a complex class library, I want to know that a change to a base class (with potentially dozens of children, and very complex logic) is done in one place only. I believe this is called the DRY principle, which I hear has some importance in a distant developer cult located in the Himalayas. Obviously Microsoft thinks otherwise.
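For reference, this is the pattern in question, as a minimal NUnit-style sketch (the collection types here are just placeholders):

// requires: using System.Collections; using NUnit.Framework;

// The shared tests live once, in the abstract fixture...
public abstract class CollectionFixtureBase
{
   protected abstract IList CreateCollection();

   [Test]
   public void AddIncreasesCount()
   {
      IList list = CreateCollection();
      list.Add("item");
      Assert.AreEqual(1, list.Count);
   }
}

// ...and each concrete fixture only supplies the instance under test.
[TestFixture]
public class ArrayListFixture : CollectionFixtureBase
{
   protected override IList CreateCollection()
   {
      return new ArrayList();
   }
}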

The Microsoft solution to testing such a class is:

  • Change all the hundreds of concrete tests that you have that refer to the specific functionality in the base class (since each concrete class gets its own copy of the base class tests). Time estimate: days.
  • Run the tests and see all of them break.
  • Make the change to the base class.
  • Run the tests again and watch all the tests that you forgot to modify break.

The xUnit solution for testing such a class is:

  • Make a change to the single test in the abstract test class. Time estimate: minutes.
  • Run and see it break.
  • Make the change to the base class.
  • Run the tests and get the green bar.

Can you spot the difference?

It's a simple change in the test runner (you need to drop the DeclaredOnly flag), but it's probably a major change with regard to the accompanying toolkit.
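To illustrate what I mean (this is my guess at what the runner does, not its actual code): a runner that reflects over the fixture with DeclaredOnly never sees the test methods inherited from the abstract base.

// requires: using System; using System.Reflection;
static MethodInfo[] FindTestMethods(Type fixture, bool declaredOnly)
{
   BindingFlags flags = BindingFlags.Public | BindingFlags.Instance;
   if (declaredOnly)
      flags |= BindingFlags.DeclaredOnly; // skips methods inherited from base fixtures
   return fixture.GetMethods(flags);      // filter for [TestMethod] from here
}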

This comes after I discovered another major flaw in the product; the following code is a passing test:

[TestMethod(), ExpectedException(typeof(ArgumentNullException), "Blah")]
public void MethodUnderTest()
{
   throw new ArgumentNullException("1234");
}
Can you spot the huge bug? See what the exception message is? And what the expected exception message is? The two don't match, yet the test passes.

Again, this is from a very short introduction to the VS 2005 Unit Testing capabilities, and I'm really impressed with the abilities of the UI, although there are some strange holes there as well. Where is the ability to point at a test/fixture and say "run those tests"? The functionality exists, but it's a rather involved procedure.

Nevertheless, I'm forced to conclude that VS 2005 as it stands today is unfit to be used as a serious unit testing framework. It has a lot of other capabilities that I didn't even begin to explore (load testing, web testing, etc.). But for unit testing it simply is an unacceptable hurdle.

The rules for Test Driven Development are:

  1. Write a small test.
  2. Write just enough code to make it pass.
  3. Remove duplication.

You can't do that in VS 2005.

By the way, I'm using the release candidate, so it's pretty certain that this is the way it will go to market, which is really sad.

Mocking in the Real World

I was at the Agile Israel meeting earlier this week, and I overheard a couple of guys* talking about mocking objects. The talk went something like this:

Guy #1: Most developers don't understand mock objects.
Guy #2: But then you explain it to them and when they get it, they become the local gurus.
Guy #1: Yes, but then they try to use mocks everywhere and make a mess of things.

Do you have similar experiences? I'm the author of a mocking framework, and I find that I don't use mocking all that often. Then again, my recent projects were mainly in compiler internals and the like; not much you can mock there.

I'm currently building a business application, so I expect to make heavy use of mocking the moment I move away from the data access layer, which is my current focus.

* This was my first meeting, so I don't know any of the names. I did meet Justin, though.


On naming things right the first time

This joke really hit home. I served two years in the same base, and my father always thought I was serving in a totally different base.



How to get the database structure script from SQL Express?

Here's the scenario: I'm using VS 2005 with SQL Express 2005. I started with a database structure from the schema, and then I started writing code, every so often finding spots where the spec got it wrong, so I would change the table (naming/relationships/etc.). I did it all inside VS 2005.

The problem: Now I can't get it out.

There doesn't seem to be a way to tell VS 2005 or SQL Server: "Thank you very much, this is the final structure, now give me the script to create it in another database."

I don't need data replication, I just need the create table ... for the entire database. This can be done very easily from Enterprise Manager, but I don't have that on the machine. I can't install it because it requires uninstalling SQL Express, and that takes us to the CTP Matching Game. I'm 99.9% sure that if I did it I would manage to lose my current database.

I tried connecting to it from Enterprise Manager 2000 on another machine, but it refuses to let me log in, even though I'm using Windows Authentication and I can log in to the computer from the second machine. Installing the full SQL Server 2005 on another machine is something that would take a couple of hours at least, and even then I'm not guaranteed to have it work (I've had very bad experiences with the SQL Express line).

So the question is simple: how do I get the table information out of the database?


ICollection WTF?

I'm just trying to implement a generic container for a collection from 1.1, and I took a real look at ICollection; there are no Add/Remove/Contains methods!

WTF was going on in the mind of the person who designed it?

public interface ICollection : IEnumerable
{
      void CopyTo(Array array, int index);
      int Count { get; }
      bool IsSynchronized { get; }
      object SyncRoot { get; }
}

What is the use of such an interface? To provide synchronization primitives? To allow copying to arrays? At least ICollection<T> has some abilities.
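For contrast, the generic ICollection<T> in .Net 2.0 is shaped roughly like this, with the mutation members you would expect:

public interface ICollection<T> : IEnumerable<T>
{
      int Count { get; }
      bool IsReadOnly { get; }
      void Add(T item);
      void Clear();
      bool Contains(T item);
      void CopyTo(T[] array, int arrayIndex);
      bool Remove(T item);
}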


A new way to abuse C@

No, the title is not a mistake; I just figured out something that should've been obvious. You can use C# to write code like this:

int @class = 0;

And the compiler will translate the name to IL even though it's a reserved word. It stands to reason that you can also use it on non-reserved words, doesn't it?

object @something = null;

The above compiles fine, and it's a very Ruby way to name a variable. What about the part about abusing the language? Well, consider this code:

string @a = "Cat";
a += @a;

There is only one variable involved here, and it's named 'a', but the @ is pretty confusing if you don't know what you're dealing with.


A Firefox Secret

I accidentally hit Ctrl+Shift+S on a site (I intended to hit Ctrl+Shift+A) and suddenly the whole site changed; the advertisement that I meant to nuke vanished and everything was so much nicer.

It didn't take long to figure out what happened. This key sequence turns off CSS. I'm not sure if it's Firefox or WebDeveloper that did it, but it's nice anyway.


Grumping about unit testing using VS 2005

I'm writing this hoping that someone could explain to me some of the weird things that are going on in there:

  • No way to run just one test without knowing its full name? I'm sitting in front of a test called LoadTest. Every class in the DAL has such a test. I need to type the full name of the class just to get something that in TestDriven I can do by right-clicking and selecting "Run Tests".
  • What is going on with ExpectedException and AssemblyInitialize? The two posts should explain what is going on there, but basically both of these features are horribly broken.
  • What is going on with sometimes building the application when I run a test and sometimes not building it? Invariably it doesn't build after I see a test fail and change my code, and then I'm left scratching my head wondering what the hell happened there.


Things that I really like about it:

  • Run all the tests and you get some broken ones; run the tests again and only the previously failed tests are run. This is a pretty good way to trim away test failures without running the full suite over and over again. In many cases we are talking about a significant speed advantage.
  • It generates good test code that I can later modify. Sure, it generates a method per overload, and that's about it. But it makes sure that I don't forget about anything while I work my way down the list of generated tests. I often turn a dozen tests into a single one. (There really isn't any need to test simple properties more than once, now is there?)


The value of unit testing

I'm writing a fairly complex application right now, using VS 2005 and .Net 2.0.

The reason that I mention VS 2005 is that it's supplanting a lot of the tools that I used to use: NUnit, NAnt, NCover, etc. This is all well and good, but the cost of changing so much at one time caused me to be slack in writing unit tests. Especially since I'm doing just the DAL right now, and I'm using Castle's ActiveRecord and NHibernate, so it practically writes itself. After writing several thousand lines of code I got really nervous as I contemplated the next step in the chain. I had a piece of code with very few tests, and I was going to use that to build the rest of my application?

So I’m writing stupid unit tests right now, because I’m too stubborn to let it pass as if it didn’t matter.

  • Create the object; verify that all the properties have the right values.
  • Save the object graph to the database, load it again, and compare the objects (see the sketch below).
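The second kind looks roughly like this; a minimal sketch, assuming a hypothetical Blog entity and an open NHibernate session field set up by the fixture:

[TestMethod]
public void SaveAndLoadBlog()
{
   Blog blog = new Blog("Ayende @ Rahien");
   session.Save(blog);
   session.Flush();
   session.Clear(); // force a reload from the database, not the session cache

   Blog loaded = (Blog)session.Load(typeof(Blog), blog.Id);
   Assert.AreEqual(blog.Name, loaded.Name);
}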

Every now and then I get to do something that resembles real unit testing, where I actually get to test logic, and not mechanics. The issue here is that in the middle of all the tests that VS generously provided via code generation, I started to slowly carve out my own tests. VS produces very extensive unit tests; most of them I discard, but it's pretty good at making sure that I don't miss the important ones.

I found several cases where the unit testing framework in VS didn't behave as it should, and the whole thing is actually driving me crazy sometimes. There's no easy way to just point at a test/class/folder and run the tests there. And I can't get TestDriven 1.1 to work on the VS RC (that is, I can't get it to handle VS's unit testing; it handles NUnit just fine).

What I did find was that even writing the tests after the fact, I started to improve the code, just so I could write decent tests. I'm not talking about refactoring; it's way too early for that and the code is pretty clean. It's actually the little things like checking for null, and valid ranges, and the like, which I skipped during the first rush to get the DAL working. (I tested several approaches to writing the DAL, and you could probably say that the current code started as a spike.)

What is interesting is that I uncovered a few routine bugs, like a misspelled configuration entry that would cause an immediate failure at runtime. I just fixed those without being impressed, since I would've caught them the first time I started the application anyway.

What really impressed me was that during this routine test writing, I managed to stumble on a fairly subtle bug. I wasn't saving some information to the database, so I was getting the default instance back from another routine. This may very well have escaped my notice; even after finding it, I was convinced that it couldn't happen, since I had thought about this scenario and provided a solution for it. When I arrived at the site of the bug, I found out that while I had originally handled it, I later modified the code and introduced the bug. If I'd had a unit test then, I would've spotted it the minute I did it. I don't like to think how this would have looked if it had managed to reach the working application stage, where you get a report about something totally unrelated and finally track your way back to the root cause.

In short, Unit Testing is Good, but VS 2005 is still driving me nuts.


The Property Indexer Of Doom

C# has indexers, but it doesn't support indexed properties. By that I mean the ability to do something like:

config.Options["Username"] = "Ayende";

To be exact, the above syntax is possible, but only by cheating. While the CLR supports indexed properties, in C# you need to return an object which has an indexer, leading to a common pattern of inner classes to allow this syntax.

Here is the last such set of classes that you'll need, served to you by the power of delegates and generics. One thing to note: there is some code duplication here. I can't see how to prevent that without multiple inheritance, and it's a very small amount of code, so I allowed it.

public class PropertyIndexer<RetType, IndexType>
{
 private Getter getter;
 private Setter setter;
 public delegate RetType Getter(IndexType index);
 public delegate void Setter(IndexType index, RetType value);
 
 public PropertyIndexer(Getter getter, Setter setter)
 {
  this.getter = getter;
  this.setter = setter;
 }
 
 public RetType this[IndexType index]
 {
  get { return getter(index); }
  set { setter(index, value); }
 }
}
public class PropertyIndexerGetter<RetType, IndexType>
{
 public delegate RetType Getter(IndexType index);

 private Getter getter;
 
 public PropertyIndexerGetter(Getter getter)
 {
  this.getter = getter;
 }
 
 public RetType this[IndexType index]
 {
  get { return getter(index); }
 }  
}

public class PropertyIndexerSetter<RetType, IndexType>
{
 public delegate void Setter(IndexType index, RetType value);

 private Setter setter;

 public PropertyIndexerSetter(Setter setter)
 {
  this.setter = setter;
 }
 
 public RetType this[IndexType index]
 {
  set { setter(index, value); }
 }  
}

The usage of these classes is very simple; here is a sample of using one to allow Linq-like querying over a list of objects.

public class UserContainer
{
 private PropertyIndexerGetter<User, string> usersByName;
 private List<User> users;
 
 public UserContainer()
 {
  users = DAL.GetUserList();
  usersByName = new PropertyIndexerGetter<User, string>(
   delegate(string userName) 
   { 
    return users.Find(
      delegate(User user) 
       { return user.Name == userName; }
     );
   });
 }
 public IList<User> Users { get { return users; } }
 public PropertyIndexerGetter<User, string> UserByName { get { return usersByName; } }
}

And the client code looks like this:

UserContainer userContainer = new UserContainer();
User user = userContainer.UserByName["Ayende"];

The constructor is a little bit threatening, but it's actually easy to understand what is required of you; certainly better than the 1.1 way of writing an inner class that would get the list directly, iterating over it explicitly, and so on.

Enjoy


The death of the post/pre increment/decrement operators?

In this post Nola wonders about the lack of the post/pre increment/decrement operators in Ruby.

I'm by no means an expert on Ruby (total lines of code written in Ruby: 2), but I can certainly say that I've noticed this situation in other languages and tools that I'm using. Both C# and Java support the ++/-- operators to add or subtract one from a number. The root of these operators is in the days of C, when the compiler wasn't smart enough to do optimizations.

The whole idea was that i++; would be translated to INC i, a single CPU instruction. The issue wasn't being developer friendly or anything like it. It was all about getting the correct output from the compiler. Remember, C was built in the days when people wrote whole operating systems in assembly. It had to allow high-level constructs while still allowing the optimizations that the programmers already knew, because the compiler just couldn't supply those optimizations itself.

Fast forward to today. I would like an honest opinion about it: how many of you use these operators outside of a for() loop? Let's see the possible use cases:

  •  i++; → On a line all on its own, incrementing the variable by one. This is just shorthand for i += 1; or i = i + 1;. Many books teach that no true C-based programmer would use the other options when they can use the first.
  •  arrayOfSomething[i++]; → Access the array and increment the indexer in one line. A true time saver for those who are measured by lines of code. (You wrote 100 lines to do that? WTF? :-) )


In the first case, I get a sense of uneasiness when I see a line like this:

i++;

Something in me tells me that it's not right; it looks like an expression, not a statement. I just get uncomfortable around this code, and I always change it to:

i += 1;

Not without some sense of guilt that now I've wasted two whole characters and I'm not concise enough.

Then there is accessing the array and incrementing the indexer in one go:

arrayOfSomething[i++];

Invariably I've found that the above can cause problems. Sure, it's fast to write and it looks elegant, but how often would your eyes just skip it? It's side-effecty code, and that is never good. Sure, the increment operator is widely known and it's well understood what is going on there, but I still have hard feelings about this type of code. Invariably I would change it to:

arrayOfSomething[i];
i += 1;

I wasted a couple of characters, but I get much clearer code this way. Compilers nowadays are more than capable of doing these sorts of optimizations by themselves, without me holding their hands. I'm a firm believer in throwing as much as possible on the compiler. If I have two code patterns, one efficient and concise[1] and the second less efficient yet elegant, I would almost always go with the second, until performance tuning forces me the other way.

Readable code is more important than fast code.



[1] It goes without saying that such code is also probably not elegant. If both are elegant and one is more efficient, there is no question.


Don't make my code compile!

I'm working with a code generation tool right now, and it's a pretty spiffy one at that. Integration with VS.Net, wizards, forms, and whatever else strikes your fancy. I'm sure I could even make it serve coffee and dance a jig if I found the right buttons. The problem is that while it's certainly a very smart tool, it won't let itself get into an invalid state.

Is that a problem? I can hear you say it's a Good Thing that it doesn't allow you to put it in an invalid state. Well, it's good for the tool, but it's bad for me. I don't work like that. I'm not put together like a machine is, and I certainly go through stages where my code is invalid: either it's not compilable or it doesn't make sense. Maybe there is someone out there who can keep their code compiling every time they move a line, but I can't.

Forcing me to work in the "proper" way means that I'm going to get very frustrated at the tool. No, I don't want to think first about the query's parameters before I write the query, and I don't want any advice whatsoever about improving my schema so it would be more normalized. Yes, the tool is probably right, but there are more concerns than the ability to get 100% normalized data. I wrote a couple of tools for developers myself, and I tried to give as much information as possible while not limiting the user to the way I would like them to work with my tools. For instance, if you use NHibernate Query Analyzer, it will show a red icon if the current mapping document is not valid according to the schema, but it will let you continue.

To all other tool makers out there: try to make it as flexible as possible. I don't think linearly, not nearly (pun intended). Don't try to force me to do so; that way leads to very high levels of user frustration. I'm more than sure that the tool is excellent, and I can certainly appreciate the amount of time that was dedicated to making sure that there is no way to input bad data into the system. But the inability to work my way, and the inability to write code instead of working through dozens of GUI wizards, is driving me crazy.


Knife Of Dreams

"The Wheel of Time turns, and Ages come and pass, leaving memories that become legend. Legend fades to myth, and even myth is long forgotten when the Age that gave it birth comes again. In one Age, called the Third Age by some, an Age yet to come, an Age long past, a wind rose above the broken mountain named Dragonmount. The wind was not the beginning. There are neither beginnings nor endings to the turning of the Wheel of Time. But it was a beginning."

I'm reading it right now, from the first chapter of Knife of Dreams. It sends a shiver down my spine as I do so. It has been such a long wait…

Done with the Prologue

It’s beautiful.

That is all I’m going to say. It’s fast moving, people are actually doing things, and no one takes a bath.

I’ve great hopes for the next book.


A Grave Mistake

I’m about to commit a really big mistake, and I’m helpless to prevent it.


Knife of Dreams is about to come out in a couple of weeks, and I haven't read its prologue yet. I've been a fan of the Wheel of Time since 1996; the books are one of the main reasons that I have any knowledge of English.

I've attempted to resist it, but I can't. I'm about to start reading Embers Falling on Dry Grass: Prologue to Knife of Dreams.


I'm sorry; I can't help it any longer. I've got to read it. It will have to be done in one sitting, of course, and afterward I'll be too ecstatic to sleep, but it's Robert Jordan that we are talking about here. His books are certainly worth it.

Using ObjectDumper to accelerate testing

ObjectDumper is a nice & simple class that uses reflection to output the contents of an object (including inner objects) to the console.

Andres showed it at the PDC and it's available in the Linq preview samples in "C:\Program Files\LINQ Preview\Samples\ObjectDumper"


Currently it throws everything to the console, but it’s a very simple change to make it output everything to a string and return it.
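For instance, without touching the class itself you can swap Console.Out for a StringWriter around the call; a minimal sketch, assuming the sample's entry point is something like ObjectDumper.Write (check the actual method name):

// requires: using System; using System.IO;
public static string Dump(object obj, int depth)
{
   TextWriter original = Console.Out;
   StringWriter buffer = new StringWriter();
   try
   {
      Console.SetOut(buffer);         // capture the dumper's console output
      ObjectDumper.Write(obj, depth); // the sample's method; name may differ
   }
   finally
   {
      Console.SetOut(original);       // always restore the real console
   }
   return buffer.ToString();
}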

What is the connection to testing? Well, since it's capable of outputting hierarchical data, it's a convenient way to compare complex objects without going through all the properties, and sub-properties, and sub-sub-sub-sub properties, etc.

Compare:

Assert.AreEqual(
    ObjectDumper.Dump(expected, 5),
    ObjectDumper.Dump(actual, 5));

To:

Assert.AreEqual(expected.Name, actual.Name);
Assert.AreEqual(expected.Description, actual.Description);
// ... similar code for every property, and then into sub-objects, etc ...

What would you rather write?

Naturally, this isn't 100% foolproof, but it can certainly reduce the amount of time you spend comparing your objects.


More reasons to dislike the GPL

In an interview with Stallman, the following question came up:


If I take a patch under GPL 3 and merge it with a project under "GPL 2 or later," should I write that the new license for the whole project is GPL 3?

The merged program as a whole can only be used under GPL 3. However, the files you did not change could still carry the license of "GPL 2 or later." You could change them or not, as you wish.


This means that beyond managing the complexity of a project, with enough versioning problems between the components that I'm creating and the components that I'm using, I now need to worry about the version of the GPL that the patches are under?


Frustrating 99%

It's Thursday evening and I'm hard at work at 8 PM, wrapping up the day. I'd done a lot of code generation and mapping, so I now press the button and wait for it to finish generating the last version. I'm tired but excited, since it took so long to get to this point, but the code compiles, which is always a good sign :-)


Now all I need is to write a really simple test to verify that it's working in general, and I'm done for the day. I'm planning on just testing that something really simple works, and leaving everything else for Sunday. I run it, and the test fails. No worries, I think, I probably forgot something trivial, so I start investigating. I quickly find the problem, but I've no idea what is causing it.

The issue was that VS2K5 didn't add the file extensions to embedded resources. I'd done this sort of thing before, and I know that it should work. I'm banging my head against the keyboard, hoping to get some random key sequence that would fix it.

I decide that I can either stay at this until I figure out exactly what is going wrong, or I can just leave it and let my subconscious work on it.


I decided to drop it for now, and I just solved it! :-)

It's a simple matter: it seems that MSBuild doesn't keep the file extension for embedded resources when the resource is marked as dependent upon some other file. I edit it quickly, and I'm done. It required a few other minor tweaks, but it seems to be working.
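The hand-edit lives in the project file; a hypothetical entry might look like this (the file names are mine, and LogicalName is one way to pin the manifest resource name, extension included):

<ItemGroup>
  <EmbeddedResource Include="Blog.hbm.xml">
    <DependentUpon>Blog.cs</DependentUpon>
    <!-- Spell out the manifest resource name explicitly, extension and all. -->
    <LogicalName>MyApp.Blog.hbm.xml</LogicalName>
  </EmbeddedResource>
</ItemGroup>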


SmartDraw

I just downloaded SmartDraw; it's recommended by Scott Bellware as a non-intrusive UML tool, so I'm checking it out. It's a tiny download, but then it appears to download the rest using the installer.

It's an interesting application, and I've not even gotten to the File>New… point.

  1. I installed as a non-admin; they recognized it, showed a message saying what would not work, and how to fix it. Nice.
  2. I got a Flash demo of how to use their product, which I'm watching now; considering that it's probably a non-trivial application, it's looking good.

I've been using it for a couple of minutes, but I really like the interface; it looks like it can be used very easily to draw UML, which I was never able to do with any other tool.


SuperFetch & USB DisksOnKeys

One of the things that bugged me about the SuperFetch demo at the PDC was the declaration that SuperFetch will try to utilize any USB memory on the system to augment the file cache.

On the face of it, it didn't make sense; the access times for USB 2.0 and a modern HD are about the same, with the balance tipping toward the HD as the faster of the two.

This post clears things up (look at the comments); apparently this is true for sequential reading, but not for random access, where flash memory excels.

Even with the cost of encrypting it, there should be a good performance gain that way. I'll assume that the file cache will look something like this:


  1. Load to memory the most frequently used programs.
  2. When you run out of the space reserved for the cache on RAM, target the USB drives on the system.
  3. When you run out of USB drives, target the page file.


One consideration is what happens when you have an external HD connected via USB? I'm pretty sure that they covered this option, though.


Cool technology, all around.


Testing dasBlog Mail to Blog interface

If you're seeing this, I managed to configure everything properly and have succeeded in moving a message from Outlook to dasBlog.
