RavenDB 2.01 Stable Release
Just about a month after the 2.0 release, we have a minor stable release for RavenDB containing a lot of bug fixes, a few minor features, and some cool new stuff.
Before I move on to anything else: everyone who is using 2.0 should upgrade to 2.01. A number of issues were fixed between 2.0 and 2.01, and some of them are pretty important.
You can see the full list here; the highlights are below:
- Fixed a race condition with replication when doing rapid updates to the same document / attachment.
- Fixed issues when using the RavenDB Profiler with a different version of jQuery.
- Fixed bug where disposing of one changes subscription would also dispose others.
- Fixed replication not working when using an API key.
- HTTP spec compliance: support quoted ETags in headers.
- Fixed a problem where map/reduce indexes moving between single-step and multi-step reduce would get duplicate data.
- Fixed an error when using encryption and reducing the document size.
- Support facets on spatial queries.
- Fixed unbounded load when loading items to reduce in multi-step reduce.
- Fixed bulk insert on IIS with authentication.
- Fixed the Last-Modified date not being updated in embedded mode.
- Added SQL Replication support.
- Will use multiple index batches if we have a slow index.
- More aggressive behavior with regards to releasing memory on the client under low memory conditions.
- Added the ability to debug authentication issues.
- Implemented server-side full-text search result highlighting.
- Moved expensive suggestions initialization to a separate thread.
- Allow defining suggestions as part of the index creation.
- Better facets paging.
- Expose better view of internal storage concerns.
- Support TermVector options on index fields.
- When authenticating from the studio using an API key, a cookie is now set (RavenDB-894). This allows standard browser commands to be used after authenticating via the studio just once.
- Periodic backup can now backup to a local path as well.
- Added debug information for when we filter documents for replication.
- Async API improvements.
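As an illustration of the kind of async usage the client supports, here is a rough sketch using the async document session. This is not taken from the release itself; the type and method names (`OpenAsyncSession`, `LoadAsync`, `SaveChangesAsync`) are assumptions based on the 2.x client surface and may differ slightly in your version.

```csharp
using System.Threading.Tasks;
using Raven.Client;

// Hypothetical document class for the sketch.
public class User
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class AsyncExample
{
    // Sketch: load, modify, and save a document without blocking a thread.
    // Assumes the IAsyncDocumentSession API of the 2.x client.
    public static async Task RenameUser(IDocumentStore store, string id, string newName)
    {
        using (IAsyncDocumentSession session = store.OpenAsyncSession())
        {
            var user = await session.LoadAsync<User>(id); // async load
            user.Name = newName;
            await session.SaveChangesAsync();             // async save
        }
    }
}
```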
We upgraded to 2.0 last week. Next day our RavenDB database got corrupt. This caused our production web site to crash. We had to export/import the database to get it up and running again. Who knows what caused it and if it may happen again in the future.
Then our map/reduce indexes started to get corrupt at least once a day. We have to manually reset the indexes a couple of times a day to make them work. The business side that consumes the report based on the map/reduce indexes is now losing faith in the reports and our system.
Lastly, adding up the counts of each document collection in Raven Studio does not match the total document count shown at the bottom. Which one is correct? The difference is just a couple of hundred documents...
Right now I am really frustrated, and I regret that we chose RavenDB for our production system. It's good that you fixed all these bugs in this release, but the problem is that it is two weeks too late for us. To keep your clients happy you have to improve the quality of your product. As a database user I want to be 100% sure that my data is correct and secure at all times. The users are making business decisions based on your product!
@Arne: If you suffered database corruption for weeks why didn't you report this problem to Hibernating Rhinos? I'm no customer but I got the impression that Oren and his team do everything in their power to help customers as quickly as possible, especially when it comes to urgent matters like data corruption.
Trash talking products without getting support involved at first isn't very nice. In fact it's kind of rude.
As I said, we fixed it with export/import. My biggest concern is to keep our site up and running. We had around 15 minutes of downtime. I experienced data corruption around a year ago too. I reported it via the support form on the web page and never got a response. My patience is only so big.
Classic example of “if it isn’t broken don’t fix it”.
Sorry to hear about your issues, but that is the price you pay for "just upgrading" for no reason. Unless you had a serious issue with the current system, or there is a must-have feature, why would you upgrade a production system so casually?
Arne, we have really good support channels, and this is the first time that I have heard about this. If you have a db corruption, pretty much the first thing that you need to do is call the vendor. We could have helped you get things working again.
I've upgraded to 2.01 (2261) and we are loving it. I highly recommend you upgrade. This is a MUCH more mature, high-quality product than 1.0.
@Arne - as a paying customer, you should easily be able to get support. Even if you're on the FOSS license - the community forum support is quite responsive. My guess is that you hit on some of the issues with m/r indexes that are pointed out in this article. Several of those I tested myself, confirmed broken, and then confirmed fixed with the new release. I suggest you try 2.01 in a test environment with a copy of your production data. If you still have issues, please report them. If not, then you can reconsider it for production. Peace.
Will the existing 2230 client work with server 2261?
Oren, can you tell us how RavenHQ plans to handle these minor releases?
João, yes, a 2.01 client should work with a 2.0 server and vice versa.
There is no excuse for upgrading a production system without attempting the upgrade in a staging environment first. Any plan to adopt version 2.0 should involve the use of a staging area.
@Moti, we will start with an upgrade on Monday, 2/18/2013, and will move out of beta shortly after. After we move out of beta support for V2, we will be hardening our upgrade schedule for V2 releases to what we supported in 960. This includes upgrading to stable releases after doing our own regression testing. Our release schedule has historically been between 2 weeks and 1 month after the release of a stable build.
As always, customers on replicated plans won't experience any downtime during these upgrades.