re: The Order of the JSON, AKA–irresponsible assumptions and blind spots


I ran into this post, in which the author describes how they got ERROR 1000294 from IBM DataPower Gateway as part of an integration effort. The underlying issue was that they sent JSON to the endpoint with the fields in an order it wasn’t expecting.

After asking the team at the other end to fix it, the author got back an effort estimate of 9 people for 6 months (4.5 man-years!). The author then went and figured out that the fix for the error was a single setting buried deep inside DataPower:

Validate order of JSON? [X]

The author then proceeded to question the competence and moral integrity of the people behind that estimate.

I believe that the author was grossly unfair, at best, to the people doing the estimation. Mostly because he assumed that unchecking the box and running a single request is a sufficient level of testing for this kind of change. But also because it appears that the author never once considered why this setting might be in place:

  • The sort order of JSON fields has been responsible for Remote Code Execution vulnerabilities.
  • The code processing the JSON may not do so in a streaming fashion, and therefore expects the data in a particular order.
  • Worse, the code may simply assume the order of the fields and access them by index. Change the order of the fields, and you may reverse the Creditor and Debtor fields (see the sketch after this list).
  • The code may translate the JSON to another format and send it over to another system (likely, given the legacy system mentioned).
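
To make the index-access point concrete, here is a minimal sketch of the failure mode (in Python, purely for illustration; this is not DataPower’s code, and the parse_transfer function and field names are made up). The consumer reads the payload as an ordered list of key/value pairs and trusts positions rather than names:

```python
import json

def parse_transfer(payload: str) -> dict:
    # object_pairs_hook=list preserves the order in which the fields arrived,
    # instead of building a dict keyed by name.
    pairs = json.loads(payload, object_pairs_hook=list)
    # Positional access: slot 0 is *assumed* to be the creditor, slot 1 the
    # debtor, slot 2 the amount. Nothing checks the actual key names.
    return {
        "creditor": pairs[0][1],
        "debtor": pairs[1][1],
        "amount": pairs[2][1],
    }

# Fields in the order the consumer expects: money flows from Bob to Alice.
ok = parse_transfer('{"creditor": "Alice", "debtor": "Bob", "amount": 100}')
print(ok)   # {'creditor': 'Alice', 'debtor': 'Bob', 'amount': 100}

# Same fields, different order: the call still "works", but creditor and
# debtor are silently swapped and the money now flows the wrong way.
bad = parse_transfer('{"debtor": "Bob", "creditor": "Alice", "amount": 100}')
print(bad)  # {'creditor': 'Bob', 'debtor': 'Alice', 'amount': 100}
```

Unchecking the validation box doesn’t fix code like this, it just lets the reordered payload reach it. Finding every place that makes this kind of assumption is exactly what the estimate has to cover.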

The setting is there to protect the system, and unchecking it means that you have to check every single one of the integration points (which may be several layers deep) to ensure that there isn’t any explicit or implied ordering dependency on the JSON.

In short, given the scope and size of the change (“fundamentally alter how we accept data from the outside world”), I can absolutely see why they gave this number.

And yes, in 99% of the cases there isn’t likely to be any difference, but you need to validate for that nasty 1% scenario.
