from Hacker News

Json vs. simplejson vs. ujson

by harshulj on 4/6/15, 4:20 AM with 71 comments

  • by jmoiron on 4/6/15, 5:43 AM

    When I wrote the same kind of article in Nov 2011 [1], I came to similar conclusions; ujson was blowing everyone away.

    However, after swapping a fairly large and json-intensive production spider over to ujson, we noticed a large increase in memory use.

    When I investigated, I discovered that simplejson reused allocated string objects, so when parsing/loading you basically got string compression for repeated string keys.

    The effects were pretty large for our dataset, which was all API results from various popular websites and featured lots of lists of things with repeating keys; on a lot of large documents, the loaded in-memory object was sometimes 100MB for ujson and 50MB for simplejson. We ended up switching back because of this.

    [1] http://jmoiron.net/blog/python-serialization/
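
    A minimal sketch of how to check for this key-string reuse, using the stdlib json module (simplejson or ujson can be swapped in; whether key objects are shared depends on the library, version and interpreter):

      import json

      # Parse a list of dicts that repeat the same key, then test whether
      # the parser reused one string object for every occurrence of it.
      payload = '[{"name": "a", "count": 1}, {"name": "b", "count": 2}]'
      docs = json.loads(payload)

      key_a = next(iter(docs[0]))  # "name" from the first dict
      key_b = next(iter(docs[1]))  # "name" from the second dict
      print(key_a is key_b)        # True when the key object is shared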

  • by borman on 4/6/15, 12:58 PM

    The problem with all of the (widely known) non-standard JSON packages is that they all have their gotchas.

    cjson's way of handling unicode is just plain wrong: it uses utf-8 bytes as unicode code points. ujson cannot handle large numbers (anything somewhat larger than 2^63; I've seen a service that encodes unsigned 64-bit hash values in JSON this way, and ujson fails to parse its payloads). With simplejson (when using the speedups module), a string's type depends on its value, i.e. it decodes strings as the 'str' type if their characters are ASCII-only, but as 'unicode' otherwise; strangely enough, it always decodes strings as unicode (like the standard json module) when speedups are disabled.
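
    A quick illustration of the large-number gotcha (hypothetical payload; the stdlib parses JSON integers with arbitrary precision, while ujson -- per the above -- does not):

      import json

      # Max unsigned 64-bit value, well above 2^63 - 1.
      payload = '{"hash": 18446744073709551615}'

      print(json.loads(payload))  # {'hash': 18446744073709551615}

      # import ujson
      # ujson.loads(payload)      # fails or overflows, per the comment above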

  • by Drdrdrq on 4/6/15, 6:32 AM

    I disagree with the conclusion. How about this: you should use the tool that most of your coworkers already know, which has large community support and adequate performance. In other words, stop fooling around and use the json library. If (IF!!!) you find performance inadequate, try the other libraries. And most of all, if optimization is your goal: measure, measure and measure! </rant>

  • by jbergstroem on 4/6/15, 9:19 AM

    I just want to add another library in here which – at least in my world – is replacing json as the number one configuration and serialisation format. It's called libucl and its main consumer is probably the new package tool in FreeBSD: `pkg`

    Its syntax is nginx-like, but it can also parse strict JSON. It's pretty fast too.

    More info here: https://github.com/vstakhov/libucl

  • by wodenokoto on 4/6/15, 6:01 AM

    How hard is it to draw a bar graph? I'd imagine it is easier than creating an ASCII table and then turning that into an image, but I've never experimented with the latter.

  • by chojeen on 4/6/15, 2:02 PM

    Maybe this is a dumb question, but is json (de)serialization really a bottleneck for python web apps in the real world?

  • by michaelmior on 4/6/15, 11:47 AM

    > ultrajson ... will not work for un-serializable collections

    So I can't serialize things with ultrajson that aren't serializable? I must be missing something in this statement.

    > The verdict is pretty clear. Use simplejson instead of stock json in any case...

    The verdict seems clear (based solely on the data in the post) that ultrajson is the winner.

  • by jroseattle on 4/6/15, 5:12 AM

    > keep in mind that ultrajson only works with well defined collections and will not work for un-serializable collections. But if you are dealing with texts, this should not be a problem.

    Well-defined collections? As in, serializable? Well sure, that's requisite for the native json package as well as simplejson (as far as I can recall -- haven't used simplejson in some time.)

    But does "texts" refer to strings? As in, only one data type? The source code certainly supports other types, so I wonder what this statement refers to.

  • by foota on 4/6/15, 5:08 AM

    I disagree with the verdict at the end of the article; it seems like json would be better if you were doing a lot of dumping? And also for the added maintenance guarantee of being an official package.

  • by jkire on 4/6/15, 8:53 AM

    > We have a dictionary with 3 keys 

    What about larger dictionaries? With such a small one I would be worried that a significant proportion of the time would be simple overhead.

    [Warning: Anecdote] When we were testing out the various JSON libraries we found simplejson much faster than json for dumps. We used large dictionaries.

    Was the simplejson package using its optimized C library?
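
    One way to check, assuming simplejson's usual layout (its encoder module exposes c_make_encoder, which is None when the C _speedups extension did not build):

      import json

      # Stdlib json: the C accelerator is active when this is not None.
      print("json C encoder:", json.encoder.c_make_encoder is not None)

      try:
          import simplejson
          print("simplejson C encoder:",
                simplejson.encoder.c_make_encoder is not None)
      except ImportError:
          print("simplejson not installed")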

  • by ktzar on 4/6/15, 4:54 AM

    The usage of percentages in the article is wrong. 6 is not 150% faster than 4; it is 150% of 4, which makes it 50% faster.
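
    The distinction, spelled out with the comment's numbers:

      old, new = 4.0, 6.0

      ratio = new / old                  # 1.5  -> new is 150% *of* old
      speedup = (new - old) / old * 100  # 50.0 -> new is 50% *faster* than old
      print(ratio, speedup)
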
  • by stared on 4/6/15, 9:46 AM

    But ujson comes at the price of slightly reduced functionality. For example, you cannot set indent. (And I typically set indent for files <100MB; when working with third-party data, manual inspection is often necessary.)
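
    For reference, a minimal sketch of the stdlib feature being given up:

      import json

      doc = {"user": {"id": 1, "tags": ["a", "b"]}}

      # Pretty-printed output for manual inspection:
      print(json.dumps(doc, indent=2, sort_keys=True))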

    (BTW: I got tempted to try ujson exactly for the original blog post, i.e. http://blog.dataweave.in/post/87589606893/json-vs-simplejson...)

    Plus, AFAIK, at least in Python 3 json IS simplejson (but a few versions older). So every comparison of these libraries is going to give different results over time (likely, with the difference getting smaller). Of course, simplejson is the newer version of the same code, so it's likely to be better.

  • by willvarfar on 4/6/15, 8:45 AM

    (My own due diligence when working with serialisation: http://stackoverflow.com/questions/9884080/fastest-packing-o...

    I leave this here in case it helps others.

    We had other priorities, such as working well for both Python and Java.

    At the time we went with msgpack. As msgpack does much the same work as JSON, it just shows that the magic is in the code, not the format.)
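
    A minimal msgpack round-trip sketch (assumes the msgpack-python package; keyword names vary slightly across its releases):

      import msgpack

      doc = {"user": "alice", "scores": [1, 2, 3]}

      blob = msgpack.packb(doc)  # bytes; typically smaller than JSON text
      restored = msgpack.unpackb(blob, raw=False)  # raw=False decodes strings
      assert restored == doc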

  • by apu on 4/6/15, 7:11 AM

    Also: weird crashes with ultrajson, lack of nice formatting in outputs, and high memory usage in some situations.
  • by dbenhur on 4/6/15, 5:34 AM

    > Without argument, one of the most common used data model is JSON

    JSON is a data representation, not a data model.

  • by js2 on 4/6/15, 3:55 PM

    I'll have to try ultrajson for my use case, but when I benchmarked pickle, simplejson and msgpack, msgpack came out the fastest. I also tried combining all three formats with gzip, but that did not help. Primarily I care about speed when deserializing from disk.
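
    A rough sketch of that kind of deserialization benchmark (made-up document, stdlib only; simplejson, ujson or msgpack can be timed the same way if installed):

      import json, pickle, timeit

      doc = {"rows": [{"id": i, "name": "user%d" % i} for i in range(1000)]}

      pickled = pickle.dumps(doc)
      jsoned = json.dumps(doc)

      print("pickle loads:", timeit.timeit(lambda: pickle.loads(pickled), number=200))
      print("json loads:  ", timeit.timeit(lambda: json.loads(jsoned), number=200))
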
  • by velox_io on 4/6/15, 12:12 PM

    I know it goes against the grain, but I wish that binary json (UBJSON) had much more widespread usage. There's no reason tools can't convert it back to json for us old humans.

    The speed difference between working with binary streams and parsing text is night and day.

  • by akoumjian on 4/6/15, 4:18 PM

    We took a look at ujson about a year ago and found that it failed to load even JSON structures that went three layers deep. I also recall issues handling unicode data.

    It was a big disappointment after seeing these kinds of performance improvements.

  • by MagicWishMonkey on 4/6/15, 2:12 PM

    It kills me that the default JSON module is so slow; if you're working with large JSON objects, you really have no choice but to use a 3rd-party module because the default won't cut it.

  • by bpicolo on 4/6/15, 7:01 PM

    Python version? Library version? Results are meaningless without that info.

  • by fijal on 4/6/15, 12:05 PM

    The standard json module has an optimized version in PyPy (it does not beat ujson, but it is a lot faster than the stdlib one in CPython).

  • by UUMMUU on 4/6/15, 12:37 PM

    I was aware of simplejson but had not seen ultrajson. This is awesome to see. Thanks for the writeup.

  • by aaronem on 4/6/15, 5:46 AM

    *(Python)