Document-oriented Database Shootout Part 2: Performance

After talking about document-oriented databases in general in Part 1, for Part 2 I’ve written some code comparing MongoDB 1.1.1, CouchDBX 0.9.1, and Tokyo Tyrant 1.4.32 in an apples-to-apples test.


The shootout code is on Github. I welcome patches and improvements as long as they don’t bias the tests in favor of any one system.


========== Running Tokyo Tyrant tests
Using rufus-tokyo 1.0.0
                user     system      total        real
init        0.000000   0.000000   0.000000 (  0.013781)
create     19.770000   4.260000  24.030000 ( 39.982273)
query       0.160000   0.030000   0.190000 (  0.318070)
delete      0.000000   0.000000   0.000000 (  0.421201)

========== Running MongoDB tests
Using mongo + mongo_ext 0.15.1
                user     system      total        real
init        0.000000   0.000000   0.000000 (  0.005074)
create     54.710000   1.750000  56.460000 ( 57.358498)
query       0.120000   0.010000   0.130000 (  0.155486)
delete      0.000000   0.000000   0.000000 (  0.957453)

========== Running CouchDB tests
Using jchris-couchrest 0.23
                user     system      total        real
init        0.000000   0.000000   0.000000 (  0.000007)
create      9.290000   0.560000   9.850000 ( 51.177824)
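The “user system total real” tables are the default output format of Ruby’s standard Benchmark library. A minimal harness of roughly this shape would produce them; the workloads below are placeholders, not the shootout’s actual database calls:

```ruby
require 'benchmark'

# Minimal timing harness in the shape of the tables above.
# Each report block would normally wrap a real client operation;
# here they are stand-in pure-Ruby workloads for illustration.
results = Benchmark.bm(7) do |bm|
  bm.report('init')   { {} }                                          # set up client and indices
  bm.report('create') { 200_000.times.map { |i| { 'index' => i } } }  # bulk load 200k documents
  bm.report('query')  { (0...200_000).count { |i| i % 10 == 0 } }     # non-trivial query
  bm.report('delete') { [] }                                          # delete a subset
end
```

`Benchmark.bm` prints one row per labeled block and returns the corresponding `Benchmark::Tms` objects, which is why the three systems’ results all share the same layout.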

init is the time required to initialize the database and create any necessary indices. In practice this number isn’t terribly relevant, as initialization is usually an infrequent operation.

The create operation measures how long it takes to bulk load 200,000 documents. Tokyo is quite fast, while the Mongo client hits the CPU pretty hard. The couchrest client uses less CPU than the other two, but the overall load still takes much longer than Tokyo’s.
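Bulk loading is usually much faster when documents are sent in batches rather than one insert per round trip. A sketch of the batching side of such a load (the document shape and the batch size of 1,000 are assumptions for illustration, not the shootout’s actual values):

```ruby
BATCH_SIZE = 1_000  # assumed; comment #4 below reports gains with a batch size of 500

# Build the 200,000 sample documents (the shape here is a placeholder).
docs = (0...200_000).map { |i| { '_id' => i.to_s, 'name' => "doc-#{i}", 'index' => i } }

# Send them to the server one batch at a time instead of one document at a time.
batches = docs.each_slice(BATCH_SIZE).to_a
batches.each do |batch|
  # collection.insert(batch)  # e.g. the mongo driver accepts an array of docs here
end
```

Fewer, larger round trips is also how Tokyo’s putlist idea (comment #6 below) would help.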

The query operation shows how long it takes to perform a non-trivial query against those 200k documents. Mongo and Tokyo perform at about the same speed, although Mongo lazily fetches the results in order to minimize network traffic when used with pagination, while Tokyo returns the entire result set at once, AFAIK. I was not able to complete this test in a weekend using CouchDB because its view layer is so alien to me; I’d welcome help with this task.
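For completeness, a CouchDB query is expressed as a “view”: a design document holding a small JavaScript map function that emit()s matching documents. A sketch built from plain Ruby hashes (the design-doc name, view name, and the doc.index condition are all made up for illustration, not the shootout’s actual query):

```ruby
# A CouchDB view lives in a design document; the map function below is
# JavaScript stored as a string. All names and the filter condition here
# are illustrative only.
design_doc = {
  '_id'   => '_design/benchmark',
  'views' => {
    'by_index' => {
      'map' => <<~JS
        function(doc) {
          if (doc.index % 10 == 0) {
            emit(doc._id, doc);
          }
        }
      JS
    }
  }
}
# With couchrest the design doc would be saved to the database like any
# other document, after which the view can be queried through the client.
```

CouchDB builds and caches an index from the emitted keys the first time the view is queried, which is part of why its query model feels so different from Mongo’s ad-hoc queries.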

The delete operation tests the time required to delete a subset of documents within our set of 200,000. Again, Tokyo comes out on top. Since I couldn’t perform the query in CouchDB, I couldn’t delete anything either.

Conclusions? Tokyo has a reputation for being very fast, and it appears to be well founded. Couch is fast for what I could get working, though I would be much more concerned about developer training and the learning curve with Couch. Mongo is by no means slow, but someone has to finish last. I like Mongo as an interesting mix of RDBMS and document technologies: not quite as conventional as Tokyo, but not as unconventional as CouchDB with its unique view layer and Erlang underpinnings. What do you think? Leave a comment and let me know!

8 thoughts on “Document-oriented Database Shootout Part 2: Performance”

  1. Init time is actually better than the above with a clean tct file:

    Using ruby-tokyotyrant 0.2.0
                    user     system      total        real
    init        0.000000   0.000000   0.000000 (  0.083591)
    create      5.180000   0.120000   5.300000 (  7.102129)
    query       0.000000   0.000000   0.000000 (  0.071614)
    delete      0.000000   0.000000   0.000000 (  0.433987)
  2. Why couldn’t you try delete on CouchDB? It should be really easy if you know the IDs of the documents.

    Also: the view concept is really easy. Just write a small JavaScript function that emit()s the document if it matches your needs.
    I think some people could help you if you show an example of the query you used.


  3. Just ran your tests for MongoDB and CouchDB on a Macbook Pro 2.4 GHz Core 2 Duo. The MongoDB ‘create’ tests were somewhat faster than the Couch tests on my system:

    MongoDB: create (32.836103)

    CouchDB: create (36.972354)

    More details at

    Can you include some info about the system you’re using to run the tests?

  4. Mike,

    Some pretty significant gains can be made for MongoDB by reducing the batch size to 500, upgrading the driver to 0.16, and running a rehearsal benchmark:

    MongoDB: create (26.27)

    CouchDB: create (37.97)

    The reason my benchmarks are running faster is the extra 1 GB of RAM on my system; MongoDB uses memory-mapped files, which can take advantage of the extra RAM.

  5. “The reason my benchmarks are running faster is the extra 1 GB of RAM on my system; MongoDB uses memory-mapped files, which can take advantage of the extra RAM.”

    You’ll probably also get a speedup if you run on Linux or Solaris.

  6. You feed items to Tokyo one by one, but it has the “putlist” command; I wonder if that would speed things up. Or did I miss something?
