cuiyi's blog(崔毅 crazycy)

Recording the little things; learning from past gains and losses to aid development

NoSQL Learning (5): Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase vs Couchbase vs Neo4j vs Hypertable vs ElasticSearch vs Accumulo vs VoltDB vs Scalaris comparison


http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis/
Kristof Kovacs
Software architect, consultant

(Yes it's a long title, since people kept asking me to write about this and that too :) I do when it has a point.)

While SQL databases are insanely useful tools, their monopoly of the last few decades is coming to an end. And it's about time: I can't even count the things that were forced into relational databases but never really fitted them. (That being said, relational databases will always be best for the stuff that actually has relations.)

But the differences between NoSQL databases are much bigger than the differences ever were between one SQL database and another. This puts a greater responsibility on software architects to choose the appropriate one for a project right at the beginning.

In this light, here is a comparison of Cassandra, MongoDB, CouchDB, Redis, Riak, Couchbase (ex-Membase), Hypertable, ElasticSearch, Accumulo, VoltDB, Kyoto Tycoon, Scalaris, Neo4j and HBase:

The most popular ones

MongoDB (2.2)

·         Written in: C++

·         Main point: Retains some friendly properties of SQL. (Query, index)

·         License: AGPL (Drivers: Apache)

·         Protocol: Custom, binary (BSON)

·         Master/slave replication (auto failover with replica sets)

·         Sharding built-in

·         Queries are javascript expressions

·         Run arbitrary javascript functions server-side

·         Better update-in-place than CouchDB

·         Uses memory mapped files for data storage

·         Performance over features

·         Journaling (with --journal) is best turned on

·         On 32bit systems, limited to ~2.5Gb

·         An empty database takes up 192Mb

·         GridFS to store big data + metadata (not actually an FS)

·         Has geospatial indexing

·         Data center aware

Best used: If you need dynamic queries. If you prefer to define indexes, not map/reduce functions. If you need good performance on a big DB. If you wanted CouchDB, but your data changes too much, filling up disks.

For example: For most things that you would do with MySQL or PostgreSQL, but having predefined columns really holds you back.
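Why a secondary index makes "dynamic queries" cheap: instead of scanning every document, the database consults a structure mapping field values to document ids. A minimal pure-Python sketch of that idea (the document ids and field names are made up; this is not pymongo, and real MongoDB indexes are B-trees, not dicts):

```python
# Sketch of how a secondary index answers a dynamic query such as
# db.users.find({"city": "Paris"}) without a full collection scan.
from collections import defaultdict

docs = {
    1: {"name": "alice", "city": "Paris"},
    2: {"name": "bob", "city": "Berlin"},
    3: {"name": "carol", "city": "Paris"},
}

# Build an index on "city", analogous to db.users.createIndex({"city": 1})
index_city = defaultdict(list)
for _id, doc in docs.items():
    index_city[doc["city"]].append(_id)

def find_by_city(city):
    """Answer find({"city": city}) via the index instead of scanning docs."""
    return [docs[_id] for _id in index_city.get(city, [])]

print([d["name"] for d in find_by_city("Paris")])  # ['alice', 'carol']
```

The index must be maintained on every write, which is exactly the "define indexes, not map/reduce functions" trade-off the author mentions.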

Riak (V1.2)

·         Written in: Erlang & C, some JavaScript

·         Main point: Fault tolerance

·         License: Apache

·         Protocol: HTTP/REST or custom binary

·         Stores blobs

·         Tunable trade-offs for distribution and replication

·         Pre- and post-commit hooks in JavaScript or Erlang, for validation and security.

·         Map/reduce in JavaScript or Erlang

·         Links & link walking: use it as a graph database

·         Secondary indices: but only one at once

·         Large object support (Luwak)

·         Comes in "open source" and "enterprise" editions

·         Full-text search, indexing, querying with Riak Search

·         In the process of migrating the storage backend from "Bitcask" to Google's "LevelDB"

·         Masterless multi-site replication and SNMP monitoring are commercially licensed

Best used: If you want Dynamo-like data storage, but no way you're gonna deal with the bloat and complexity. If you need very good single-site scalability, availability and fault-tolerance, but you're ready to pay for multi-site replication.

For example: Point-of-sales data collection. Factory control systems. Places where even seconds of downtime hurt. Could be used as a well-update-able web server.
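The "tunable trade-offs for distribution and replication" above are the Dynamo-style N/R/W quorums. A toy sketch of the mechanism (not Riak's actual API; a real system writes to all N replicas and waits for W acknowledgements, while this simulation simply stops at W):

```python
# Dynamo-style tunable quorums: N replicas, write quorum W, read quorum R.
# With R + W > N, a read quorum always overlaps the latest write quorum.
N, R, W = 3, 2, 2

replicas = [{} for _ in range(N)]  # one dict per simulated node

def put(key, value, version):
    acks = 0
    for node in replicas:
        node[key] = (value, version)
        acks += 1
        if acks >= W:  # success once W replicas acknowledge
            return True
    return False

def get(key):
    answers = [node[key] for node in replicas[:R] if key in node]
    if len(answers) < R:
        raise IOError("read quorum not met")
    return max(answers, key=lambda a: a[1])[0]  # newest version wins

put("cart", ["milk"], version=1)
put("cart", ["milk", "eggs"], version=2)
print(get("cart"))  # ['milk', 'eggs']
```

Lowering R and W buys availability and latency at the cost of consistency, which is precisely the dial Riak exposes per bucket or per request.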

CouchDB (V1.2)

·         Written in: Erlang

·         Main point: DB consistency, ease of use

·         License: Apache

·         Protocol: HTTP/REST

·         Bi-directional (!) replication,

·         continuous or ad-hoc,

·         with conflict detection,

·         thus, master-master replication. (!)

·         MVCC - write operations do not block reads

·         Previous versions of documents are available

·         Crash-only (reliable) design

·         Needs compacting from time to time

·         Views: embedded map/reduce

·         Formatting views: lists & shows

·         Server-side document validation possible

·         Authentication possible

·         Real-time updates via '_changes' (!)

·         Attachment handling

·         thus, CouchApps (standalone js apps)

Best used: For accumulating, occasionally changing data, on which pre-defined queries are to be run. Places where versioning is important.

For example: CRM, CMS systems. Master-master replication is an especially interesting feature, allowing easy multi-site deployments.
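CouchDB's MVCC and conflict detection rest on one rule: every update must present the document's current revision, so a stale writer gets a conflict instead of silently overwriting. A simplified sketch (integer revisions stand in for CouchDB's real `_rev` hashes, and this is not its HTTP API):

```python
# Sketch of CouchDB-style optimistic concurrency via revision checks.
class Conflict(Exception):
    """Raised where CouchDB would answer HTTP 409 Conflict."""

store = {}  # doc_id -> (rev, body)

def put(doc_id, body, rev=None):
    current = store.get(doc_id)
    if current is not None and current[0] != rev:
        raise Conflict("document update conflict")
    new_rev = 1 if current is None else current[0] + 1
    store[doc_id] = (new_rev, body)
    return new_rev

rev1 = put("a", {"n": 1})            # create
rev2 = put("a", {"n": 2}, rev=rev1)  # normal update with the current rev
try:
    put("a", {"n": 99}, rev=rev1)    # stale rev: rejected, not overwritten
except Conflict:
    print("conflict detected")
```

The same mechanism is what makes bi-directional replication safe: two masters can diverge, and the conflict is detected rather than lost.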

Redis (V2.4)

·         Written in: C/C++

·         Main point: Blazing fast

·         License: BSD

·         Protocol: Telnet-like

·         Disk-backed in-memory database,

·         Currently without disk-swap (VM and Diskstore were abandoned)

·         Master-slave replication

·         Simple values or hash tables by keys,

·         but complex operations like ZREVRANGEBYSCORE.

·         INCR & co (good for rate limiting or statistics)

·         Has sets (also union/diff/inter)

·         Has lists (also a queue; blocking pop)

·         Has hashes (objects of multiple fields)

·         Sorted sets (high score table, good for range queries)

·         Redis has transactions (!)

·         Values can be set to expire (as in a cache)

·         Pub/Sub lets one implement messaging (!)

Best used: For rapidly changing data with a foreseeable database size (should fit mostly in memory).

For example: Stock prices. Analytics. Real-time data collection. Real-time communication. And wherever you used memcached before.
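The "INCR & co" point deserves a concrete example: the classic Redis rate-limiting idiom is INCR on a per-client counter plus a key expiry. Sketched here with a plain dict instead of a Redis server (the key format and limits are made up):

```python
# Sketch of INCR-with-expiry rate limiting, simulated without Redis.
import time

counters = {}  # key -> [count, expires_at]

def incr_with_ttl(key, ttl, now=None):
    now = time.time() if now is None else now
    entry = counters.get(key)
    if entry is None or entry[1] <= now:
        entry = counters[key] = [0, now + ttl]  # start a fresh window
    entry[0] += 1
    return entry[0]

def allow_request(ip, limit=5, window=60, now=None):
    """Allow at most `limit` requests per `window` seconds per IP."""
    return incr_with_ttl("rate:" + ip, window, now) <= limit

t = 1000.0
results = [allow_request("1.2.3.4", limit=3, window=60, now=t) for _ in range(4)]
print(results)  # [True, True, True, False]
print(allow_request("1.2.3.4", limit=3, window=60, now=t + 61))  # True: new window
```

In real Redis the two steps are `INCR key` and `EXPIRE key window` (or a single Lua script), and atomicity comes for free from the single-threaded server.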

Clones of Google's Bigtable

HBase (V0.92.0)

·         Written in: Java

·         Main point: Billions of rows X millions of columns

·         License: Apache

·         Protocol: HTTP/REST (also Thrift)

·         Modeled after Google's BigTable

·         Uses Hadoop's HDFS as storage

·         Map/reduce with Hadoop

·         Query predicate push down via server side scan and get filters

·         Optimizations for real time queries

·         A high performance Thrift gateway

·         HTTP supports XML, Protobuf, and binary

·         Jruby-based (JIRB) shell

·         Rolling restart for configuration changes and minor upgrades

·         Random access performance is like MySQL

·         A cluster consists of several different types of nodes

Best used: Hadoop is probably still the best way to run Map/Reduce jobs on huge datasets. Best if you use the Hadoop/HDFS stack already.

For example: Search engines. Analysing log data. Any place where scanning huge, two-dimensional join-less tables is a requirement.
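Since the HBase recommendation hinges on Hadoop map/reduce, here is the model itself in miniature: a mapper emits (key, value) pairs, a shuffle groups them by key, and a reducer folds each group. Plain Python standing in for a Hadoop job (the log lines are invented):

```python
# Minimal map/reduce over log lines: count responses per status code.
from itertools import groupby

logs = [
    "GET /index 200",
    "GET /login 500",
    "GET /index 200",
    "POST /login 200",
]

def mapper(line):
    status = line.split()[-1]
    yield (status, 1)  # emit (key, value) pairs

# shuffle phase: collect and group intermediate pairs by key
pairs = sorted(kv for line in logs for kv in mapper(line))

def reducer(key, values):
    return (key, sum(values))

result = dict(reducer(k, [v for _, v in g])
              for k, g in groupby(pairs, key=lambda kv: kv[0]))
print(result)  # {'200': 3, '500': 1}
```

Hadoop's value is running exactly this shape of computation in parallel over HDFS blocks, with HBase tables as the input or output.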

Cassandra (1.2)

·         Written in: Java

·         Main point: Best of BigTable and Dynamo

·         License: Apache

·         Protocol: Thrift & custom binary CQL3

·         Tunable trade-offs for distribution and replication (N, R, W)

·         Querying by column, range of keys (Requires indices on anything that you want to search on)

·         BigTable-like features: columns, column families

·         Can be used as a distributed hash-table, with an "SQL-like" language, CQL (but no JOIN!)

·         Data can have expiration (set on INSERT)

·         Writes can be much faster than reads (when reads are disk-bound)

·         Map/reduce possible with Apache Hadoop

·         All nodes are similar, as opposed to Hadoop/HBase

·         Very good and reliable cross-datacenter replication

Best used: When you write more than you read (logging). If every component of the system must be in Java. ("No one gets fired for choosing Apache's stuff.")

For example: Banking, financial industry (though not necessarily for financial transactions, but these industries are much bigger than that.) Writes are faster than reads, so one natural niche is data analysis.
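"All nodes are similar" works because keys are placed on a consistent-hash ring: each key belongs to the first node clockwise from its hash, so nodes can join or leave while moving only a fraction of keys. A simplified sketch (single point per node; real Cassandra uses virtual nodes and replicates to the next N-1 ring positions):

```python
# Sketch of a consistent-hash ring for masterless key placement.
import bisect
import hashlib

def h(s):
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.points = sorted((h(n), n) for n in nodes)

    def node_for(self, key):
        hashes = [p for p, _ in self.points]
        # first ring position at or after the key's hash, wrapping around
        i = bisect.bisect(hashes, h(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.node_for("user:42")
print(owner)  # deterministic: the same key always maps to the same node
```

Because placement is a pure function of the key, any node can route any request, which is what removes the master bottleneck.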

Hypertable (0.9.6.5)

·         Written in: C++

·         Main point: A faster, smaller HBase

·         License: GPL 2.0

·         Protocol: Thrift, C++ library, or HQL shell

·         Implements Google's BigTable design

·         Runs on Hadoop's HDFS

·         Uses its own, "SQL-like" language, HQL

·         Can search by key, by cell, or for values in column families.

·         Search can be limited to key/column ranges.

·         Sponsored by Baidu

·         Retains the last N historical values

·         Tables are in namespaces

·         Map/reduce with Hadoop

Best used: If you need a better HBase.

For example: Same as HBase, since it's basically a replacement: Search engines. Analysing log data. Any place where scanning huge, two-dimensional join-less tables is a requirement.
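"Retains the last N historical values" means each cell is really a small, bounded version history keyed by timestamp. A toy sketch of that storage model (the table and column names are made up; real Hypertable/BigTable cells are ordered by timestamp in the underlying SSTables):

```python
# Sketch of a cell keeping only its last N timestamped versions.
from collections import deque

MAX_VERSIONS = 3

table = {}  # (row, column) -> deque of (timestamp, value), newest first

def put(row, col, ts, value):
    cell = table.setdefault((row, col), deque(maxlen=MAX_VERSIONS))
    # assumes inserts arrive in timestamp order; the oldest version
    # silently falls off the right end once MAX_VERSIONS is exceeded
    cell.appendleft((ts, value))

for ts in range(1, 6):
    put("page1", "title", ts, "rev%d" % ts)

versions = list(table[("page1", "title")])
print(versions)  # [(5, 'rev5'), (4, 'rev4'), (3, 'rev3')]
```

Reads can then ask for "the latest value" (the common case) or "the value as of timestamp T" without any separate history table.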

Accumulo (1.4)

·         Written in: Java and C++

·         Main point: A BigTable with Cell-level security

·         License: Apache

·         Protocol: Thrift

·         Another BigTable clone; also runs on top of Hadoop

·         Cell-level security

·         Bigger rows than memory are allowed

·         Keeps a memory map outside Java, in C++ STL

·         Map/reduce using Hadoop's facilities (ZooKeeper & co)

·         Some server-side programming

Best used: If you need a different HBase.

For example: Same as HBase, since it's basically a replacement: Search engines. Analysing log data. Any place where scanning huge, two-dimensional join-less tables is a requirement.
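Cell-level security, Accumulo's headline feature, means every cell carries a visibility label and a scan returns only cells the reader's authorizations satisfy. A simplified sketch (real Accumulo labels are boolean expressions like `medical&finance`; here a label is just a set of required tokens, and the data is invented):

```python
# Sketch of cell-level security: visibility labels filter scans per reader.
cells = [
    # (row, column, value, required authorization tokens)
    ("patient1", "name",      "alice", {"public"}),
    ("patient1", "diagnosis", "flu",   {"medical"}),
    ("patient1", "billing",   "$200",  {"medical", "finance"}),
]

def scan(row, authorizations):
    return [
        (col, value)
        for r, col, value, label in cells
        if r == row and label <= authorizations  # reader holds every token
    ]

print(scan("patient1", {"public"}))                        # name only
print(scan("patient1", {"public", "medical", "finance"}))  # everything
```

Because filtering happens inside the scan, two readers querying the same table can legitimately see different data, with no per-application filtering code.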

Special-purpose

Neo4j (V1.5M02)

·         Written in: Java

·         Main point: Graph database - connected data

·         License: GPL, some features AGPL/commercial

·         Protocol: HTTP/REST (or embedding in Java)

·         Standalone, or embeddable into Java applications

·         Full ACID conformity (including durable data)

·         Both nodes and relationships can have metadata

·         Integrated pattern-matching-based query language ("Cypher")

·         Also the "Gremlin" graph traversal language can be used

·         Indexing of nodes and relationships

·         Nice self-contained web admin

·         Advanced path-finding with multiple algorithms

·         Indexing of keys and relationships

·         Optimized for reads

·         Has transactions (in the Java API)

·         Scriptable in Groovy

·         Online backup, advanced monitoring and High Availability is AGPL/commercial licensed

Best used: For graph-style, rich or complex, interconnected data. Neo4j is quite different from the others in this sense.

For example: For searching routes in social relations, public transport links, road maps, or network topologies.
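The route-searching use case above is, at its core, shortest-path search over a graph. What one would write in Cypher or Gremlin can be sketched as a breadth-first search over an adjacency list (the graph below is an invented set of transit links):

```python
# Sketch of graph-style path finding: BFS shortest path by hop count.
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def shortest_path(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route exists

print(shortest_path("A", "E"))  # ['A', 'C', 'E']
```

A graph database's advantage is that such traversals follow direct node-to-node links instead of repeated relational JOINs, so cost scales with the path explored rather than the table size.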

ElasticSearch (0.20.1)

·         Written in: Java

·         Main point: Advanced Search

·         License: Apache

·         Protocol: JSON over HTTP (Plugins: Thrift, memcached)

·         Stores JSON documents

·         Has versioning

·         Parent and children documents

·         Documents can time out

·         Very versatile and sophisticated querying, scriptable

·         Write consistency: one, quorum or all

·         Sorting by score (!)

·         Geo distance sorting

·         Fuzzy searches (approximate date, etc) (!)

·         Asynchronous replication

·         Atomic, scripted updates (good for counters, etc)

·         Can maintain automatic "stats groups" (good for debugging)

·         Still depends very much on only one developer (kimchy).

Best used: When you have objects with (flexible) fields, and you need "advanced search" functionality.

For example: A dating service that handles age difference, geographic location, tastes and dislikes, etc. Or a leaderboard system that depends on many variables.
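"Sorting by score" plus "fuzzy searches" means results are ranked by how well they match, not just whether they match. A toy sketch using stdlib `difflib` similarity as a stand-in for Lucene's scoring (the document list is invented, and the 0.5 cutoff is an arbitrary choice):

```python
# Sketch of fuzzy matching with score-ordered results.
import difflib

docs = ["elasticsearch", "elastic", "postgres", "elastics"]

def fuzzy_search(query, candidates, cutoff=0.5):
    scored = [(difflib.SequenceMatcher(None, query, c).ratio(), c)
              for c in candidates]
    # best matches first; drop anything below the similarity cutoff
    return [c for score, c in sorted(scored, reverse=True) if score >= cutoff]

print(fuzzy_search("elastic", docs))
```

Real ElasticSearch computes edit-distance fuzziness and relevance scores inside Lucene and lets you combine them with geo distance, recency and custom scripts, which is what makes the dating-service example workable.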

The "long tail"
(Not widely known, but definitely worthy ones)

Couchbase (ex-Membase) (2.0)

·         Written in: Erlang & C

·         Main point: Memcache compatible, but with persistence and clustering

·         License: Apache

·         Protocol: memcached + extensions

·         Very fast (200k+/sec) access of data by key

·         Persistence to disk

·         All nodes are identical (master-master replication)

·         Provides memcached-style in-memory caching buckets, too

·         Write de-duplication to reduce IO

·         Friendly cluster-management web GUI

·         Connection proxy for connection pooling and multiplexing (Moxi)

·         Incremental map/reduce

·         Cross-datacenter replication

Best used: Any application where low-latency data access, high concurrency support and high availability are requirements.

For example: Low-latency use-cases like ad targeting or highly-concurrent web apps like online gaming (e.g. Zynga).

Scalaris (0.5)

·         Written in: Erlang

·         Main point: Distributed P2P key-value store

·         License: Apache

·         Protocol: Proprietary & JSON-RPC

·         In-memory (disk when using Tokyo Cabinet as a backend)

·         Uses YAWS as a web server

·         Has transactions (an adapted Paxos commit)

·         Consistent, distributed write operations

·         From CAP, values Consistency over Availability (in case of network partitioning, only the bigger partition works)

Best used: If you like Erlang and wanted to use Mnesia or DETS or ETS, but you need something that is accessible from more languages (and scales much better than ETS or DETS).

For example: In an Erlang-based system when you want to give access to the DB to Python, Ruby or Java programmers.

VoltDB (2.8.4.1)

·         Written in: Java

·         Main point: Fast transactions and rapidly changing data

·         License: GPL 3

·         Protocol: Proprietary

·         In-memory relational database.

·         Can export data into Hadoop

·         Supports ANSI SQL

·         Stored procedures in Java

·         Cross-datacenter replication

Best used: Where you need to act fast on massive amounts of incoming data.

For example: Point-of-sales data analysis. Factory control systems.

Kyoto Tycoon (0.9.56)

·         Written in: C++

·         Main point: A lightweight network DBM

·         License: GPL

·         Protocol: HTTP (TSV-RPC or REST)

·         Based on Kyoto Cabinet, Tokyo Cabinet's successor

·         Multitudes of storage backends: Hash, Tree, Dir, etc (everything from Kyoto Cabinet)

·         Kyoto Cabinet can do 1M+ insert/select operations per sec (but Tycoon does less because of overhead)

·         Lua on the server side

·         Language bindings for C, Java, Python, Ruby, Perl, Lua, etc

·         Uses the "visitor" pattern

·         Hot backup, asynchronous replication

·         background snapshot of in-memory databases

·         Auto expiration (can be used as a cache server)

Best used: When you want to choose the backend storage algorithm engine very precisely. When speed is of the essence.

For example: Caching server. Stock prices. Analytics. Real-time data collection. Real-time communication. And wherever you used memcached before.
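Kyoto Tycoon's "visitor" pattern replaces get/modify/set round trips: you hand the store a function that is applied to the record in place (atomically, in the real engine). A minimal sketch of the idea in Python (the function names are illustrative, not Kyoto Cabinet's actual API):

```python
# Sketch of the visitor pattern over a key-value store.
NOP = object()  # sentinel: visitor leaves the record unchanged

db = {"hits": 41, "name": "kt"}

def accept(key, visitor):
    """Apply `visitor(key, current_value)`; store its result unless NOP."""
    result = visitor(key, db.get(key))
    if result is not NOP:
        db[key] = result

def increment(key, value):
    return (value or 0) + 1  # treats a missing record as 0

accept("hits", increment)
accept("name", lambda k, v: NOP)  # read-only visit, no modification
print(db["hits"])  # 42
```

Pushing the logic to the server this way (Kyoto Tycoon also allows server-side Lua) avoids the race between reading a value and writing back its update.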

Of course, all these systems have much more features than what's listed here. I only wanted to list the key points that I base my decisions on. Also, development of all are very fast, so things are bound to change.

MongoDB vs Redis comparison

http://taotao1240.blog.51cto.com/731446/755173
taojin1240's blog

A Google search turned up an English comparison:

English original: http://kkovacs.eu/cassandra-vs-mongodb-vs-couchdb-vs-redis/

So I put it into a table and added some notes from my own experience, which became this article.

| MongoDB (V2.2) | Redis (V2.4) | Notes |
|---|---|---|
| Written in: C++ | Written in: C/C++ | |
| Main point: Retains some friendly properties of SQL (query, index) | Main point: Blazing fast | MongoDB keeps SQL-like ergonomics, e.g. `show dbs; db.test.find()`. Redis: speed. |
| License: AGPL (Drivers: Apache) | License: BSD | |
| Protocol: Custom, binary (BSON) | Protocol: Telnet-like | |
| Master/slave replication (auto failover with replica sets) | Master-slave replication | |
| Sharding built-in | | MongoDB usually combines replica sets with sharding: replica sets for high availability and reliability, sharding for performance and easy scaling. |
| Queries are javascript expressions | | |
| Run arbitrary javascript functions server-side | | |
| Better update-in-place than CouchDB | | |
| Uses memory-mapped files for data storage | Disk-backed in-memory database, currently without disk-swap (VM and Diskstore were abandoned as of 2.4) | |
| Performance over features | | |
| Journaling (with --journal) is best turned on | | |
| On 32bit systems, limited to ~2.5Gb | | On 32-bit platforms MongoDB caps total database file size at about 2.5GB; 64-bit platforms have no such limit. |
| An empty database takes up 192Mb | | |
| GridFS to store big data + metadata (not actually an FS) | | GridFS is a specification for storing large files in MongoDB. |
| | Values can be set to expire (as in a cache) | e.g. `expire name 10` expires the key `name` after 10 seconds. |
| | Simple values or hash tables by keys, but complex operations like ZREVRANGEBYSCORE; INCR & co (good for rate limiting or statistics) | Simple values or key-indexed hash tables, plus complex sorted-set operations such as ZREVRANGEBYSCORE. |
| | Has sets (also union/diff/inter), lists (also a queue; blocking pop), hashes (objects of multiple fields), sorted sets (high score table, good for range queries) | Many data types: sets, lists, hashes, sorted sets. |
| | Redis has transactions (!) | |
| | Pub/Sub lets one implement messaging (!) | Pub/Sub lets users implement messaging; this is why Redis is used at Sina Weibo. |
| Best used: dynamic queries; when you prefer defining indexes over map/reduce; good performance on a big DB; when you wanted CouchDB but your data changes too much | Best used: rapidly changing data with a foreseeable database size (should fit mostly in memory) | |
| For example: most things you would do with MySQL or PostgreSQL, but where predefined columns hold you back | For example: stock prices, analytics, real-time data collection, real-time communication | |




A compiled comparison of redis, memcache and mongoDB

(PHPer.yang www.imop.us)
The following compares redis, memcache and mongoDB along several dimensions.

1. Performance
All three are fairly fast; performance should not be the bottleneck for most of us. Overall, redis and memcache have similar TPS, both higher than mongodb.

2. Ease of operations
memcache has a single data structure (key-value). redis is richer: it does better on data operations, needs fewer network IO round trips, and also offers list, set, hash and other structures. mongodb supports rich data expression and indexes; it is the closest to a relational database, with a very rich query language.

3. Memory footprint and data volume
redis added its own VM feature after version 2.0, breaking through the physical-memory limit; keys and values can be given expiry times (like memcache).

memcache's maximum usable memory is configurable and it evicts with LRU. With the magent proxy, a cluster of ten 4GB memcache nodes effectively gives 40GB: `magent -s 10.1.2.1 -s 10.1.2.2:11211 -b 10.1.2.3:14000`

mongoDB suits large data volumes; it relies on the OS virtual memory for memory management and is quite memory-hungry, so don't colocate it with other services.

4. Availability (single points of failure)
On single points of failure:
redis relies on the client for distributed reads and writes. During master-slave replication, every time a slave reconnects to the master it depends on a full snapshot; there is no incremental replication, so for performance and efficiency reasons the single-point problem is complicated. It has no automatic sharding; the application must implement a consistent-hash scheme. One alternative is to skip redis' own replication and do active replication yourself (multiple copies), or implement incremental replication, trading off consistency against performance.

memcache has no data-redundancy mechanism, nor does it need one. For fault tolerance it relies on mature hashing or ring algorithms to absorb the jitter caused by a single node failing.

mongoDB supports master-slave, replica sets (with an internal Paxos election for automatic failover) and auto-sharding, hiding failover and partitioning from the client.

5. Reliability (persistence)
On data persistence and recovery:
redis supports both (snapshots and AOF): it persists via snapshots, while AOF improves reliability at some cost in performance.
memcache does not support persistence; it is normally used as a cache to improve performance.
MongoDB has supported reliable persistence via a binlog-style journal since version 1.8.

6. Data consistency (transaction support)
memcache uses CAS to guarantee consistency under concurrency.
redis' transaction support is weak: it only guarantees that the operations in a transaction execute consecutively.
mongoDB does not support transactions.

7. Data analysis
mongoDB has built-in data analysis (mapreduce); the others do not.

8. Use cases
redis: smaller data volumes, performance-sensitive operations and computation.
memcache: reducing database load in dynamic systems; caching to improve performance (best for read-heavy workloads; shard when data volume is large).
MongoDB: solving access efficiency for massive data.

Table comparison:

| | memcache | redis |
|---|---|---|
| Type | In-memory database | In-memory database |
| Data types | Data type (byte length) must be fixed when the value is defined | Not required; has strings, lists, sets and sorted sets |
| Virtual memory | Not supported | Supported |
| Expiry policy | Supported | Supported |
| Distribution | magent | master-slave: one master with one or more slaves |
| Data durability | Not supported | `save` writes to dump.rdb |
| Disaster recovery | Not supported | append-only file (aof) for data recovery |

Notes on the table:
1. Type: memcache and redis both keep data in memory, so both are in-memory databases. memcache can of course also cache other things, such as images.
2. Data types: memcache requires the data's byte length to be specified when adding data; redis does not.
3. Virtual memory: when physical memory runs out, values unused for a long time can be swapped to disk.
4. Expiry policy: memcache sets it at write time, e.g. `set key1 0 0 8` (an exptime of 0 means never expire); redis sets it with `expire`, e.g. `expire name 10`.
5. Distribution: a memcache cluster uses magent for one master with many slaves; redis can also do one master with many slaves. Both can do one master, one slave.
6. Data durability: memcache loses everything on power failure; redis can periodically `save` to disk.
7. Disaster recovery: memcache, same as above (nothing to recover); redis can restore lost data from the aof.
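The LRU eviction mentioned for memcache (point 3 of the earlier comparison) is easy to picture: the cache tracks recency and, when full, evicts the least recently used entry. A minimal sketch using an OrderedDict as the recency list (a simplification; real memcached uses per-slab LRU queues):

```python
# Sketch of LRU eviction, the policy memcache applies when memory fills.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # touch "a", so "b" becomes the LRU entry
cache.set("c", 3)      # over capacity: evicts "b"
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

This is also why memcache needs no persistence: anything evicted or lost can be refetched from the backing database.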

posted on 2014-01-14 01:34 by crazycy, filed under: JavaEE technology, DBMS

