Ceph and GlusterFS comparison

GlusterFS and Ceph are both software-defined storage solutions, and both are part of the Red Hat portfolio.
Although at first glance they may seem identical in what they offer (distributed storage on commodity hardware, with fault resilience), a closer look reveals differences that can make one of the two solutions better suited than the other for a given use case, and vice versa.

So, let's dig into what these two solutions have to offer:

|                 | GlusterFS | Ceph |
|-----------------|-----------|------|
| Licence         | Open source | Open source |
| Platform        | Runs on commodity hardware (old PCs, simple servers, ...) | Runs on commodity hardware |
| Best for        | Large files (4+ MB), sequential access | Object and block storage |
| Scalability     | Scalable; the same cluster can span physical, virtual or cloud servers | Scalable on commodity x86-64 hardware |
| Main features   | File storage pooled into a single mount point | Object storage compatible with the Amazon S3 API and OpenStack Swift |
|                 | Data protection by replication (may require more disks than an equivalent NAS appliance) | Data protection by replication; topology based on nodes acting as Monitors or OSDs (the nodes providing storage) |
|                 | Supports a basic form of erasure coding, i.e. data protection without RAID or replication (the algorithm used is Reed-Solomon; more info on Wikipedia), though it is not flexible to extend; the overhead trade-off is sketched after the table | Complex inner workings, which make tweaking and tuning complex (the Paxos consensus protocol, the cluster map, placement groups, ... all terms referring to Ceph internals) |
|                 | Up to 256 snapshots | |
|                 | Bit-rot detection (protection against "silent data corruption") | |
|                 | Object storage also supported | File storage also supported |
|                 | Can grow beyond a petabyte | |
|                 | LVM2 based | |
| Recommended use | Backup and video streaming, OpenShift (PaaS) | OpenStack IaaS, object storage |
| Clients         | Native FUSE client for GlusterFS, NFS or CIFS gateway, own API (libgfapi); see the sketch after the table | Native RADOS protocol: librados (lets applications access the storage directly; see the sketch after the table), RadosGW (object storage, Swift or S3 API), RBD (block access) |
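To give a feel for the erasure-coding entry above: replication and erasure coding trade raw capacity against fault tolerance very differently. The sketch below is generic Reed-Solomon arithmetic, not output from GlusterFS or Ceph, and the 4+2 layout is just an illustrative choice.

```python
# Raw-capacity overhead: 3-way replication vs Reed-Solomon erasure coding.
# Generic arithmetic, not tied to GlusterFS or Ceph defaults.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per byte of user data with N full copies."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes per user byte with k data + m coding fragments;
    any m fragments can be lost without losing data."""
    return (k + m) / k

print(replication_overhead(3))  # 3.0x raw capacity, tolerates 2 lost copies
print(erasure_overhead(4, 2))   # 1.5x raw capacity, tolerates 2 lost fragments
```

Same failure tolerance, half the raw disk: that is why erasure coding matters for large clusters, at the cost of extra CPU work on reads after a failure.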
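The "Clients" row can also be made concrete. First the GlusterFS side: a minimal sketch of talking to a volume through its native API (libgfapi) via the libgfapi-python bindings, assuming an existing, started volume; the host name `gluster-node1` and volume name `myvolume` are placeholders.

```python
# Minimal sketch using the libgfapi-python bindings.
# 'gluster-node1' and 'myvolume' are placeholders for your own setup.
from gluster import gfapi

# Attach to the volume over libgfapi (no FUSE mount needed)
volume = gfapi.Volume('gluster-node1', 'myvolume')
volume.mount()

# Write a file and read it back, POSIX-style
with volume.fopen('hello.txt', 'w') as f:
    f.write('Hello from libgfapi')

with volume.fopen('hello.txt', 'r') as f:
    print(f.read())

volume.umount()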
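And the Ceph side: a minimal sketch using the librados Python bindings to store and fetch an object directly in a RADOS pool; it assumes a reachable cluster described by `/etc/ceph/ceph.conf` and an existing pool, here the placeholder `mypool`.

```python
# Minimal sketch using the librados Python bindings (python3-rados).
# Assumes a reachable cluster (ceph.conf path below) and an existing
# pool named 'mypool' (placeholder).
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# An I/O context binds all operations to one pool
ioctx = cluster.open_ioctx('mypool')
try:
    # Store an object, then read it back by name
    ioctx.write_full('hello-object', b'Hello from librados')
    print(ioctx.read('hello-object'))
finally:
    ioctx.close()
    cluster.shutdown()
```

RBD and RadosGW sit on top of this same layer: block devices and S3/Swift objects ultimately end up stored as RADOS objects, which is why the table lists librados as the lowest-level client.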