Storage is a key component of your systems and infrastructure.
Let's investigate the various terms and concepts in use in this area.
I'm sure that you've heard a lot about block storage, file storage, object storage, ... let's bring some order and definition to these concepts.


Since almost the beginning, there have been two kinds of storage: local and remote.
By local, we mean disk(s) that are locally attached, meaning:
  • disks connected internally to a communication bus like SCSI, IDE and, more recently, SATA
  • disks connected externally to the computer or server but presented as local to the operating system, like an external USB drive or USB key
  • disks connected from an external cabinet using one of the following technologies: SCSI, Fibre Channel or iSCSI
Indeed, the idea of a local disk doesn't cover only disks fitted inside the enclosure of the server or computer, but also disks connected via external cables that are seen by the operating system as internal disks.
In Linux, all devices are represented by a file under the /dev directory. Historically, we know that devices like:
  • /dev/hda, /dev/hdb, ... represent internal IDE disks and their partitions (/dev/hda1 for the first primary partition)
  • /dev/sda, /dev/sdb, ... represent internal or external SCSI disks, USB disks, ... and their partitions (/dev/sda1 for the first primary partition)
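This naming convention can be illustrated with a small sketch. The regular expression and the example device names are assumptions for illustration only; real systems expose many more device types (NVMe, device-mapper, ...).

```python
import re

# Split an IDE/SCSI-style Linux device path into the disk name and the
# optional partition number (sda, sda1, hdb2, ...), per the convention above.
DEV_RE = re.compile(r"^/dev/(?P<disk>[hs]d[a-z]+)(?P<part>\d+)?$")

def parse_device(path: str):
    m = DEV_RE.match(path)
    if not m:
        raise ValueError(f"not an IDE/SCSI-style device name: {path}")
    part = m.group("part")
    return m.group("disk"), int(part) if part else None

print(parse_device("/dev/sda1"))  # ('sda', 1): first partition of the first SCSI disk
print(parse_device("/dev/hdb"))   # ('hdb', None): the whole second IDE disk
```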


A partition is a division of the disk.
A partition can be primary, so called because at some point in the history of computing, the BIOS, which launches the OS on a computer, only supported 4 partitions, called primary. It was possible to mark one of these as an extended partition and to create inside it an unlimited number of logical partitions, depending on the needs of the implementation.
Today, with more modern firmware (UEFI and the GPT partitioning scheme), this limitation has disappeared.


On a partition, which covers part of or the whole disk, you will write your filesystem.
Your operating system offers different choices for this: FAT, FAT32, NTFS, ... in the Windows world, or EXT2, EXT3, EXT4, XFS, ... in the Linux world.
The filesystem defines the way directories will be created and how files will be stored on the disk media. Usually the system has to write metadata somewhere, that is, a description of the file or directory; the data itself is then saved somewhere else, filling blocks, the smallest logical units of the filesystem. When you create the filesystem (when you format), you create a kind of empty envelope on the partition and decide how many bytes each block will hold.
So, if the block size is 4 kilobytes, even a file smaller than that will occupy one block, leaving the difference between 4 KB and the size of the file unused.
If the file is larger than 4 KB, as many blocks as needed to receive the data will be used.
A kind of table of contents is created in the filesystem and populated with the addresses of the blocks where the data of a given file resides.
This is what we call a block access filesystem.
Access to the blocks is done at the OS level, by the appropriate filesystem driver.
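The block arithmetic described above can be sketched in a few lines. The 4 KB block size matches the example; real filesystems let you choose it at format time.

```python
import math

BLOCK_SIZE = 4096  # 4 KB blocks, as in the example above

def blocks_used(file_size: int) -> int:
    """Number of blocks a file occupies (a file always takes at least one block)."""
    return max(1, math.ceil(file_size / BLOCK_SIZE))

def wasted_bytes(file_size: int) -> int:
    """Slack space: allocated bytes minus the actual file size."""
    return blocks_used(file_size) * BLOCK_SIZE - file_size

print(blocks_used(1000), wasted_bytes(1000))  # a 1000-byte file still takes 1 block, wasting 3096 bytes
print(blocks_used(9000), wasted_bytes(9000))  # 9000 bytes need 3 blocks, wasting 3288 bytes
```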

Therefore, any disk, locally inserted in the server or remotely connected using external SCSI buses, iSCSI (the SCSI protocol over IP, so over the LAN) or Fibre Channel (over fiber cable or over Ethernet), will be seen through its local device name (like /dev/sda) and can be partitioned and filled with a filesystem supported by the OS. In this case, we can also talk about DAS (Direct-Attached Storage).

File access filesystem

With a file access filesystem, your system doesn't write blocks of data when it writes a file, but sends the full file over the network to another server. This is typically what happens with file servers sharing some directory. Remote access is done using file exchange protocols like CIFS (SMB) or NFS.
Servers offering this functionality are called file servers. In this case we also talk about NAS (Network Attached Storage).
These functionalities are offered by general-purpose servers or by dedicated appliances on the network.

Object filesystem

With this system, you also access remote storage that does not present its disks as directly attached but, a bit like in the NAS case, through a network protocol.
Except that here the protocol is not specific to file exchange: it can be a Web protocol (HTTP) used to exchange files through some kind of API (REST or SOAP).
In the end, the files are stored locally on some storage, but the exchange is purely based on an object exchange; the object in this case is a file with its data. Metadata, like the original name of the file, is exchanged too or kept locally.
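As a rough sketch of the idea, and not any particular product's API, an object store can be modeled as a key/value service that keeps an object's data and its metadata together; a REST API would expose the same operations as HTTP PUT, GET and HEAD requests. All names here are invented for illustration.

```python
class ObjectStore:
    """Toy in-memory object store: objects are addressed by key, not by path or block."""

    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes, **metadata) -> None:
        # Data and metadata travel together, as in a REST PUT request.
        self._objects[key] = {"data": data, "metadata": metadata}

    def get(self, key: str) -> bytes:
        # Like an HTTP GET: retrieve the object's data.
        return self._objects[key]["data"]

    def head(self, key: str) -> dict:
        # Like an HTTP HEAD: metadata only, no data transfer.
        return self._objects[key]["metadata"]

store = ObjectStore()
store.put("reports/2016.pdf", b"%PDF-...",
          original_name="2016.pdf", content_type="application/pdf")
print(store.head("reports/2016.pdf")["original_name"])  # prints 2016.pdf
```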

Distributed filesystems

Distributed filesystems are filesystems that are available via a network, are directly attached to the host using them as a block device, and are in fact physically located on several other servers.
A big advantage of this kind of filesystem lies in its implementation: no need to buy costly hardware; by using simple servers with some internal disk capacity, you can build a huge distributed filesystem at low cost. There is no need for identical hosts to implement this; running the same version of the distributed filesystem software is sufficient.
Distributed filesystems are accessed like internal disks, but over the network, using standard mechanisms like:
  • mounting / unmounting
  • writing, deleting and accessing files through the same OS calls
  • using the same shell commands to list files, move between directories, ...
The only difference resides in where the data is finally stored: the data is spread among the various nodes composing the cluster. Note also that, depending on the product you use to implement this, the data may or may not be replicated. In other words, different nodes can hold the same copy of the data, like in a RAID array, so the loss of one node doesn't affect the integrity of your data.
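The replication idea can be sketched with a toy model. The class and the placement function are invented here for illustration; real products use far more elaborate placement schemes (consistent hashing, CRUSH in Ceph, ...).

```python
class Cluster:
    """Toy model: each write is replicated to `replicas` nodes, so one node can be lost."""

    def __init__(self, nodes: int, replicas: int = 2):
        self.nodes = [dict() for _ in range(nodes)]
        self.replicas = replicas

    def _placement(self, path: str) -> int:
        # Deterministic toy placement function; a stand-in for real algorithms.
        return sum(path.encode()) % len(self.nodes)

    def write(self, path: str, data: bytes) -> None:
        # Store the data on `replicas` consecutive nodes.
        start = self._placement(path)
        for i in range(self.replicas):
            node = self.nodes[(start + i) % len(self.nodes)]
            if node is not None:
                node[path] = data

    def read(self, path: str) -> bytes:
        # Any surviving replica can serve the data.
        for node in self.nodes:
            if node is not None and path in node:
                return node[path]
        raise FileNotFoundError(path)

    def lose_node(self, index: int) -> None:
        self.nodes[index] = None  # simulate a node failure

cluster = Cluster(nodes=4, replicas=2)
cluster.write("/data/report.txt", b"hello")
cluster.lose_node(0)
print(cluster.read("/data/report.txt"))  # a surviving replica still serves the data
```

With two replicas, the loss of any single node leaves at least one copy readable, which is exactly the RAID-like integrity property described above.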
Among the different possibilities the open source world offers, there are the Hadoop Distributed File System (HDFS), GlusterFS and Ceph, which I've been exploring recently.
GlusterFS and Ceph are now included in the Red Hat solution portfolio, each of them having its own strengths and weaknesses; we have made a short comparison between the two to help you choose the appropriate one depending on your use cases.

Please note that HDFS is an exception, as it cannot be used through the standard OS filesystem access commands, but only via the specific tools of the Hadoop stack. This filesystem was created to support the specificities of the Big Data infrastructure proposed by Hadoop.

Clustered filesystem

A clustered filesystem, like a distributed one, is accessed via the network, and the data can be stored on various nodes of the cluster.
The main difference from a distributed filesystem is that a clustered filesystem can be accessed by more than one host at a time.
But this is not a share like NFS or CIFS: it is really a filesystem that can be mounted on more than one host and accessed in parallel by all these hosts, in both read and write mode.
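To illustrate why parallel read/write access needs coordination, here is a minimal sketch where several "hosts" (threads, in this simplified model) write to the same file under a lock. Clustered filesystems rely on a distributed lock manager to play this role across real hosts; the names below are invented for the example.

```python
import threading

# Toy model: several hosts writing the same file must serialize their writes.
shared_file = []
file_lock = threading.Lock()  # stands in for a distributed lock manager (DLM)

def host_writes(host: str, lines: int) -> None:
    for i in range(lines):
        with file_lock:  # take the "cluster lock" before touching the file
            shared_file.append(f"{host}:{i}")

hosts = [threading.Thread(target=host_writes, args=(name, 100))
         for name in ("hostA", "hostB")]
for t in hosts:
    t.start()
for t in hosts:
    t.join()

print(len(shared_file))  # 200: no write was lost despite concurrent access
```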