Multi-master OpenLDAP with 2 nodes


Since version 2.4, OpenLDAP has included the possibility of creating a multi-master configuration. In a situation where write access to the directory is as important as read access, it is useful to have more than one directory you can write to, each being kept in sync with its counterpart. In this howto, we explain the creation of a two-node multi-master setup, what the OpenLDAP online documentation calls MirrorMode.
In front of your pair of multi-master LDAP servers, you will have to build a mechanism to direct the queries to one of the nodes. It is up to you to choose the load-balancing solution depending on your network infrastructure. This can be round-robin load balancing via DNS, a layer 2 or 3 load balancer, or even a more complex load balancer working up to layer 7 and able to direct the write and read queries independently of each other.

While the official documentation says you need a front-end mechanism that always directs the write queries to the same node (as long as it is up) and balances the read queries across the two nodes, this does not mean that it is impossible to write simultaneously on both nodes. There is no technical reason not to do this. If you write object A on server 1 and object B on server 2, the replication is done in both directions. Also, if one of the two servers goes away for some time, when it comes back online it begins by synchronizing the delta, the modifications it missed while it was offline.
Of course, if you modify the same object differently at the same time on both nodes, there will be a conflict. But this is inherent to any multi-master or two-way synchronization system.
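To see this bidirectional replication in action, here is a minimal test sketch. It assumes the two nodes are reachable as ldap1 and ldap2 (ldap2 is the peer hostname used in the configuration below; ldap1 is only our assumed name for the first node), that the base entry dc=begetest,dc=net already exists, and it reuses the rootdn credentials from the configuration file shown later in this howto:

# ldapadd -x -H ldap://ldap1:389 -D "cn=manager,dc=begetest,dc=net" -w secret <<EOF
dn: ou=objectA,dc=begetest,dc=net
objectClass: organizationalUnit
ou: objectA
EOF

# ldapadd -x -H ldap://ldap2:389 -D "cn=manager,dc=begetest,dc=net" -w secret <<EOF
dn: ou=objectB,dc=begetest,dc=net
objectClass: organizationalUnit
ou: objectB
EOF

# ldapsearch -x -H ldap://ldap2:389 -b "dc=begetest,dc=net" "(ou=objectA)" dn
# ldapsearch -x -H ldap://ldap1:389 -b "dc=begetest,dc=net" "(ou=objectB)" dn

Object A was written on server 1 and object B on server 2; a moment later, both searches should return the entry that was written on the other node.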
 

Installing and configuring

After installing the OpenLDAP RPMs, check that the service is launched at startup, as we are not using cluster software. We don't think a cluster is needed: since everything runs on each node, no special action is required if one node disappears (no promotion, no restart, …).

When the failed host comes back, the startup script will launch slapd automatically. This is sufficient, as no special action needs to be taken regarding the LDAP replication when a master vanishes during synchronization.

If the service is not yet present in the startup scripts for runlevel 3:

# chkconfig --add ldap
# chkconfig --level 3 ldap on
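
To verify the result and start the daemon right away (on the Red Hat-style distributions this howto assumes, the init script is called ldap; the name may differ with other packaging):

# chkconfig --list ldap
# service ldap start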


To configure OpenLDAP, we have the choice between the flat slapd.conf file and the more sophisticated, dynamic configuration backend (where the configuration directives can themselves be accessed through LDAP, just like the schema). Because it is more straightforward to explain and to show examples, we use the flat file configuration mechanism.

If this file is missing (it must be found under /etc/openldap), then your installation is almost certainly configured to use the LDAP configuration backend. To go back to the “plain old” flat file, you do the following:

  • remove or rename the /etc/openldap/slapd.d directory

  • create a file called /etc/openldap/slapd.conf (there should be an example installed somewhere; if not, start from our complete working slapd.conf example below) — see the sketch after this list
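
As a sketch, these two steps could look like this in a root shell (we rename rather than delete, so the change can be reverted; the location of a packaged example slapd.conf varies between distributions):

# mv /etc/openldap/slapd.d /etc/openldap/slapd.d.disabled
# vi /etc/openldap/slapd.conf

and then fill slapd.conf with the complete example from the next section.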

Configuration file

Here is a complete slapd.conf file for a two-node multi-master setup:


include /etc/openldap/schema/corba.schema
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/duaconf.schema
include /etc/openldap/schema/dyngroup.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/java.schema
include /etc/openldap/schema/misc.schema
include /etc/openldap/schema/nis.schema
include /etc/openldap/schema/openldap.schema
include /etc/openldap/schema/ppolicy.schema
include /etc/openldap/schema/collective.schema

allow bind_v2

pidfile /var/run/openldap/slapd.pid
argsfile /var/run/openldap/slapd.args

modulepath /usr/lib64/openldap
moduleload syncprov
# moduleload accesslog.la
# moduleload auditlog.la
# moduleload back_sql.la
## Following two modules can't be loaded simultaneously
# moduleload dyngroup.la
# moduleload dynlist.la
# moduleload lastmod.la
# moduleload pcache.la
# moduleload ppolicy.la
# moduleload refint.la
# moduleload retcode.la
# moduleload rwm.la
# moduleload translucent.la
# moduleload unique.la
# moduleload valsort.la

# TLSCACertificateFile /etc/pki/tls/certs/ca-bundle.crt
# TLSCertificateFile /etc/pki/tls/certs/slapd.pem
# TLSCertificateKeyFile /etc/pki/tls/certs/slapd.pem

serverID 1

#########################################
# Main LDAP database #
#########################################
database bdb
suffix "dc=begetest,dc=net"
checkpoint 1024 15
rootdn "cn=manager,dc=begetest,dc=net"
rootpw secret

directory /var/lib/ldap

# Indices to maintain for this database
index objectClass eq,pres
index ou,cn,mail,surname,givenname eq,pres,sub
index uidNumber,gidNumber,loginShell eq,pres
index uid,memberUid eq,pres,sub
index nisMapName,nisMapEntry eq,pres,sub
index entryCSN,entryUUID eq

overlay syncprov
syncprov-checkpoint 100 10
syncprov-sessionlog 100

syncrepl rid=100
         provider=ldap://ldap2:389
         type=refreshAndPersist
         retry="60 +"
         searchbase="dc=begetest,dc=net"
         scope=sub
         schemachecking=on
         bindmethod=simple
         binddn="cn=manager,dc=begetest,dc=net"
         credentials=secret

mirrormode on

##################################################
# Database for the monitoring #
##################################################
database monitor

access to *
       by dn.exact="cn=manager,dc=begetest,dc=net" read
       by * none
 

In the printout above, there are actually two lines that must be different in the slapd.conf on the other server: the value of serverID needs to differ between the two servers, and the syncrepl provider URL has to point to the peer node.
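
For example, assuming the first node is reachable as ldap1 (a hostname we only assume here, mirroring the ldap2 used above), the second server would use:

serverID 2

syncrepl rid=100
         provider=ldap://ldap1:389
         ...

with all the other syncrepl parameters left identical. Before starting slapd, the syntax of the file can be verified on each node with:

# slaptest -f /etc/openldap/slapd.conf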




Explanation of the configuration directives

Which directives are important, and why?

  • modulepath: this is where the modules extending the functionality of OpenLDAP are located. It depends on your architecture and packaging system.

  • moduleload: which modules to load. We need to load at least the syncprov module, which contains all the code that lets OpenLDAP do synchronization / replication.

  • serverID: an arbitrary ID for this server. It must be different on each server participating in the multi-master replication.

  • database, suffix, rootdn, rootpw and directory: the definition of the core of your directory, the database where all the records will be held.

  • overlay: activates the replication / synchronization functions on this database.

  • syncrepl: definition of the provider (the external server from which the data for this database will come).

  • rid: an arbitrary number of at most three digits that identifies this syncrepl directive within the consumer (the client in the replication). It only needs to be unique on the consumer.

  • provider: how we connect to the provider (IP, port, …).

  • searchbase and scope: used to restrict the replication to a portion of the provider's tree.

  • schemachecking: for each replicated attribute and object, checks whether the changes to copy conform to the local schema. If not, an error is thrown and the copy is not done. If set to “on”, be sure that both the provider and the consumer have the same schemas loaded.

  • bindmethod, binddn and credentials: the identity and mechanism used to connect to the provider to get access to the information that needs to be replicated.

  • type: can be refreshAndPersist or refreshOnly. With the first, the consumer connects to the provider and keeps the connection open, waiting for any change done on the provider; in this case, the replication happens “on change”. With the second option (refreshOnly), the consumer connects to the provider at a fixed interval (given by the interval directive) to replicate the changed objects. A refreshOnly variant is sketched after this list.

  • retry: the interval at which the consumer retries the connection if the connection or the refresh fails. In our example, retry="60 +" means: retry every 60 seconds, indefinitely.

  • interval: used when type=refreshOnly to tell the consumer when to connect to the provider to retrieve data.

  • mirrormode: by setting this directive to “on”, we authorize modifications of the replicated database on the consumer by hosts other than the provider. In normal master-slave replication, writing to replicated data on the consumer is not allowed, unless you are the provider of course. In a multi-master environment, the purpose is precisely to let any client write on each host and let the hosts propagate the changes to each other; this is why this directive is important in this scenario.
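
For completeness, here is a sketch of the same syncrepl definition rewritten for the pull model, using type=refreshOnly and a polling interval of one hour (the interval value follows the dd:hh:mm:ss format):

syncrepl rid=100
         provider=ldap://ldap2:389
         type=refreshOnly
         interval=00:01:00:00
         retry="60 +"
         searchbase="dc=begetest,dc=net"
         scope=sub
         schemachecking=on
         bindmethod=simple
         binddn="cn=manager,dc=begetest,dc=net"
         credentials=secret

With refreshAndPersist, as used in this howto, replication is near-instantaneous; refreshOnly trades that immediacy for periodic, predictable polling.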

 

 
