All DICE clients run their own complete LDAP server, the content of which is sync'ed hourly (†) against the master by the in-house slaprepl script. slaprepl runs via SASL->GSS-API->Kerberos, so the exchange is both authenticated and encrypted. Machines on the 'stable' release currently synchronise against the single master LDAP server; machines on the 'develop' release synchronise against the slaves, via the DNS round-robin entry dir.inf.ed.ac.uk.
(† The hourly sync does modifications and additions only. An additional daily run does deletions. The separation of these arose because the deletion run was at one stage time-consuming. That might not be the case any more; the separation could be revisited.)
A DICE client in current standard configuration makes all LDAP lookups against its own server, but no DICE client ever writes to its own LDAP server. The only LDAP server ever written to is the single master.
When a user process on a DICE client does an nss LDAP lookup (i.e. a lookup originating from a glibc call, e.g. getpwnam), that lookup always proceeds via the nslcd daemon. The LDAP server therefore has no knowledge of the UID of the actual process which made the lookup request. No Kerberos/SASL authentication or encryption is involved: the LDAP request is done via anonymous bind, and transmitted in plain text.
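As a minimal sketch (the username is hypothetical), this is all a client process does: glibc consults /etc/nsswitch.conf, nslcd performs the actual LDAP query on the process's behalf, and the server sees only nslcd's anonymous bind.

    /* Sketch of an nss LDAP lookup as seen from a user process.
     * The process simply calls getpwnam(); with "passwd: files ldap"
     * in /etc/nsswitch.conf and nslcd running, the LDAP query is made
     * by nslcd on the process's behalf, via an anonymous bind.
     * Nothing here identifies the calling user to the LDAP server. */
    #include <pwd.h>
    #include <stdio.h>

    int main(void)
    {
        struct passwd *pw = getpwnam("someuser");   /* hypothetical username */
        if (pw != NULL)
            printf("%s: uid %d, home %s\n",
                   pw->pw_name, (int)pw->pw_uid, pw->pw_dir);
        return 0;
    }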
The use of nslcd appeared by default in the transition from SL5 to SL6; it was not particularly planned for, and there are alternatives - e.g. the use of no caching daemon whatsoever, or the replacement of nslcd by sssd. It is not clear what we are trying to gain by the use of a caching daemon.
On a DICE client, an explicit LDAP lookup via ldapsearch normally (††) proceeds via SASL and Kerberos, so the exchange is both authenticated and encrypted.
(†† The normal behaviour is configured by compiled-in defaults; it can be overridden, and an anonymous lookup done, by use of the -x switch.)
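For illustration only (this is not the ldapsearch source, and it assumes the caller already holds a valid Kerberos ticket), a rough sketch of the two kinds of bind in the OpenLDAP C API:

    #include <ldap.h>
    #include <stdio.h>

    /* With an existing Kerberos TGT, GSSAPI normally needs no prompting. */
    static int noop_interact(LDAP *ld, unsigned flags, void *defaults, void *in)
    {
        (void)ld; (void)flags; (void)defaults; (void)in;
        return LDAP_SUCCESS;
    }

    int main(void)
    {
        LDAP *ld;
        int version = LDAP_VERSION3;
        int rc;

        /* With a NULL URI, the server comes from /etc/openldap/ldap.conf. */
        if (ldap_initialize(&ld, NULL) != LDAP_SUCCESS)
            return 1;
        ldap_set_option(ld, LDAP_OPT_PROTOCOL_VERSION, &version);

        /* What ldapsearch does by default on DICE: a SASL/GSSAPI bind,
         * authenticated and encrypted via Kerberos. */
        rc = ldap_sasl_interactive_bind_s(ld, NULL, "GSSAPI", NULL, NULL,
                                          LDAP_SASL_QUIET, noop_interact, NULL);

        /* The 'ldapsearch -x' equivalent would instead be an anonymous
         * simple bind, unauthenticated and sent in clear:
         *   ldap_sasl_bind_s(ld, NULL, LDAP_SASL_SIMPLE, NULL, NULL, NULL, NULL);
         */

        printf("bind: %s\n", ldap_err2string(rc));
        ldap_unbind_ext_s(ld, NULL, NULL);
        return 0;
    }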
In the normal course of events, all of the above lookups on a DICE client take place entirely within the local machine. However, if a DICE client machine is configured to use an alternative LDAP server (e.g. !openldap.server mSET(dir.inf.ed.ac.uk)), then the same authentication/encryption setups apply.
Specifically, that means that, on our setup, no authentication or encryption is used for remote (i.e. off-machine) LDAP lookups which are done via the nslcd daemon. All such lookups are done via anonymous binds and proceed over unencrypted links.
Some DICE applications on DICE clients use the hard-coded address of 'localhost' as the location of the LDAP server against which they expect to do a lookup. Currently, the only known source of this hard-coding is the dice-authorize package (see 3.1 below). We need to establish whether or not this is the only such case.
Synchronisation of data between the master and the slaves is done via the native OpenLDAP mechanism syncrepl. That runs via SASL->GSS-API->Kerberos, so the exchange is both authenticated and encrypted.
None of our LDAP servers currently support TLS/SSL connections.
Connections from clients are accepted either as anonymous binds, or as SASL->GSS-API->Kerberos binds.
LDAP bind access to the servers is restricted by IP address range.
After a successful bind, access to certain LDAP data is restricted by slapd ACLs. We don't appear to have an obvious human-readable list of what those access restrictions are, or what their intent is.
Any LDAP configuration must be extremely reliable: a broken LDAP means a broken machine.
The client must be guaranteed that all data returned from LDAP is correct. All such data must therefore be transmitted by a mechanism which guarantees that it originates from a bona fide Informatics LDAP server, and that the data cannot be altered en route (†). This means, for example, that non-TLS/SSL anonymous lookups cannot be used if the LDAP server is on a different machine from the client.
(† Note that we do not necessarily care whether or not any such data is snoop-able en route.)
It would be preferable if clients were aware of LDAP updates more rapidly than the current replication arrangements (which can mean a delay of up to one hour) permit.
It is likely that client-side caching of LDAP data is necessary, or at least helpful, for performance and responsiveness. But this is just a claim which needs to be proven.
If a client is configured to use a remote LDAP server, anything which hard-codes localhost as the LDAP server breaks: in particular, om can no longer be used.
It is in any case undesirable for localhost to be hard-coded as the location of the LDAP server in this way: we should not build in policy decisions. Currently, the only known package with such a hard-coded dependency is the dice-authorize package.
3.1 The dice-authorize package
The dice-authorize package dates back to the original DICE development and provides library calls (†) whose role is to securely answer the question 'does user x have capability y?'
The package provides the following: pam_diceauth.so; libdiceauth.a and libdiceauth.so; and DICE::Authorize.pm.
(† The package also provides two scripts - buildcaps and roleupload - which are no longer used.)
pam_diceauth.so uses libdiceauth.so - but we do not use pam_diceauth.so in our PAM stack any more.
om uses the DICE::Authorize.pm module to do authorization checks. (This usage is configured via the profile.authorize resource.)
We need to clarify what else - if anything - uses dice-authorize, and in which circumstances.
The two libraries, dice-authorize/libs/c/authorize.c and dice-authorize/libs/perl/Authorize.pm, appear to be parallel implementations of exactly the same logic. (††) And, currently, both have a hard-coded dependency on localhost as the LDAP server.
The C library would be easy to change to select the LDAP server based on the content of /etc/openldap/ldap.conf, since it uses the OpenLDAP library to do its work. It should be enough to change the line

    ld = ldap_init("localhost", LDAP_PORT);

to something like

    ldap_initialize(&ld, NULL);

since, when it is passed a NULL URI, ldap_initialize() should fall back to the URI configured in /etc/openldap/ldap.conf.

The Perl module will be less trivial, since it uses Net::LDAP, which has no intrinsic knowledge of OpenLDAP. The obvious change is to have that module itself extract the URI value from /etc/openldap/ldap.conf.
However: is there any good reason why we should have two distinct parallel implementations of this logic? That itself seems dubious. Is there any reason why a Perl module which is just a wrapper around the C library can't be built?
An alternative implementation which sidesteps the entire issue would be to use the standard system libraries to search the netgroups which expose the capability data, and in that way intrinsically rely on the /etc/nsswitch.conf and /etc/openldap/ldap.conf configurations to route such queries to the appropriate end source.
(†† Some of that logic should also probably be cleaned up: e.g. do we still have capabilities called 'user/...' and '<console>'?)
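As a minimal sketch of that alternative (the netgroup name here is hypothetical - the mapping from capabilities to netgroups would need to be agreed), the capability check reduces to an innetgr() call, with nsswitch and ldap.conf deciding where the netgroup data actually comes from:

    #include <netdb.h>
    #include <stdio.h>

    int main(void)
    {
        /* "Does user x have capability y?" expressed as netgroup membership.
         * The C library resolves the netgroup via nsswitch ("netgroup: ldap"),
         * so no LDAP server location is hard-coded here. */
        int ok = innetgr("cap-example", NULL, "userx", NULL);
        printf("userx %s the example capability\n", ok ? "has" : "does not have");
        return 0;
    }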
The lcfg-openldap component and its associated header files are very complicated. They should be simplified if possible.
There is apparently obsolete configuration (the /etc/ldap.conf file) still being produced by the lcfg-openldap component. That should be cleaned up if possible.
We should consider the use of sssd rather than nslcd - see http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Deployment_Guide/index.html