Final Report : Investigate alternative DICE desktop platform (474)

The aim of this project was to investigate the feasibility of porting the DICE desktop environment to an alternative (non-Redhat) platform. The project was broken down into a number of requirements; some of these were effectively mini-projects in their own right, so they are discussed individually below. As this was an investigative project, the plan was to rapidly develop prototype solutions for the various requirements so that we could explore the issues and discover problems quickly. The expectation was that, even if we decided to go ahead with fully porting DICE to an alternative platform, some of these prototypes would need to be completely replaced and the others would require further enhancements to fulfil all requirements.

Requirements

Add support to LCFG build tools for building packages

Support has been added to the build tools for generating and building Debian packages in a similar style to the familiar rpm and devrpm commands. There are new deb and devdeb commands which build binary packages as expected on Debian/Ubuntu machines which have the necessary tools (e.g. debhelper and debuild) installed.

To assist with bootstrapping the new platform, support has also been added for generating source packages (*.dsc and associated *.tar.gz files) on an SL7 machine, using pack or devpack commands, which are suitable for submission to the Informatics PkgForge service.

To build a binary Debian package it is necessary to add a debian sub-directory to the project along with a number of configuration files; these drive the build in the same way as a specfile does for RPMs. Several of the files are essential, so to aid those who are unfamiliar with the process a new gendeb command has been added which creates all the necessary debian build files. In most cases they will be sufficient and no further tweaking will be needed, particularly for LCFG components, which are all similarly structured. At the very least this command produces a good starting point from which a package can be created fairly quickly.
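
For those unfamiliar with the format, the files under debian/ look something like the following. This is a purely illustrative debian/control of the kind gendeb generates; the package name, maintainer and dependencies here are hypothetical:

```text
Source: lcfg-example
Section: admin
Priority: optional
Maintainer: LCFG Devs <lcfg@example.org>
Build-Depends: debhelper (>= 10), cmake
Standards-Version: 4.1.2

Package: lcfg-example
Architecture: all
Depends: ${misc:Depends}, ${perl:Depends}
Description: Example LCFG component
 Hypothetical packaging stub of the kind gendeb creates.
```

Alongside control, a package normally needs at least changelog, rules and a debhelper compat declaration before debuild will produce a binary package.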

The standard LCFG CMake files needed a complete overhaul to be suitable for building Debian packages. The main problem was that CMake distinguishes between source and build directories (the CMAKE_SOURCE_DIR and CMAKE_BINARY_DIR variables). In the Redhat world these are typically the same location, but Debian/Ubuntu follow the recommended best practice of keeping them separate. All the LCFG scripts assumed the two locations were the same directory, so a lot of refactoring of macros was required. Most of the LCFG components now build cleanly on both platforms, but there are likely to be many other packages which still need tweaks to their CMakeLists.txt files to resolve this issue.
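
As an illustration of the kind of fix required (file names here are hypothetical), a rule which assumed an in-source build must refer to the build tree explicitly once the two directories differ:

```cmake
# Broken for Debian/Ubuntu out-of-source builds: the generated file is
# written to the build tree but then installed from the source tree.
#   configure_file(example.pod.in example.pod @ONLY)
#   install(FILES ${CMAKE_SOURCE_DIR}/example.pod DESTINATION share/doc)

# Portable version: install the generated file from the build tree.
configure_file(example.pod.in ${CMAKE_CURRENT_BINARY_DIR}/example.pod @ONLY)
install(FILES ${CMAKE_CURRENT_BINARY_DIR}/example.pod DESTINATION share/doc)
```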

There are some limitations in the current tools:

  • No support for cryptographic package signing; it should be reasonably simple to add this at a later stage if necessary.
  • No support for LCFG macros in the debian build config files; this is intentional, to keep things simple.
  • Local documentation on how to get started with Debian packaging is still needed.

Local package repository management

A prototype repository management service was created using the reprepro tool. This has been adequate for the prototype platform, but investigation has shown that it will not be sufficient for a fully supported platform. The biggest issue is the lack of support for multiple versions of a package within a repository. That fits with the way Debian/Ubuntu expect people to manage packages but not with our way of working: we need to be able to pin packages to old versions for some machines, and to have the possibility of downgrading if an upgrade causes problems. For the fully supported platform it is likely that aptly will be much better suited; it is a bit more complicated to set up but is much more capable.
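
Pinning itself is straightforward with plain apt, but it only helps if the old version is still published in some repository, which is exactly what reprepro cannot guarantee. A hypothetical apt_preferences entry (package name and version invented) would look like:

```text
# /etc/apt/preferences.d/lcfg-pin
Package: lcfg-example
Pin: version 1.2.3-1
Pin-Priority: 1001
```

A pin priority above 1000 instructs apt to hold, or even downgrade to, the pinned version.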

Upstream package repository mirroring

A quick attempt was made at mirroring upstream repositories using reprepro. This showed that it was possible but to save time a service was not developed. This would suffer from the same problem with multiple package versions, as mentioned above for local package repositories. Again, aptly seems like a better candidate for the job.

aptly has some interesting features for upstream mirroring. In particular, it supports making snapshots of the mirrors, and it is possible to merge all or some changes between snapshots. This would give us the option of making daily snapshots of repositories which would be available on develop machines, then merging those changes into weekly snapshots for the testing and stable releases.
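
Sketched with the aptly CLI (mirror and snapshot names here are invented), the workflow would be roughly:

```shell
# mirror an upstream suite and pull in the latest packages
aptly mirror create eoan-main http://gb.archive.ubuntu.com/ubuntu/ eoan main
aptly mirror update eoan-main

# daily snapshot, published for develop machines
aptly snapshot create eoan-20200505 from mirror eoan-main

# weekly: merge the daily snapshots and publish for testing/stable
aptly snapshot merge eoan-week19 eoan-20200504 eoan-20200505
aptly publish snapshot eoan-week19
```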

Create essential LCFG headers for target platform(s)

This was all fairly straightforward and mostly followed the way we've done things with SL7. The operating system headers were created in a new ubuntu sub-directory and release-specific headers were created with both numerical and named forms (e.g. lcfg/os/ubuntu/19.10.h and lcfg/os/ubuntu/eoan.h), the named versions seem clearer.

Port LCFG client

Thanks to previous projects to modernise the LCFG client the code pretty much "just works" on Debian/Ubuntu platforms.

One focus of the work on the client was to improve the ability to bootstrap new platforms by shipping all the necessary systemd config with the client (e.g. lcfg-multi-user-stable.target, lcfg-multi-user.target, lcfg-lcfginit.service, etc.). Being able to get rdxprof working quickly on any new system, without having to manually create systemd config files or port the LCFG systemd component, is a massive improvement. As this strategy was so successful it has been extended to all LCFG components; they now all ship with a standard systemd config, so there is no need to add them via systemd component resources.

The other main focus was on creating a new packages file format, since the cpp-based rpmcfg format isn't really appropriate for non-Redhat platforms. The client now has support for saving the packages list as YAML; this was designed to make it easier to use from applications that might not have access to the full LCFG core libraries (e.g. Python scripts).
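
The exact YAML schema is not reproduced in this report; purely as a hypothetical sketch, a packages list might look like:

```yaml
# illustrative only; the real schema may differ
packages:
  - name: openssh-server
    version: 1:8.0p1-6ubuntu1
    flags: [reboot]
  - name: lcfg-client
    version: 4.3.0-1
```

Any YAML-capable tool (e.g. a small Python script) can then consume the list without linking against the LCFG core libraries.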

One change which will be quite noticeable for admin users is that on Debian-based platforms the client stores data in different locations which more closely follow the FHS guidelines (e.g. /var/log/lcfg and /var/lib/lcfg). To help with the transition, compatibility symlinks have been added on SL7.

Port LCFG "core" components

Once the LCFG client was functional, the porting of the core components was fairly straightforward. It was decided that the old logserver would not be ported due to security concerns. The inifile component was promoted to core status since it is now so commonly used and is the basis of other components (e.g. sssd).

A notable enhancement to the file component is support for actions which can be triggered when a managed file changes (e.g. running a script or restarting a service). This is needed, for example, for managing the mailcap file on Debian, where the update-mime script must be run to apply changes. Triggered actions are aggregated, so with multiple managed files each distinct action fires just once at the end of the configuration run. This feature has been requested many times over the years, so it should prove very popular.
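
The aggregation behaviour can be sketched as follows; this is illustrative only, not the actual LCFG file component code:

```python
# Each changed file contributes its triggered actions; duplicates are
# collapsed so every distinct action fires exactly once at the end of
# the configure run.

def aggregate_actions(changed_files):
    """Collect actions from all changed files in first-seen order,
    keeping each action only once."""
    fired = []
    for actions in changed_files.values():
        for action in actions:
            if action not in fired:
                fired.append(action)
    return fired

# Two managed files both trigger update-mime, but it runs only once.
changed = {
    "/etc/mailcap": ["run update-mime"],
    "/etc/mime.types": ["run update-mime"],
    "/etc/ssh/sshd_config": ["restart ssh"],
}
print(aggregate_actions(changed))  # → ['run update-mime', 'restart ssh']
```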

To make it easier to support multiple platforms the ngeneric framework gained some new features:

ng_tmpldir
This resource holds the standard location of templates for the component. If a template is specified with a relative path, the directories in the resource will be searched by LCFG::Template::Substitute (in Perl) and the sxprof utility. This deals with the fact that the template directory differs between Redhat- and Debian-based systems. Components with template paths hardwired into the code will need to be changed to use the new resource.

ng_service
This resource can be used to hold the name of the service being managed (e.g. ssh or sshd) which might differ between platforms. This makes the code more flexible by avoiding the need to hardwire the service name into the component code. A number of components will need to be converted to using this resource to deal with platform differences.
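
The ng_tmpldir lookup can be sketched like this; the function below is illustrative, not the real LCFG::Template code, and assumes a colon-separated directory list:

```python
import os

# A relative template name is resolved against a list of directories,
# so component code need not hardwire the Redhat or Debian template
# location; an absolute name is used as-is.

def find_template(name, tmpldirs):
    """Return the first matching template path, or None."""
    if os.path.isabs(name):
        return name if os.path.isfile(name) else None
    for d in tmpldirs.split(":"):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None
```

A component would then ask for, say, its sshd_config template by relative name and receive the platform-appropriate path.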

Port necessary LCFG "standard" components

This milestone has a couple of related sub-milestones:

  • Identify all components required for desktop platform
  • Produce list of components - status and proposed plan

The list of components with status is available on the LCFG wiki - https://wiki.lcfg.org/bin/view/LCFG/UbuntuComponents

Some components have been rewritten in Perl (e.g. localhome and openssh). Others (e.g. rsyslog and kerberos) could do with similar attention.

Note that a number of components will need to be completely replaced (e.g. fstab and network). Also some may not be required (e.g. ntp where we could just use the systemd-timesyncd service).

The schemas for many standard components have been tweaked so that they use sysinfo resources for standard paths rather than hardwired Redhat-specific locations. There is still more to do but the majority of components now have platform-independent defaults. There are probably many hardwired locations in component code which also need changing, preferably by converting to LCFG resources which are easier to modify. Work to identify those locations and file bug reports is still ongoing.

A lot of work went into adding support into the LCFG headers for the standard Ubuntu configurations, for example the pam and systemd components.

Investigate package management options (e.g. updaterpms)

An entirely new package management tool, named apteryx has been developed which can manage the Debian packages in a way similar to the updaterpms tool used for Redhat platforms. This has been written in Python and uses the python-apt library to do most of the work.

This new tool can install, remove, upgrade and downgrade packages as necessary to match the requirements of the LCFG profile. It is designed to work like a standard Debian package manager, but the behaviour of updaterpms informed the design; for example, the new tool similarly supports package flags (e.g. reboot, boot-only). It can work in various modes: it can auto-upgrade packages or automatically install dependencies like the standard apt tool, or it can work in a stricter mode like updaterpms. There is a new LCFG apt component which is used to configure apteryx and run the package manager in a similar way to updaterpms.
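
The reconciliation step at the heart of a strict, updaterpms-like mode can be sketched as follows. This is illustrative only: apteryx itself uses python-apt, and the naive string version comparison here is a simplification (real Debian version ordering needs apt_pkg.version_compare):

```python
# Compare the profile's package list against what is installed and
# derive the install/upgrade/downgrade/remove actions needed to make
# the machine match the profile exactly.

def plan(profile, installed):
    """Both arguments are {name: version} dicts."""
    actions = {"install": [], "upgrade": [], "downgrade": [], "remove": []}
    for name, wanted in sorted(profile.items()):
        have = installed.get(name)
        if have is None:
            actions["install"].append((name, wanted))
        elif have < wanted:   # naive comparison, for illustration only
            actions["upgrade"].append((name, wanted))
        elif have > wanted:
            actions["downgrade"].append((name, wanted))
    for name in sorted(installed):
        if name not in profile:
            # strict mode: anything not in the profile is removed
            actions["remove"].append(name)
    return actions

print(plan({"a": "2.0", "b": "1.0"}, {"a": "1.0", "c": "3.0"}))
```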

apteryx works reasonably well but is still very much a prototype and is consequently a little fragile. There are known problems with the python-apt library that can cause apteryx to get stuck in a loop:

  • Handling some complex dependency issues
  • Dealing with packages that have errors in post-install scripts

Other issues:

  • The output is only sent to stdout; there is no capturing or logging of the output
  • A failure to parse a single entry in the packages YAML file causes the whole process to crash

These issues will need to be thoroughly investigated and resolved if we wish to have a stable, reliable package manager.

Investigate installer technologies (in particular FAI)

Originally we had planned to investigate the Fully Automated Installer (FAI) project as a likely candidate for an installer for the platform. This turned out to be rather awkward to get working, so instead the simpler debian-installer was examined. It supports "preseeding" configuration and running scripts to manage the build process in a fully automated manner.

The preseed files and scripts which drive the install process are fetched from a server by the installer via http(s). This provides the opportunity to generate the install config directly from the client LCFG XML profiles on the LCFG server. A CGI script has been created which uses the Perl Template Toolkit to generate all the files and scripts; the whole process is controlled by the LCFG install component resources in the profile. This is flexible enough to allow full support for tasks like querying the administrator for a Kerberos principal (so kdcregister can be used) and also running LCFG component methods registered with the install component, in the same way as the SL7 installer. For full details see https://wiki.lcfg.org/bin/view/LCFG/UbuntuInstaller
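
For illustration, the generated output contains standard debian-installer preseed directives along these lines (an invented excerpt; the late_command hook shown is hypothetical):

```text
d-i debian-installer/locale string en_GB.UTF-8
d-i netcfg/choose_interface select auto
d-i mirror/http/hostname string gb.archive.ubuntu.com
d-i mirror/http/directory string /ubuntu
# run an LCFG hook inside the installed system at the end of the install
d-i preseed/late_command string in-target /usr/sbin/lcfg-install-hook
```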

There are currently some limitations:

  • Disks can only be partitioned using one of the standard layouts (atomic or multi); it should be possible to generate the necessary configuration for the partman tool from the fstab resources.
  • No support for encrypting certain partitions (e.g. /tmp and swap).
  • Error handling in the LCFG scripts is quite poor.
  • User feedback is poor; it is hard to know whether the process is still running.

Investigate new network component

For this project we did not need to replace the network component; the test machines just used DHCP. The expectation is that the network configuration can be managed using the netplan tool, which can control either systemd-networkd or NetworkManager. Hopefully a new network component can be created which will support the old schema resources, so we do not need to make lots of profile changes.
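
As a sketch of what such a component might generate, a minimal netplan configuration for a DHCP-managed interface looks like this (the path and interface name are examples):

```yaml
# /etc/netplan/01-lcfg.yaml
network:
  version: 2
  renderer: networkd    # or NetworkManager
  ethernets:
    eno1:
      dhcp4: true
```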

Investigate automated package building

The Informatics PkgForge service massively improves the efficiency of the package building process, particularly when large numbers of packages need rebuilding. It also helps ensure that the build-time and run-time dependencies are well known which reduces the potential for conflicts at install time. With this in mind it was clear that adding support to PkgForge at this early stage would be very beneficial even if that was only in a prototype form.

The aim was to add support for submitting Debian source packages in the same way as we have support for SRPMs. For building RPMs the mock tool is used to do the builds in a chroot; on Debian/Ubuntu there is a similar tool named pbuilder. An LCFG pbuilder component has been created to manage pbuilder configurations so that it is possible to have multiple chroots for different target platforms on the same host machine. As with mock, this can also be used to manually build packages using pdebuild, which is very useful for testing package builds. Along with this, the PkgForge service was overhauled to support different source formats and alternative builders. New support was added for platform aliases, so it is now possible to easily target a build at "all ubuntu" or "all redhat" without knowing specifically which platforms are available.
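
For illustration, a per-chroot pbuilder configuration of the kind the component manages might contain (all values hypothetical):

```text
# pbuilder config for a hypothetical eoan/amd64 chroot
DISTRIBUTION=eoan
ARCHITECTURE=amd64
MIRRORSITE=http://gb.archive.ubuntu.com/ubuntu/
COMPONENTS="main universe"
BASETGZ=/var/cache/pbuilder/eoan-amd64-base.tgz
```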

Using the LCFG build tools it is possible to create a Debian source package (*.dsc plus associated *.tar.gz files) on an SL7 machine and then submit it for building via PkgForge. This helps hugely with the bootstrapping process as it enables someone to build a Debian package even if they have little knowledge of the build process or do not have direct access to a machine with the necessary tools installed.

There are currently a few bits which need further attention:

  • The pbuilder component causes all configuration files to change each time the configure method is called, even if no functional change has occurred; this causes unnecessary rebuilds of the chroots which slows down the build process.
  • The lintian checks on Debian package quality can occasionally cause the entire build process to fail.

Summary

This project has proved that it is definitely possible to port LCFG to a Debian-based environment. Although some software has needed to be completely replaced (e.g. apteryx in place of updaterpms), most LCFG components can, with just a small amount of effort, be modified to work in a platform-independent manner. For example, by using LCFG resources for all paths, or by relying on the sysinfo component for standard locations, the component code can be identical on all platforms.

This project has laid the foundations required for a full port of the DICE environment to Ubuntu. It is now possible to configure and install Ubuntu machines using LCFG, the next stage would be to work on the DICE-specific configuration and software.

A lot of new software has been developed, so we expect it to be less robust and more buggy than the long-established and well-tested code on SL7, but with wider usage those issues can be rapidly resolved.

Total Effort

Period Effort
2019 T2 162 hrs
2019 T3 310 hrs
2020 T1 199 hrs

Total: 671 hours = 96 days = 19 weeks

-- StephenQuinney - 05 May 2020

Topic revision: r4 - 26 May 2020 - 07:51:08 - StephenQuinney
 