MPU Meeting Monday 28th January 2013

SL6 Server Upgrades

No change: we are still waiting on a couple of VMs to be upgraded by other units so that we can decommission the final VMware server.

Security Enhancements

Nothing has happened. Stephen will be focussing on completing the documentation over the next couple of weeks.


Login Logs Viewer

This is now almost complete. The wording on the website needs some work for both the top-level index page and the authview interface. It also needs a Data Protection statement; this should just be the same as the one we already have regarding the central storage of syslog files on tycho.

We need to run a script each day to anonymise the data stored in the BuzzSaw DB which is older than 26 weeks. This will involve deleting the original message, the username, any source IP addresses, etc. We will still have a record of the event type (e.g. sshd) and whether or not it was a successful authentication, which will allow us to compute long-term statistics. Stephen has a script which is mostly complete; he still needs to set up a new DB role and the necessary keytab on beaver to limit what the script can modify.
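The anonymisation step could be sketched as below. This is only an illustration: the real BuzzSaw DB is not sqlite and its schema is not shown here, so the table and column names (events, logdate, message, username, source_ip) are hypothetical stand-ins for whatever the actual script uses.

```python
# Sketch of the daily anonymisation job described above, using sqlite3 as
# a stand-in for the real BuzzSaw database. All table and column names
# here are hypothetical, not BuzzSaw's actual schema.
import sqlite3
from datetime import datetime, timedelta

def anonymise_old_events(conn, weeks=26):
    """Blank out identifying fields on events older than the cutoff,
    keeping the event type and success flag for long-term statistics."""
    cutoff = datetime.utcnow() - timedelta(weeks=weeks)
    cur = conn.execute(
        """UPDATE events
              SET message = NULL, username = NULL, source_ip = NULL
            WHERE logdate < ?
              AND message IS NOT NULL""",
        (cutoff.isoformat(),),
    )
    conn.commit()
    return cur.rowcount  # number of events anonymised this run
```

Returning the row count makes it easy for the daily cron job to log how much work it did, and the `message IS NOT NULL` guard keeps repeat runs idempotent.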

Sleep Enhancements

Not much has happened. Once Stephen has written up how to generate BuzzSaw reports, Chris will have a go at writing a module which can calculate the sleep stats.

Misc Devel

autobuild script
Stephen has been working on adding auto-build support to the LCFG source repository. The aim is to allow external contributors to build packages without the involvement of Informatics COs. This should make the whole process much more efficient and will hopefully encourage more external contributions.

An SVN hook has been set up so that a script is triggered when a new tag is created for a project (normally done using lcfg-reltool and one of the major, minor or micro commands). An auto-build will only occur if there is an autobuild: 1 setting in the build section of the lcfg.yml for the project. There is support for specifying the name of the target bucket, though for now this is limited to just lcfg. We need to consider carefully whether we want to permit external contributors to submit packages to any of the other package buckets.

If an auto-build is required then an SRPM is generated with lcfg-reltool srpm and submitted to pkgforge using pkgforge submit. An email will be sent to each address in the authors field of the lcfg.yml file; a quick look at the various projects suggests this data needs a bit of updating.

We could consider adding support for specifying the target platforms; for now everything is submitted to the "default" set of platforms supported by pkgforge. The platforms data in the lcfg.yml files is very inconsistent, so it would need a lot of cleaning and standardisation. We won't worry about this for now but it might be useful for SL7. The new scripts still need a bit of tidying and packaging up as an RPM.
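For illustration, an lcfg.yml opting a project into auto-building might look something like this. Only the autobuild: 1 setting in the build section and the authors field are confirmed above; the bucket field name and the address shown are assumptions for the sketch.

```yaml
# Hypothetical lcfg.yml fragment: only "autobuild: 1" in the build
# section and the "authors" field are confirmed; the "bucket" key name
# and the example address are assumptions.
build:
  autobuild: 1
  bucket: lcfg          # currently the only permitted target bucket
authors:
  - A Contributor <someone@example.org>   # notified when an auto-build is submitted
```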

As part of the work to add autobuild support some changes had to be made to the LCFG::Build::PkgSpec Perl module. This now has support for querying values of hash elements (to any depth) using the lcfg-pkgcfg tool.

The BuzzSaw importer has gained support for setting size limits on the files which are parsed; the limit can be either a maximum or a minimum size. This was added because we have seen the occasional enormous file (the largest so far has been 230MB) which takes an immense amount of time to parse. The parsing of a single large file has the potential to block the import of data from all other files for long periods, thus potentially allowing an attacker to prevent log analysis for some days. Stephen has analysed the range of log file sizes we have for the last 6 months and come up with a maximum size of 50MB; only 5 log files are larger than that. We could adopt a strategy of parsing small files daily and having a separate weekly import job which picks up any larger files; there is no issue with having multiple long-running BuzzSaw import scripts. It is important to note that the import tool will not complain about files which are too large: they are simply never seen, since the size limiting is done using a rule with the File::Find::Rule module.

Alongside this change, support has been added for controlling the order in which the files are parsed. Normally this is done in random order so that multiple scripts can run concurrently in an efficient manner. The files can now also be parsed in order of name or size (increasing or decreasing for either).
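The selection behaviour described above could be sketched as follows. BuzzSaw itself is Perl and does the filtering with File::Find::Rule, so this Python version is just an illustration of the idea: files outside the size limits are silently dropped before parsing, and the survivors are ordered randomly, by name, or by size. The function name and order keywords are invented for the sketch.

```python
# Sketch of size-limited, ordered log-file selection, mirroring the
# behaviour described above. Not BuzzSaw's actual code (which is Perl
# using File::Find::Rule); names and order keywords are hypothetical.
import os
import random

def select_logs(paths, min_size=None, max_size=None, order="random"):
    """Return the files to parse: size-limited, then ordered.
    Files outside the limits are silently dropped, matching the
    behaviour noted above (no warning about oversized files)."""
    sized = [(p, os.path.getsize(p)) for p in paths if os.path.isfile(p)]
    keep = [(p, s) for p, s in sized
            if (min_size is None or s >= min_size)
            and (max_size is None or s <= max_size)]
    if order == "random":
        random.shuffle(keep)  # lets concurrent importers avoid contention
    elif order in ("name", "name-desc"):
        keep.sort(key=lambda x: x[0], reverse=order.endswith("desc"))
    elif order in ("size", "size-desc"):
        keep.sort(key=lambda x: x[1], reverse=order.endswith("desc"))
    return [p for p, _ in keep]
```

A daily job might call this with max_size=50 * 1024 * 1024, while a weekly job uses min_size with the same threshold to pick up the stragglers.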

openafs 1.6.2pre3
We are now testing the openafs 1.6.2 pre-release candidate on develop machines and one file server (unicorn). We have not seen any problems. It seems likely that the final release will be made fairly soon.

Thanks to Chris, updaterpms now has support for reporting progress on deletions, which should hopefully stop users pulling the plug during a big upgrade.


openafs buildhost
The openafs package builder has been moved from lochranza to budapest so that it has a lot more disk space available. The old server ran out of space when we added Fedora 18 to the set of target platforms.

Server models for firmware checking
Chris wondered which server models we should be regularly checking for critical firmware updates. He proposed all the newer Dell 'R' series models. It was agreed that we would do all of those and also the Dell PE1950 since we still have many of them in service. We will ignore any model where the only machines of that type are self-managed.

KVM Server updates
Chris has applied firmware updates to the KVM servers. This introduced a problem with the BMC configuration for jubilee; Chris will fix that issue today. We should also alter the configuration for hammersmith so that we can access the BMC over the web. This raised the question of whether it would be possible to access the serial console wires from our desktops; Alastair will check with George.

Daily KVM reports
As part of the daily KVM reports we would like to have a list of guest images which are not being used, as this would help us reclaim disk space. Chris also wondered if we could have the reports in HTML format. This would be possible; they would need to be aggregated onto a central server running apache, and the ordershost would probably be a good place.

Size of KVM guests
Some of the KVM guests are far larger than required. Alastair will talk to some people to raise awareness of the issue.

We need to come up with a solution for the delays with refreshpkgs. The only suggestion so far is to look at switching to the demand-attach fileserver instead; Stephen will investigate.

This Week

  • Alastair
    • Check whether console wire is accessible on DICE desktops
    • Daily KVM report - display unattached volumes.
    • Look at whether can determine current pool being used for migrated guests using libvirt (for mail reports)
    • Educate individuals about inappropriate KVM guest sizes
    • Look at whether can remove screensavers
    • Look at default gnome settings for DPMS display off wrt screensaver
    • Test kernel module package that triggers rebuildinitramfs for certain package install/updates (once kernel component fixed)
    • Systems blog article on the KVM Service
    • Create an MPU KVM server header
    • Document ssh keys mgmt - windows
    • Stargaze

  • Stephen
    • Finish documentation on security project - document how to author buzzsaw reports
    • Finish login logs project
    • Finish off kernel component - missing triggers issue
    • Document ssh keys mgmt - linux http://docproj/using-ssh-linux
    • Look at 'dafs' on a victim AFS server - to test the volume update bug
    • Python course
    • 11th December minutes

  • Carol
    • migrate some VMs

-- AlastairScobie - 28 Jan 2013

Topic revision: r3 - 01 Feb 2013 - 13:22:50 - ChrisCooke