Final Project Report for Project 164 - Prometheus AFS PTS Conduit

Goal

The initial goal of this project was to produce the conduit, data store and associated code which would allow the AFS PTS database to be synchronised with the main Prometheus Olympus data store. This would allow users to be added automatically to the PTS database when they appeared in Prometheus, rather than being added by hand as had been the case up until then. The scope of the project was soon enlarged to encompass the synchronisation of both the PTS and VL databases, allowing for the automatic creation and deletion of home directories, but the name of the project was never changed. It was felt that it would be beneficial for members of the CO community outwith the main Prometheus development team to create conduits, as a means of promulgating knowledge of the workings of Prometheus to a wider audience, and so this project was assigned to the Services Unit in the person of myself.

Initial progress

The project began in T1 2010 with the aim of having the conduit (and later both conduits) in service by the start of the 2010-2011 academic year. The need to acquire knowledge of both the internal workings of Prometheus and Perl-based object-oriented programming meant that initial progress was slow, but as the necessary knowledge was acquired the pace accelerated, and the aim of having the conduits in service by September 2010 was achieved.

Finished? Not quite

Although the PTS and VL conduits and data stores were in place and working satisfactorily, there still remained the small matter of the test suites before the project could be signed off. Each Prometheus conduit and data store is required to have a test suite which can be used to prove automatically that the conduit or data store is working in the expected manner. Clearly, changing the contents of the actual databases associated with the data stores would be highly undesirable, so some way had to be found of emulating the behaviour of the underlying database without changing the code path followed in the conduit or data store.

Most Prometheus data stores front what are at heart some variety of LDAP database, and it is a relatively simple matter to create and populate a test LDAP server for use with the test suites. Bringing up a test AFS cell is a far more daunting prospect, and after much research and cogitation it was decided that the only practical solution to this problem was to use the Perl Test::Mock modules to emulate calls to the Perl AFS module. This took a considerable amount of effort, exceeding that which had been required to write the conduits and data stores in the first place.
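As an illustration of the approach (a sketch only, not the code written for the project), Test::MockModule can replace calls into an AFS package with subroutines that operate on an in-memory hash, so the code under test follows its normal path without ever contacting a real cell. The package and method names below are invented for the example and are not the real API of the Perl AFS module.

use strict;
use warnings;
use Test::More tests => 2;
use Test::MockModule;

# Illustrative only: the AFS::PTS method names here are invented for the sketch.
my %fake_pts;                                    # stands in for the real PTS database
my $mock = Test::MockModule->new( 'AFS::PTS', no_auto => 1 );

$mock->mock( createuser => sub { my ( $class, $user ) = @_; $fake_pts{$user} = 1; return 1 } );
$mock->mock( listentry  => sub { my ( $class, $user ) = @_; return exists $fake_pts{$user} } );

# The conduit code under test would call AFS::PTS exactly as it does in
# production; only the behaviour behind the calls has been emulated.
ok( AFS::PTS->createuser('newuser'), 'user created in emulated PTS database' );
ok( AFS::PTS->listentry('newuser'),  'new user visible in emulated PTS database' );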

Since the conduits and data stores were in place and working, the writing of the test suites was treated as a lower-priority task. This added to the total effort required to complete the project, since a certain amount of refreshing of knowledge was needed each time the task was taken up again. In hindsight, it would have been far better to give the writing of the test suites the same priority as the conduits and data stores and to finish the project as quickly as possible.

Nevertheless, by the end of 2013 the job was done. The test suites had been written, the code had been submitted to Gerrit and approved, and all seemed set fair to finally get the project signed off. Except for one small fly in the ointment. When the project began, there was a choice of two Perl modules for talking to AFS: AFS:: and AFS::Command. The former spoke directly to the AFS API; the latter worked by executing the relevant AFS command-line tools and placing their output into a variety of structures. There was overwhelming agreement that the AFS:: module was the one to use, and so that is what I did. By the end of 2013, popular opinion had swung round 180 degrees. The AFS:: module had not been updated for some time and it was clear that forthcoming kernel changes would break the module completely. By contrast, the AFS::Command module remained well supported and regularly updated. There was no choice but to switch the conduits, data stores and test suites (which, it will be recalled, went to great lengths to emulate calls to the various AFS:: methods) to use AFS::Command.

As before, the conduits and data stores were completed and deployed in a relatively short timeframe, while the test suites took rather longer. This was not because of any intrinsic difficulty in switching the test suites to AFS::Command; indeed, it became clear that from the test suite point of view it would have been much better to have used AFS::Command from the start, because the AFS::Command module offers the facility to specify alternative commands to run in place of the AFS binaries such as vos and pts, making it easy to write and use emulated commands. The test suites have now been written, and all code has been committed to Gerrit and successfully reviewed.
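The sketch below shows the idea, assuming the usual AFS::Command constructor arguments: the command argument points the object at a stub script (the hypothetical t/bin/fake-pts, which prints canned pts output) rather than the real pts binary, so the test suite exercises the same command-parsing path against predictable data.

use strict;
use warnings;
use AFS::Command::PTS;

# Sketch: swap the real pts binary for a stub script under the test directory.
# t/bin/fake-pts is a hypothetical name; it would emit canned pts output.
my $pts = AFS::Command::PTS->new(
    command => 't/bin/fake-pts',
);

# The data store code under test uses $pts exactly as it would in production;
# only the binary behind it has been swapped out. Method and argument names
# follow the pts command-line interface (e.g. "pts examine -nameorid ...").
my $result = $pts->examine( nameorid => 'newuser' );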

Time taken

It should be noted, when considering the time taken for this project, that a significant part of this time was spent in refamiliarisation with the project. Over the six-year period a total of 624 hours was spent on the project, split by year as follows:

Year Effort (hours)
2010 164
2011 42
2012 218
2013 121
2014 43
2015 36
Total 624

-- CraigStrachan - 18 Jan 2016
