Hypatia Portal Server Installation

Installing a Hypatia Portal server is straightforward, can be done 'from scratch' without access to the old server, and consists of a few manual steps.

Steps one and two need only be performed if this is to be a new host, as opposed to a reinstallation or reinstatement.

1. If this is to be an additional host, configure infdb (or your chosen development PostgreSQL server) to accept kerberised connections from the new host. For the live service this is done in the +live/hypatia-db-server.h+ header and will usually consist of a single macro invocation.
2. If this is to be an additional host, configure an AFS admin UID for it. Assign one to the server using the AFSAdminUids wiki page, then follow the instructions there to create it and add it to the requisite AFS group (or rename an outgoing portal host's UID).
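For illustration only (the macro name below is invented; check +live/hypatia-db-server.h+ for the real one), the header entry granting the new host database access might look something like:

```
/* live/hypatia-db-server.h (illustrative only: the macro name
   HYPATIA_DB_ALLOW_HOST is hypothetical; use whatever the header
   actually defines) */
HYPATIA_DB_ALLOW_HOST(newhost)
```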

3. In the new server's profile, add
/* set this to "infdb" to use the live database */
#define LCFG_HYPATIA_DBSERVER_HOSTNAME <DB hostname>.<%profile.domain%>
/* set to "portal.theon" for the live service */ 
#define WEBP_HOST <hostname>
#include <dice/options/hypatia-portal-server.h>
4. Don't forget to configure
#include <dice/options/ipfilter.h>
!ipfilter.export    mADD(https)

if you want this server to be accessible outwith EdLAN. Note that Portal instances are always Cosign-authenticated.
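Assembled, a complete profile stanza for the live service (substitute your own hostnames for a development instance) might read:

```
/* use the live database and public name */
#define LCFG_HYPATIA_DBSERVER_HOSTNAME infdb.<%profile.domain%>
#define WEBP_HOST portal.theon
#include <dice/options/hypatia-portal-server.h>

/* optional: make the server accessible outwith EdLAN */
#include <dice/options/ipfilter.h>
!ipfilter.export    mADD(https)
```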

Portal in a Hurry

It's simplest just to restart following the installation of the new profile; a portal run will begin automatically once the host is fully configured, and a report tree will appear. However, the following might help speed things up:

  • If you cannot pass the weblogin stage, touch the cosign and x509 servers' profiles so they pick up the new host
  • If you receive 403 or 404 errors, work your way through the +apache+ user's crontab: the +@reboot+ subversion and +portal+ runs should do everything required.
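The two checks above can be sketched as a script. This is only a sketch: the lcfg/cosignserver and lcfg/x509server profile names are placeholders, and RFE is overridable (e.g. RFE=echo) so it can be dry-run on a machine without the rfe tool:

```shell
#!/bin/sh
# Post-install kickstart sketch. The lcfg/cosignserver and
# lcfg/x509server profile names are placeholders; RFE can be
# overridden (RFE=echo) for a dry run.
RFE=${RFE:-rfe}

# Weblogin failing? Touch the cosign and x509 servers' profiles.
$RFE -S -f lcfg/cosignserver lcfg/x509server

# 403/404 errors? List the apache user's cron jobs; the @reboot
# subversion and portal entries should do everything required.
crontab -l -u apache 2>/dev/null | grep -E 'subversion|portal' || true
```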

"Seamless" Relocation

Like most cosign services the portal is tricky to relocate seamlessly, but the following procedure allows relocation with virtually no downtime. It's worth warning users, however, as anyone whose browser's cached DNS is outdated might receive a 'certificate mismatch' warning, depending on the exact ordering of the changes.

Technically this should really be done in a different order, but the almost-guaranteed failure of the spanning maps governing the x509 names means that it's better to push the change through quite explicitly. You can, however, parallelise some parts of the procedure below to reduce the disrupted period.

It helps to have a tail on the portal log of both the old and new servers, to check who is going where and whether they are succeeding.
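For example, in two terminals (the +portal+ log path here is an assumption, by analogy with the x509 and cosign logs used later in this procedure; adjust to wherever the portal component actually logs):

```
$ ssh oldhost tail -f /var/lcfg/log/portal
$ ssh newhost tail -f /var/lcfg/log/portal
```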

1. Update the DNS map
129.215.xx.yyy  oldhost-2
#verbatim inf.ed.ac.uk portal.oldhost 300 IN CNAME oldhost-2
129.215.xx.yyy  newhost-2
#verbatim inf.ed.ac.uk portal.theon 300 IN CNAME newhost-2
2. Edit the old portal host's profile
#define WEBP_HOST portal.<hostname>
3. Edit the new portal host's profile
#define WEBP_HOST portal.theon
4. Update DNS on both hosts, and on the X509 and cosign servers
$ for h in oldhost newhost berlin osprey nautilus; do
    om $h.dns update
  done
5. Now update the spanning map-holding hosts by touching their profiles
$ rfe -S -f lcfg/nautilus lcfg/osprey lcfg/berlin
6. Now you're finally ready to forcibly update the host configuration (if it hasn't been done already). On the new host:

1. Check whether new keys and cosign entries have been generated
$ tail /var/lcfg/log/x509
$ tail /var/lcfg/log/cosign
2. If not, you can try a naive update
$ om x509 run
$ om cosign configure
3. But if nothing happens you'll need to remove the stale certificates and rerun:
$ rm -rf /etc/pki/tls/certs/portal.*
$ om x509 run
4. Finally, for good measure, if it looks necessary:
$ om apacheconf restart
7. Repeat step 6 for the old host.
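Step 4's DNS update can be wrapped in a small script. A sketch, with OM overridable (OM=echo) so it can be dry-run on a machine without the om tool:

```shell
#!/bin/sh
# Update DNS on every host involved in the relocation (sketch).
# OM can be overridden (OM=echo) for a dry run.
OM=${OM:-om}
for h in oldhost newhost berlin osprey nautilus; do
    $OM "$h.dns" update || echo "dns update failed on $h" >&2
done
```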

CVS Submit / Maillist Publishing Login

A CVS login is required for conduit publication of mailing lists and submit data files. This is done via SSH, so public keys must be installed. The following steps must be carried out manually after the server has been installed.

$: nsu postgres
postgres$: ssh-keygen
# Press return to accept the defaults, and press return again when prompted to enter (and re-enter) a passphrase, so the files are created unencrypted
postgres$: cd ~/.ssh
postgres$: scp id_rsa.pub YOURUSERNAME@cvs.inf.ed.ac.uk:/tmp
postgres$: ssh -l YOURUSERNAME cvs.inf.ed.ac.uk
# now logged in to cvs.inf.ed.ac.uk as YOURUSERNAME
$: nsu postgres
postgres$: mkdir -p ~/.ssh
postgres$: cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys
postgres$: exit
$: rm /tmp/id_rsa.pub
$: exit
# back on the portal server, still as the postgres user; the
# following should now log in without prompting for a password
postgres$: ssh cvs.inf.ed.ac.uk

To reset the keys (for example on re-installation) just follow the procedure above.
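A quick non-interactive check that the key is in place (run as postgres on the portal server). BatchMode makes ssh fail rather than prompt for a password, and ConnectTimeout stops the check hanging if the host is unreachable:

```shell
#!/bin/sh
# Verify passwordless login to the CVS host (sketch).
if ssh -o BatchMode=yes -o ConnectTimeout=5 cvs.inf.ed.ac.uk true
then
    echo "key login OK"
else
    echo "key login NOT working"
fi
```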

-- TimColles - 27 Nov 2018

Topic revision: r3 - 01 Sep 2020 - 10:25:32 - GrahamDutton