Summary of Server Moves

  • bass, havoc, ossian, methven and orator are all research-funded servers currently in AT; they need no further action/provisioning.
  • copacabana, ipanema and leblon are also research funded and need no further provisioning; currently in AT, they will need to move to the IF server area (potentially into the separate secure area).
  • dendrite is also research funded and needs no further provisioning; it is currently in BP, so will need to move to the IF server area (also potentially into the separate secure area).
  • The Bioinformatics rack will need to move to the IF server area, but needs no further provisioning. Since it's DIY DICE at the moment it could be better located in the separate secure area.
  • The license servers gambet/sonsie are currently in AT and need no further action/provisioning.
  • libra (database server) would ideally move to AT until it is no longer used for teaching - a new server has been (or is being) purchased to replace the hardware, which will be out of warranty.
  • largo (tftp server) is being moved to AT in advance - no further action/provisioning required.
  • biscuit (teaching PostgreSQL server) is currently in FH and will need to be moved to AT; nothing further is required.
  • bulwark (web services server), currently in KB, will need to move to AT; nothing further is required.
  • puffin (research database server, plus other services) would be better moved to the IF server area.
  • wulf (condor master for central) can stay in AT (for the AT pool) and focke can move to IF (for the Forum pool) - see the configuration sketch after this list.
  • The hermes/townhill clusters will be merged and moved down to the IF server area - more hardware, to split the queue master from the submission server, would be good.
  • The lion/lutzow clusters will ideally be left in the KB server area - in which case some kind of remote power control might be good.
  • puma (connect server) is already on its way down to AT but may be more appropriately located in the IF server area.
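As a rough illustration of the wulf/focke two-pool split, a minimal Condor configuration sketch follows. CONDOR_HOST and DAEMON_LIST are standard Condor macros, but the fully qualified hostnames are assumptions, not confirmed anywhere above.

    # condor_config.local on machines joining the AT pool
    # (fully qualified hostname is an assumption)
    CONDOR_HOST = wulf.inf.ed.ac.uk

    # condor_config.local on machines joining the Forum pool
    CONDOR_HOST = focke.inf.ed.ac.uk

    # on wulf and focke themselves, run the pool-wide daemons
    DAEMON_LIST = MASTER, COLLECTOR, NEGOTIATOR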

Notes

Splitting lion/lutzow from townhill/hermes - where is har? Answer: har is a Sun at KB (iainr).

Server Replacement - probably just libra (already noted in an earlier spending round). Also new hardware for the queue servers for townhill and hermes - IR to spec.

Space Requirements - locations. Almost all are 1U rack servers, except largo and havoc (desktops) and libra (2U). What are copacabana etc.? Do we also need new rack servers for the clusters?

Network Requirements - 100baseT is fine for most, except the clusters (1000baseT to SRIF, plus fibre channel)?

Power Requirements - we probably don't need a dual power supply on any of these?

Remote Management - all are on a serial console of some kind (except puma). Remote power management would be useful for the clusters - certainly for the head nodes, but potentially useful for all the nodes too? Also for biscuit, wulf, focke and puma. One possible approach is sketched below.
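As one hedged possibility for the remote power control mentioned above - assuming the machines (or their replacements) have IPMI-capable management controllers, which these notes do not confirm - ipmitool over the network would do. Hostname and credentials below are placeholders.

    # check and cycle power on a cluster head node over the network
    ipmitool -I lanplus -H lion-bmc -U admin -P secret chassis power status
    ipmitool -I lanplus -H lion-bmc -U admin -P secret chassis power cycle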

Critical - clusters, biscuit (at times), libra, ???

Table

| Host              | Service           | Current Location | Proposed Location | Size    | Power | Network     | Fibre | Replace? |
| bass              | research          | AT               | IF[X]             | 1U      | 1     | 100         |       | NO       |
| havoc             | research          | AT               | IF[X]             | Desktop | 1     | 100         |       | NO       |
| ossian            | research          | AT               | IF[X]             | 1U      | 1     | 100         |       | NO       |
| methven           | research          | AT               | IF[X]             | 1U      | 1     | 100         |       | NO       |
| orator            | research          | AT               | IF[X]             | 1U      | 1     | 100(0)      |       | NO       |
| johnston          | research          | AT               | IF[X]             | 1U      | 1     | 100(0)      |       | NO       |
| boswell           | research          | AT               | IF[X]             | 1U      | 1     | 100(0)      |       | NO       |
| copacabana        | research          | JCMB             | IF[X]             | 1U      | 1     | 1000        |       | NO       |
| ipanema           | research          | JCMB             | IF[X]             | 1U      | 1     | 1000        |       | NO       |
| leblon            | research          | JCMB             | IF[X]             | 1U      | 1     | 1000        |       | NO       |
| dendrite          | research          | BP               | IF[X]             | 1U      | 1     | 100         |       | NO       |
| Bioinformatics    | research          | JCMB             | IF[X]             | RACK    | 5     | 1000x4,10x1 |       | NO       |
| gambet            | licenses          | FH               | IF                | 1U      | 1     | 100         |       | ROLLDOWN |
| sonsie            | licenses          | AT               | AT                | 1U      | 1     | 100         |       | ROLLDOWN |
| libra             | database          | FH               | AT                | 2U      | 2     | 1000        |       | YES      |
| largo             | tftp              | AT               | AT                | Desktop | 1     | 100         |       | ROLLDOWN |
| biscuit           | pgsql             | AT               | AT                | 1U      | 1     | 100         |       | NO       |
| bulwark           | webservices       | JCMB             | AT                | 1U      | 1     | 100         |       | ROLLDOWN |
| puffin            | pgsql/adhoc       | AT               | IF                | 1U      | 1     | 100         |       | NO       |
| wulf              | condor(north->AT) | AT               | AT                | 1U      | 1     | 1000        |       | NO       |
| focke             | condor(south->IF) | JCMB             | IF                | 1U      | 1     | 1000        |       | NO       |
| puma              | connect           | AT               | IF                | 1U      | 1     | 1000        |       | NO       |
| hermes(cluster)   | research          | JCMB             | IF                | RACK    | 26    | 1000        | har   | +Server  |
| townhill(cluster) | research          | JCMB             | IF                | RACK    | 34    | 1000        | har   | +Server  |
| lion(cluster)     | research          | JCMB             | JCMB              | RACKS   | 81    | 1000        | har   | +KVM/IP  |
| lutzow(cluster)   | research          | JCMB             | JCMB              | RACK    | 17    | 1000        | har   | NO       |
| illustrious       | lcfg              | JCMB             | IF                | 1U      | 1     | 1000        |       | NO       |
| beatty            | gridengine        | JCMB             | IF                | Desktop | 1     | 1000        |       | NO       |
| arnie             | qmaster           | JCMB             | IF                | 1U      | 1     | 100         |       | NO       |

Clusters

Current Beowulf infrastructure:

  • Illustrious - LCFG server
  • Townhill - console server for part of the townhill cluster
  • Beatty - test accounting machine, Dell PE4600 (1GB RAM, 1.8GHz Xeon)
  • Seville - test accounting machine, Dell GX620 (1GB RAM, 3GHz P4)
  • arnie - test SL5 machine, eventual qmaster node for townhill

Lion cluster - head node GX240 (1GB RAM, 1.8GHz P4)

Lutzow cluster - head node WS530 (2GB RAM, 1.7GHz Xeon)

Townhill cluster - head node HP d530 (1GB RAM, 3GHz P4)

Hermes cluster - head node Dell GX620 (1GB RAM, 3GHz P4)

Suggested new configuration

Merge Lion and Lutzow

  • Add Lion to the cluster pool, run qmaster on Lutzow with no user ssh access (see the sketch after this list), and use the old hermes head node as the new head node. Possibly add another desktop as a second head node.
  • Add a KVM-over-IP head to the current KVM setup to allow remote administration.
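For the "no user ssh access" requirement on the qmaster, one minimal sketch using stock OpenSSH directives; the group name is a placeholder, not an existing group.

    # /etc/ssh/sshd_config on lutzow - restrict logins to administrators
    AllowGroups sysadmin
    # alternatively, name the permitted accounts explicitly:
    # AllowUsers root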

Merge Townhill and Hermes

  • Townhill becomes a dedicated console server; qmaster runs on arnie, with new rack-based hardware. A sketch of keeping the queue master separate from the submission host follows.
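A minimal Grid Engine sketch of the qmaster/submission split - assuming Sun Grid Engine's qconf, and with "submit1" as a placeholder for whatever new submission hardware is bought:

    # run on arnie once it is the qmaster
    qconf -as submit1    # register the dedicated submission host
    qconf -ds townhill   # townhill is console server only, so drop it
    qconf -ss            # list the submit hosts to verify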

-- TimColles - 04 Nov 2007
