Project 287 - Securing Web Servers

This is not the final report, but rather a place to keep notes and track progress on this project. It will become the final report once the project comes to an end.

This project looks at how we can increase the security of our web servers (all web servers, not just services-unit ones), though we (the services unit) will implement some of what we decide on for "our" sites.

Stephen from the MPU will help.

The first thing is to have a meeting to discuss all possible options, then order them into a practical/doable list, and then probably implement the "low-hanging fruit" first.

Initial meeting to discuss options

Neil, Stephen and George had a short meeting on 28/11/2013 to discuss thoughts and possible strategies for securing web servers/services.

The notes from that meeting are below. The table will evolve as things come to mind, and once we start looking at what each option might entail. I'm not sure we can accurately judge difficulty and effectiveness, but we can guesstimate to help order things.

| Priority | What | Difficulty | Effectiveness | Notes |
|----------|------|------------|---------------|-------|
| 10 | Regular rootkit scan | Easy | High | Again assuming low false positives, a good way to spot a compromised machine. Low impact on users. |
| 10 | Monitor outbound email (volume of, rather than content) | Easy | High | We should spot a spike in the number of messages being sent. Low impact on users. |
| 10 | Reduce Apache (and any other) identity strings | Easy | Low | Low impact on users. Done: SecuringWebServers. See the sketch after this table. |
| 20 | Be less accommodating of older browsers' shortcomings | Easy | Low | Should be low impact, as hopefully people are not using old browsers. |
| 20 | Make sure our HTTPS is securely configured, e.g. encryption types | Easy | Low | Low impact on users, unless common browsers are affected. |
| 20 | Block all outbound (uninitiated) traffic | Easy | Med | Should be low impact on users. |
| 20 | Remove common tools: wget/curl etc. | Easy | Med | Easy to do, but possibly disruptive to the web service and users. |
| 20 | Only allow ifriend access where necessary | Easy | Med | From Toby. Low impact on users. |
| 50 | Limit who can publish content/scripts | Medium | High | Difficulty will depend on the site: easy for some, hard for others. Medium to high impact on users, though it depends on the site. |
| 50 | Run tripwire | Medium | High | Assuming false positives are kept to a minimum, should highlight compromised systems quickly. Low impact on users. |
| 50 | Regular scan for known exploitable software, e.g. old WordPress | Medium | High | The tricky part is keeping on top of what is current, and then searching all the relevant locations. Will impact users, forcing them to update old software. |
| 60 | Block outbound DNS queries/force use of local DNS servers | Medium | Med | Low impact on users. |
| 60 | Cosign everything | Medium | Med | Difficulty will depend on the site. Should be low impact on users. |
| 60 | Keep an eye out for SSL vulnerabilities | Medium | Med | From Toby. Keeping an eye out would be a manual thing; presumably the effectiveness depends on what the vulnerability is. Should be low impact on users, unless common browsers are affected. |
| 60 | Use existing auditd to spot things | Medium | Med | The tricky part is spotting odd activity. Low impact on users. |
| 60 | Make sure OS and software are up to date | Medium | Med | OS is easy; software could rely on manual monitoring. Possible impact on users if they require a specific version of some software. |
| 70 | Restricted set of RPMs | Medium | Low | Difficulty depends on the site: simple for a new site, less so for existing ones with user scripts. Impact on users probably low. |
| 70 | Close unnecessary ports/services | Medium | Low | If truly unnecessary, then impact on users is low. |
| 90 | Monitor logs for odd requests/traffic | Hard | Med | The actual monitoring is not too hard; deciding what counts as "odd" traffic is harder. Low impact on users. |
| 90 | Limit what daemons and scripts can do, e.g. some form of jail | Hard | High | Impact on users would depend on the site. |
| 90 | DMZ in the event of a server being compromised | Hard | High | Effectiveness should be high, as it limits the effect of a compromised service, though it wouldn't make the service any less likely to be compromised. Might impact users, depending on the service. |
| 90 | Monitor network traffic. Does a server suddenly start talking to external hosts? | Hard | High | The actual monitoring is not too hard; deciding what counts as "odd" network activity is harder. Low impact on users. |
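To give a flavour of the easiest item above, reducing the Apache identity strings needs only two standard Apache directives. A minimal sketch, not the literal contents of our headers:

    # Send just "Server: Apache", hiding version, OS and module details.
    ServerTokens Prod
    # Don't append version/hostname footers to server-generated pages.
    ServerSignature Off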

Notes from meeting of 28/11/2013

Stephen proposed that the problem of securing web sites/services from the outside world can be dealt with at one or more of three layers.

  1. Application, e.g. WordPress, Drupal, Weblogin, a CGI
  2. Daemon, e.g. Apache, Tomcat, Zope
  3. OS, e.g. DICE, Windows, self-managed

And that there are two tasks that can be applied at each layer: Hardening and Monitoring.

Monitoring is likely to feed back (ideally automatically) into the hardening, fail2ban-style: e.g. by monitoring the logs at the daemon layer we may spot excessive/suspicious connections, and feed that back into the hardening to deny or limit connections from that source.

Working at one layer may be easier or harder than at another, e.g. automatic monitoring at the OS layer (and spotting suspicious activity) is probably easier than monitoring user WordPress posts.

Examples of what we might do at the different layers

Application

Hardening: Restrict what the application can do, e.g. deny certain PHP modules, and only enable the modules needed for that web app.
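One flavour of this, as a hedged sketch with standard mod_php directives; the path is hypothetical, and this confines what PHP can reach rather than which modules are loaded:

    <Directory /var/www/webapp>
        # Hypothetical app location: confine PHP file access
        # to the application's own tree.
        php_admin_value open_basedir /var/www/webapp
        # Stop PHP fetching remote URLs as if they were local files.
        php_admin_flag allow_url_fopen off
    </Directory>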

Monitoring: Detonator scan -> fix vulnerabilities

Daemon

Hardening: Run in a restricted container/environment, e.g. SELinux, i.e. something better than a chroot, but short of running a VM.

Hardening: suexec, particularly on shared systems; do away with processes running as the http user where possible.
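A minimal sketch of per-vhost suexec; the vhost name and user/group are hypothetical, while SuexecUserGroup is the standard mod_suexec directive:

    <VirtualHost *:80>
        # Hypothetical site name and dedicated account.
        ServerName example.inf.ed.ac.uk
        # CGI/SSI for this vhost runs as websvc:websvc rather than
        # the shared http user.
        SuexecUserGroup websvc websvc
    </VirtualHost>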

Monitoring: monitor apache logs:

  • to feed back into hardening, e.g. rate limiting (see the sketch after this list).
  • for odd requests, like logwatch reports. Need to whitelist the web scanner.
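The raw material for both is the ordinary access log. A sketch using stock Apache directives (the log path is illustrative); the standard "combined" format already carries the Referer and User-Agent fields that are useful when judging "odd" requests:

    LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
    CustomLog /var/log/httpd/access_log combined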

OS

Hardening: Turn off kernel module loading.

Hardening: minimal RPM lists, i.e. only those packages required for the service. These take time to evolve for existing sites; new sites are simpler, i.e. start with nothing and only add what's required. The next OS upgrade may be a good time to trim things from current web sites.

Hardening: DNS - only query local DNS

Monitoring: make more use of auditd, e.g. spot setuid usage.

Monitoring: iptables on servers to just log - though perhaps there are performance issues.

Tripwire and AIDE were felt not to offer as much as monitoring our existing auditd logs.

Other Points

Check what IS are doing for their web virtual hosting service.

Report back to dev meeting our thoughts from this meeting.

Rootkit scanning: though such scanners are less useful than hoped, rkhunter is the better of the two we've tried.

From Graham via email: we should consider what impact any changes may have on the usability/usefulness of the server/service for its users.

15 August 2014

Another week goes by with nothing happening, though mostly due to time off this time. Next week I'll definitely get back to the root mail monitoring that I started ages ago, as there's been an incident where it would have been useful.

17 Sept 2014

Having been embarrassed by the lack of progress on this project, I have been trying to spend more time on it, particularly the "quick wins". It was mentioned a couple of times at meetings that there is a desire for dice/options/apacheconf.h to just come with basic, sensible, secure options - which has always been a goal of this project, but there had been little progress towards making that so. So this is what I've been concentrating on.

We've had apacheconf-iplimit.h for a while, but it wasn't applied by default. Looking at it again recently, I've made some changes to make it more of a self-contained solution, such as adding extra logging for denied connections. Having tested it on my own machine, I think it's ready to go. I'll add it to apacheconf.h, but initially people will have to set a #define to enable it; after suitable warning it will become the default (and people will have to #define to not use it).
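For illustration, the two halves look roughly like this. The #define name is hypothetical (not the actual macro in apacheconf.h) and the connection cap is an example value; MaxConnPerIP is the standard mod_limitipconn directive:

    /* In an LCFG Apache header - hypothetical opt-in macro name. */
    #define APACHECONF_IPLIMIT

...and the resulting Apache configuration:

    <IfModule mod_limitipconn.c>
        <Location />
            # Cap simultaneous connections from any single client IP.
            MaxConnPerIP 10
        </Location>
    </IfModule>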

Similarly, I created apacheconf-denyframe.h, which simply sets a header to deny a site from being framed in a browser; this will also go into apacheconf.h.
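The header itself is a single mod_headers line; a sketch of the two standard values (the choice between them is revisited below):

    # Refuse all framing of our pages:
    Header always set X-Frame-Options "DENY"
    # Or the less strict option, allowing framing only by our own pages:
    # Header always set X-Frame-Options "SAMEORIGIN"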

4 Dec 2014

It's embarrassing how little time I'm spending on this. Fortunately it's so embarrassing that I feel I must do something to rectify that! As a first step I've just discovered http://en.wikipedia.org/wiki/Mod_qos (http://mod-qos.sourceforge.net/), which I was really looking at as a possible solution to the ESISS scanner causing load issues on our servers. But it seems to have a pretty comprehensive set of options that could be used to protect web sites, including functionality similar to the mod_limitipconn that we currently use. That's all for now, but hopefully more soon.
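A hedged sketch of the sort of mod_qos directives that caught my eye; the numbers are illustrative, not tested values:

    <IfModule mod_qos.c>
        # Overall cap on concurrent connections for the server.
        QS_SrvMaxConn 256
        # Per-client-IP connection cap, similar to mod_limitipconn.
        QS_SrvMaxConnPerIP 30
        # Minimum transfer rate per connection (bytes/sec), to shed
        # deliberately slow clients.
        QS_SrvMinDataRate 120 1200
    </IfModule>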

19 Jan 2015

Trying to put some finishing touches to the apacheconf default security settings (as we currently have them) before Wednesday's development meeting. While looking at the plan to make INF.ED.AC.UK the default CosignRequireFactor for users of apacheconf-cosign.h: this does work, but it doesn't cascade down to any vhosts; they have to specify their CosignRequireFactor individually. We could look at this, but not in time for the meeting, so it just needs flagging as a potential gotcha. But people using Cosign should be explicit about their access anyway.
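In other words (hypothetical vhost name; CosignRequireFactor is the standard mod_cosign directive):

    # Server-wide default, as planned for apacheconf-cosign.h:
    CosignRequireFactor INF.ED.AC.UK

    <VirtualHost *:443>
        # Hypothetical vhost.
        ServerName example.inf.ed.ac.uk
        # The server-wide default does not cascade to here; each
        # vhost must repeat the directive explicitly.
        CosignRequireFactor INF.ED.AC.UK
    </VirtualHost>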

20 Jan 2015

Think I've finished the tweaks and tested things for the sensible options in apacheconf.h. Created ApacheConfSensible and added mention of CosignRequireFactor to SecuringWebServers. Just doing some notes for tomorrow's talk.

21 Jan 2015

At today's meeting, it was agreed that I should announce the changes and make them at the beginning of next week, so people have time to prepare. I'll also create a blog article about the changes, and add something to computing.help pointing at SecuringWebServers as a list of general steps anyone running a web service might want to take.

It was pointed out that it's not just COs who manage DICE-configured web servers. Some researchers have apacheconf access to stop, start and configure Apache; they will need to know about the changes too, though they may need to ask their supporting COs to SKIP certain things if needed. Another reason why a blog article would be useful.

It was also restated that SecuringWebServers is an expanding list of things people can do, and if any of those are also added to ApacheConfSensible, then they need to be announced, and probably will be made individually "opt-in" at first, before becoming "opt-out".

After the meeting, I've had second thoughts about the DenyFrame header as a default setting. I'm beginning to think that, though not as paranoid as DENY, SAMEORIGIN would be a better default position; people would still be able to use the DENY setting if that was more appropriate for their site. I'll do some tests and then proceed down that route.

24/2/2015

I've not spent much time on this lately either, mostly fixing up some small things that came to light when the sensible header became the default, but nothing too major. I've been asked to give a talk at the upcoming ITPF day in a few weeks' time, so I'll get back to this to see if there are any other quick wins I can add before then.

29/2/2015

There was another SSL security alert the other day (FREAK). Though we weren't affected, it did highlight our continued use of the RC4 cipher, which people reckon is now crackable in a reasonable time. So after some brief testing, this cipher has been removed from our Apache SSL config. The change was put in our apacheconf-ssl.h, so not strictly in our "sensible" header, but it was rolled out as a sensible default to all services using that header.
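For reference, dropping RC4 is a small change to the mod_ssl cipher string. A sketch with standard directives; the exact string in apacheconf-ssl.h may differ:

    # Disable the broken protocol versions outright.
    SSLProtocol all -SSLv2 -SSLv3
    # Prefer strong ciphers and explicitly exclude RC4
    # (plus anonymous and MD5-based suites).
    SSLCipherSuite HIGH:MEDIUM:!aNULL:!MD5:!RC4
    SSLHonorCipherOrder on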

Though I've only spent a fraction of the allocated time (about 6.5 days out of 20) on this project, it is due to complete at the end of March. And though there are things on the list I thought I would have implemented by now but haven't, I think it may be best to call it a day on this project, and either start another version of it at another time or just absorb ticking the other things off the list as part of operational work. Anyway, that will be a discussion to have with Craig and/or the Dev meeting.

15/5/2015

I've not done anything about winding this up yet. I really need to, though, so we can officially look at an EdWeb project. Kenny needs some stuff done for June.

28/6/2015

Finally spent a little time finishing off some ModSecurity rough edges that came to light when "sensible" became the default - the problem was the small default sizes for some upload buffers. There are now simple #defines in the headers to specify larger values, noted on SecuringWebServers#Use_mod_security. Will now try to do the necessary to bring it to sign-off on Wednesday.
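The underlying ModSecurity directives are these; the values shown are illustrative, not the defaults we settled on:

    # Maximum request body size accepted (bytes), including file
    # uploads; raise for sites that take large uploads.
    SecRequestBodyLimit 52428800
    # Separate limit for request bodies excluding files.
    SecRequestBodyNoFilesLimit 1048576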

24/9/2015

All week I've been meaning to look at this, but web things and a holiday tomorrow have got in the way. I did get a chance to look at where I'd got to, and it does seem a bit cheeky to try to get this signed off; but equally, we were never going to achieve all the possible things we could do to improve the security of the web servers, though I had expected to get more of the list done. I should perhaps have another look at the rootkit checker, or at least just point people at the documentation https://wiki.lcfg.org/bin/view/LCFG/RkHunterComponent that the MPU have produced, though it will probably need a fair amount of configuration for each server.

I'll start doing the FinalProjectReport287SecuringWebServers and see what is decided. If it's not accepted for sign-off, perhaps it could just be stalled, as I can see the EdWeb and presumably a future SL7 Servers project becoming more time-critical.

-- NeilBrown - 03 Dec 2013 [date this wiki page was first created!]
