Friday, May 23, 2014

Basic IP Tables


iptables is a simple firewall installed on most Linux distributions. The Linux manual page for iptables says it is an administration tool for IPv4 packet filtering and NAT, which, in plain terms, means it is a tool to filter out and block Internet traffic. The iptables firewall is included by default in the CentOS 6.4 Linux images provided by DigitalOcean.

We will set up the firewall rule by rule. To simplify: a firewall is an ordered list of rules, so when an incoming connection arrives, it is checked against each rule in turn, and the first matching rule decides whether to accept or reject the connection. If no rule matches, the default policy applies.
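
As a quick illustration, we can list the INPUT chain at any time to see the rules in the order they will be evaluated (the --line-numbers switch shows each rule's position in the chain):
iptables -L INPUT --line-numbers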

Decide which ports and services to open


To start with, we need to decide which services to open to the public. Let's use the typical web-hosting server: it is a web and email server, and we also need to let ourselves in via SSH.

First, we want to leave the SSH port open so we can connect to the VPS remotely: that is port 22. We also need ports 80 and 443 (the SSL port) for web traffic. For sending email, we will open port 25 (regular SMTP) and port 465 (secure SMTP). To let users receive email, we will open the usual port 110 (POP3) and port 995 (the secure POP3 port). Additionally, we'll open the IMAP ports, if we have IMAP installed: 143 for IMAP, and 993 for IMAP over SSL.

Note: It is recommended to allow only the secure protocols, but that may not be an option if we cannot influence the mail service users to change their email clients.

Block the most common attacks


DigitalOcean VPSs usually come with an empty configuration: all traffic is allowed. Just to make sure of this, we can flush the firewall rules, that is, erase them all:
iptables -F

We can then add a few simple firewall rules to block the most common attacks and protect our VPS from script kiddies. We can't really count on iptables alone to protect us from a full-scale DDoS or similar, but we can at least fend off the usual network-scanning bots that will eventually find our VPS and start looking for security holes to exploit. First, we start with blocking null packets:
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

We told the firewall to take all incoming packets with no TCP flags set (NONE) and just DROP them. Null packets are, simply put, reconnaissance packets: attackers use them to probe how we configured the VPS and find weaknesses. The next pattern to reject is the syn-flood attack:
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

A syn-flood attack means that attackers open new connections but never complete the handshake (i.e. never follow up with the expected SYN/ACK exchange); they just want to tie up our server's resources. We won't accept such packets. Now we move on to one more common pattern: XMAS packets, another type of recon packet:
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

With these three rules, we have ruled out at least some of the usual patterns used to probe our VPS for vulnerabilities.

Open up ports for selected services


Now we can start adding selected services to our firewall filter. The first one is the localhost interface:
iptables -A INPUT -i lo -j ACCEPT

We tell iptables to append (-A) a rule to the incoming (INPUT) chain matching any traffic that arrives on the localhost interface (-i lo) and to accept (-j ACCEPT) it. Localhost is often used for, e.g., your website or email server communicating with a locally installed database. That way our VPS can use the database, but the database is closed to exploits from the internet.
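
For example, MySQL is commonly configured to listen on the localhost interface only; a typical line from my.cnf (assuming MySQL is the locally installed database) looks like this:
bind-address = 127.0.0.1
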
Now we can allow web server traffic:
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT

We added the two ports (http port 80 and https port 443) to the ACCEPT chain, allowing traffic in on those ports. Now, let's allow users to use our SMTP servers:
iptables -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT

As stated before, if we can influence our users, we should rather use the secure version, but often we can't dictate the terms and clients will connect using port 25, which is much easier to have passwords sniffed from. We now proceed to allow users to read email on the server:
iptables -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 995 -j ACCEPT

Those two rules will allow POP3 traffic. Again, we could increase the security of our email server by offering only the secure version of the service. Now we also need to allow the IMAP mail protocol:
iptables -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT

Limiting SSH access

We should also allow SSH traffic, so we can connect to the VPS remotely. The simple way to do it would be with this command:
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT

We have now told iptables to add a rule accepting TCP traffic incoming to port 22 (the default SSH port). It is advisable to change the SSH configuration to a different port, and this firewall filter should be changed accordingly, but configuring SSH is not part of this article. However, we can do one more thing with the firewall itself: if our office has a permanent IP address, we can allow connections to SSH only from that source. This would allow only people from our location to connect. First, find out your outside IP address. Make sure it is not an address from your LAN, or it will not work. You can do that simply by visiting the whatismyip.com site. Another way to find it is to type:
w

in the terminal. We should see ourselves logged in (if we're the only one logged in) along with our IP address. The output looks something like this:
root@iptables# w
 11:42:59 up 60 days, 11:21,  1 user,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root   pts/0    213.191.xxx.xxx  09:27    0.00s  0.05s  0.00s w
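
If curl is installed on the VPS, a quick alternative is to ask an external service for our address (assuming the ifconfig.me service is reachable):
curl ifconfig.me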

Now, you can create the firewall rule to only allow traffic to SSH port if it comes from one source: your IP address:
iptables -A INPUT -p tcp -s YOUR_IP_ADDRESS -m tcp --dport 22 -j ACCEPT

Replace YOUR_IP_ADDRESS with the actual IP, of course.
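
For example, with the documentation-only address 203.0.113.5 the rule would look like the first line below; a whole office subnet can be allowed using CIDR notation, as in the second:
iptables -A INPUT -p tcp -s 203.0.113.5 -m tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp -s 203.0.113.0/24 -m tcp --dport 22 -j ACCEPT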

We could open more ports on our firewall as needed by changing the port numbers. That way our firewall will allow access only to the services we want. Right now, we need to add one more rule that will allow us to use outgoing connections (e.g. ping from the VPS or run software updates):
iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

It will allow any established connection to receive replies from the other side. Note that we insert (-I) this rule at the top of the chain rather than appending it, so it is checked before everything else. With that in place, we block everything else and allow all outgoing connections:
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP

Now we have our firewall rules in place.
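
For reference, here is the complete ruleset from this article as one sequence (a sketch assuming the default SSH port 22; add -s YOUR_IP_ADDRESS to the SSH rule to restrict it as shown above):
iptables -F
iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 25 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 465 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 110 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 995 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 143 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 993 -j ACCEPT
iptables -A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
iptables -I INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -P OUTPUT ACCEPT
iptables -P INPUT DROP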

Save the configuration


Now that we have all the configuration in place, we can list the rules to see if anything is missing:
iptables -L -n
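
The output will look roughly like this (an illustrative excerpt; the exact rules and order depend on your setup):
Chain INPUT (policy DROP)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0           state RELATED,ESTABLISHED
DROP       tcp  --  0.0.0.0/0            0.0.0.0/0           tcp flags:0x3F/0x00
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0           tcp dpt:22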

The -n switch is there because we need only IP addresses, not domain names. For example, if there is an IP like 69.55.48.33 in the rules, without the switch iptables would look it up and report that it is a digitalocean.com address. We don't need that, just the address itself. Now we can finally save our firewall configuration:
iptables-save | sudo tee /etc/sysconfig/iptables

The iptables configuration file on CentOS is located at /etc/sysconfig/iptables. The above command saved the rules we created into that file. Just to make sure everything works, we can restart the firewall:
service iptables restart

The saved rules will persist even when the VPS is rebooted.


If we make a mistake in the rules and lock ourselves out of the VPS, we can still reach it through the DigitalOcean console instead of SSH. Once connected, we log in as root and issue the following command:
iptables -F

This will flush the filter rules.
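
Because we set the default INPUT policy to DROP, flushing the rules alone still leaves incoming traffic blocked, so we also reset the policy to let ourselves back into the VPS:
iptables -P INPUT ACCEPT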

Conclusion

This article is not exhaustive, and it only scratches the surface of running a simple firewall on a Linux machine, but it will do enough for a typical web and email server scenario for a developer not familiar with the Linux command line or iptables.

Friday, May 9, 2014

Windows Azure Pack for Windows Server

Windows Azure Pack for Windows Server is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your data center. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of the Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud, consistent with the public Windows Azure experience.
Windows Azure Pack includes the following capabilities:
  • Management portal for tenants – a customizable self-service portal for provisioning, monitoring, and managing services such as Web Site Clouds, Virtual Machine Clouds, and Service Bus Clouds.
  • Management portal for administrators – a portal for administrators to configure and manage resource clouds, user accounts, and tenant offers, quotas, and pricing.
  • Service management API – a REST API that helps enable a range of integration scenarios including custom portal and billing systems.
  • Web Site Clouds – a service that helps provide a high-density, scalable shared web hosting platform for ASP.NET, PHP, and Node.js web applications. The Web Site Clouds service includes a customizable web application gallery of open source web applications and integration with source control systems for custom-developed web sites and applications.
  • Virtual Machine Clouds – a service that provides infrastructure-as-a-service (IaaS) capabilities for Windows and Linux virtual machines. The Virtual Machine Clouds service includes a VM template gallery, scaling options, and virtual networking capabilities.
  • Service Bus Clouds – a service that provides reliable messaging services between distributed applications. The Service Bus Clouds service includes queued and topic-based publish/subscribe capabilities.
  • SQL and MySQL – services that provide database instances. These databases can be used in conjunction with the Web Sites service.
  • Automation – the capability to automate and integrate additional custom services into the services framework, including a runbook editor and execution environment.

Monday, May 5, 2014

GlusterFS 3.5 Unveiled

We are pleased to announce that GlusterFS 3.5 is now available. The latest release includes several long-awaited features such as improved logging, file snapshotting, on-wire compression, and at-rest encryption.

You can download GlusterFS 3.5 now.

What’s New?

There’s a lot to like in the new release. Here’s a preview of what GlusterFS 3.5 includes:
  • AFR_CLI_enhancements: Improved logging with more clarity and statistical information. Additional clarity in logging has been on the wish list for the Gluster community for some time. This improvement addresses eight different bugzilla issues in one fell swoop. It allows visibility into why a self-heal process was initiated and which files are affected, for example. Prior to this enhancement, clearly identifying split-brain issues from the logs was often difficult for an end user or administrator, and there was no facility to identify which files were affected by a split brain issue automatically. Remediating split brain without quorum will still require some manual effort, but with the tools provided this will become much simpler.
  • Exposing Volume Capabilities: Provides client-side insight into whether a volume is using the BD translator and, if so, which capabilities are being utilized.
  • File Snapshot: Provides a mechanism for snapshotting individual files. One of the more anticipated features of the 3.5 release, this precedes the upcoming ability to snapshot entire volumes. The most prevalent use case for this feature will be to snapshot running VMs, allowing for point-in-time capture. This also allows a mechanism to revert VMs to a previous state directly from Gluster, without needing to use external tools.
  • GFID Access: A new method for accessing data directly by GFID. With this method, we can consume the data in changelog translator, which is logging ‘gfid’ internally, very efficiently. This feature yet again extends the methods by which you can access Gluster, and should be well-received by members of the developer community, who will have a simple way to perform file operations programmatically within a Gluster volume.
  • On-Wire Compression + Decompression: Use of this feature reduces the overall network overhead for Gluster operations from a client. Depending on workload, this could show dramatic increases in the performance of Gluster volumes. This feature also allows a good trade-off of CPU to network resources, which will be a boon to most users as CPU is not generally being consumed to anywhere near its full potential, whereas network has traditionally been the bottleneck in high performance workloads.
  • Prevent NFS restart on Volume change (Part 1): Previously, any volume change (volume option, volume start, volume stop, volume delete, brick add, etc.) would restart the NFS server, which led to service disruptions. This feature allows modifying certain NFS-based volume options without such interruptions occurring. Part 1 covers anything not requiring a graph change.
  • Quota Scalability: Massively increases the number of supported quota configurations from a few hundred to 65536 per volume.
  • readdir_ahead: Gluster now provides read-ahead support for directories to improve sequential directory read performance.
  • zerofill: Enhancement to allow zeroing out of VM disk images, which is useful in first time provisioning or for overwriting an existing disk.
  • Brick Failure Detection: Detecting failures on the filesystem that a brick uses makes it possible to handle errors that are caused from outside of the Gluster environment.
  • Disk encryption: Implement the previous work done in HekaFS into Gluster. This allows a volume (or per-tenant part of a volume) to be encrypted “at rest” on the server using keys only available on the client. [Note: We encrypt only content of regular files. File names are not encrypted! Also, encryption does not work in NFS mounts.]
  • Geo-Replication Enhancement: A massive rewrite of the existing geo-replication architecture, this set of enhancements brings geo-replication to an entirely new level. Previously, the geo-replication process, gsyncd, was a single point of failure as it only ran on one node in the cluster. If the node running gsyncd failed, the entire geo-replication process was offline until the issue was addressed. The original geo-rep was a vast improvement over plain rsync checksumming and made intelligent use of xattrs to identify a reduced list of candidates for block- or file-level copy, massively improving on full directory crawls performed by rsync. In this latest incarnation, the improvement is extended even further by foregoing use of xattrs to identify change candidates and directly consuming from the changelog, which will improve performance twofold: one, by keeping a running list of only those files that may need to be synced; and two, the changelog is maintained in memory, which will allow near instant access to which data needs to be changed and where by the gsync daemon.