Monday, July 22, 2013
Sendmail
What Sendmail does:
- Listen on network ports for mail.
- Sort mail and deliver it locally or externally to other servers.
- Append mail to files or pipe it through other programs.
- Queue mail if immediate delivery fails.
- Convert email addresses to and from user names, and handle mailing lists.
- Read rules for special mail handling, so it can try to catch spam or check messages for correctness.
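For example, the file, program, and mailing-list delivery described above can all be seen in /etc/aliases. A minimal sketch (the alias names and script path are made up for illustration):

# pipe mail through a program
support: "|/usr/local/bin/handle-ticket"
# append mail to a file
archive: /var/mail/archive
# a simple mailing list
staff: alice, bob, carol@example.com

After editing the file, rebuild the alias database with:

# newaliases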
Monday, July 15, 2013
Basics of Shell Programming
- To get a Linux shell, you need to start a terminal.
- To see what shell you have, run: echo $SHELL.
- In the shell, the dollar sign ($) introduces a reference to a shell variable.
- The echo command simply prints back whatever arguments you give it.
- The pipe operator (|) comes to the rescue when chaining commands: it feeds the output of one command into the next.
- Linux commands have their own syntax, and mistakes are not forgiven. If you get a command wrong, you won't flunk or damage anything, but it won't work.
- #!/bin/sh is called the shebang. Written at the top of a shell script, it tells the system to run the script with the program /bin/sh.
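Putting these pieces together, here is a minimal sketch of a shell script (the file name hello.sh and the variable name are my own illustration):

---***---
#!/bin/sh
# The shebang above tells the system to run this script with /bin/sh.
name="world"                   # define a shell variable
echo "Hello, $name"            # $name expands to the variable's value
echo "one two three" | wc -w   # the pipe feeds echo's output to wc, printing 3
---***---

Run it with:

$ sh hello.sh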
Tuesday, July 9, 2013
Xen Server : Features Comparison
Feature | Xen 4.0 | Xen 4.1 | Xen 4.2 | Xen 4.3
Initial Release | 7-Apr-10 | 25-Mar-11 | 17-Sep-12 | 2-Jul-13
Feature List |  |  | FL 4.2 | FL 4.3
Release Notes | RN 4.0 | RN 4.1 | RN 4.2 | RN 4.3

Supported Mainline Architectures
IA-32 | ✓ | ✓ | ✓ | ✓
X86-64 | ✓ | ✓ | ✓ | ✓
Itanium | ✓ | ✓ deprecated in this release | ✓ deprecated |
ARM v7 + Virtualization Extensions |  |  |  | ✓ tech preview [ 6 ]
ARM v8 |  |  |  | ✓ tech preview [ 6 ]

Guest Types
For X86 Architectures:
Paravirtualised (traditional Xen PV guest) | ✓ | ✓ | ✓ | ✓
HVM Guest [ 1 ] (fully virtualised guest using hardware virtualisation extensions) | ✓ | ✓ | ✓ | ✓
PV-on-HVM Guest [ 1 ] (fully virtualised guest using PV extensions/drivers for improved performance) | ✓ | ✓ | ✓ | ✓
For ARM Architectures:
ARM Guest (optimal combination of full virtualization and PV extensions) |  |  |  | ✓ tech preview [ 6 ]

Host Limits
For X86 Architectures:
Physical CPUs | 128 [ 0 ] | >255 | 4095 | 4095
Physical RAM | 1TB | 5TB | 5TB | 16TB
For ARM Architectures:
Physical CPUs |  |  |  | 8
Physical RAM |  |  |  | 16GB

Guest Limits
X86 PV Guest Limits:
Virtual CPUs | 128 | >255 | 512 | 512
Virtual RAM | 512GB | 512GB | 512GB | 512GB
X86 HVM Guest Limits:
Virtual CPUs | 128 | 128 | 256 | 256
Virtual RAM | 1TB | 1TB | 1TB | 1TB
ARM Guest Limits:
Virtual CPUs |  |  |  | 8
Virtual RAM |  |  |  | 16GB

Toolstack
Built-in:
xend / xm | ✓ | ✓ | ✓ deprecated in this release | ✓ deprecated
XL | ✓ initial implementation | ✓ preview release | ✓ | ✓
Qemu based disk backend (qdisk) for XL |  | ✓ [ 5 ] | ✓ [ 5 ] | ✓ [ 5 ]
XL Open vSwitch integration |  |  |  | ✓ tech preview [ 7 ]
3rd Party:
libvirt driver for XL |  | ✓ | ✓ | ✓

Features
Advanced Memory Management:
Memory Ballooning | ✓ | ✓ | ✓ | ✓
Memory Sharing (allow sharing of identical pages between HVM guests) | ✓ tech preview | ✓ tech preview | ✓ tech preview [ 3 ] | ✓ tech preview [ 3 ]
Memory Paging (allow pages belonging to HVM guests to be paged to disk) | ✓ tech preview | ✓ tech preview | ✓ tech preview [ 3 ] | ✓ tech preview [ 3 ]
TMEM - Transcendent Memory | ✓ experimental [ 2 ] | ✓ experimental [ 2 ] | ✓ experimental [ 2 ] | ✓ experimental [ 2 ]
Resource Management:
Cpupool (advanced partitioning) |  | ✓ | ✓ | ✓
Credit 2 Scheduler (designed for latency-sensitive workloads and very large systems) |  | ✓ prototype | ✓ prototype | ✓ experimental
NUMA scheduler affinity |  |  |  | ✓
Scalability:
1GB/2MB super page support |  | ✓ | ✓ | ✓
Deliver events to PVHVM guests using Xen event channels |  | ✓ | ✓ | ✓
Interoperability / Hardware Support:
Nested Virtualisation (running a hypervisor inside an HVM guest) |  |  | ✓ experimental | ✓ experimental
HVM PXE Stack | gPXE | iPXE | iPXE | iPXE
Physical CPU Hotplug | ✓ | ✓ | ✓ | ✓
Physical Memory Hotplug | ✓ | ✓ | ✓ | ✓
Support for PV kernels in bzImage format | ✓ | ✓ | ✓ | ✓
PCI Passthrough | ✓ | ✓ | ✓ | ✓
X86 Advanced Vector eXtension (AVX) |  | ✓ [ 4 ] | ✓ | ✓
High Availability and Fault Tolerance:
Live Migration, Save & Restore | ✓ | ✓ | ✓ | ✓
Remus Fault Tolerance | ✓ | ✓ | ✓ | ✓
vMCE (forward machine check exceptions to appropriate guests) | ? | ? | ✓ | ✓
Network and Storage:
Blktap2 | ✓ | ✓ | ✓ | ✓
Online resize of virtual disks | ✓ | ✓ | ✓ | ✓
Security:
Driver Domains | ✓ | ✓ | ✓ | ✓
Device Model Stub Domains | ✓ | ✓ | ✓ | ✓
Memaccess API (enabling integration of 3rd party security solutions into Xen virtualized environments) |  | ✓ | ✓ | ✓
XSM & FLASK (mandatory access control policy providing fine-grained controls over Xen domains, similar to SELinux) | ✓ | ✓ | ✓ | ✓
XSM & FLASK support for IS_PRIV |  |  |  | ✓
vTPM Support | ✓ | ✓ | ✓ | ✓ updates and new functionality
Tooling:
gdbsx (debugger to debug ELF guests) | ✓ | ✓ | ✓ | ✓
vPMU (Virtual Performance Management Unit for HVM guests) | ✓ [ 4 ] | ✓ [ 4 ] | ✓ [ 4 ] | ✓ [ 4 ]
Serial console | ✓ | ✓ | ✓ | ✓ add EHCI debug support
xentrace (performance analysis) | ✓ | ✓ | ✓ | ✓

Device Models and Virtual Firmware for HVM guests
For X86 Architectures:
Traditional Device Model (device emulator based on Xen fork of Qemu) | ✓ | ✓ | ✓ | ✓
Qemu Upstream Device Model (device emulator based on upstream Qemu) |  |  | ✓ tech preview | ✓ default, unless stubdomains are used
ROMBIOS (BIOS used with traditional device model only) | ✓ | ✓ | ✓ | ✓
SeaBIOS (BIOS used with upstream qemu device model and XL only) |  |  | ✓ | ✓
OVMF/Tianocore (UEFI firmware used with upstream qemu device model and XL only) |  |  | ✓ experimental [ 4 ] | ✓ experimental [ 4 ]

PV Bootloader support
For X86 Architectures:
PyGrub support for GRUB 2 | ✓ | ✓ | ✓ | ✓
PyGrub support for /boot on ext4 | ✓ | ✓ | ✓ | ✓
pvnetboot support (bootloader supporting network boot of PV guests) |  |  | ✓ | ✓
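For readers moving off the deprecated xend/xm toolstack shown above, xl was designed with an xm-compatible command line, so the switch is mostly mechanical. A minimal sketch (the guest config path /etc/xen/guest.cfg is assumed for illustration):

$ xl info                        # host details, replaces 'xm info'
$ xl list                        # running domains, replaces 'xm list'
$ xl create /etc/xen/guest.cfg   # start a guest, replaces 'xm create'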
Wednesday, July 3, 2013
Fedora 19 New Features : NFStest
NFStest provides a set of tools for testing either the NFS client or the NFS server; the included tests focus mainly on exercising the client.
Test utilities package
Provides a set of tools for testing either the NFS client or the NFS server; most of the functionality is focused on testing the client. These tools include the following:
- Process command line arguments
- Provide functionality for PASS/FAIL
- Provide test grouping functionality
- Provide multiple client support
- Logging mechanism
- Debug info control
- Mount/Unmount control
- Create files/directories
- Provide mechanism to start a packet trace
- Provide mechanism to simulate a network partition
- Support for pNFS testing
Installation
- Install the rpm as root
- # rpm -i NFStest-1.0.1-1.noarch.rpm
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
- Untar the tarball
- $ cd ~
- $ tar -zxvf NFStest-1.0.1.tar.gz
- The tests can run without installation; just set the PYTHONPATH environment variable:
- $ export PYTHONPATH=~/NFStest-1.0.1
- $ cd NFStest-1.0.1/test
- $ ./nfstest_pnfs --help
- Or install to standard python site-packages and executable directories:
- $ cd ~/NFStest-1.0.1
- $ sudo python setup.py install
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
- Clone the git repository
- $ cd ~
- $ git clone git://git.linux-nfs.org/projects/mora/nfstest.git
- The tests can run without installation; just set the PYTHONPATH environment variable:
- $ export PYTHONPATH=~/nfstest
- $ cd nfstest/test
- $ ./nfstest_pnfs --help
- Or install to standard python site-packages and executable directories:
- $ cd ~/nfstest
- $ sudo python setup.py install
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
Setup
Make sure the user running the tests can run commands using 'sudo' without the need for a password. Also make sure that user can run commands remotely using 'ssh' without a password; this is only needed for tests which require multiple clients (a sketch of this setup follows the mount commands below).
Create the mount point specified by the --mtpoint (default: /mnt/t) option on all the clients:
- $ sudo mkdir /mnt/t
- $ sudo chmod 777 /mnt/t
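For reference, a minimal sketch of that passwordless setup, assuming a test user named tester and a second client at 192.168.0.20 (both values are made up for illustration):

# echo 'tester ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/tester
# chmod 0440 /etc/sudoers.d/tester
$ ssh-keygen -t rsa
$ ssh-copy-id tester@192.168.0.20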
Examples
- nfstest_pnfs
- The only required option is --server
- $ nfstest_pnfs --server 192.168.0.11
- nfstest_cache
- Required options are --server and --client
- $ nfstest_cache --server 192.168.0.11 --client 192.168.0.20
- Testing with different values of --acmin and --acmax (this takes a long time)
- $ nfstest_cache --server 192.168.0.11 --client 192.168.0.20 --acmin 10,20 --acmax 20,30,60,80
- nfstest_delegation
- The only required option is --server, but then only the basic delegation tests will be run. In order to run the recall tests, the --client option must be used.
- $ nfstest_delegation --server 192.168.0.11 --client 192.168.0.20
- nfstest_dio
- The only required option is --server
- $ nfstest_dio --server 192.168.0.11
- nfstest_posix
- The only required option is --server
- $ nfstest_posix --server 192.168.0.11
Friday, June 14, 2013
RedHat Enterprise Virtualization 3.2 release
Red Hat Enterprise Virtualization 3.2 is designed to meet an increasing industry need for open virtualization solutions without compromising performance, scalability, security or features. Red Hat Enterprise Virtualization is also an essential component of Red Hat Cloud Infrastructure, which combines traditional datacenter virtualization features provided by Red Hat Enterprise Virtualization, cloud-enabled infrastructure provided by Red Hat OpenStack, and hybrid cloud management provided by Red Hat CloudForms, to deliver a comprehensive open hybrid cloud solution for customers seeking to adopt cloud-enabled applications and infrastructure in coexistence with their traditional infrastructure and migrate over time as their needs demand.
Red Hat Enterprise Virtualization 3.2 brings a vast array of new features, including:
- Fully supported Storage Live Migration, allowing virtual machine images to be moved from one storage domain to another without disrupting service;
- Support for the latest industry-standard processors from Intel and AMD, including Intel Haswell series and AMD Opteron G5 processors; and
- Enhancements in storage management, networking management, fencing and power management, Spice console enhancements, logging and monitoring, and more.
New Third-Party Plug-ins
A key feature in Red Hat Enterprise Virtualization 3.2 is the availability of a new third-party plug-in framework. Developed through community and vendor collaboration and contributions, the plug-in framework enables third parties to integrate new features and actions directly into the Red Hat Enterprise Virtualization management user interface. New menu items, panes, and dialog boxes allow users to access the new functionality the same way they use Red Hat Enterprise Virtualization's native functionality. The framework continues to evolve based on vendor and community requests, and any vendor may choose to consume the plug-in framework and add unique functionality to Red Hat Enterprise Virtualization.
Red Hat is already collaborating with several industry leaders to integrate their solutions with Red Hat Enterprise Virtualization via the new plug-in, including high availability and disaster recovery solutions from NetApp, Symantec, and Insight Control from HP:
- NetApp: Virtual Storage Console (VSC) for Red Hat Enterprise Virtualization helps improve efficiency while reducing cost and complexity in virtual environments using NetApp storage. VSC provides integrated virtual storage management, including rapid domain provisioning and cloning of hundreds of virtual machines, while enabling Red Hat administrators to access and execute all of these capabilities using the standard Red Hat Enterprise Virtualization management interface. NetApp is now accepting applications for the VSC for Red Hat Enterprise Virtualization beta.
- Symantec: Veritas Cluster Server from Symantec, the comprehensive disaster recovery and high availability solution, will support Red Hat Enterprise Virtualization 3.2 environments. Once integration is complete, Veritas Cluster Server will offer "push button" disaster recovery orchestration, and together with the Veritas Operations Manager Recovery Plan feature completely automate the failover of a Red Hat Enterprise Virtualization environment over to a disaster recovery site. This includes guests, guest network reconfiguration, and storage reconfiguration. Veritas Cluster Server is designed to provide high availability and disaster recovery for databases, custom applications, and complete multi-tiered applications across physical and virtual environments over any distance.
- HP: Insight Control for Red Hat Enterprise Virtualization is a new plug-in currently being developed by HP that enables deployments using HP ProLiant servers to view a wealth of information provided by the HP Insight Control platform. Within the Red Hat Enterprise Virtualization graphical interface, administrators can quickly and easily obtain detailed information on the health of the HP server hardware.
Monday, June 3, 2013
Samba 4 as AD Server.
ENVIRONMENT
-----------
The new server runs CentOS 6, version 6.3, 64-bit, minimal install with current updates applied.
Samba4 version 4.0.0beta5
BIND version 9.8.2

INSTALLATION
------------
I was disappointed to say the least when I discovered that the Samba4 package included with RHEL/CentOS is basically an empty shell. Even though it's still at beta release, RH could have built it and put it in the EPEL repo. Or even the folks at CentOS could have added it to their centosplus or extras repo. Although I am capable, the thought of building Samba4 from source did not excite me, and it's a maintenance nightmare for upgrades. With some searching I found that the nice people at SOGo have built Samba4 for several distributions, including RPMs for RHEL/CentOS; I believe it's required for their OpenChange groupware product. The Samba4 RPM is now in their main repo and can be easily installed via yum. See their website (www.sogo.nu) for instructions.

# yum install samba4

This required only one dependency on my system - perl-Parse-Yapp. I therefore commenced the Samba4 HowTo at step 4 (provisioning).

EXAMPLE
-------
For this example I will assume a domain name of DEMO, AD domain demo.local, server IP 192.168.0.1.

DNS
---
Setup /etc/resolv.conf to work correctly:

domain demo.local
nameserver 192.168.0.1

or, if using multiple domains:

search demo.local demo.com
nameserver 192.168.0.1

Because of the AD interaction with DNS it is important to have BIND installed and working before attempting Samba4. But do NOT create a zone file for the demo.local AD domain. This will be done by the Samba4 DLZ backend. I initially setup zone files for demo.com, demo.local and a reverse zone. The demo.local zone file clashed with Samba4 so I removed it.

PROVISIONING
------------
To provision Samba4:

# provision --realm=demo.local --domain=DEMO --adminpass=secret --server-role=dc --host-ip=192.168.0.1

I have IP aliases setup for virtual hosting so I specify here which address I want to use (just in case). Provision worked OK. The following lines were added to /etc/named.conf as per the instructions:

tkey-gssapi-keytab "/var/lib/samba4/private/dns.keytab";
include "/var/lib/samba4/private/named.conf";

The kerberos file was copied to /etc:

# cp /var/lib/samba4/private/krb5.conf /etc/

replacing the default (cp will keep the correct SELinux file context).

BEFORE STARTUP
--------------
Before starting Samba4 you should be aware that you will have multiple problems with iptables and SELinux, so turn them off for now. More on this later. Stop iptables and set SELinux to permissive mode:

# service iptables stop
# setenforce 0

STARTING SAMBA
--------------
The first problem I found when trying to start Samba4 was the init script - it doesn't work. The script tries to start smbd, but somewhere in the development the name was changed to samba, and the script was never updated. There were other aspects of the script I didn't like either, so I rewrote the init script as follows:

---***---
#! /bin/bash
#
# samba4        This shell script takes care of starting and stopping
#               the Samba4 server
#
# chkconfig: - 91 35
# description: Samba provides file and print services to SMB/CIFS clients. \
#              Version 4 adds an Active Directory domain controller.
# config: /etc/samba4/smb.conf
# config: /etc/sysconfig/samba4
# pidfile: /var/run/samba4/samba.pid
#
### BEGIN INIT INFO
# Provides:
# Should-Start:
# Short-Description: Start and stop the Samba4 server
# Description: Samba provides file and print services to SMB/CIFS clients.
#              Version 4 adds an Active Directory domain controller.
### END INIT INFO

# Source function library.
. /etc/rc.d/init.d/functions

if [ -f /etc/sysconfig/samba4 ]; then
   . /etc/sysconfig/samba4
fi

# Avoid using root's TMPDIR
unset TMPDIR

prog=samba4
samba=/usr/sbin/samba
pidfile=/var/run/samba4/samba.pid
smbdpidfile=/var/run/samba4/smbd.pid

# Check that smb.conf exists.
[ -f /etc/samba4/smb.conf ] || exit 6

RETVAL=0

start() {
        echo -n $"Starting $prog: "
        daemon --pidfile=$pidfile $samba $SMBDOPTIONS
        RETVAL=$?
        echo
        return $RETVAL
}

stop() {
        echo -n $"Shutting down $prog: "
        killproc -p $pidfile $samba
        RETVAL=$?
        echo
        [ $RETVAL -eq 0 ] && rm -f $pidfile && rm -f $smbdpidfile
        return $RETVAL
}

# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  status)
        status -p $pidfile $samba
        RETVAL=$?
        ;;
  restart|reload)
        stop
        sleep 1
        start
        ;;
  *)
        echo $"Usage: $0 {start|stop|restart|status}"
        exit 1
esac

exit $RETVAL
---***---

The next problem was the permissions on the /var/lib/samba4/private directory. This is created as owner root:root mode 0700, as you would expect with Samba3. However, Samba4 puts AD domain files under here, and BIND cannot access them because of the permissions. Changing the owner to root:named mode 0750 fixes this issue.

# chgrp named /var/lib/samba4/private
# chmod g+rx /var/lib/samba4/private

Restart BIND and start Samba4:

# service named restart
# service samba4 start

Both should startup successfully.

TESTING
-------
I tested my Samba4 server using a Windows 7 VM. I was able to join the demo.local domain, then login as DEMO\Administrator. Success! I installed the RSAT tools for Windows 7 and enabled the AD DS snap-in tools, Group Policy tools and DNS server tools. This provides just about everything needed to manage the AD domain - Users and Computers, Group Policy Manager, DNS Manager - all done from a Windows client using the native Windows tools. Very cool!

I created a dummy user account and a couple of GPOs for roaming profiles and folder redirection, and tested those with the user account. It is important to mention here that, after creating shares for the user profiles and home directories and defining them in smb.conf, you should browse those top-level shares in Windows as Administrator and add the group Domain Users to the security properties with read/write access. Otherwise the user will not be able to write to them. Note: I created the user home directory share as /home/DEMO to segregate from unix users.

IPv6
----
Despite setting IPv6 to NO in the network script, netstat will still show everything listening on IPv6 addresses. Honestly, why can't RH provide one simple setting to turn IPv6 off globally? To disable this nuisance for Samba, add this to smb.conf and restart Samba4:

interfaces = 127.0.0.1 192.168.0.1
bind interfaces only = yes

AD DNS
------
When using the DNS Manager to add new entries to the demo.local domain I noticed something peculiar. The SOA record, created by the provision step, had the default TTL=0. Therefore, any records created will inherit this. According to MS AD documentation this should be 3600. Attempts to change this from DNS Manager failed. Is this a bug?
I then attempted to change it using samba-tool, but this seems to be rather broken, as it just spat out errors. And the help is far from helpful. I eventually found that I can use nsupdate (part of BIND), but requests to update the AD domain must be authenticated by kerberos (using kinit). To do this, install the krb5-workstation package:

# yum install krb5-workstation
# kinit Administrator
Password for Administrator@DEMO.LOCAL: secret

Update the existing records by adding them again with new values:

# nsupdate -g
> update add demo.local 3600 SOA server.demo.local hostmaster.demo.local 900 600 86400 3600
> update add demo.local 3600 NS server.demo.local
> update add demo.local 3600 A 192.168.0.1
> update add server.demo.local 3600 A 192.168.0.1
> send

Note that I successfully used the DNS Manager to add other A, CNAME and MX records to demo.local.

NTP
---
I decided to build a local RPM with updated NTP to support the signed ntp required for AD clients. I created the build user and environment (plenty of info available on how to do this) and installed rpm-build, gcc and make, with any dependencies:

# yum install rpm-build gcc make

I downloaded the ntp-4.2.4p8 SRPM from CentOS and the ntp-4.2.6p5 tarball from ntp.org, installed the SRPM and put the new tarball under SOURCES. The only edits I made to the ntp.spec file were:

- Update version/release numbers
- Comment out all patch lines
- Add --enable-ntp-signd after --enable-linuxcaps
- Add %{_sbindir}/sntp after %{_ntptime}

The HowTo had some other edits related to man8 but I ignored those because they didn't seem right to me. Run the build (not as root!):

$ rpmbuild -ba ntp.spec

This initially produced some errors, as expected, for required dependencies. I installed those packages, but removed them later to keep the system clean (yum history undo is great for this). The build ran successfully, producing RPMs for ntp and ntpdate to install, upgrading the existing 4.2.4 versions:

# yum install ./ntp-4.2.6* ./ntpdate-4.2.6*

I then edited the /etc/ntp.conf file as follows and restarted ntpd:

restrict 192.168.0.0 mask 255.255.255.0 mssntp
ntpsigndsocket /var/run/samba4/ntp_signd/

The Windows 7 client should now sync its clock to the DC. From Windows 7 run:

C:> w32tm /resync /rediscover

This should report successful completion. If not, check:

C:> w32tm /query /configuration

The time provider should be type NT5DS. If not, try:

C:> w32tm /config /syncfromflags:domhier /update
C:> net stop w32time && net start w32time

When it's working, the command

C:> w32tm /monitor

will report on the time source of the DC. Initially my Windows 7 VM would not sync. I believe the problem was that, before joining the domain, the VM had 192.168.0.1 set as an additional time source. After joining a domain this option is no longer visible, but the setting is still trapped somewhere. Searching the registry did not help. In the end I removed the VM from the domain, which made the additional time source option visible again, deleted the setting that was still there, then joined the domain again. After that it worked.

I will also mention that I read comments saying that the ownership and mode of the ntp_signd directory had to be changed to give ntpd write access. I tried this and I disagree. In fact, if you play with the permissions, Samba4 will refuse to create the socket. The error is clearly shown in the /var/log/samba4/samba.log file. Samba4 creates the ntp_signd directory as root:root 0755 and this works.
AVAHI
-----
I read in a couple of places that Avahi can be used as a replacement for nmbd, which is currently missing in Samba4. I installed avahi:

# yum install avahi

A couple of dependencies were required. I created the file /etc/avahi/services/samba.service as per examples, with port 139, and started the service. But my Windows 7 VM will not see the server when browsing the network. If I run avahi-browse from the server it reports the MS Windows network demo.local with server.demo.local at 192.168.0.1, as might be expected. I am not sure why this does not work. Maybe someone has the answer to this.

IPTABLES
--------
With the Samba4 server and Windows 7 client running it's time to try enabling the security features again. I read a number of websites listing port requirements for AD/Samba4 and not one of them was complete and correct. This list is produced from netstat and by using Wireshark to monitor the traffic. To use netstat run:

# netstat -lntup | grep -e samba -e smbd

The ports needed are:

53, TCP & UDP (DNS)
88, TCP & UDP (Kerberos authentication)
135, TCP (MS RPC)
137, UDP (NetBIOS name service)
138, UDP (NetBIOS datagram service)
139, TCP (NetBIOS session service)
389, TCP & UDP (LDAP)
445, TCP (MS-DS AD)
464, TCP & UDP (Kerberos change/set password)
1024, TCP (this is a strange one but AD is using it)

Add these to iptables:

# iptables -A INPUT -p tcp --dport 53 -j ACCEPT
# iptables -A INPUT -p udp --dport 53 -j ACCEPT
# iptables -A INPUT -p udp --dport 137:138 -j ACCEPT
# iptables -A INPUT -p tcp --dport 139 -j ACCEPT
# iptables -A INPUT -p tcp --dport 445 -j ACCEPT
# iptables -A INPUT -p tcp --dport 135 -j ACCEPT
# iptables -A INPUT -p tcp --dport 88 -j ACCEPT
# iptables -A INPUT -p udp --dport 88 -j ACCEPT
# iptables -A INPUT -p tcp --dport 464 -j ACCEPT
# iptables -A INPUT -p tcp --dport 389 -j ACCEPT
# iptables -A INPUT -p udp --dport 389 -j ACCEPT
# iptables -A INPUT -p tcp --dport 1024 -j ACCEPT

Save the changes and start the service:

# service iptables save
# service iptables start

SELINUX
-------
Although I am experienced with unix/linux I had not used SELinux before setting up this server. I knew that it's a pain, but I wanted to persevere with it, so I did a lot of reading to learn how to use it. However, in my opinion, the people who manage SELinux really need to do something to improve its usability and administration because, honestly, it's awful. It's little wonder that many administrators just turn it off. And making GUI tools is not the answer. Administrators will not (and should not) install a desktop like Gnome on a server, so the GUI tools are unavailable.

The problem with Samba4 and SELinux is that SELinux has no pre-defined policy at this time for Samba4. So I started out to make one. With SELinux in permissive mode it allows Samba4 to run but records all the things it doesn't like in the audit.log file. To examine this log I installed the policycoreutils-python tools package and its dependencies:

# yum install policycoreutils-python

To get a list of events run:

# aureport -a

and for more detailed messages:

# ausearch -m avc

Piping the messages through audit2allow generates a list of rules that would allow the actions. The initial results seemed a bit overwhelming. To produce something more reasonable I decided to utilise the file contexts defined in the Samba3 policy as a basis and apply them to the Samba4 installation.
To list these contexts:

# semanage fcontext -l | grep -e samba -e smbd

I modified these to suit the Samba4 installation and defined a set of rules to relabel the Samba4 directories accordingly. These are applied as follows:

# semanage fcontext -a -t samba_initrc_exec_t "/etc/rc\.d/init\.d/samba4"
# semanage fcontext -a -t samba_etc_t "/etc/samba4(/.*)?"
# semanage fcontext -a -t samba_var_t "/var/lib/samba4(/.*)?"
# semanage fcontext -a -t named_var_run_t "/var/lib/samba4/private/dns(/.*)?"
# semanage fcontext -a -t named_conf_t "/var/lib/samba4/private/named.conf.*"
# semanage fcontext -a -t named_conf_t "/var/lib/samba4/private/dns.keytab"
# semanage fcontext -a -t samba_unconfined_script_exec_t "/var/lib/samba4/sysvol/[^/]*/scripts(/.*)?"
# semanage fcontext -a -t winbind_var_run_t "/var/lib/samba4/winbindd_privileged(/.*)?"
# semanage fcontext -a -t samba_log_t "/var/log/samba4(/.*)?"
# semanage fcontext -a -t smbd_var_run_t "/var/lock/samba4(/.*)?"
# semanage fcontext -a -t smbd_var_run_t "/var/run/samba4(/.*)?"
# semanage fcontext -a -t ntpd_var_run_t "/var/run/samba4/ntp_signd(/.*)?"
# semanage fcontext -a -t winbind_var_run_t "/var/run/samba4/winbindd(/.*)?"
# semanage fcontext -a -t winbind_var_run_t "/var/run/samba4/winbindd_privileged(/.*)?"

Then apply the new contexts:

# restorecon -v /etc/rc.d/init.d/samba4
# restorecon -R -v /etc/samba4
# restorecon -R -v /var/lib/samba4
# restorecon -R -v /var/log/samba4
# restorecon -R -v /var/lock/samba4
# restorecon -R -v /var/run/samba4

Locally defined file contexts are stored in /etc/selinux/targeted/contexts/files/file_contexts.local, but this file cannot be edited by hand. Be aware that the order these are entered IS important. With pre-defined policies SELinux will apply the rules in a logical order, with more specific rules taking preference over less specific ones. This is not the case with locally created rules. They are applied sequentially as they are entered, so if the order is wrong you get the wrong result. That means having to delete some or all of the rules and enter them again in the correct order. I actually created a script to do this tedious task.

With the new file contexts in place I allowed SELinux to gather log data for a while, then used audit2allow to produce a file for generating a policy module:

# ausearch -m avc -ts dd/mm/yy | audit2allow -m samba4local > samba4local.te

I edited the samba4local.te file to remove the unwanted commentary.
The result looked like this:

---***---
module samba4local 1.0;

require {
        type initrc_t;
        type named_t;
        type named_var_run_t;
        type ntpd_t;
        type ntpd_var_run_t;
        type smbd_t;
        type samba_unconfined_script_exec_t;
        type urandom_device_t;
        type var_lock_t;
        class unix_stream_socket connectto;
        class unix_dgram_socket sendto;
        class sock_file write;
        class chr_file write;
        class file { read write getattr open lock };
        class dir { read search };
}

#============= named_t ==============
allow named_t urandom_device_t:chr_file write;

#============= ntpd_t ==============
allow ntpd_t initrc_t:unix_stream_socket connectto;
allow ntpd_t ntpd_var_run_t:sock_file write;

#============= smbd_t ==============
allow smbd_t initrc_t:unix_dgram_socket sendto;
allow smbd_t initrc_t:unix_stream_socket connectto;
allow smbd_t named_var_run_t:file { read write getattr open lock };
allow smbd_t samba_unconfined_script_exec_t:dir read;
allow smbd_t urandom_device_t:chr_file write;
allow smbd_t var_lock_t:dir search;
---***---

Compile the module and create the policy package:

# checkmodule -M -m -o samba4local.mod samba4local.te
# semodule_package -o samba4local.pp -m samba4local.mod

Load the module:

# semodule -i samba4local.pp

With this policy in place SELinux should be able to run in enforcing mode without affecting Samba. I also enabled the following SELinux booleans:

# setsebool -P samba_domain_controller on
# setsebool -P samba_enable_home_dirs on
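As a final check, one way to confirm the policy is complete (standard SELinux and init commands; a sketch): re-enable enforcing mode, restart Samba4, exercise a client login, and look for fresh denials:

# setenforce 1
# service samba4 restart
# ausearch -m avc -ts recent

No new AVC records here means the module covers everything Samba4 touches.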
Wednesday, May 29, 2013
DATA DEDUPLICATION
Data deduplication technology has gained rapid acceptance in the IT industry over the past several years for its ability to dramatically reduce the amount of backup data stored by eliminating redundant data. In its simplest terms, data deduplication maximizes storage utilization while allowing organizations to retain more backup data on disk for longer periods of time. This tremendously improves the efficiency of disk-based backup, lowering storage costs and changing the way data is protected.
Although data deduplication solutions vary in terms of how deduplication is accomplished, in general, data deduplication works by comparing new data with existing data from previous backup or archiving jobs, and eliminating the redundancies. Because only unique blocks are transferred, replication bandwidth requirements are reduced.
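To make the block-comparison idea concrete, here is a toy sketch in shell (the store path and the 4KB block size are my own assumptions, not taken from any product):

---***---
#!/bin/sh
# Toy illustration of hash-based block deduplication.
# Splits a file into fixed 4KB blocks, stores each unique block
# once under its SHA-256 hash, and writes an ordered block list
# (a "recipe") from which the file could later be reconstructed.
store=/tmp/dedup-store          # assumed location for unique blocks
mkdir -p "$store"
split -b 4096 "$1" /tmp/block.
for b in /tmp/block.*; do
    h=$(sha256sum "$b" | cut -d' ' -f1)
    # only store blocks we have never seen before
    [ -f "$store/$h" ] || cp "$b" "$store/$h"
    echo "$h"
done > "$1.recipe"
rm -f /tmp/block.*
---***---

Run it against two similar backup files and compare the size of /tmp/dedup-store with the combined input size; blocks the files share are stored only once.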