Monday, July 22, 2013
Sendmail
What Sendmail does:
- Listen on network ports for mail.
- Sort mail and deliver it locally or externally to other servers.
- Append mail to files or pipe it through other programs.
- Queue mail if immediate delivery fails.
- Convert email addresses to/from user names, or handle mailing lists.
- Read rules for special mail handling, so it can try to catch spam or check for correctness.
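For example, a message can be handed to sendmail straight from the command line; the recipient address below is only a placeholder:
- $ printf 'Subject: test\n\nHello from sendmail\n' | /usr/sbin/sendmail -v user@example.com
The -v flag puts sendmail in verbose mode so you can watch how it handles the message.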
Monday, July 15, 2013
Basics of Shell Programming
- To get a Linux shell, you need to start a terminal.
- To see what shell you have, run: echo $SHELL.
- In Linux, a dollar sign ($) in front of a name refers to the value of a shell variable.
- The echo command simply prints its arguments back to the terminal.
- The pipe operator (|) comes to the rescue when chaining several commands: the output of one command becomes the input of the next.
- Linux commands have their own syntax, and the shell does not forgive mistakes. If you get a command wrong, you won't break or damage anything, but it simply won't work.
- #!/bin/sh – This line is called the shebang. Written at the top of a shell script, it tells the system to run the script with the program /bin/sh. A short example script is sketched below.
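A minimal script tying these pieces together (the file name hello.sh is just an example):

#!/bin/sh
# The shebang above makes /bin/sh run this script.
# $SHELL expands to the login shell recorded for the current user.
echo "Your login shell is: $SHELL"
# The pipe feeds echo's output into tr, which upper-cases it.
echo "hello world" | tr 'a-z' 'A-Z'

Save it as hello.sh, make it executable with chmod +x hello.sh, and run it with ./hello.sh.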
Tuesday, July 9, 2013
Xen Server : Features Comparison
| | Xen 4.0 | Xen 4.1 | Xen 4.2 | Xen 4.3 |
| Initial Release | 7-Apr-10 | 25-Mar-11 | 17-Sep-12 | 2-Jul-13 |
| Feature List | | | FL 4.2 | FL 4.3 |
| Release Notes | RN 4.0 | RN 4.1 | RN 4.2 | RN 4.3 |
| Supported Mainline Architectures | ||||
| IA-A32 | ✓ | ✓ | ✓ | ✓ |
| X86-64 | ✓ | ✓ | ✓ | ✓ |
| Itanium | ✓ | ✓deprecated in this release | ✓deprecated | |
| ARM v7 + Virtualization Extensions | ✓tech preview [ 6 ] | |||
| ARM v8 | ✓tech preview [ 6 ] | |||
| Guest Types | ||||
| For X86 Architectures | ||||
| Paravirtualised | ✓ | ✓ | ✓ | ✓ |
| Traditional Xen PV guest | ||||
| HVM Guest [ 1 ] | ✓ | ✓ | ✓ | ✓ |
| Fully virtualised guest using hardware virtualisation extensions | ||||
| PV-on-HVM Guest [ 1 ] | ✓ | ✓ | ✓ | ✓ |
| Fully virtualised guest using PV extensions/drivers for improved performance | ||||
| For ARM Architectures | ||||
| ARM Guest | ✓tech preview [ 6 ] | |||
| Optimal combination of full virtualization and PV extensions | ||||
| Host Limits | ||||
| For X86 Architectures | ||||
| Physical CPUs | 128 [ 0 ] | >255 | 4095 | 4095 |
| Physical RAM | 1TB | 5TB | 5TB | 16TB |
| For ARM Architectures | ||||
| Physical CPUs | 8 | |||
| Physical RAM | 16GB | |||
| Guest Limits | ||||
| X86 PV Guest Limits | ||||
| Virtual CPUs | 128 | >255 | 512 | 512 |
| Virtual RAM | 512GB | 512GB | 512GB | 512GB |
| X86 HVM Guest Limits | ||||
| Virtual CPUs | 128 | 128 | 256 | 256 |
| Virtual RAM | 1TB | 1TB | 1TB | 1TB |
| ARM Guest Limits | ||||
| Virtual CPUs | 8 | |||
| Virtual RAM | 16GB | |||
| Toolstack | ||||
| Built-in | ||||
| xend / xm | ✓ | ✓ | ✓deprecated in this release | ✓deprecated |
| XL | ✓initial implementation | ✓preview release | ✓ | ✓ |
| Qemu based disk backend (qdisk) for XL | ✓ [ 5 ] | ✓ [ 5 ] | ✓ [ 5 ] | |
| XL Open vSwitch integration | ✓tech preview [ 7 ] | |||
| 3rd Party | ||||
| libvirt driver for XL | ✓ | ✓ | ✓ | |
| Features | ||||
| Advanced Memory Management | ||||
| Memory Ballooning | ✓ | ✓ | ✓ | ✓ |
| Memory Sharing | ✓tech preview | ✓tech preview | ✓tech preview [ 3 ] | ✓tech preview [ 3 ] |
| allow sharing of identical pages between HVM guests | ||||
| Memory Paging | ✓tech preview | ✓tech preview | ✓tech preview [ 3 ] | ✓tech preview [ 3 ] |
| allow pages belonging to HVM guests to be paged to disk | ||||
| TMEM - Transcendent Memory | ✓experimental [ 2 ] | ✓experimental [ 2 ] | ✓experimental [ 2 ] | ✓experimental [ 2 ] |
| Resource Management | ||||
| Cpupool | ✓ | ✓ | ✓ | |
| advanced partitioning | ||||
| Credit 2 Scheduler | ✓prototype | ✓prototype | ✓experimental | |
| designed for latency-sensitive workloads and very large systems. | ||||
| NUMA scheduler affinity | ✓ | |||
| Scalability | ||||
| 1GB/2MB super page support | ✓ | ✓ | ✓ | |
| Deliver events to PVHVM guests using Xen event channels | ✓ | ✓ | ✓ | |
| Interoperability / Hardware Support | ||||
| Nested Virtualisation | ✓experimental | ✓experimental | ||
| Running a hypervisor inside an HVM guest | ||||
| HVM PXE Stack | gPXE | iPXE | iPXE | iPXE |
| Physical CPU Hotplug | ✓ | ✓ | ✓ | ✓ |
| Physical Memory Hotplug | ✓ | ✓ | ✓ | ✓ |
| Support for PV kernels in bzImage format | ✓ | ✓ | ✓ | ✓ |
| PCI Passthrough | ✓ | ✓ | ✓ | ✓ |
| X86 Advanced Vector eXtension (AVX) | ✓ [ 4 ] | ✓ | ✓ | |
| High Availability and Fault Tolerance | ||||
| Live Migration, Save & Restore | ✓ | ✓ | ✓ | ✓ |
| Remus Fault Tolerance | ✓ | ✓ | ✓ | ✓ |
| vMCE | ? | ? | ✓ | ✓ |
| Forward Machine Check Exceptions to Appropriate guests | ||||
| Network and Storage | ||||
| Blktap2 | ✓ | ✓ | ✓ | ✓ |
| Online resize of virtual disks | ✓ | ✓ | ✓ | ✓ |
| Security | ||||
| Driver Domains | ✓ | ✓ | ✓ | ✓ |
| Device Model Stub Domains | ✓ | ✓ | ✓ | ✓ |
| Memaccess API | ✓ | ✓ | ✓ | |
| enabling integration of 3rd party security solutions into Xen virtualized environments | ||||
| XSM & FLASK | ✓ | ✓ | ✓ | ✓ |
| mandatory access control policy providing fine-grained controls over Xen domains, similar to SELinux | ||||
| XSM & FLASK support for IS_PRIV | ✓ | |||
| vTPM Support | ✓ | ✓ | ✓ | ✓ |
| updates and new functionality | ||||
| Tooling | ||||
| gdbsx | ✓ | ✓ | ✓ | ✓ |
| debugger to debug ELF guests | ||||
| vPMU | ✓ [ 4 ] | ✓ [ 4 ] | ✓ [ 4 ] | ✓ [ 4 ] |
| Virtual Performance Management Unit for HVM guests | ||||
| Serial console | ✓ | ✓ | ✓ | ✓Add EHCI debug support |
| xentrace | ✓ | ✓ | ✓ | ✓ |
| performance analysis | ||||
| Device Models and Virtual Firmware for HVM guests | ||||
| For X86 Architectures | ||||
| Traditional Device Model | ✓ | ✓ | ✓ | ✓ |
| Device emulator based on Xen fork of Qemu | ||||
| Qemu Upstream Device Model | ✓tech preview | ✓default, unless stubdomains are used | ||
| Device emulator based on upstream Qemu | ||||
| ROMBIOS | ✓ | ✓ | ✓ | ✓ |
| BIOS used with traditional device model only | ||||
| SeaBIOS | ✓ | ✓ | ||
| BIOS used with upstream qemu device model and XL only | ||||
| OVMF/Tianocore | ✓experimental [ 4 ] | ✓experimental [ 4 ] | ||
| UEFI Firmware used with upstream qemu device model and XL only | ||||
| PV Bootloader support | ||||
| For X86 Architectures | ||||
| PyGrub support for GRUB 2 | ✓ | ✓ | ✓ | ✓ |
| PyGrub support for /boot on ext4 | ✓ | ✓ | ✓ | ✓ |
| pvnetboot support | ✓ | ✓ | ||
| Bootloader supporting network boot of PV guests | ||||
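As a quick illustration of the XL toolstack listed above, here is a minimal sketch of a PV guest configuration file and the commands to start it; every name and path below is a placeholder:

name = "example-guest"
memory = 512
vcpus = 1
kernel = "/boot/vmlinuz-guest"
ramdisk = "/boot/initrd-guest.img"
disk = [ 'phy:/dev/vg0/guest,xvda,w' ]
vif = [ 'bridge=xenbr0' ]

Save the configuration as, say, /etc/xen/example-guest.cfg, then:
- $ xl create /etc/xen/example-guest.cfg
- $ xl list
xl create reads the configuration file and starts the guest, and xl list shows the running domains; with xend/xm deprecated, xl is the built-in replacement.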
Wednesday, July 3, 2013
Fedora 19 New Features : NFStest
NFStest provides a set of tools for testing either the NFS client or the NFS server; the included tests focus mainly on the client.
Test utilities package
Provides a set of tools for testing either the NFS client or the NFS server; most of the functionality is focused on testing the client. These tools include the following:
- Process command line arguments
- Provide functionality for PASS/FAIL
- Provide test grouping functionality
- Provide multiple client support
- Logging mechanism
- Debug info control
- Mount/Unmount control
- Create files/directories
- Provide mechanism to start a packet trace
- Provide mechanism to simulate a network partition
- Support for pNFS testing
Installation
- Install the rpm as root
- # rpm -i NFStest-1.0.1-1.noarch.rpm
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
- Untar the tarball
- $ cd ~
- $ tar -zxvf NFStest-1.0.1.tar.gz
- The tests can run without installation, just set the python path environment variable:
- $ export PYTHONPATH=~/NFStest-1.0.1
- $ cd NFStest-1.0.1/test
- $ ./nfstest_pnfs --help
- Or install to standard python site-packages and executable directories:
- $ cd ~/NFStest-1.0.1
- $ sudo python setup.py install
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
- Clone the git repository
- $ cd ~
- $ git clone git://git.linux-nfs.org/projects/mora/nfstest.git
- The tests can run without installation, just set the python path environment variable:
- $ export PYTHONPATH=~/nfstest
- $ cd nfstest/test
- $ ./nfstest_pnfs --help
- Or install to standard python site-packages and executable directories:
- $ cd ~/nfstest
- $ sudo python setup.py install
- All manual pages are available
- $ man nfstest
- Run tests:
- $ nfstest_pnfs --help
Setup
Make sure the user running the tests can run commands using 'sudo' without the need for a password. Make sure the user running the tests can run commands remotely using 'ssh' without the need for a password; this is only needed for tests which require multiple clients.
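One way to arrange this (the user name 'tester' and the client address are placeholders, and the sudoers rule grants broad rights, so narrow it to your own policy):
- $ echo 'tester ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/nfstest
- $ ssh-keygen -t rsa
- $ ssh-copy-id tester@192.168.0.20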
Create the mount point specified by the --mtpoint (default: /mnt/t) option on all the clients:
- $ sudo mkdir /mnt/t
- $ sudo chmod 777 /mnt/t
Examples
- nfstest_pnfs
- The only required option is --server
- $ nfstest_pnfs --server 192.168.0.11
- nfstest_cache
- Required options are --server and --client
- $ nfstest_cache --server 192.168.0.11 --client 192.168.0.20
- Testing with different values of --acmin and --acmax (this takes a long time)
- $ nfstest_cache --server 192.168.0.11 --client 192.168.0.20 --acmin 10,20 --acmax 20,30,60,80
- nfstest_delegation
- The only required option is --server but only the basic delegation tests will be run. In order to run the recall tests the --client option must be used
- $ nfstest_delegation --server 192.168.0.11 --client 192.168.0.20
- nfstest_dio
- The only required option is --server
- $ nfstest_dio --server 192.168.0.11
- nfstest_posix
- The only required option is --server
- $ nfstest_posix --server 192.168.0.11
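- The --mtpoint option mentioned in the Setup section (default: /mnt/t) can be combined with any of the tests above; a sketch reusing the placeholder server address:
- $ nfstest_posix --server 192.168.0.11 --mtpoint /mnt/t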