Wednesday, May 29, 2013

DATA DEDUPLICATION


Data deduplication technology has gained rapid acceptance in the IT industry over the past several years for its ability to dramatically reduce the amount of backup data stored by eliminating redundant data. In its simplest terms, data deduplication maximizes storage utilization while allowing organizations to retain more backup data on disk for longer periods of time. This tremendously improves the efficiency of disk-based backup, lowering storage costs and changing the way data is protected.
 
Although data deduplication solutions vary in terms of how deduplication is accomplished, in general, data deduplication works by comparing new data with existing data from previous backup or archiving jobs and eliminating the redundancies. Because only unique blocks are transferred, replication bandwidth requirements are reduced.
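As a rough illustration (not tied to any particular product), block-level deduplication can be sketched in a few lines of Python: data is split into fixed-size blocks, each block is fingerprinted with a hash, and only blocks with previously unseen fingerprints are stored. The block size and SHA-256 fingerprint here are illustrative choices; real products often use variable-size chunking.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed block size

def deduplicate(data: bytes, store: dict) -> list:
    """Split data into blocks, keep only unique blocks in `store`,
    and return the list of fingerprints that reconstructs the data."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # only new, unique blocks are stored
            store[digest] = block
        recipe.append(digest)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original data from its block fingerprints."""
    return b"".join(store[d] for d in recipe)

# Two "backups" that share most of their content: the second one
# adds almost nothing new to the store.
store = {}
backup1 = deduplicate(b"A" * 8192 + b"B" * 4096, store)
backup2 = deduplicate(b"A" * 8192 + b"C" * 4096, store)
print(len(store))  # 3 unique blocks stored instead of 6
```

The same idea explains the reduced replication bandwidth: only the blocks missing from the remote store (here, the dictionary) would need to be transferred.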

Wednesday, May 22, 2013

Debian GNU/Hurd 2013 release.

It is with huge pleasure that the Debian GNU/Hurd team announces the release of Debian GNU/Hurd 2013. This is a snapshot of Debian "sid" at the time of the Debian "wheezy" release (May 2013), so it is mostly based on the same sources. It is not an official Debian release, but it is an official Debian GNU/Hurd port release.

The installation ISO images can be downloaded from Debian Ports in the usual three Debian flavors: NETINST, CD, DVD. Besides the friendly Debian installer, a pre-installed disk image is also available, making it even easier to try Debian GNU/Hurd.

Debian GNU/Hurd is currently available for the i386 architecture with more than 10,000 software packages available (more than 75% of the Debian archive, and more to come!).

Monday, May 13, 2013

Virtualization

The virtualization layer is implemented by the VMM and is software that runs in a layer between the VMkernel and one or more VMs. It provides the VM abstraction to the guest operating systems.

It is through the VMM that the VM leverages key technologies in the VMkernel. The VMM is part of the VMkernel and is provided for each VM.

The vSphere VMM manages CPU, memory, and input/output, or I/O, device virtualization by implementing software virtualization, hardware virtualization, or paravirtualization techniques.

Software virtualization is an approach in which the hypervisor uses binary translation technology to provide virtualization of CPU and memory for the guest operating systems.

Hardware virtualization is a technique in which hardware capabilities of current CPUs are exploited to provide virtualization of CPU and memory.

Paravirtualization is a virtualization approach that exports a modified hardware abstraction that requires operating systems to be explicitly modified and ported to run. VMware uses paravirtualization to optimize the performance of certain drivers, for example, Small Computer Systems Interface, or SCSI, and network drivers.

Wednesday, May 1, 2013

New Features in Linux Kernel 3.9

Ten weeks to the day after the arrival of version 3.8, Linux creator Linus Torvalds on Monday released version 3.9 of the Linux kernel.
“This week has been very quiet, which makes me much more comfortable doing the final 3.9 release, so I guess the last -rc8 ended up working,” wrote Torvalds in the announcement email early Monday. “Because not only aren't there very many commits here, even the ones that made it really are tiny and not pretty obscure and not very interesting.”
That's certainly not to say that this new kernel release doesn't include a number of interesting features overall, however – quite the contrary, in fact. Here's a quick look at some of the highlights.
1. SSD Caching
It's always nice to see new features that enable faster performance, and one such example is Linux 3.9's addition of a device mapper target (dm-cache) that enables the use of speedy devices such as solid-state drives (SSDs) as a cache for slower devices such as rotating hard disks.  “Different 'policy' plugins can be used to change the algorithms used to select which blocks are promoted, demoted, cleaned etc.,” explains the changelog on KernelNewbies.org. “It supports writeback and writethrough modes.”
2. Expanded Architecture Support
Broadened support is another change that's pretty much always welcome, and Linux 3.9 actually adds two new architectures to the list of those supported. Specifically, this new release brings the Linux kernel port to the ARC700 processor family (750D and 770D) from Synopsys as well as the Meta ATP (Meta 1) and HTP (Meta 2) processor cores from Imagination. Meta cores can be found in many digital radios, while the ARC700 family is commonly embedded in SoCs in TV set-top boxes and digital media players.
3. Better Power Efficiency
Thanks to the inclusion of the Intel PowerClamp driver, which performs synchronized idle injection across all online CPUs, Linux 3.9 also offers improved power efficiency in terms of performance per watt.
4. Chromebook Support
Particularly useful for Chromebook owners yearning to get their favorite distro up and running on their machine, meanwhile, is that Linux 3.9 adds full support for “all the devices present in the Chrome laptops sold by many companies,” as KernelNewbies puts it.
5. Another Boost for ARM
Linux's support for ARM has improved considerably over the past few releases, and kernel 3.9 brings a key improvement in the form of support for the KVM virtualization system in the ARM architecture port. As KernelNewbies notes, “this brings virtualization capabilities to the Linux ARM ecosystem.”
6. Android Developer Support
Finally, targeting Android developers this time, Linux 3.9 adds support for the “Goldfish” virtualized platform that's part of the Android development environment. Essentially, that means it's now possible to develop for Android with “out-of-the-box” kernels.

Monday, April 8, 2013

How to write Red Hat-clone distro initialization scripts with Upstart


On Linux systems, initialization (init) scripts manage the state of system services during system startup and shutdown. When the system goes through its runlevels, the System V init system starts and stops services as configured. While this tried-and-true technology has been around since the dawn of Unix, you can now create modern and efficient CentOS 6 init scripts by using Upstart, an event-based replacement for System V init.
Until its latest release, CentOS used the System V init system by default. SysV init scripts are simple and reliable, and guarantee a certain order of starting and stopping.
Starting with version 6, however, CentOS has turned to a new and better init system – Upstart. Upstart is faster than System V init because it starts services in parallel rather than one by one in a fixed order. Upstart is also more flexible and robust because it is event-based. Like the SysV init system, Upstart generates events at various times, including while moving through the system runlevels. However, Upstart can also generate custom events. For example, with Upstart you can emit an event that starts certain services regardless of the runlevel. And Upstart not only generates events, it also handles them: when it receives the event for starting a service, it starts that service. This event-based behavior is robust and fast.
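To make the custom-event idea concrete, here is a hedged sketch of a job that runs whenever a custom event fires, regardless of runlevel. The job name, event name, and script path are all hypothetical:

```
# /etc/init/cleanup.conf - hypothetical job triggered by a custom event
description "cleanup task started by the nightly-cleanup event"

# runs when the event is emitted, independent of any runlevel
start on nightly-cleanup

# "task" means the job runs to completion rather than staying up as a daemon
task
exec /usr/local/bin/cleanup.sh
```

The event itself would be emitted manually or from a scheduler with `initctl emit nightly-cleanup`, and Upstart then handles it by starting every job that declares `start on nightly-cleanup`.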
Upstart supports SysV init scripts for compatibility reasons; most service init scripts in CentOS 6 continue to be SysV-based. You might someday have to create an init script yourself if you write custom software. If you do, you should write your new init scripts with Upstart in mind so you can benefit from the new init system's faster performance and additional features.

Beginning the Upstart init script

Upstart keeps init scripts in the /etc/init/ directory. A script's name should correspond to the name of the service or job it controls, with a .conf extension. The init script for the Tomcat service, for example, should be named /etc/init/tomcat.conf.
Unlike SysV init scripts, Upstart init scripts are not plain Bash scripts; they are job configuration files built from Upstart-specific directives, called stanzas, with any shell code embedded in script sections. In SysV init scripts you commonly see the line . /etc/init.d/functions, which provides access to additional necessary SysV functions. Upstart scripts are more self-contained; you don't have to include any additional functions or libraries.
Just as in any Bash script, comments in Upstart scripts start with #. Put descriptive comments at the beginning of each script to explain its purpose, and in other places where the code may need explanation. You can use two special stanzas, author and description, for documentation.
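Putting these pieces together, a minimal /etc/init/tomcat.conf for the Tomcat example above might look like the following sketch. The installation path and author details are illustrative assumptions:

```
# /etc/init/tomcat.conf - hypothetical Upstart job for Tomcat
description "Apache Tomcat servlet container"
author "Jane Admin <jane@example.com>"

# start in the normal multi-user runlevels, stop on halt/single/reboot
start on runlevel [2345]
stop on runlevel [016]

# restart the service automatically if it dies
respawn

script
    # shell code lives inside a script...end script section
    exec /opt/tomcat/bin/catalina.sh run
end script
```

The start on and stop on stanzas shown here are exactly what the next section covers: they define when Upstart starts and stops the service.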

Defining when a service starts

Thursday, April 4, 2013

OpenStack 'Grizzly' 2013.1 release.

Roughly six months after the launch of its “Folsom” release last fall, OpenStack on Thursday unveiled version 2013.1 “Grizzly,” the seventh and latest release of the open source software for building public, private and hybrid clouds.
More than 500 contributors made 7,620 updates in this new release, which, “more than any before it, was driven by users who have been running OpenStack in production for the past year (or more) and have asked for broader support for the compute, storage and networking technologies they trust and even greater scale and ease of operations,” explained Mark Collier, chief operating officer at the OpenStack Foundation, in a blog post announcing the new software. Best Buy, Bloomberg, NSA, Cisco WebEx, Comcast, CERN, HP, NeCTAR, PayPal, Rackspace and Samsung are among the companies using OpenStack in production.
Developers who contributed to this release came from more than 45 companies, including Red Hat, Rackspace, IBM, HP, Nebula, Intel, eNovance, Canonical, VMware, Cloudscaling, DreamHost and SINA.

'Support for Security Groups'

Seven integrated projects make up OpenStack, each with source code now publicly available: Compute ("Nova"), Object Storage ("Swift"), Image Service ("Glance"), Networking ("Quantum"), Block Storage ("Cinder"), Identity ("Keystone") and Dashboard ("Horizon").      
More than 200 new features are included in this Grizzly release, and some 1,900 bugs were fixed. In anticipation of the launch, Linux.com spoke earlier this week with Thierry Carrez, release manager for the project, to hear about some of the highlights.
“Personally, my favorite key features for this release would be in OpenStack Compute (Nova): introduction of the 'Cells' deployment model for massive scale, and isolation of the compute nodes from the rest of the system for better security ('no-db-compute'),” Carrez began. “I also like how OpenStack Networking introduced support for security groups, as well as a load-balancing-as-a-service feature.”
Meanwhile, “I would also mention how OpenStack Block Storage (Cinder) managed to add a large number of storage drivers from all of the storage industry,” he told Linux.com. Ten new drivers were added, in fact, including Ceph/RBD, Coraid, EMC, Hewlett-Packard, Huawei, IBM, NetApp, Red Hat/Gluster, SolidFire and Zadara.
Finally, “the general drive towards more reuse of code across the various OpenStack projects (through the introduction of the common 'Oslo' libraries) is also worth mentioning,” Carrez said.

Cross-Origin Resource Sharing 

Other highlights of the new release include significant improvements in virtualization management on the Compute side, with full support for ESX, KVM, XEN and Hyper-V. Quotas were added to the Object Storage system, meanwhile, as was cross-origin resource sharing (CORS), enabling browsers to “talk directly to back-end storage environments,” Collier noted.
For Networking, Grizzly aims to achieve greater scale and higher availability by distributing L3/L4 and dynamic host configuration protocol (DHCP) services across multiple servers. New plug-ins were also added from Big Switch, Hyper-V, PlumGrid, Brocade and Midonet.
The Grizzly Dashboard, meanwhile, is backwards-compatible with the Folsom release.

Sunday, March 24, 2013

vCloud Director: putting a vSphere host into maintenance mode

  • System -> Manage & Monitor
  • vSphere Resources -> Hosts
  1. Find the host you need to place into maintenance mode, right-click, and select Disable Host.
  2. At that point, the status will change from a green circle with a check mark to a red circle.
  3. Right-click on the host again and select Redeploy All VMs.
  4. The ESXi host will go into maintenance mode in the vCenter Server and evacuate all virtual machines as usual.
  5. (Optional) If you see vsla errors or have issues with deleting vApps, unprepare the host, which removes the vCloud agent from ESXi.
  6. (Optional) Prepare the host for vCloud by pushing the vCloud agent back to ESXi.
  7. When maintenance is complete, right-click and select Enable Host.
  8. And your work is complete!