
Monday, August 11, 2014

RHEL Brings Software Defined Storage to Big Data

cache tier diagram
Cache tiering divides the data cluster so that hot data being accessed regularly can be held on faster storage, typically SSDs, while erasure-coded cold data sits below on cheaper storage media. Image credit: Ceph.
Red Hat last month released the latest version of Inktank Ceph Enterprise, its object and block storage product based on the upstream open source Ceph project. It's notable not only as the first release since Red Hat acquired the two-year-old startup Inktank in April, but also for two key features that help open up a new market for Ceph.



While Ceph gained prominence as the open source software-defined storage tool commonly used on the back end of OpenStack deployments, it's not strictly software for the cloud. With the latest enterprise feature additions, Ceph has begun to see adoption among a new class of users interested in software-defined storage for big data applications.
The new enterprise features can be used in both legacy systems and in a cloud context, “but there's almost a third category of object storage within an enterprise,” said Sage Weil, Ceph project leader, in an interview at OSCON. “They're realizing that instead of buying expensive systems to store all of this data that's relatively cold, they can use software-defined open platforms to do that.”


“It's sort of cloudy in the sense that it's scale out,” Weil said, “but it's not really related to compute; it's just storage.”

Two Important New Features

Ceph Enterprise 1.2 contains erasure coding and cache tiering, two features first introduced in the May release of Ceph Firefly 0.8. Erasure coding packs more data into the same amount of space and requires less hardware than a traditional replicated storage cluster, a cost savings for companies that need to keep a lot of archival data around. Cache tiering divides the data cluster so that hot data being accessed regularly can be held on faster storage, typically SSDs, while erasure-coded cold data sits below on cheaper storage media.
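The space savings erasure coding offers over replication can be sketched with a quick overhead calculation. The numbers below – a 3x replication factor and an 8+4 profile (k=8 data chunks, m=4 coding chunks) – are illustrative assumptions, not figures from the article:

```python
# Sketch: raw-storage overhead of n-way replication vs. an erasure-coded pool.
# The replica count and the k/m split below are illustrative assumptions.

def replication_overhead(copies: int) -> float:
    """Raw bytes stored per usable byte under n-way replication."""
    return float(copies)

def erasure_overhead(k: int, m: int) -> float:
    """Raw bytes stored per usable byte with k data and m coding chunks."""
    return (k + m) / k

usable_tb = 100  # size of the archive we want to keep, in TB

rep_raw = usable_tb * replication_overhead(3)  # 3 copies of everything
ec_raw = usable_tb * erasure_overhead(8, 4)    # 12 chunks hold 8 chunks' data

print(f"3x replication: {rep_raw:.0f} TB raw")      # 300 TB raw
print(f"8+4 erasure coding: {ec_raw:.0f} TB raw")   # 150 TB raw
```

An 8+4 profile still survives the loss of any four chunks, yet stores half the raw bytes of 3x replication – which is where the cold-storage cost advantage comes from.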
Used together, erasure coding and cache tiering allow companies to combine the value of storing relatively cold, unused data in large quantities with faster performance – all in the same cluster, said Ross Turk, director of product marketing for storage and big data at Red Hat.
The features are useful both in a cloud platform context and in standalone storage, for companies that want the scale-out capabilities the cloud has to offer but aren't entirely ready to move to the cloud.
“In theory it's great to have elastic resources and move it all to the cloud, but training organizations to adapt to that new paradigm and have their own ops teams able to run it, takes time,” Weil said.

Appealing to big data users

OpenStack was a good first use case for Ceph to target because developers and system administrators on those projects understand distributed software, Weil said. Similarly, a greenfield private cloud deployment is a good use case for Ceph because it's easy to stand up a new storage system at the same time “rather than attack legacy use cases head on,” he said.
But enterprise private and hybrid cloud adoption still lags behind public cloud use, according to two recent reports by IDC and Technology Business Research. One reason is that most companies lack the internal IT resources and expertise to move a significant portion of their resources to the cloud, according to a March 2014 enterprise cloud adoption study by Everest Group.
Storage faces an even longer road to adoption than the cloud, given the high standards and premium that companies place on retaining data and keeping it secure.
“People require their storage to be a certain level of quality and stability – you can reboot a server but not a broken disk and get your data back,” Turk said.
By providing an economic advantage to users in the growing cold storage market, Ceph has the added benefit of encouraging enterprise adoption of open source storage in the short term without relying on cloud adoption to fuel it.

The path to the open source data center

Over the long term, cloud computing and the software-defined data center – including storage, compute, and networking – will become the new paradigm for the enterprise, Weil said. And Ceph, already a dominant open source project in this space, will rise along with it.
“A couple of decades ago you had a huge transformation with Linux going from proprietary Unix OSes sold in conjunction with expensive hardware to what we have today in which you can run Linux or BSD or whatever on a huge range of hardware,” Weil said. “I think you'll see the same thing happen in storage, but that battle is just starting to happen.”
Red Hat's acquisition of Inktank will help shepherd Ceph along that path to widespread enterprise adoption, starting with this first Ceph Enterprise release. Ceph will also eventually integrate with many of the other projects Red Hat is involved with, Weil said, including the Linux kernel, provisioning tools, and OpenStack itself.

Saturday, August 9, 2014

What Is Cloud Computing?

Cloud computing is the use of computing resources (hardware such as hypervisors, storage, and switches, and software such as virtualization, VLAN traffic handling, and dynamic IP allocation) delivered as a service over a network. It's called a cloud because all of these resources can be scaled on request and billed based on usage.

Why Cloud Computing Is Preferred / Benefits of Cloud Computing

  • Scalability :- The customer doesn't have to know (and buy) the full capacity they might need at peak time. Cloud computing makes it possible to scale the resources available to the application. A start-up doesn't have to worry if its advertising campaign works a bit too well and jams the servers.
  • Pay Per Use :- Customers pay only for what they use. They don't have to buy servers or capacity for their maximum needs. Often, this is a cost savings.
  • Elasticity :- The cloud will automatically (or, in some services, with semi-manual operations) allocate and de-allocate CPU, storage, and network bandwidth on demand. When there are few users on a site, the cloud uses very little capacity to run it, and vice versa.
  • Reduces Cost :- Because the data centers that run the services are huge and share resources among a large group of users, the infrastructure costs (electricity, buildings, and so on) are lower. Thus, the costs passed on to the customer are smaller.
  • Application Programming Interface (API) :- Access to software that enables machines to interact with cloud software in the same way that a traditional user interface (e.g., a computer desktop) facilitates interaction between humans and computers.
  • Virtualization :- Virtualization technology allows servers and storage devices to be shared and utilization to be increased. Applications can be easily migrated from one physical server to another.
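The scaling and elasticity points above boil down to matching capacity to current demand. A minimal sketch of that decision, with made-up capacity numbers (100 requests per second per instance is an assumption, not a real benchmark):

```python
import math

def instances_needed(requests_per_sec: float,
                     capacity_per_instance: float,
                     min_instances: int = 1) -> int:
    """Return how many instances to run for the current load:
    enough to cover demand, never fewer than a configured floor."""
    return max(min_instances,
               math.ceil(requests_per_sec / capacity_per_instance))

# A quiet night vs. an ad campaign that works "a bit too well":
print(instances_needed(30, 100))    # 1
print(instances_needed(5500, 100))  # 55
```

A cloud provider runs this kind of loop continuously, so the start-up in the scalability bullet pays for 1 instance at night and 55 during the campaign, instead of buying 55 servers up front.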


Types of Cloud Computing :-

  • Public Cloud
  • Private Cloud
  • Hybrid Cloud

Public Cloud :- In a public cloud, applications, storage, and other resources are made available to the general public by a service provider. These services are free or offered on a pay-per-use model. Generally, public cloud service providers like Amazon AWS, Microsoft, and Google own and operate the infrastructure and offer access only via the Internet.

Private Cloud :- A private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted internally or externally.

Hybrid Cloud :- A hybrid cloud uses both public and private cloud infrastructure.

Cloud Computing Models

  • Infrastructure as a Service (IaaS). IaaS offers computers – physical or virtual machines – and other resources, like storage, that developers and IT organizations can use to deliver business solutions. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the amount of resources allocated and consumed.
  • Platform as a Service (PaaS). PaaS offers a computing platform, typically including an operating system, programming language execution environment, database, and web server. Application developers can develop and run their software solutions on a cloud platform without the cost and complexity of buying and managing the underlying hardware and software layers.
  • Software as a Service (SaaS). In the SaaS model, the service provider hosts the software, so you don't need to install it, manage it, or buy hardware for it. All you have to do is connect and use it. SaaS examples include customer relationship management as a service.
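The utility-billing model mentioned under IaaS can be sketched as a simple metered calculation. All rates and quantities below are hypothetical, chosen only to show the shape of the bill:

```python
def iaas_bill(hours: float, hourly_rate: float,
              gb_months: float, rate_per_gb_month: float) -> float:
    """Utility-style IaaS bill: pay only for compute hours
    and storage actually consumed, nothing up front."""
    return hours * hourly_rate + gb_months * rate_per_gb_month

# Hypothetical example: 2 small VMs for a 31-day month,
# plus 50 GB of block storage.
compute_hours = 2 * 31 * 24                      # 1488 VM-hours
total = iaas_bill(hours=compute_hours, hourly_rate=0.05,
                  gb_months=50, rate_per_gb_month=0.10)
print(round(total, 2))  # 79.4
```

Contrast this with buying two servers and a disk array outright: under the metered model, shutting the VMs down mid-month directly shrinks the bill.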

Saturday, August 2, 2014

What's New in Red Hat Enterprise Linux 7

Red Hat Enterprise Linux 7 was released on June 10, 2014. RHEL 7 provides better performance and scalability. For system administrators, it provides unified management tools and system-wide resource management that reduce the administrative burden.
Release Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/7.0_Release_Notes/index.html

What's New in Red Hat Enterprise Linux 7:

Red Hat Enterprise Linux 7 arrives with many major changes and migration considerations. Here is a description of the major changes in RHEL 7 compared to RHEL 6. This list doesn't contain all the changes available in RHEL 7; I have tried to cover the changes that come up most often in general use.

1. System Limitations

a. Red Hat recommends a minimum of 5 GB of disk space to install this release of RHEL for all supported architectures.
b. For AMD64 and Intel® 64 systems, Red Hat recommends at least 1 GB of memory per logical CPU.
c. 64-bit Power systems now require at least 2 GB of memory to run.

2. New Boot Loader

A new boot loader, GRUB2, has been introduced in RHEL 7. It supports more filesystems and block devices.
GRUB2 Configuration File: /boot/grub2/grub.cfg

3. New Init System

RHEL 6 and older releases used the SysV init system. RHEL 7 has been released with a new init system, systemd, which is compatible with SysV init scripts.

4. New Installer

Red Hat Enterprise Linux 7 has a redesigned Anaconda installer, which brings many improvements to system installation.

5. Changes to firstboot Implementation

RHEL 7 replaces firstboot with the Initial Setup utility, initial-setup, for better interoperability with the new installer.

6. Changes in File System Layout

Red Hat Enterprise Linux 7 introduces two major changes to the layout of the file system.
The /bin, /sbin, /lib, and /lib64 directories are now under the /usr directory.
The /tmp directory can now be used as a temporary file storage system (tmpfs).

7. ncat (networking utility) introduced

A new networking utility, ncat, has been introduced in RHEL 7, replacing netcat. ncat is a reliable back-end tool that provides network connectivity to other applications. It supports both TCP and UDP for communication.
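What ncat provides – a pipe between a socket and another program – can be sketched in miniature with Python's socket module. This is not ncat itself, just an illustration of the TCP connectivity it supplies, as a one-shot echo over localhost:

```python
import socket
import threading

def echo_once(server_sock: socket.socket) -> None:
    """Accept one connection and send back whatever bytes arrive."""
    conn, _ = server_sock.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Listener side (what `ncat -l <port>` would give you):
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

# Client side (what `ncat <host> <port>` would give you):
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())  # hello
```

ncat wires this kind of socket to stdin/stdout of another process, which is why it's useful as a back-end connectivity tool for applications that don't speak TCP/UDP themselves.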

8. Released with Apache 2.4

RHEL 7 comes with Apache 2.4, which has significant changes and a number of new features.

9. Chrony – A new Package Introduced

Chrony is introduced as a new NTP client, provided in the chrony package. Chrony does not provide all the features available in the old ntp client (ntp), so ntp is still provided for compatibility.

10. Introducing HAProxy

HAProxy has been introduced in RHEL 7. It is a TCP/HTTP reverse proxy that is well suited to high-availability environments.
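The core of what a load-balancing reverse proxy like HAProxy does is pick a backend for each incoming request. A minimal sketch of round-robin selection (HAProxy's default balance algorithm), with hypothetical backend addresses:

```python
from itertools import cycle

# Hypothetical backend pool behind the proxy; in HAProxy these would be
# `server` lines in a `backend` section of haproxy.cfg.
backends = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
next_backend = cycle(backends)  # endless round-robin iterator

# Five incoming requests get spread evenly across the pool:
picks = [next(next_backend) for _ in range(5)]
print(picks)
# ['10.0.0.1:80', '10.0.0.2:80', '10.0.0.3:80', '10.0.0.1:80', '10.0.0.2:80']
```

The real proxy adds health checks (skipping dead servers), connection forwarding, and HTTP-aware routing on top of this selection loop, which is what makes it suitable for high-availability setups.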