
Friday, February 13, 2015

Red Hat Enterprise Virtualization 3.5

Red Hat announced the general availability of Red Hat Enterprise Virtualization 3.5, enabling organizations to deploy an IT infrastructure that services traditional virtualization workloads while creating an enterprise-grade foundation for cloud infrastructure. Red Hat Enterprise Virtualization 3.5 delivers standardized services for mission-critical workloads, and offers IT organizations greater visibility into the provisioning, configuration, and monitoring of their virtualization infrastructure, all based on open standards.
"The healthcare industry is undergoing significant changes that require us to rapidly adapt to new business and regulatory compliance requirements. Because Red Hat Enterprise Virtualization is built on open standards that enable flexibility and fast innovation, we can more quickly adapt our IT infrastructure and deploy services with stability and speed."
— Steven Bellistri, Manager, IT, LDI Integrated Pharmacy Services
Red Hat is a recognized leader in the scale and performance of virtual machine workloads, and Red Hat Enterprise Virtualization 3.5 extends this leadership with support for four terabytes (4 TB) of memory per host, 4 TB of vRAM, and 160 vCPUs per virtual machine.
Notable new features in Red Hat Enterprise Virtualization 3.5 include:
  • Lifecycle management and provisioning of bare-metal hosts via integration with Red Hat Satellite.
  • Compute resource optimization through advanced real-time analytics with oVirt Optimizer integration. This enables users to identify the balance of resource allocation that best meets their needs while provisioning new virtual machines.
  • Workload performance and scalability provided through non-uniform memory access (NUMA) support, which is extended to Host NUMA, Guest Pinning and Virtual NUMA. This enables customers to deploy highly scalable workloads with improved performance and minimizes resource overload related to physical memory access times.
  • Enhanced disaster recovery via improved storage domain handling, providing support for migrating storage domains between different datacenters supported by Red Hat Enterprise Virtualization, enabling partner technologies to deliver site recovery capabilities.

Red Hat Enterprise Virtualization also serves as an ideal foundation for both traditional virtualization and highly flexible cloud-enabled workloads built on OpenStack. Red Hat Enterprise Virtualization 3.5 includes features that enhance this foundation for cloud-enabled workloads:
  • Integration and shared common services with OpenStack Image Service (Glance) and OpenStack Networking (Neutron), available as a Tech Preview, enabling administrators to break down silos and to deploy resources once across the infrastructure.
  • Instance types, unifying the process of provisioning virtual machines for both virtual and cloud-enabled workloads.
Red Hat Enterprise Virtualization Availability
  • As a standalone offering - Red Hat Enterprise Virtualization 3.5 - including Hypervisor and Manager for virtualized enterprise workloads for supported guest operating systems.
  • As an integrated offering called Red Hat Enterprise Linux with Smart Virtualization, aimed at customers looking to maximize the benefits of their virtualized infrastructure with Linux workloads. This offering combines the innovation, performance, scalability, reliability and security features of Red Hat Enterprise Linux with the advanced virtualization management capabilities of Red Hat Enterprise Virtualization.
  • Via Red Hat Cloud Infrastructure, a comprehensive solution that supports organizations on their journey from traditional datacenter virtualization to OpenStack-powered clouds. Red Hat Cloud Infrastructure is a single subscription offering that includes Red Hat CloudForms, Red Hat Satellite, Red Hat Enterprise Linux OpenStack Platform, and Red Hat Enterprise Virtualization.

Sunday, February 8, 2015

Internet Explorer Error After Windows Patches

Windows 7 Pro 64-bit SP1 systems

Solved the "invalid address" problem by removing the

 dword value "MoveImages"

from the following Windows registry key:

 HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management

After that I was able to start IE10 without errors.

( Thank you Marcus Jaken for pointing me in the right direction: http://cloudsurvivalguide.com/?p=564 )

"MoveImages" seems to be related to "Address space layout randomization" ( http://en.wikipedia.org/wiki/Address_space_layout_randomization#Microsoft_Windows ).

For analysis I have checked some of our earlier Windows 7 installs:
  • On two different Win7 installs without "MoveImages" we could start IE10 without error.
  • On several Win7 installs with "MoveImages" equal to 0x00 we got an "invalid address" error when starting IE10.  After removing "MoveImages" we could start IE10 without error.
We suspect that running EMET ( http://support.microsoft.com/kb/2458544/en-us ) caused the creation of the "MoveImages" dword in the registry.
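If several machines are affected, the fix can be scripted instead of applied by hand in regedit. A minimal sketch as a Windows registry (.reg) file, using the key path quoted above; the "=-" syntax deletes the named value when the file is imported with regedit /s (test on one machine first):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management]
"MoveImages"=-
```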


Monday, February 2, 2015

Install VMware vSphere Client on Domain Controller Machine

As VMware admins, we are much more used to working with the vSphere Windows client than with the vSphere Web Client. Have you ever tried installing the vSphere Client on a domain controller machine? By default, that is not possible: the installer fails with a message saying that the management station has to be running XP SP2 or later and must not be a domain controller. If you run a lab environment, you may not want to deploy another Windows VM just to host the vSphere Client. In that situation, you can use an OS-check skip switch to install the vSphere Client on a Windows domain controller as a workaround.
When you run the installer on a Windows domain controller, it stops with the error message described above. The workaround is to launch the installer from a command line with an advanced switch that skips the OS check. Here is the command to use:
VMware-viclient.exe /VSKIP_OS_CHECKS="1"

Sunday, January 18, 2015

RHEL 7 : systemd

As Systemd now replaces SysVinit, it is time to get familiar with it and learn the new commands.
Systemd boots more quickly because it uses fewer scripts and runs more tasks (Systemd calls them units) in parallel.
The Systemd configuration is stored in the /etc/systemd directory.

Boot process

Systemd's primary task is to manage the boot process, and it provides information about it.
To get the boot process duration, type:
# systemd-analyze
Startup finished in 422ms (kernel) + 2.722s (initrd) + 9.674s (userspace) = 12.820s
To get the time spent by each task during the boot process, type:
# systemd-analyze blame
7.029s network.service
2.241s plymouth-start.service
1.293s kdump.service
1.156s plymouth-quit-wait.service
1.048s firewalld.service
632ms postfix.service
621ms tuned.service
460ms iprupdate.service
446ms iprinit.service
344ms accounts-daemon.service
...
7ms systemd-update-utmp-runlevel.service
5ms systemd-random-seed.service
5ms sys-kernel-config.mount
To get the list of the dependencies, type:
# systemctl list-dependencies
default.target
├─abrt-ccpp.service
├─abrt-oops.service
...
├─tuned.service
├─basic.target
│ ├─firewalld.service
│ ├─microcode.service
...
├─getty.target
│ ├─getty@tty1.service
│ └─serial-getty@ttyS0.service
└─remote-fs.target

Journal analysis

In addition, Systemd handles the system event log; a syslog daemon is no longer mandatory.
To get the content of the Systemd journal, type:
# journalctl
To get all the events related to the crond process in the journal, type:
# journalctl /sbin/crond
Note: You can replace /sbin/crond by `which crond`.
To get all the events since the last boot, type:
# journalctl -b
To get all the events that appeared today in the journal, type:
# journalctl --since=today
To get all the events with a syslog priority of err, type:
# journalctl -p err
To get the last 10 events and wait for any new ones (like "tail -f /var/log/messages"), type:
# journalctl -f

Control groups

Systemd organizes tasks in control groups. For example, all the processes started by an apache webserver will be in the same control group, CGI scripts included.
To get the full hierarchy of control groups, type:
# systemd-cgls
├─user.slice
│ └─user-1000.slice
│ └─session-1.scope
│ ├─2889 gdm-session-worker [pam/gdm-password]
│ ├─2899 /usr/bin/gnome-keyring-daemon --daemonize --login
│ ├─2901 gnome-session --session gnome-classic
. .
└─iprupdate.service
└─785 /sbin/iprupdate --daemon
To get the list of control groups ordered by CPU, memory, and disk I/O load, type:
# systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
/ 213 3.9 829.7M - -
/system.slice 1 - - - -
/system.slice/ModemManager.service 1 - - - -
To kill all the processes associated with an apache server (CGI scripts included), type:
# systemctl kill httpd
To put resource limits on a service (here 500 CPUShares), type:
# systemctl set-property httpd.service CPUShares=500
Note1: The change is written into the service unit file. Use the --runtime option to avoid this behavior.
Note2: By default, each service owns 1024 CPUShares. Nothing prevents you from giving a smaller or bigger value.
To get the current CPUShares service value, type:
# systemctl show -p CPUShares httpd.service

Service management

Systemd deals with all aspects of service management. The systemctl command replaces the chkconfig and service commands; the old commands are now links to the systemctl command.
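The services that systemctl manages are described by plain INI-style unit files. As a minimal sketch (myapp.service and its ExecStart path are hypothetical names, not from this post), a service unit placed in /etc/systemd/system might look like:

```
[Unit]
Description=Example application service
After=network.target

[Service]
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the file and running systemctl daemon-reload, the unit can be enabled and started like any packaged service.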
To activate the NTP service at boot, type:
# systemctl enable ntpd
Note1: You could specify ntpd.service, but the .service suffix is added by default.
Note2: If you specify a path, the .mount suffix will be added.
Note3: If you mention a device, the .device suffix will be added.
To deactivate it, start it, stop it, restart it, reload it, type:
# systemctl disable ntpd
# systemctl start ntpd
# systemctl stop ntpd
# systemctl restart ntpd
# systemctl reload ntpd
Note: It is also possible to mask and unmask a service. Masking a service prevents it from being started manually or by another service.
To know if the NTP service is activated at boot, type:
# systemctl is-enabled ntpd
enabled
To know if the NTP service is running, type:
# systemctl is-active ntpd
inactive
To get the status of the NTP service, type:
# systemctl status ntpd
ntpd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
If you change a service configuration, you will need to reload it:
# systemctl daemon-reload
To get the list of all the units (services, mount points, devices) with their status and description, type:
# systemctl
To get a more readable list, type:
# systemctl list-unit-files
To get the list of services that failed at boot, type:
# systemctl --failed
To get the status of a process (here httpd) on a remote server (here rhel7.example.com), type:
# systemctl -H root@rhel7.example.com status httpd.service

Run levels

Systemd also deals with run levels. As everything is represented by files in Systemd, target files replace run levels.
To move to single user mode, type:
# systemctl rescue
To move to level 3 (equivalent to the former runlevel 3), type:
# systemctl isolate runlevel3.target
Or:
# systemctl isolate multi-user.target
To move to the graphical level (equivalent to the former runlevel 5), type:
# systemctl isolate graphical.target
To set the default run level to non-graphical mode, type:
# systemctl set-default multi-user.target
To set the default run level to graphical mode, type:
# systemctl set-default graphical.target
To get the current default run level, type:
# systemctl get-default
graphical.target
To stop a server, type:
# systemctl poweroff
Note: You can still use the poweroff command, a link to the systemctl command has been created (the same thing is true for the halt and reboot commands).
To reboot a server, suspend it or put it into hibernation, type:
# systemctl reboot
# systemctl suspend
# systemctl hibernate

Linux standardization

Systemd's authors have decided to help standardize Linux across distributions. With Systemd, some configuration files change location.

Miscellaneous

To get the server's hostnames, type:
# hostnamectl
Static hostname: rhel7.example.com
Icon name: computer-laptop
Chassis: laptop
Machine ID: bcdc71f1943f4d859aa37e54a422938d
Boot ID: f84556924b4e4bbf9c4a82fef4ac26d0
Operating System: Red Hat Enterprise Linux Everything 7.0 (Maipo)
CPE OS Name: cpe:/o:redhat:enterprise_linux:7.0:beta:everything
Kernel: Linux 3.10.0-54.0.1.el7.x86_64
Architecture: x86_64
Note: There are three kinds of hostnames: static, pretty, and transient.
"The static host name is the traditional hostname, which can be chosen by the user and is stored in the /etc/hostname file. The transient hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to 'localhost'. It can be changed by DHCP or mDNS at runtime. The pretty hostname is a free-form UTF-8 host name for presentation to the user." Source: RHEL 7 Networking Guide.
To assign the rhel7 hostname permanently to the server, type:
# hostnamectl set-hostname rhel7
Note: With this syntax all three hostnames (static, pretty, and transient) take the rhel7 value at the same time. However, it is possible to set the three hostnames separately by using the --pretty, --static, and --transient options.
To get the current locale, virtual console keymap and X11 layout, type:
# localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: en_US
X11 Layout: en_US
To assign the en_GB.utf8 value to the locale, type:
# localectl set-locale LANG=en_GB.utf8
To assign the en_GB value to the virtual console keymap, type:
# localectl set-keymap en_GB
To assign the en_GB value to the X11 layout, type:
# localectl set-x11-keymap en_GB
To get the current date and time, type:
# timedatectl
Local time: Fri 2014-01-24 22:34:05 CET
Universal time: Fri 2014-01-24 21:34:05 UTC
RTC time: Fri 2014-01-24 21:34:05
Timezone: Europe/Madrid (CET, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: no
Last DST change: DST ended at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET
Next DST change: DST begins (the clock jumps one hour forward) at
Sun 2014-03-30 01:59:59 CET
Sun 2014-03-30 03:00:00 CEST
To set the current date, type:
# timedatectl set-time YYYY-MM-DD
To set the current time, type:
# timedatectl set-time HH:MM:SS
To get the list of time zones, type:
# timedatectl list-timezones
To change the time zone to America/New_York, type:
# timedatectl set-timezone America/New_York
To get the list of users, type:
# loginctl list-users
UID USER
42 gdm
1000 tom
0 root
To get the list of all current user sessions, type:
# loginctl list-sessions
SESSION UID USER SEAT
1 1000 tom seat0

1 sessions listed.
To get the properties of the user tom, type:
# loginctl show-user tom
UID=1000
GID=1000
Name=tom
Timestamp=Fri 2014-01-24 21:53:43 CET
TimestampMonotonic=160754102
RuntimePath=/run/user/1000
Slice=user-1000.slice
Display=1
State=active
Sessions=1
IdleHint=no
IdleSinceHint=0
IdleSinceHintMonotonic=0

Thursday, January 8, 2015

DMARC and the Email Authentication Process

DMARC is designed to fit into an organization's existing inbound email authentication process. It helps email receivers determine whether a purported message "aligns" with what the receiver knows about the sender; if not, DMARC includes guidance on how to handle the "non-aligned" messages. For example, assuming that a receiver deploys SPF and DKIM, plus its own spam filters, the flow may look something like this:
In the above example, testing for alignment according to DMARC is applied at the same point where ADSP would be applied in the flow. All other tests remain unaffected.

At a high level, DMARC is designed to satisfy the following requirements:
  • Minimize false positives.
  • Provide robust authentication reporting.
  • Assert sender policy at receivers.
  • Reduce successful phishing delivery.
  • Work at Internet scale.
  • Minimize complexity.
It is important to note that DMARC builds upon both the DomainKeys Identified Mail (DKIM) and Sender Policy Framework (SPF) specifications that are currently being developed within the IETF. DMARC is designed to replace ADSP by adding support for:
  • wildcard or subdomain policies,
  • non-existent subdomains,
  • slow rollout (e.g., percent experiments),
  • SPF,
  • quarantining mail.
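A domain publishes its DMARC policy as a DNS TXT record at _dmarc.&lt;domain&gt;. As an illustration (example.com and the report address are placeholders), the record below ties together two of the features listed above: p=quarantine asks receivers to quarantine non-aligned mail, while pct=25 applies that policy to only a quarter of messages, supporting a slow rollout; rua names the mailbox for aggregate authentication reports.

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=quarantine; pct=25; rua=mailto:dmarc-reports@example.com"
```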

Thursday, January 1, 2015

Junos OS 13.3R5

New and Changed Features

This section describes the new features and enhancements to existing features in Junos OS Release 13.3R5 for the EX Series.

Hardware

  • Extended cable manager for EX9214 switches—An extended cable manager is now available for EX9214 switches. The extended cable manager enables you to route cables away from the front of the line cards and Switch Fabric modules and provides easier access to the switch than the standard cable manager. To obtain the extended cable manager, order the MX960 Enhanced Cable Manager, ECM-MX960. (Note that installation of the extended cable manager must be done by a Juniper-authorized technician and that the service cost is in addition to the component cost.) 

Infrastructure

  • Support for IPv6 for TACACS+ authentication (EX9200)—Starting with Release 13.3, Junos OS supports IPv6 along with the existing IPv4 support for user authentication using TACACS+ servers.

Multicast

  • MLD snooping on EX9200 switches—EX9200 switches support Multicast Listener Discovery (MLD) snooping. MLD snooping constrains the flooding of IPv6 multicast traffic on VLANs on a switch. When MLD snooping is enabled on a VLAN, the switch examines MLD messages between hosts and multicast routers and learns which hosts are interested in receiving traffic for a multicast group. Based on what it learns, the switch then forwards multicast traffic only to those interfaces in the VLAN that are connected to interested receivers instead of flooding the traffic to all interfaces. You configure MLD snooping at either the [edit protocols] hierarchy level or the [edit routing-instances routing-instance-name protocols] hierarchy level. 
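As a hedged sketch (the VLAN name v100 is a placeholder, and exact statements may vary by release), enabling MLD snooping on a single VLAN at the [edit protocols] hierarchy could look like:

```
set protocols mld-snooping vlan v100
```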

Network Management and Monitoring

  • sFlow technology on EX9200 switches—EX9200 switches support sFlow technology, a monitoring technology for high-speed switched or routed networks. The sFlow monitoring technology randomly samples network packets and sends the samples to a monitoring station. You can configure sFlow technology on an EX9200 switch to continuously monitor traffic at wire speed on all interfaces simultaneously. The sFlow technology is configured at the [edit protocols sflow] hierarchy level. 
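A hedged configuration sketch at the [edit protocols sflow] hierarchy (the collector address, interface, and sample rate are placeholders; verify the statements against your Junos release):

```
set protocols sflow collector 10.1.1.1
set protocols sflow interfaces ge-0/0/0.0
set protocols sflow sample-rate ingress 1000
```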

OpenFlow

  • Support for OpenFlow v1.0—Starting with Junos OS Release 13.3, EX9200 switches support OpenFlow v1.0. You use the OpenFlow remote controller to control traffic in an existing network by adding, deleting, and modifying flows on switches. You can configure one OpenFlow virtual switch and one active OpenFlow controller at the [edit protocols openflow] hierarchy level on each device running Junos OS that supports OpenFlow.
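As an assumption-laden sketch (the switch name, controller address, and interface are hypothetical, and the exact statements vary by Junos release), a minimal configuration at the [edit protocols openflow] hierarchy might resemble:

```
set protocols openflow switch ofsw1 controller address 10.1.1.2
set protocols openflow switch ofsw1 interfaces ge-0/0/1.0
```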