
Saturday, February 20, 2016

ZFS on Linux

Canonical have conducted a legal review, including discussion with the industry's leading software freedom legal counsel, of the licenses that apply to the Linux kernel and to ZFS. And in doing so, we have concluded that we are acting within the rights granted and in compliance with the terms of both of those licenses.

While the CDDL and GPLv2 are both "copyleft" licenses, they have different scope. The CDDL applies to all files under the CDDL, while the GPLv2 applies to derivative works. The CDDL cannot apply to the Linux kernel because zfs.ko is a self-contained file system module -- the kernel itself is quite obviously not a derivative work of this new file system.

Friday, January 8, 2016

6 Proven Ways to Build Your Best Team - Luke Heine


Today our organizations are becoming increasingly complex, with projects spanning multiple teams, managers, and even continents. In our highly interdisciplinary and global world, the teams that end up achieving the most success are the ones whose members can access resources outside the team—those who can acquire assets and get help from experts who might be useful to the primary team. [1][2][3]
Here’s what organizational greats Deborah Ancona and David Caldwell have to say about making great cross-boundary teams.
1. Know Your Type
While bureaucratic workplaces vary from broken water coolers to Segways, a team’s structure—based on its members’ involvement and the group’s position in the broader organization—typically falls into one of four quadrants.
First we have the seven-year-old soccer squad, where no one passes the ball and everyone gets watermelon after the game. These teams have low internal cohesion and low external pressures, and are more accurately characterized as groups providing aggregated output divorced from the broader organization.
(Don’t tell your aspiring soccer stars!)
Next, we have teams with high internal pressures and low external demands or timelines. Think brainstorming groups—or aspiring musicians backed by their billionaire families. Because all the information used for hitting goals in this group is found within the team, the trick for managers here is to find members who possess all the necessary information for the goal at hand. [4a]
The third group is a rare breed. It’s rare to find teams that face only high external demands and low internal demands, and they might just be a construct by Ancona and Caldwell (gasp!).
However, teams of the fourth variety—those that need to juggle both high internal and external demands—are becoming increasingly common as work becomes more interdependent and complex. In these industries, the key is to find members who not only possess the right skillsets, but who can also harness the skills, expertise, and resources of others.
Here’s how you can engineer a team for success.
2. Pick People with Different Essential Strengths
Research shows that teams composed of members with more functional areas perform at higher levels than others [7]. Just like when Tom Cruise formed his team in Mission Impossible, your first step in forming an effective team is to recruit individuals with skills in all the functional areas necessary to accomplish the mission—or, for the rest of us, to build the product.
Functionally diverse teams offer a two-fold benefit. They have more frequent internal exchanges of information, allowing for more innovative decisions than less diverse teams. They are also more likely to communicate more frequently with outside groups, making them better at acquiring external resources. [8]
(The one caveat is that if you overload your team in one functional area, members will start to compete for status. Just like in the animal kingdom, balance is important!)
3. Build a Diverse Network
To maximize the chances of success for your group, include both individuals with strong and weak relational ties. Strong ties exhibit high reciprocity and high time investment (i.e. every serious romantic relationship you’ve ever had); weak relational ties exhibit low reciprocity and time investment (i.e. every Tinder match you’ve ever had) [9]. Each type of tie has its advantages, and you want both for your team.
Networks characterized by strong ties have high information value but low variety. Although there are fewer options to choose from, the value of those relationships can be effectively transferred to the team. Networks characterized by weak ties have low information value but high variety, as weak ties require less energy to maintain and provide broad access to information. However, because those ties are weak, transference of information is more difficult.
Ideally, you want the best of both worlds. Recruiting individuals with both strong and weak ties is the best strategy to cultivate a highly connected team with a large information flow.
4. Offer Temporary Roles
Bring in the interns! But actually ;)
Expanding a core team is expensive in terms of energy, finances, and time. Temporary positions can be a great solution.
Teams can get the added information and resource boost of expansion by bringing in experts for limited times or specific functions, offering membership contingent on the progress of a project, assigning part-time roles, or providing tiered membership options.
5. Send Fewer Emails, But Make Them Count!
A study of 45 product development teams found that team productivity was boosted when individuals participated in activities to promote their team, secure resources, or strengthen bonds with groups linked in the work flow [1].
The study also found that while the frequency of communications members had with other teams had little impact on effectiveness, the quality of their interactions did.
One good voice message left every 3 months could do a better job of staying in touch than daily “hey” text messages. Repeatedly spraying your business cards like Rick Ross sprays dollar bills may not be as effective as you (and only you) think.
Remember, the quality and content of a team’s external communications are significantly more important than the frequency of those interactions [4][5].
6. Focus
Before you engage your valuable resources, it’s important to know what you’re looking for.
When groups engaged in unfocused attempts to acquire key information outside the team, broadly scanning the environment for resources, performance among team members suffered, particularly later in the production life cycle [1].
Once a product idea was developed, less successful teams sought out general information, while more successful teams cut down on broad, generalized communication, increased the number of communications aimed at acquiring specific information, and began coordinating distinct tasks.
Takeaways to Keep You Cool
If you want to form a team that works and plays well with others, consider the following questions:
  • Do the people I’m selecting have the skills and experience to match their assigned goals? Can they actualize their own projects?
  • Do potential members have connections to relevant networks inside and outside organizations? Do they have relevant contacts who can help them out?
  • Does management understand what the team can and can’t do? How much does the team need to rely on others? How much of the work needs to be outsourced?
Keep these in mind as well [6]:
  • What is the political structure of the organization at large?
  • How does the group’s product fit into the broader strategy of the organization?
  • What are the expectations for the group by management?
  • Which key resources do members need from other groups?
  • Where is the outside information that would benefit the group located?
As work gets increasingly complex, Ancona and Caldwell give us a framework to make team selection simpler.

Sunday, December 20, 2015

Why is Hyperconvergence So Hot?


To understand why hyperconvergence has gotten so popular so quickly it’s necessary to keep in mind other trends that are taking place.
There’s pressure on IT departments to be able to provision resources instantly; more and more applications are best-suited for scale-out systems built using commodity components; software-defined storage promises great efficiency gains; data volume growth is unpredictable; and so on.
More and more enterprises look at creation of software products and services as a way to grow revenue and therefore want to adopt agile software development methodologies, which require a high degree of flexibility from IT. In other words, they want to create software and deploy it much more often than they used to, so IT has to be ready to get new applications up and running quickly.

Saturday, December 12, 2015

What is Hyperconverged Infrastructure?


Given that the concept is only about two years old, it’s worth explaining what hyperconverged infrastructure is and how it’s different from its cousin converged infrastructure.
Hyperconvergence is the latest step in the now multiyear pursuit of infrastructure that is flexible and simpler to manage, or as Butler put it, a centralized approach to “tidying up” data center infrastructure. Earlier attempts include integrated systems and fabric infrastructure, and they usually involve SANs, blade servers, and a lot of money upfront.
Converged infrastructure has similar aims but in most cases seeks to collapse compute, storage, and networking into a single SKU and provide a unified management layer.
Hyperconverged infrastructure seeks to do the same, but adds more value by throwing in software-defined storage and doesn’t place much emphasis on networking. The focus is on data control and management.
Hyperconverged systems are also built using low-cost commodity x86 hardware. Some vendors, especially early comers, contract manufacturers like Supermicro, Quanta, or Dell for the hardware bit, adding value with software. More recently, we have seen the emergence of software-only hyperconverged plays, as well as hybrid plays, where a vendor may sell software by itself but will also provide hardware if necessary.
Today hyperconverged infrastructure can come as an appliance, a reference architecture, or as software that’s flexible in terms of the platform it runs on. The last bit is where it’s sometimes hard to tell the difference between a hyperconverged solution and software-defined storage, Butler said.

Monday, November 9, 2015

SSH Tunneling

You can create SSH tunnels using different kinds of forwarding:
a) Local Port Forwarding,
b) Remote Port Forwarding,
c) Dynamic Port Forwarding,
d) X Forwarding

For the full command syntax, check the ssh(1) man page.




Local Port Forwarding

-L [bind_address:]port:host:hostport

             Specifies that the given port on the local (client) host is to be forwarded to the given
             host and port on the remote side.  This works by allocating a socket to listen to port
             on the local side, optionally bound to the specified bind_address.  Whenever a
             connection is made to this port, the connection is forwarded over the secure channel,
             and a connection is made to host port hostport from the remote machine.  The
             bind_address of ``localhost'' indicates that the listening port be bound for local use
             only, while an empty address or `*' indicates that the port should be available from
             all interfaces.

Suppose that I want to access a remote host (192.168.2.1:80) that is behind an SSH server (myremotemachine). On the local machine, set up a port forward from port 8080 to 192.168.2.1:80.
 $ ssh myremotemachine -L 8080:192.168.2.1:80  

Open another console session on the local machine and check that the service is available on the loopback interface only, listening on tcp/8080.
 $ netstat -tunelp | grep 8080  
 tcp    0   0 127.0.0.1:8080     0.0.0.0:*    LISTEN   1000   74471   4269/ssh    

Then, in a local browser, go to http://localhost:8080/ to access the web page.

In another example, suppose you need to telnet to a network device that is accessible only from inside the remote network.
 $ ssh myremotemachine -L 2323:192.168.0.1:23  

On the local machine, confirm that the service runs on the loopback interface only, listening on tcp/2323.
 $ netstat -nlp | grep 2323  
 tcp    0   0 127.0.0.1:2323     0.0.0.0:*        LISTEN   4406/ssh      


Then open another console session and telnet to the loopback interface.
 $ telnet localhost 2323  
 Trying 127.0.0.1...  
 Connected to localhost.  
 Escape character is '^]'.  
 **WELCOME TO PIX501**  

Add -g to allow other hosts on the same local subnet to connect through the tunnel to the remote machine.
 $ ssh myremotemachine -L 2323:192.168.0.1:23 -g  

The service now appears on all interfaces of the local host:
 $ netstat -nlp | grep 2323  
 tcp    0   0 0.0.0.0:2323      0.0.0.0:*        LISTEN   4490/ssh    

Other machines on the same subnet can then use:
 $ telnet <address-of-localhost> 2323  
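As an alternative to -g, the same effect can be achieved for a single forward by giving an explicit bind_address in the -L specification (see the syntax quoted above). A sketch, assuming the same hosts as the telnet example:

```
 $ ssh myremotemachine -L 0.0.0.0:2323:192.168.0.1:23
```

Binding to 0.0.0.0 (or `*') exposes that one listening port on all interfaces, just as -g does, but without affecting any other forwards in the same session.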



Remote Port Forwarding

-R [bind_address:]port:host:hostport

             Specifies that the given port on the remote (server) host is to be forwarded to the given
             host and port on the local side.  This works by allocating a socket to listen to port on the
             remote side, and whenever a connection is made to this port, the connection is forwarded
             over the secure channel, and a connection is made to host port hostport from the local
             machine.

             By default, the listening socket on the server will be bound to the loopback interface only.
             This may be overridden by specifying a bind_address.  An empty bind_address, or the address
             '*', indicates that the remote socket should listen on all interfaces.  Specifying a remote
             bind_address will only succeed if the server's GatewayPorts option is enabled.

The -R option initiates an SSH connection with a reverse port forward, which opens a listening port on the remote server that is forwarded back to a port on a destination host reachable from the local machine.

For example, you need to access your PC at work, but the firewall does not allow connections initiated from outside. So you bypass the company firewall by using an allowed port to create an incoming tunnel from the computer at work to your computer at home, and then use the forwarded port from home.

 office$ ssh -R 2222:localhost:22 homeserver   

Confirm that the service is running on the loopback interface:
 homeserver$ netstat -nlp | grep 2222  
 tcp    0   0 127.0.0.1:2222     0.0.0.0:*        LISTEN   -       

We are initiating an SSH connection with reverse port forwarding (-R), which will open listening port 2222 to be forwarded back to localhost's port 22, and all of this will happen on homeserver. If you now open up a terminal on homeserver and type:
homeserver $ ssh localhost -p 2222  

we will try to connect to localhost (homeserver) on port 2222. Since that port was set up by the remote SSH connection, it will tunnel the request back via that link to the office computer.
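If you want the forwarded port on homeserver to be reachable from other machines too, not just from homeserver itself, the man page excerpt above notes that a remote bind_address only works when the server's GatewayPorts option is enabled. A sketch, assuming you can edit sshd_config on homeserver:

```
 # On homeserver, in /etc/ssh/sshd_config (then reload sshd):
 GatewayPorts clientspecified

 # From the office, bind the reverse forward on all of homeserver's interfaces:
 office$ ssh -R '*:2222:localhost:22' homeserver
```

With GatewayPorts set to clientspecified, the client-side bind_address in the -R option is honored; the default of no (loopback only) is why the earlier netstat output showed 127.0.0.1:2222.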


Dynamic Port Forwarding (SSH SOCKS proxy)

If you are using a network that is not secure, create an SSH tunnel to a trusted SSH server and use it as a proxy.

 $ ssh -D 8080 remotemachine  

Then set up the browser's SOCKS proxy at localhost:8080.
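Command-line tools can use the same SOCKS proxy. For example, with recent versions of curl you can send a request through the tunnel, letting the proxy resolve the hostname so DNS queries also travel over the tunnel (example.com here is just a placeholder destination):

```
 $ curl --socks5-hostname localhost:8080 http://example.com/
```

--socks5-hostname passes the hostname to the proxy for resolution, whereas plain --socks5 resolves it locally and leaks DNS lookups outside the tunnel.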



X Forwarding

To run a GUI application installed on a remote machine but display it locally:
$ ssh -X -p 10022 192.168.2.10  

Then run the application (here, putty), which is installed only on the remote machine:
$ putty  

Thursday, October 15, 2015

Junos: Corrupt pam.conf file allows unauthenticated root access

Product Affected:

This issue can affect any product or platform running Junos OS.
 
Problem:

When the pam.conf file is corrupted in certain ways, it may allow connection to the device as the root user with no password. This "fail-open" behavior allows an attacker who can specifically modify the file to gain full access to the device.

Note that inadvertent manipulation of the pam.conf by an authorized administrator can also lead to unauthenticated root access to the device. Extreme care should be taken by administrators to avoid modifying pam.conf directly.

While the standalone vulnerability may not be directly exploitable, this issue increases the severity of other attacks that may be chained together to launch a multi-stage advanced attack against the device.

This issue is assigned CVE-2015-7751.

Solution:
The following software releases have been updated to resolve this specific issue: Junos OS 12.1X44-D50, 12.1X46-D35, 12.1X47-D25, 12.3R9, 12.3X48-D15, 13.2R7, 13.2X51-D35, 13.3R6, 14.1R5, 14.1X50-D105, 14.1X51-D70, 14.1X53-D25, 14.1X55-D20, 14.2R1, 15.1F2, 15.1R1, 15.1X49-D10, and all subsequent releases.

This issue was found during internal product security testing.

Juniper SIRT is not aware of any malicious exploitation of this vulnerability.

No other Juniper Networks products or platforms are affected by this issue.

This issue is being tracked as PR 965378 and is visible on the Customer Support website.

KB16765 - "In which releases are vulnerabilities fixed?" describes which release vulnerabilities are fixed as per our End of Engineering and End of Life support policies.


Workaround:
Use access lists or firewall filters to limit CLI access to the router only from trusted hosts.

In addition to the recommendations listed above, it is good security practice to limit the exploitable attack surface of critical infrastructure networking equipment. Use access lists or firewall filters to limit access to the router via SSH or telnet only from trusted, administrative networks or hosts.
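As an illustrative sketch of such a filter (the filter name and the trusted subnet 192.0.2.0/24 are assumptions; adapt them to your network), applied to the loopback interface so that it protects traffic destined for the routing engine:

```
firewall {
    family inet {
        filter protect-mgmt {
            /* allow SSH and telnet only from the trusted admin subnet */
            term allow-trusted {
                from {
                    source-address {
                        192.0.2.0/24;
                    }
                    protocol tcp;
                    destination-port [ ssh telnet ];
                }
                then accept;
            }
            /* drop management connections from everywhere else */
            term block-mgmt {
                from {
                    protocol tcp;
                    destination-port [ ssh telnet ];
                }
                then discard;
            }
            /* let all other traffic through */
            term accept-rest {
                then accept;
            }
        }
    }
}
interfaces {
    lo0 {
        unit 0 {
            family inet {
                filter {
                    input protect-mgmt;
                }
            }
        }
    }
}
```

Remember the final accept term: without it, the filter's implicit deny would also drop routing protocol and other legitimate control-plane traffic.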

Friday, October 2, 2015

VMware Tools unleashed

Prior to this change, the ISO file that was downloaded with a new build would be placed on the local datastore of an ESXi host. When applying the VMware Tools to a virtual machine, the ISO would be mounted from the local datastore and unmounted after the installation / update was done. Now that the VMware Tools aren’t included with the ESXi builds it is no longer possible to download them through Update Manager. You can now download them directly from the VMware site, but that still means you need to get the ISO ready for use.
Copying the ISO to every ESXi would be a time consuming task and luckily there is an alternative. It is possible to change the location where an ESXi host looks for the VMware Tools. On each host there is a location called the “ProductLocker”, which is a symbolic link that is created when booting a host. This link by default points to a directory on the local datastore. Fortunately for us the location that the symbolic link points to can be changed so that a shared datastore can be used.
First you would need to create a directory on one of your shared datastores like I have done in the screenshot below.
[Screenshot: VMware Tools folder on a shared datastore]
It doesn’t matter what name you give the directory as long as the sub folder has the name “vmtools”. In this folder you will place the new VMware Tools ISO file that you downloaded from the VMware site.
Next you will need to adjust the “UserVars.ProductLockerLocation” setting on each host (you could use host profiles to reduce the manual repetition). You can find this setting within the Advanced settings for the host using the vSphere (web) client. Change this setting so that it contains the path to the directory you created in the previous step. Note that you should not enter the sub directory that is holding the actual ISO file; the host will automatically search the sub directory within the parent directory you entered.
[Screenshot: the UserVars.ProductLockerLocation advanced setting]
Now that the configuration is changed we need to apply it. This can be done either by rebooting the host or by manually recreating the symbolic link. For the manual way you need to run commands from the ESXi shell. For ESXi 5.x or later you can use this command:
jumpstart --plugin=libconfigure-locker.so
or
rm /productLocker
ln -s /vmfs/volumes/shared_datastore_name/vmware-tools /productLocker
After this you should be able to change to the productLocker directory and find the ISO file you placed on the datastore.
[Screenshot: the new productLocker contents]
Now you can install or update the VMware Tools just like you would otherwise.

Open-VM-Tools
Furthermore, VMware Tools for Linux (Open-VM-Tools, or OVT for short) has been handed over to the Linux community, enabling the adoption of the tools in the Linux kernel mainline. This means that customers no longer have to manage the lifecycle of the VMware Tools for certain Linux distributions. Updating OVT is done through the Linux distribution’s update mechanism, and you can no longer update them using vCenter.
For now the following distributions include OVT:
  • Fedora 19 and later releases
  • Debian 7.x and later releases
  • openSUSE 11.x and later releases
  • Recent Ubuntu releases (12.04 LTS, 13.10 and later)
  • Red Hat Enterprise Linux 7.0 and later releases
  • SUSE Linux Enterprise 12 and later releases
  • CentOS 7 and later releases
  • Oracle Linux 7 and later