Tag Archives: HP

Install HP Virtual rooms on Fedora 16

As a partner of HP, I use their collaboration platform HP Virtual Rooms, which is also available on Red Hat Linux. As I use Fedora, I needed to install some more packages. Here is what I did:

# wget https://www.rooms.hp.com/vRoom_Cab/hpvirtualrooms-install64-F4-8.0.0.4282.tar.gz

# tar -xzvf hpvirtualrooms-install64-F4-8.0.0.4282.tar.gz

# cd hpvirtualrooms-install

# ./install-hpvirtualrooms
virtualrooms-install : /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory

Then I learned a cool feature of yum: you just need to enter the file that you need and yum will download and install the package that provides that file for you. For example:

# yum -y install /lib/ld-linux.so.2
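If you only want to know which package owns a file without installing anything, yum can also just do the lookup (the same library path as above is used here purely as an example):

# yum provides /lib/ld-linux.so.2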

So, all in all, you need to install the following packages:

# yum -y install glibc-2.14.90-24 libXi.so.6 libSM.so.6 libXrender.so.1 libXrandr.so.2 libz.so.1 libglib-2.0.so.0 libXfixes.so.3 libasound.so.2 libfontconfig.so.1 libpng12.so.0 libGLU.so.1

and then test it.
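If the test still complains about a missing shared object, ldd is a quick way to spot any remaining 32-bit library. This is just a rough check, assuming the relevant binary is the virtualrooms-install executable mentioned in the error above (adjust the path to wherever it sits in the extracted directory):

# ldd ./virtualrooms-install | grep "not found"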

HP CloudSystem Matrix Part 3: manage your resources

This post is the last in a series of three explaining the concepts and technologies used in HP CloudSystem Matrix. The first one was about creating a CloudMap. The second one was about how to deploy a complete IT service automatically. This post is about the management of the resources (servers, storage, networking, software) that can be used and shared as a pool across several services.

The idea behind CloudSystem Matrix is relatively simple: the whole environment should be as easy to manage as possible.

This starts with firmware management. All c-Class enclosures have a defined firmware level according to their Matrix version. This means that the server firmware (HBAs, BIOS, iLO, NICs, etc.), the interconnect modules (HP Virtual Connect Flex-10, Fibre Channel or FlexFabric) and the Onboard Administrator (the enclosure management processor) run firmware levels that were tested and qualified to work together in the best way. Given that HP implementation services take care of the firmware deployment, administrators don’t have to worry about it.

What can be managed by CloudSystem Matrix?

The physical servers to be deployed must be HP blades (ProLiant x86_64 or Integrity Itanium servers). The reason is that we leverage the capabilities of Virtual Connect to apply network profiles (MAC addresses and WWNs), and this technology is only available on our blade servers.

However, the virtual machine hosts (VMware, Hyper-V, or HP-UX Integrity Virtual Machines) can be HP blades, HP rack-mount servers (Integrity and ProLiant) and even third-party servers (Dell PowerEdge 2000 series, e300 series, IBM System x servers 6000 series, r800, r900, x300 and x3000 series and IBM blade GS and LS servers), making CloudSystem Matrix probably one of the most open cloud solutions on the market.

In order for CloudSystem Matrix to work, the management server needs to discover and manage the targeted equipment. The management console of the VM hosts, the management processors and the interconnect modules must be recognized by the so-called CMS (central management server). It will recognize the presence of the Virtual Connect domain group (which manages Virtual Connect for multiple enclosures) and will mark the servers not used as VM hosts as available for physical deployments.

As soon as the CMS has discovered the equipment, the administrator can use the console on the CMS to create and assign pools of resources to different users.

From this management console, the administrator can manage all the elements provided to both IT architects and business users.

What IT architects need first to create their cloud maps is network connectivity. The VLANs at the IT architects’ disposal are the Virtual Connect vNetworks. The administrator provides them to the IT architects using the “Networking” tab on the management console.
There, the CMS communicates with Virtual Connect Enterprise Manager and retrieves all available networks. Each network must then be configured with the usable IP address range and whether the IP addresses are allocated via DHCP or by the CMS from its pool of fixed addresses.

As soon as a server is put in the enclosure and is managed by Virtual Connect Enterprise Manager, it appears in the “Unassigned” pool of resources. From there, it can be moved to a pool of resources that can be dynamically assigned to a business user. This user will only see the resource pools assigned to him in his self-service portal.

In CloudSystem Matrix, the Administrators group has all rights, hence its members can see all services currently running. A business user can also FlexUp his service by adding either disks or servers to a currently running service, for example when an unexpected load occurs on the service.

From this console, the administrators can see all items that can be deployed via CloudSystem Matrix: network items, operating systems (retrieved from RDP jobs, Ignite depots and golden images, as well as Hyper-V and VMware templates), storage pool entries, and servers. They can control all requests as well as currently deployed services. I will write a new post to explain exactly how the storage provisioning works.

All in all, this third post explained how administrators can, from a single point of control, manage their resources and make them available to users. The CloudSystem solution is a complete solution that can help IT departments reduce their TCO by up to 56% compared with traditional rack-mount servers. I have already deployed it for customers and must say that many of them are really impressed by the power of the overall solution.

New HP 3PAR storage arrays

The new high-end HP 3PAR P10000 storage arrays were launched a couple of days ago. Here is a nice video that explains the biggest advantages of the product. To me, the most interesting feature is storage peer motion. It creates a kind of cluster / load-balancing approach for storage devices. It can move data across arrays without application disruption and resolves one of the biggest thin provisioning problems: when the capacity overcommitment cannot be increased because there is no physical space left. This 3PAR array solves that issue and it really looks cool!

How to provide SMI-S connectivity via Command View EVA without an HBA?

What is SMI-S? SMI-S is a standard management protocol based on WBEM that helps manage heterogeneous storage arrays in the same way. Say you want to create a 50 GB disk on an HP EVA and an EMC CLARiiON: you send the same SMI-S request to both, and each array translates the command to create the disk.
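Just to give an idea of what such a request looks like on the wire, here is a sketch of listing the existing volumes on an array with the generic wbemcli WBEM client. The details are assumptions on my side: root/eva is the namespace I expect the HP SMI-S EVA provider to expose, 5989 is the standard secure CIM-XML port, and CIM_StorageVolume is the standard SMI-S class for virtual disks; replace user, password and cv-server with your own values:

# wbemcli -noverify ei 'https://user:password@cv-server:5989/root/eva:CIM_StorageVolume'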

As Command View provides SMI-S connectivity out of the box, it should be easy, right? Wrong! (At least in my case.)

Usually, you would have a Fibre Channel host bus adapter connected to the Command View server. However, my CV server does not have one. Also, the EVA ABM (Array-Based Management), an embedded tool that helps manage EVA arrays (in my case a 4400), does not provide any SMI-S connectivity. The Command View documentation nonetheless states that “If you have layered applications requiring HP SMI-S EVA, you can install the HP SMI-S EVA component on any server that is either connected to the EVA/SAN or has access to HP Command View EVA via Ethernet.”

The hard part was to figure out how to make this work.

The SMI-S EVA provider comes with a utility called discoverer.bat, located in C:\Program Files (x86)\Hewlett-Packard\SMI-S\EVAProvider (yes, Command View only runs on Windows…)

Execute it.

Press 1 to add the IP address and the credentials of the ABM.

Verify that the ABM was successfully discovered.

Everything ran fine, and now I can discover my array through Storage Provisioning Manager (SPM), a technology designed to present and deploy storage LUNs automatically as part of BladeSystem Matrix. I will write an entry about it later on!