Category Archives: Job

One year in Lausanne

As I mentioned a year ago, I was accepted at IMD (International Institute for Management Development) to participate in the one-year MBA program. I graduated in December 2014 and am looking back at a very intense year, filled with work, challenges, and friendships.

I moved to Lausanne from Munich at the beginning of January 2014, leaving a “regular” life behind and making a significant financial commitment to take on a challenge that would reveal itself to be the best decision I have ever made.

The program itself was extremely intense, and that is what made it so interesting. The first six months were absolutely horrific in terms of workload, but the classes were so interesting that the load never came at the expense of the learning experience. The fact that we were a tight-knit group of only 90 people really helped me bond with everybody. I definitely enjoyed the wide variety of classes, such as operations, finance, and entrepreneurship. My natural curiosity was constantly satisfied with case studies and learning experiences that introduced me to the wide range of business challenges beyond the IT world I knew from my background.

After the first six months, during which we were prepared to better understand the business world, we had the chance to take a discovery trip to Singapore and Kuala Lumpur, where we met business and government leaders. We learned from people at Singapore’s Economic Development Board, the private equity practice of Bain, and many others. We then participated in a so-called “International Consulting Project”, during which we worked as a team of five students for two months to help a global company define its Big Data strategy. The topic, the team I worked with, and the interaction with the company’s senior executives made the project absolutely thrilling!

Beyond the “regular” learning, one of the strongest points of the IMD MBA is the leadership component. During this year, I had multiple opportunities to learn about myself and, most importantly, to receive feedback about how others perceived me. This helped me increase my self-awareness and will definitely have an impact on how I handle business situations.

Finally, this IMD MBA would be nothing without the friendships I built with my 89 classmates. Through all the work, the sports, and the fun, I can really say that I have 89 friends around the world. The bonds that formed between us are simply incredible, and I’ll keep them with me all my life. They made my MBA experience what it really was, and I’ll be forever grateful to them for that.

During the graduation ceremony, Mark Cornell, a 1999 MBA alumnus, congratulated us on receiving “the finest MBA in the world”. It certainly was the best decision of my life so far, and I look forward to applying what I learned in my future positions… and to seeing my friends again.

Red Hat Certified Engineer

When starting at Red Hat as a solution architect, one of the things one is expected to do is become a Red Hat Certified Engineer (RHCE).
This certification happens in two steps. The first exam is the Red Hat Certified System Administrator (RHCSA) exam, and the second is the actual RHCE exam. You need to pass both to become an RHCE. Although I had already been an RHCSA for a couple of weeks, I failed my first attempt at the RHCE (as do 60% of all participants!), and only last week did I finally earn my RHCE.

This certification consists of a series of hands-on tasks. Unlike the LPI exams, which are multiple-choice questionnaires where luck can play a role, the RHCE requires you to actually know how things work, which makes it so interesting and challenging.

Now that I am an RHCE (which can be verified), I will continue with other courses; the next one will be the Red Hat Enterprise Virtualization class to become an RHCVA (Red Hat Certified Virtualization Administrator). You can see below the entire curriculum that leads to the ultimate title, the Red Hat Certified Architect (RHCA).

Update: I got my RHCVA last week. Again, it was a hands-on exam with standard tasks for administrators (e.g., setting up a complete virtualization environment with a management server, hypervisors, etc.). My next goal is EX436, clustering and storage management!

Move to Red Hat

After four years spent at HP, I accepted an offer to work for Red Hat, thus moving from Stuttgart to Munich.

I spent four amazing years at HP, surrounded by fantastic and dedicated people, and I learned a tremendous amount. I acquired sound technical knowledge of enterprise IT environments and learned a lot from my mentor and my colleagues. I also attended very useful and interesting sales and soft-skills trainings.

This change to Red Hat is quite a challenge. First of all, the company’s business is radically different. I have always found it right to sell Free and Open Source Software, and this is a great opportunity to do my job according to my ethical principles. Moreover, I am moving from the biggest IT company in the world, with 300,000 employees (not counting all the contractors and partners), to a company of roughly 4,000 people. Red Hat is clearly not a start-up any more, but it is far smaller, and things need to be handled in a creative way.

I’ll take on a new role at Red Hat, supporting systems integrators, OEMs (such as HP), and ISVs from a presales perspective at the EMEA level. I look forward to passing my Red Hat Certified Engineer certification and to learning a lot. The fact that KVM is installed and ready to create virtual machines on all PCs inside the company is a great sign of geekiness, and that is already a good start!

HP brings x86 to the Superdome!

Big announcements from HP!
As already rumored internally, the next generation of Superdome 2 servers will be able to use x86 processors, such as the Intel Xeon, and run x86_64 Linux natively!

As stated in this press conference, HP has launched a project called “Odyssey” that will probably be a complete game changer in the x86 industry.

So far, only HP-UX could run on a Superdome, but now customers will be able to run both HP-UX and Linux in the same Superdome server. The lowest-level virtualization layer of the Superdome is the nPar (node partition), an electrically isolated group of Superdome cells (the picture on the right shows the SD2 enclosure populated with cell blades). As nPars are electrically isolated from each other, it will be possible to have some nPars equipped with Xeon CPUs and others with Itanium CPUs, just as the first generation of Superdomes could run PA-RISC and Itanium processors in different nPars of the same server. Mixing CPU types or families within a single nPar will not be possible.
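To make the partitioning rule concrete, here is a minimal, purely illustrative Python sketch of the constraint; the class and attribute names are hypothetical and do not correspond to any HP API:

```python
# Hypothetical model of Superdome 2 node partitions (nPars).
# Illustrates the rule that each nPar must be homogeneous in CPU family,
# while a single chassis may host nPars of different families.
from dataclasses import dataclass

@dataclass(frozen=True)
class CellBlade:
    name: str
    cpu_family: str  # "xeon" or "itanium"

class NPar:
    """An electrically isolated group of cell blades."""
    def __init__(self, name: str):
        self.name = name
        self.cells: list[CellBlade] = []

    def add_cell(self, cell: CellBlade) -> None:
        # Enforce the constraint: no mixing CPU families inside one nPar.
        if self.cells and cell.cpu_family != self.cells[0].cpu_family:
            raise ValueError(
                f"{self.name}: cannot mix {cell.cpu_family} with "
                f"{self.cells[0].cpu_family} in the same nPar"
            )
        self.cells.append(cell)

# One chassis, two nPars with different CPU families: allowed.
linux_npar = NPar("npar-linux")
linux_npar.add_cell(CellBlade("cell-1", "xeon"))
hpux_npar = NPar("npar-hpux")
hpux_npar.add_cell(CellBlade("cell-2", "itanium"))

# Mixing families inside one nPar: rejected.
try:
    linux_npar.add_cell(CellBlade("cell-3", "itanium"))
except ValueError as e:
    print(e)
```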

Of course, the HP-UX cell blades will need Itanium CPUs and the Linux cell blades will need Xeon CPUs (as Linux is not supported on the latest Itanium-based servers). However, this opens the door to bringing Linux to new levels of availability, making use, for example, of the highly available crossbar of the Superdome 2, which routes all IO signals from the IO extenders (which contain the PCIe cards) to the cell blades. This crossbar is able to retry all possible transactions and to reroute signals to make sure that every IO is performed accurately.

HP-UX will not be ported to x86; it will continue to run on the Integrity blades, the rx2800 i2 rack-mount servers, and the Superdome cells with Itanium CPUs. Also, this integration will only be for Intel Xeon processors, not AMD Opterons. The development of HP-UX will continue, as the Itanium roadmap still has two CPUs, codenamed “Poulson” and “Kittson”, to be delivered in the future.

With the current Xeon CPUs, it would be possible to run Linux on 32 sockets, or 320 cores, or 640 threads (the server core count of Intel’s next platform, codenamed Sandy Bridge, is not clear as of now)! That is huge, and great news for all the customers who want to switch smoothly from Unix to Linux, or who need scale-up servers going beyond the 8 sockets provided by most vendors.
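The arithmetic behind these numbers is easy to verify; the per-socket figures below are inferred from the totals above (10-core Xeons with Hyper-Threading), not stated by HP:

```python
# Back-of-the-envelope check of the scale-up numbers above, assuming
# 10-core Xeons with 2 hardware threads per core (Hyper-Threading).
sockets = 32
cores_per_socket = 10   # inferred: 320 cores / 32 sockets
threads_per_core = 2    # Hyper-Threading

cores = sockets * cores_per_socket
threads = cores * threads_per_core
print(f"{sockets} sockets -> {cores} cores -> {threads} threads")
# 32 sockets -> 320 cores -> 640 threads
```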

The Integrity blades, which are very modular (they can be extended from two sockets to four and even eight sockets simply by combining blades and linking them with a Blade Link, pictured below), will also be made available with Xeon processors.

The new servers (Superdome 2 and scalable blades) are planned for 2013.

Finally, HP announced that the Linux high-availability portfolio would be similar to the HP-UX one, which means that ServiceGuard for Linux (which was discontinued two years ago) will be revived.

I think that all these announcements are great news for Linux customers who want to push their Linux infrastructures to mission-critical levels. Although HP-UX still has a clear roadmap, the attractiveness of the Xeon processor with Linux on such a scalable and available platform will be very strong.

This could also be interesting for customers of other commercial Unix flavors, as it brings impressive scale-up capabilities to Linux on x86, the most open platform.

HP CloudSystem Matrix Part 3: manage your resources

This post is the last of a series of three that explain the concepts and technologies used in HP CloudSystem Matrix. The first one was about creating a CloudMap. The second one was about how to deploy a complete IT service automatically. This post is about the management of the resources (servers, storage, networking, software) that can be used and shared as a pool across several services.

The idea behind CloudSystem Matrix is relatively simple: the whole environment should be as easy to manage as possible.

This starts with firmware management. All c-Class enclosures have a defined firmware level according to their Matrix version. This means that the server firmware (HBAs, BIOS, iLO, NICs, etc.), the interconnect modules (HP Virtual Connect Flex-10, Fibre Channel, or FlexFabric), and the Onboard Administrator (the enclosure management processor) all have firmware levels that were tested and qualified to work together in the best way. Since HP implementation services take care of the firmware deployment, administrators don’t have to worry about it.
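As a rough illustration of what a qualified firmware baseline means in practice, here is a hedged Python sketch; all component names and version strings are invented for the example:

```python
# Illustrative sketch of a qualified firmware baseline: every component's
# firmware level is pinned per Matrix version and checked as a set,
# not individually. All version strings below are made up.
MATRIX_BASELINES = {
    "7.0": {"iLO": "1.50", "OA": "3.60", "VC": "3.70", "BIOS": "I31-2012"},
}

def check_enclosure(matrix_version: str, inventory: dict[str, str]) -> list[str]:
    """Return the components that deviate from the qualified baseline."""
    baseline = MATRIX_BASELINES[matrix_version]
    return [comp for comp, ver in baseline.items() if inventory.get(comp) != ver]

inventory = {"iLO": "1.50", "OA": "3.50", "VC": "3.70", "BIOS": "I31-2012"}
print(check_enclosure("7.0", inventory))   # ['OA'] needs updating
```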

What can be managed by CloudSystem Matrix?

The physical servers to be deployed must be HP blades (ProLiant x86_64 or Integrity Itanium servers). The reason is that we leverage the capabilities of Virtual Connect to apply network profiles (MAC addresses and WWNs), and this technology is available on our blade servers.
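For illustration, here is a hypothetical Python sketch of the idea behind a Virtual Connect server profile: the virtual MAC addresses and WWNs travel with the profile, not with the physical blade. None of these names correspond to a real API:

```python
# Purely illustrative model of Virtual Connect-style server profiles:
# virtualized MAC addresses and WWNs follow the profile, so network and
# SAN configuration stay stable when a profile moves between bays.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ServerProfile:
    name: str
    macs: list[str] = field(default_factory=list)   # virtual MAC addresses
    wwns: list[str] = field(default_factory=list)   # virtual FC WWNs

@dataclass
class BladeBay:
    enclosure: str
    bay: int
    profile: Optional[ServerProfile] = None

def assign(profile: ServerProfile, bay: BladeBay) -> None:
    """Applying a profile to a bay carries its MACs/WWNs with it."""
    bay.profile = profile
    print(f"{profile.name} -> {bay.enclosure} bay {bay.bay}: "
          f"MACs={profile.macs} WWNs={profile.wwns}")

web01 = ServerProfile("web01",
                      macs=["00-17-A4-77-00-00"],
                      wwns=["50:06:0B:00:00:C2:62:00"])
assign(web01, BladeBay("enc1", bay=3))
# Later, the same identity can be applied to a different bay:
assign(web01, BladeBay("enc1", bay=7))
```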

However, the virtual machine hosts (VMware, Hyper-V, or HP-UX Integrity Virtual Machines) can be HP blades, HP rack-mount servers (Integrity and ProLiant), and even third-party servers (Dell PowerEdge 2000 series, e300 series, IBM System x servers 6000 series, r800, r900, x300 and x3000 series, and IBM blade GS and LS servers), making CloudSystem Matrix probably one of the most open cloud solutions on the market.

In order for CloudSystem Matrix to work, the management server needs to discover and manage the targeted equipment. The management consoles of the VM hosts, the management processors, and the interconnect modules must be recognized by the so-called CMS (central management server). The CMS will recognize the presence of the Virtual Connect domain group (which manages Virtual Connect across multiple enclosures) and will mark the servers not used as VM hosts as available for physical deployments.

As soon as the CMS has discovered the equipment, the administrator can use the console on the CMS to create pools of resources and assign them to different users.

From this management console, the administrator can manage all the elements provided to both IT architects and business users.

What IT architects need first to create their cloud maps is network connectivity. The VLANs at the IT architects’ disposal are the Virtual Connect vNetworks. The administrator provides them to the IT architects using the “Networking” tab of the management console.
There, the CMS communicates with Virtual Connect Enterprise Manager and retrieves all available networks. Each network must then be configured with the usable IP address range and with whether addresses are allocated via DHCP or by the CMS from its pool of fixed addresses.
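As a rough sketch of the two allocation modes just described, the following hypothetical Python code (not the Matrix API) contrasts a DHCP-managed network with one whose fixed addresses are handed out from a CMS-style pool:

```python
# Illustrative sketch (hypothetical names) of the two allocation modes:
# DHCP-managed networks hand out addresses themselves, while "static"
# networks draw from a CMS-managed pool of fixed addresses.
import ipaddress
from typing import Iterator, Optional, Tuple

class Network:
    def __init__(self, name: str, use_dhcp: bool,
                 static_range: Optional[Tuple[str, str]] = None):
        self.name = name
        self.use_dhcp = use_dhcp
        self._pool: Iterator[ipaddress.IPv4Address] = iter(())
        if not use_dhcp and static_range is not None:
            first, last = (ipaddress.IPv4Address(a) for a in static_range)
            self._pool = (ipaddress.IPv4Address(i)
                          for i in range(int(first), int(last) + 1))

    def allocate(self) -> str:
        """Hand out an address for a newly provisioned server."""
        if self.use_dhcp:
            return "assigned by DHCP at boot"
        try:
            return str(next(self._pool))  # CMS-managed fixed address
        except StopIteration:
            raise RuntimeError(f"{self.name}: fixed address pool exhausted")

prod = Network("prod_vnet", use_dhcp=False,
               static_range=("10.1.0.10", "10.1.0.19"))
print(prod.allocate())  # 10.1.0.10
print(prod.allocate())  # 10.1.0.11
```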

As soon as a server is inserted into the enclosure and is managed by Virtual Connect Enterprise Manager, it appears in the “Unassigned” resource pool. From there, it can be moved to a resource pool that can be dynamically assigned to a business user. That user will only see the resource pools granted to them in their self-service portal.
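A minimal sketch of that workflow, with invented names, might look like this: new servers land in an “Unassigned” pool, administrators move them into named pools, and a business user only sees the pools granted to them:

```python
# Hypothetical sketch of the pool workflow described above.
unassigned: list[str] = ["blade-07", "blade-08"]
pools: dict[str, list[str]] = {"finance-pool": [], "marketing-pool": []}
grants: dict[str, set[str]] = {"alice": {"finance-pool"}}

def move_to_pool(server: str, pool: str) -> None:
    """Administrator action: move a server out of 'Unassigned'."""
    unassigned.remove(server)
    pools[pool].append(server)

def visible_pools(user: str) -> dict[str, list[str]]:
    """What the user's self-service portal would show."""
    return {p: s for p, s in pools.items() if p in grants.get(user, set())}

move_to_pool("blade-07", "finance-pool")
print(visible_pools("alice"))   # {'finance-pool': ['blade-07']}
```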

In CloudSystem Matrix, the Administrators group has all rights, so its members can see all currently running services. Business users can also FlexUp their service by adding disks or servers to a currently running service when, for example, unexpected load hits it.

From this console, the administrators can see all items that can be deployed via CloudSystem Matrix: network items, operating systems (retrieved from RDP jobs, Ignite depots, and golden images, as well as Hyper-V and VMware templates), storage pool entries, and servers. They can monitor all requests as well as currently deployed services. I will write a new post to explain exactly how storage provisioning works.

All in all, this third post explained how administrators can, from a single point of control, manage their resources and make them available to users. CloudSystem is a complete solution that can help IT departments reduce their TCO by up to 56% compared with traditional rack-mount servers. I have already deployed it for customers and must say that many of them are really impressed by the power of the overall solution.