Thursday 7 July 2016

n-1 isn't necessarily the wisest choice

BMW 328 - original and hommage versions

Ask any vendor and you will find that one of their greatest frustrations is customers insisting on implementing only the "n-1" release of a particular product.

At almost every meeting, vendors are asked about the availability of new features, new capabilities, or new supported configurations that will match what the customer is trying to achieve; yet when these are finally made available, after much development and testing, customers wait and stick with supposedly safer older versions.

The risk-management logic, of course, is that the latest release is untried and may contain flaws and bugs. Unfortunately this misses the point that fixes to older flaws are delivered by deploying the new release. It also brings up the laughable scenario of customers asking for new features to be "back-ported" to the older, "safe" release. Pro-tip: if you back-port all of your new features to the older release, you end up with the new release anyway!

There are also times when you just can't take advantage of the latest technology unless you're up to date: for example, getting the most benefit out of new CPUs requires the operating system software to be in sync. As SUSE VP of Engineering Olaf Kirch points out in this article from 2012, when new features are introduced, you can either back-port them to old code (possibly introducing errors) or take the new code and harden it.

Which brings me to the real point of this article: when we're talking about open source, the rate of change can be extremely rapid. This means that by the time you get a hardened, tested, enterprise version of software out of the door, it is already at least version "n-1": the bleeding-edge stuff is happening at the forefront of the community project, where many eyes and many egos are working on improvements to correctness and performance as well as features. So there's really no reason to require an n-1 release of, say, enterprise Linux: all you're doing in that case is hobbling your hardware, paying more for extended support, and missing out on access to improvements.

So when SUSE introduces a new kernel revision mid-way through a major release, as it is doing with SUSE Linux Enterprise 12 Service Pack 2 (SLE12SP2), don't fret about the risks: the bleeding edge has already moved forward, and what you're getting is just the best, hardened, QA'd, engineered version of Linux with the most functionality.
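
As an aside, if you want to see which kernel and which release you're actually running before worrying about "n" versus "n-1", a quick sketch using only the Python standard library will tell you (nothing here is SUSE-specific):

    # Print the running kernel version and the distribution name, using
    # only standard interfaces available on any modern Linux system.
    import platform

    print("Kernel:", platform.release())

    # /etc/os-release is the standard distribution identification file.
    with open("/etc/os-release") as f:
        for line in f:
            if line.startswith("PRETTY_NAME="):
                print("Distro:", line.split("=", 1)[1].strip().strip('"'))
                break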

Tuesday 7 June 2016

More on Scalability, Again


This week Intel announced its new Xeon E7 v4 processor, which takes x86_64 processor scale to another level: a single CPU socket now gives you 24 cores and access to 3TiB of RAM. That means a medium-sized server of 8 sockets can now give you access to 192 cores and 24TiB RAM. The upshot of this is that if you actually want to access all of that RAM with a supported operating system, SUSE Enterprise Linux is your only choice.

The new architecture also raises the limit for CPU sockets in a box to 64 – which means that you could max out this system in a standard kind of configuration at 1,536 cores. Again, SUSE Enterprise Linux is the only OS to support this degree of scalability for this kind of processor.
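
As a rough sketch of the arithmetic behind those figures, in Python (the per-socket numbers are the ones quoted above; the 64-socket RAM figure is simple extrapolation from the per-socket number, since the announcement only quotes the 1,536-core figure for that configuration):

    # Capacity arithmetic for the Xeon E7 v4 figures quoted above:
    # 24 cores and 3 TiB of addressable RAM per socket.
    CORES_PER_SOCKET = 24
    RAM_TIB_PER_SOCKET = 3

    for sockets in (8, 64):
        cores = sockets * CORES_PER_SOCKET
        ram_tib = sockets * RAM_TIB_PER_SOCKET
        print(f"{sockets:>2} sockets: {cores:>4} cores, {ram_tib:>3} TiB RAM")

    # Output:
    #  8 sockets:  192 cores,  24 TiB RAM
    # 64 sockets: 1536 cores, 192 TiB RAM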

I wrote about this just a few weeks ago in the context of the HPE Integrity Superdome X, which still only has published benchmarks running SUSE Linux Enterprise Server. It's interesting to see that all of the numbers have doubled (yet again) in such a short time.

Of course, SGI has been doing this degree of scaling with NUMA systems for a while, which is why SUSE Enterprise Linux is known to scale to 8,192 CPU cores and 64TiB RAM (they couldn't fit in any more memory): it's a little frightening to consider what they might end up doing with these new CPUs – at the very least, 128TiB RAM will be on the horizon.

So when the processor hardware manufacturers can still drop a doubling of capacity on us, it's worth taking note of whether your software can deal with it...

Wednesday 20 April 2016

Deploying OpenStack is NOT Difficult


There are too many articles around at the moment claiming that OpenStack is difficult to set up, and too many vendors claiming the only answer is consulting.

Yes, consulting can be important to get the business side of your private cloud worked out, but setting up OpenStack doesn't need to be difficult when you have a distribution whose main purpose is ease of deployment in enterprise environments.

Here are some factoids about SUSE OpenStack Cloud:
  • It was the first enterprise OpenStack distro
  • SUSE introduced the concept of "distro" for OpenStack
  • It is actually a distro that can be set up by normal people - not just an invitation to a consulting engagement by a vendor
  • It is the only distro to ever win Intel's "Rule the Stack" competition for ease of installation and management (three times in a row, and sometimes when no-one else was able to complete the task)
  • It is the only distro that supports KVM and Xen and VMware and Hyper-V and Docker and z/VM – yes, you can even send workloads to mainframes!
  • The deployment tool can deploy SUSE Linux Enterprise and Hyper-V/Windows Server as compute nodes.
  • Installation of SUSE Enterprise Storage (powered by Ceph) is integrated into the deployment tool
  • The deployment tool will communicate with your existing infrastructure if you want: plugins make it easy to include your favourite storage system or converged networking
  • It is the distro used by some of the best-known OpenStack users
  • HA deployment is included with a couple of extra clicks
  • SUSE's fleet management tool, SUSE Manager, can be easily integrated into the cloud infrastructure so that new compute/storage/etc nodes automatically get patch/update/security/configuration management
  • SUSE's template creation tool, SUSE Studio, can be used to set up the VMs to offer to end users and add them directly into the OpenStack image repository.
Given that ease of installation has been the hallmark of SUSE OpenStack Cloud since day one, it's a shame that so many people think it's difficult when it doesn't have to be – it should be a piece of cake.
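
For a feel of what "up and running" looks like once the cloud is deployed, here's a minimal sketch using the standard openstacksdk Python bindings (the cloud name "mycloud" is a hypothetical clouds.yaml entry, not anything specific to SUSE OpenStack Cloud):

    # Minimal sanity check against a freshly deployed OpenStack cloud,
    # using the standard openstacksdk library.
    import openstack

    # "mycloud" is a placeholder for a clouds.yaml entry holding the
    # deployment's auth URL and credentials.
    conn = openstack.connect(cloud="mycloud")

    # If flavors and images come back, the control plane and its APIs
    # are alive and serving end users.
    for flavor in conn.compute.flavors():
        print("flavor:", flavor.name)

    for image in conn.image.images():
        print("image:", image.name)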

Some links:
FIS-ASP deploys in one day:  https://www.youtube.com/watch?v=aFdlUIdYwQU
BMW OpenStack + Ceph case study: https://www.susecon.com/doc/2015/sessions/CAS19964.pdf
SAP using SUSE including OpenStack: https://www.youtube.com/watch?v=mLkPzFB1m_w

More (including downloads) at: https://suse.com/cloud


Saturday 2 April 2016

Why SUSE Linux Is the Only Sensible Choice for HPE Superdome-X


Unlike other material herein, this is an unashamedly partisan post. It's here mostly to collect links together, for reference.

In December 2014, HP (now HPE) announced the successor to its long line of proprietary enterprise-class computer systems: the Integrity Superdome X.

What was particularly interesting about this announcement was that the focus was not just on the Intel Xeon processor, or the scalability of the machine to 16 CPUs and 24 TB of RAM, but on what Jeff Kyle, director of product management for mission-critical systems at HP, said was most important: "It's all about the software," Kyle told eWEEK.

Superdome X was the first flagship system from HP not to ship with HP-UX. Instead its launch OS was Linux. SUSE Linux.

Why? Well, SUSE Linux was the launch OS for Superdome X. And yes, that article mentions our old friends from Raleigh, but the fact remains that at launch time the only benchmarks provided by HP were those run with SUSE Linux.

So how about now, 18 months later? Well, according to the SPEC website, as of today ALL of the SPECjbb and SPECcpu benchmarks published by HPE for Superdome X use SUSE Linux Enterprise Server.



So there's a clear message here – when HPE is trying to get the best possible performance from their machines, they turn to SUSE Linux.

Why is this? Well, in terms of raw scalability, SUSE far exceeds Red Hat Linux:
                 Max supported CPUs    Max supported RAM
    SLES 12:     8,192                 1,024 TB
    RHEL 7.2:    288                   12 TB


At the very least, this means that to handle a fully-loaded Superdome X with 24TB of RAM you must use SUSE Linux or risk falling into the "experimental" or, at best, "unsupported" category. And risk isn't really what you want when running a mission-critical system.

In particular, SUSE Enterprise Linux has actually had its scalability tested on other systems to 8,192 CPU cores and 64TB RAM (no-one could supply a machine with more RAM; the CPU count was qualified after that article was written). SUSE's numbers are therefore not theoretical when it comes to the demands of Superdome X: there is no risk that scaling will not follow the expected path.
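
As an aside, checking what the running kernel actually sees on one of these large boxes needs nothing more than standard interfaces; here's a quick Python sketch (nothing SUSE- or Superdome-specific about it):

    # Report the logical CPU count and total RAM visible to the running
    # Linux kernel, using only standard interfaces.
    import os

    logical_cpus = os.cpu_count()

    # /proc/meminfo reports MemTotal in kB.
    with open("/proc/meminfo") as meminfo:
        for line in meminfo:
            if line.startswith("MemTotal:"):
                mem_total_kb = int(line.split()[1])
                break

    print(f"Logical CPUs: {logical_cpus}")
    print(f"Total RAM:    {mem_total_kb / 1024**3:.2f} TiB")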

This difference is likely to continue as well: both HPE and Intel spend a lot of effort developing code for the Linux kernel to improve scalability and more, and this is sent “upstream” to the latest kernel versions. This means that you would normally expect the most recent kernel version to have the best performance, thanks to HPE and Intel contributions. Typically SUSE leads its competitor in implementing the latest kernel features whilst maintaining application and kernel binary compatibility between major kernel releases (e.g. from SLES 11 SP1 with the 2.6 kernel to SLES 11 SP2 with the 3.0 kernel). Given this background, it should not be surprising for SUSE to continue to make similar advances in its current SLES 12 major release. Historically Red Hat has not moved between major kernel versions in this way: RHEL 5 and 6 were both at kernel level 2.6, with 3.10 arriving only in 2014 with RHEL 7.

In other words: as HPE continues to contribute performance and other features to Linux for all of its server platforms, these are most likely to appear (with full global enterprise-class support) on SUSE Enterprise Linux long before they are available on RHEL.



So it's no accident that HP themselves chose SUSE Enterprise Linux when they migrated their internal systems from HP-UX.