
Tuesday, 31 October 2017

How Open-Source Can Be the True Catalyst for Digital Change

[This article was first published in an abridged form in CIO Outlook magazine]

Twenty years ago, when the web was just starting to become available to consumers, a large proportion of non-tech websites were essentially advertising space: ephemeral virtual billboards promoting some aspect of an organisation’s product or service, driven mostly by marketing departments with no solid connection to day-to-day business. Having been involved in developing websites in those days, and having often argued in vain with my customers that their web presence needed to be more than just a one-off disconnected experiment, it’s still remarkable to me how quickly the online world has become intrinsic to our lives. Today it is the disconnected organisation that is the anomaly: the first thing anyone does when starting a new business is to check whether the proposed company name is available as a domain name, and change it if it is not. Customers expect an online-first and mobile-first experience: we all know people who will skip past providers that don’t have an easy-to-navigate site for conducting business online, rather than having to telephone or, even worse, do something in person.

So, it’s a truism that customers expect to be able to access business services on their own terms, and to do as much as possible online without having to resort to (potentially) slower methods of interaction. Businesses have to provide this ease of access or risk being passed by – but developing and maintaining the degree of interaction required is much more complex than the advertising websites of the 1990s. To be truly effective, a company's digital entry-point must reach directly into the workings of the organisation, handling transactions in real time and satisfying customers’ demands. This is the crux of digital transformation: the business processes that used to be kept internal to an organisation have to be codified and expressed in a way that makes it easy for customers to interact with the company… or else they will simply go to a competitor who can offer that experience.

This approach can be relatively easy when starting from scratch, but for established businesses the need to unravel years (or decades) of business logic and interconnected systems can become a nightmare, especially when time-to-market is important to satisfy customers’ ever-growing demands. Even for new companies, the need to constantly refresh and update quickly brings its own challenges: the market is never static, and if a competitor finds a new and more attractive way of providing the service, then a response needs to be found quickly.

Many organisations are turning to agile methodology and the associated concepts of DevOps and continuous integration to attempt to address the need for speedy time-to-market, yet the challenge for IT organisations is how to provide the systems needed to support the new approaches when up to 70% of their resources are spent just keeping the lights on in their day-to-day operations.

This is where open-source technology can help. In recent years the vast majority of new systems innovation has arisen from the open-source arena; almost all cutting-edge technologies have an open-source aspect, with industry giants such as Google, Intel, IBM, and even Microsoft embracing open source as a way to accelerate development and spread the adoption of the technologies underlying the modern web beyond the traditional proprietary boundaries. While open source software itself is not at all new, it has in recent years become the mainstream way of developing innovative ideas – based in large part on the foundation of that quintessential open source project: GNU/Linux. The freely-available nature of the Linux OS provides a platform with a level playing field for involvement, and the free GNU tools (compilers, libraries, and development environments) also lower the entry barrier for new developers who can contribute towards community projects. Some of these projects are tiny, perhaps with only a single contributor. Others, such as the OpenStack cloud infrastructure project, include thousands of developers, project managers, and technical writers from big corporations, research organisations, and the ranks of individual enthusiasts.

The open nature of development means that more eyes and more ideas are brought to bear on a project: performance and security issues have a higher chance of being observed and resolved, with individual developers eager to build and maintain their personal credibility, and little to no opportunity for sweeping problems “under the carpet” to meet a specific deadline, as is the risk with closed, proprietary code. Open development also means that users aren’t locked in to a particular vendor’s technology, or subjected to the risk of unconscionable price hikes, or the prospect of a product being “killed” due to an acquisition or other business imperative.

Of course, the challenge for IT managers when applying open source technology is how to support it – and especially how to support it whilst maintaining their existing systems. The widely-available nature of open source may make it very cost-effective to acquire, but those savings can be quickly eroded if an organisation has to employ its own experts to build, manage, maintain, and integrate those technologies – a risky undertaking if those key employees leave the company for any reason (or even want to take a vacation). That’s where open-source software companies such as SUSE come in: over 25 years ago the Germany-based software company produced the first ever enterprise-ready version of Linux, and it has been building and integrating “infrastructure software” for the enterprise ever since. This (profitable) longevity means that even though the technologies it produces may be cutting-edge, the engineering and support are solid and reliable. A lot of this success is based on the hugely experienced development team, which makes up over 50% of the company’s employees. The egalitarian nature of open source development communities means that an individual developer’s personal credibility is extremely important in influencing the direction of an upstream project, and since SUSE boasts some of the most experienced developers in the industry, their influence can be seen across a wide range of projects – with the added result that the real-world scenarios they observe in customers’ workloads are considered when changes or improvements are made.

The message, then, for CIOs wanting help with digital transformation, is to look to the open source world for the responsive, adaptive, and innovative technologies that will make it possible to deliver on consumer expectations at an affordable price point, and to do so with the help of an experienced open-source partner who can provide the enterprise-grade support necessary to keep the business stable and reliable.

Thursday, 7 July 2016

n-1 isn't necessarily the wisest choice

BMW 328 – original and Hommage versions

Ask any vendor & you will find that one of their greatest frustrations is when customers insist on implementing only the "n-1" release of a particular product. 

At almost every meeting, vendors are asked about the availability of new features, or new capabilities, or new supported configurations that will match what the customer is trying to achieve, and yet when these are finally made available after much development and testing, customers will wait, and stick with supposedly safer older versions.

The risk-management logic, of course, is that the latest release is untried, and may contain flaws and bugs. Unfortunately this misses the point that fixes to older flaws are made possible by deploying a new release. It also brings up the laughable scenario of customers asking for new features to be "back-ported" to the older, "safe" release. Pro-tip: if you back-port all of your new features to the older release, then you end up with the new release anyway!

There are also times when you just can't take advantage of the latest technology unless you're up-to-date: for example, getting the most benefit out of new CPUs requires the operating system software to be in sync. As SUSE VP of engineering Olaf Kirch points out in this article from 2012, when new features are introduced, you can either back-port to old code (possibly introducing errors) or take the new code and harden it.

Which brings me to the real point of this article – when we're talking about open source, the rate of change can be extremely rapid. This means that by the time you get a hardened, tested, enterprise version of software out of the door, it is already at least version "n-1": the bleeding-edge stuff is happening at the forefront of the community project, where many eyes and many egos are working on improvements to correctness and performance as well as features. So there's really no reason to require an n-1 release of, say, enterprise Linux... all you're doing in that case is hobbling your hardware, paying more for extended support, and missing out on access to improvements.

So when SUSE introduces a new kernel revision mid-way through a major release, as it is doing with SUSE Linux Enterprise 12 Service Pack 2 (SLE12SP2), don't fret about the risks: the bleeding edge has already moved forward, and what you're getting is just the best, hardened, QA'd, engineered version of Linux with the most functionality.

Tuesday, 7 June 2016

More on Scalability, Again


This week Intel announced its new Xeon E7 v4 processor, which takes x86_64 processor scale to another level: a single CPU socket now gives you 24 cores and access to 3TiB of RAM. That means a medium-sized server of 8 sockets can now give you access to 192 cores and 24TiB RAM. The upshot of this is that if you actually want to access all of that RAM with a supported operating system, SUSE Enterprise Linux is your only choice.

The new architecture also raises the limit for CPU sockets in a box to 64 – which means that you could max out this system in a standard kind of configuration at 1,536 cores. Again, SUSE Enterprise Linux is the only OS to support this degree of scalability for this kind of processor.
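
The arithmetic behind those headline numbers is easy to sanity-check. Here's a quick sketch in Python, using the per-socket figures from the announcement (note that the theoretical RAM total at 64 sockets runs well past what any OS supports today, which is rather the point):

    # Per-socket figures for the Xeon E7 v4 family, as announced
    cores_per_socket = 24
    ram_per_socket_tib = 3

    for sockets in (8, 64):
        cores = sockets * cores_per_socket
        ram_tib = sockets * ram_per_socket_tib
        print(f"{sockets} sockets: {cores} cores, {ram_tib} TiB RAM")

    # prints:
    #   8 sockets: 192 cores, 24 TiB RAM
    #   64 sockets: 1536 cores, 192 TiB RAM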

I wrote about this just a few weeks ago in the context of the HPE Integrity Superdome X, which still only has published benchmarks running SUSE Linux Enterprise Server. It's interesting to see that all of the numbers have doubled (yet again) in such a short time.

Of course, SGI has been doing this degree of scaling with NUMA systems for a while, which is why SUSE Enterprise Linux is known to scale to 8,192 CPU cores and 64TiB RAM (they couldn't fit in any more memory): it's a little frightening to consider what they might end up doing with these new CPUs – at the very least, 128TiB RAM will be on the horizon.

So when the processor hardware manufacturers can still drop a doubling of capacity on us, it's worth taking note of whether your software can deal with it....

Wednesday, 20 April 2016

Deploying OpenStack is NOT Difficult


There are too many articles around at the moment claiming that OpenStack is difficult to set up, and too many vendors claiming the only answer is consulting.

Yes, consulting can be important to get the business side of your private cloud worked out, but setting up OpenStack doesn't need to be difficult when you have a distribution whose main purpose is ease of deployment in enterprise environments.

Here are some factoids about SUSE OpenStack Cloud:
  • It was the first enterprise OpenStack distro
  • SUSE introduced the concept of "distro" for OpenStack
  • It is actually a distro that can be set up by normal people - not just an invitation to a consulting engagement by a vendor
  • It is the only distro to ever win Intel's "Rule the Stack" competition for ease-of-installation-and-management (3 times in a row, and sometimes when no-one else was able to complete the task)
  • It is the only distro that supports KVM and Xen and VMware and Hyper-V and Docker and z/VM – yes, you can even send workloads to mainframes! (the sketch after this list shows one way to see what a cloud is running)
  • The deployment tool can deploy SUSE Linux Enterprise and Hyper-V/Windows Server as compute nodes.
  • Installation of SUSE Enterprise Storage (powered by Ceph) is integrated into the deployment tool
  • The deployment tool will communicate with your existing infrastructure if you want: plugins make it easy to include your favourite storage system or converged networking
  • It is the distro used by some well-known OpenStack users
  • HA deployment is included with a couple of extra clicks
  • SUSE's fleet management tool, SUSE Manager, can be easily integrated into the cloud infrastructure so that new compute/storage/etc nodes automatically get patch/update/security/configuration management
  • SUSE's template creation tool, SUSE Studio, can be used to set up the VMs offered to end users & add them directly into the OpenStack image repository.
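
To illustrate the multi-hypervisor point above, here's a minimal sketch (not SUSE-specific) using the openstacksdk Python library to ask a cloud which hypervisors it knows about. The cloud name "mycloud" is a placeholder for an entry in your clouds.yaml:

    # Minimal sketch: enumerate the hypervisors registered in a cloud.
    # Assumes openstacksdk is installed and that "mycloud" is defined
    # in clouds.yaml (the name is a placeholder).
    import openstack

    conn = openstack.connect(cloud="mycloud")
    for hv in conn.compute.hypervisors(details=True):
        # hypervisor_type distinguishes QEMU/KVM, Xen, VMware, Hyper-V, ...
        print(hv.name, hv.hypervisor_type, hv.state)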
Given that ease-of-install has been the hallmark of SUSE OpenStack Cloud since day one, it's a shame that so many people think it's difficult when it doesn't have to be – it should be a piece of cake.

Some links:
FIS-ASP deploys in one day:  https://www.youtube.com/watch?v=aFdlUIdYwQU
BMW OpenStack + Ceph case study: https://www.susecon.com/doc/2015/sessions/CAS19964.pdf
SAP using SUSE including OpenStack: https://www.youtube.com/watch?v=mLkPzFB1m_w

More (including downloads) at: https://suse.com/cloud


Saturday, 2 April 2016

Why SUSE Linux Is the Only Sensible Choice for HPE Superdome-X


Unlike other material herein, this is an unashamedly partisan post. It's here mostly to collect links together, for reference.

In December 2014, HP (now HPE) announced the successor to its long line of proprietary enterprise-class computer systems: the Integrity Superdome X.

What was particularly interesting about this announcement is that the focus was not just on the Intel Xeon processor, or the scalability of the machine to 16 CPUs and 24 TB of RAM, but on what Jeff Kyle, director of product management for mission-critical systems at HP, said was most important: "It's all about the software," Kyle told eWEEK.

Superdome X was the first flagship system from HP not to ship with HP-UX. Instead its launch OS was Linux. SUSE Linux.

Why?  Well SUSE Linux was the launch OS for Superdome X. And yes, that article mentions our old friends from Raleigh, but the fact remains that at launch time, the only benchmarks provided by HP were those with SUSE Linux.

So how about now, 18 months later?   Well according to the SPEC website, as of today ALL of the SpecJBB and SpecCPU benchmarks published by HPE for Superdome X use SUSE Linux Enterprise Server:



So there's a clear message here – when HPE is trying to get the best possible performance from their machines, they turn to SUSE Linux.

Why is this? Well, in terms of raw scalability, SUSE far exceeds Red Hat Linux:
            Max supported CPUs   Max supported RAM
SLES 12:    8192                 1024 TB
RHEL 7.2:   288                  12 TB


At the very least, this means that to handle a fully-loaded Superdome X with 24TB of RAM you must use SUSE Linux or risk falling into the "experimental" or at best "unsupported" category. And risk isn't really what you want when running a mission-critical system.

In particular, SUSE Enterprise Linux has actually tested scalability on other systems to 8192 CPU cores and 64TB RAM (no-one could supply a machine with more RAM;  the CPU count was qualified after that article was written). SUSE's numbers are therefore not theoretical when it comes to the demands of Superdome X: there is no risk that scaling will not follow the expected path.

This difference is likely to continue as well: both HPE and Intel spend a lot of effort developing code for the Linux kernel to improve scalability, and this work is sent “upstream” to the latest kernel versions. This means that you would normally expect the most recent kernel version to have the best performance, due to HPE & Intel contributions. Typically SUSE leads its competitor in implementing the latest kernel features whilst maintaining application and kernel binary compatibility between major kernel releases (e.g. from SLES 11 SP1 with the 2.6 kernel to SLES 11 SP2 with the 3.0 kernel). Given this background it should not be surprising for SUSE to continue to make similar advances in its current SLES 12 major release. Historically Red Hat has not changed major kernel releases: RHEL 5 & 6 were kernel level 2.6, with 3.10 introduced only in 2014 with RHEL 7.

In other words: as HPE continues to contribute performance & other features to Linux for all of its server platforms, these are most likely to appear (with full global enterprise-class support) on SUSE Enterprise Linux long before they are available on RHEL. 



So it's no mistake that HP themselves chose SUSE Enterprise Linux when they migrated their internal systems from HP-UX.





Saturday, 22 August 2015

Why Open Source Software Defined Storage Will Win

Last week NetApp posted its first quarterly loss in many years. It's not something I take pleasure in, since I have many friends still working at that company and it still has some very interesting technologies. The storm clouds are gathering though, and I can't help but liken NetApp's situation to that of Sun Microsystems, another former employer, as the tech bubble burst in the early 2000s, especially when I look at some of the comments reported from the NetApp earnings call.

Back in the day, Sun was making a lot of money and good margins with high-performance proprietary hardware.

Then along came Linux running some simple tasks on commodity hardware. It was good enough to do the simple jobs, had a lot of the essential features & functionality that UNIX provided, and the hardware was at such a low price that the economics were impossible to ignore. Some of the first adopters were the same companies that had been the early Sun customers: those who had replaced mainframes & minicomputers with cheaper, yet effective, Sun hardware.

Unfortunately the lesson of their own early success wasn't remembered by Sun's senior management, who thought they could win customers back by offering Linux on Sun hardware. Of course, the software wasn't the point here – it was the low-cost hardware that was attracting attention. Some of us in the field & parts of engineering tried to convince Sun's management to put more effort behind Solaris for x86, but just at a critical juncture Solaris 9 was released on SPARC, along with the message that x86 support would be put on the back-burner... the die was cast and Sun's fate sealed as commodity hardware pushed the boundaries of performance and drove the need for proprietary hardware into an upper niche. I still contend that if Sun had instead fully committed to supporting x86 & embraced a software-first approach, Solaris would now dominate the market & Linux would have been relegated to a position similar to the one FreeBSD occupies today.

What does this have to do with software-defined storage & NetApp? Well, it appears that there is a similar approach from NetApp's senior management: they see that customers "want scale out and software defined storage functionality", but seem to think that the only response is a solution running on NetApp hardware. Like Sun, they (and the other major storage vendors) are chained to their high-margin proprietary hardware. Breaking free of this kind of entanglement is at the crux of Christensen's Innovator's Dilemma.

Meanwhile open source, software-defined storage solutions running on commodity hardware, such as SUSE Enterprise Storage based on Ceph – the Linux of enterprise storage, if you like – are starting to gain attention, especially for large bulk data stores facing exponential growth and price sensitivity. For the moment these solutions are best suited to relatively simple (albeit large) deployments, but the technology is not standing still. Ceph already features high-end technologies like snapshots, zero-copy cloning, cache-tiering and erasure coding, and enterprises are finding that it is "good enough" and at a dramatically lower price-point. The open source nature of the development means that progress is rapid, and the software-defined nature means that hardware costs are driven relentlessly down. These are the same dynamics we saw in the transition from UNIX to Linux, and they are likely to have the same impact on the proprietary enterprise storage vendors.
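
To give a flavour of two of those features, here's a minimal sketch using the python-rados and python-rbd bindings to snapshot an RBD image and create a zero-copy clone from it. It assumes a reachable Ceph cluster and an existing image with the layering feature enabled; the pool, image, and snapshot names are placeholders:

    # Minimal sketch: snapshot an RBD image, then create a zero-copy clone.
    # Assumes a reachable cluster via /etc/ceph/ceph.conf and an existing
    # image with layering enabled; all names here are placeholders.
    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("rbd")  # pool name
        with rbd.Image(ioctx, "base-image") as img:
            img.create_snap("golden")      # point-in-time snapshot
            img.protect_snap("golden")     # clones require a protected snap
        # The clone shares the parent's data blocks: nothing is bulk-copied,
        # and blocks are only duplicated when the clone writes to them.
        rbd.RBD().clone(ioctx, "base-image", "golden", ioctx, "clone-01")
        ioctx.close()
    finally:
        cluster.shutdown()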

So, change is here: I hope NetApp can navigate the course better than Sun did; it will be interesting to see how they fare.

Meanwhile, enterprises looking to rein in the exponential costs of enterprise storage can now look to open source for answers, and take advantage of the power and economics of commodity hardware systems.

Wednesday, 22 April 2015

A Turning Point for OpenStack Cloud in the Enterprise

For the past couple of days I've been at the CONNECT Expo in Melbourne, which included an OpenStack conference where I participated as a speaker & panelist.

The focus of the conference was whether OpenStack is ready for the enterprise, and it included contributions from Dave Medbury of Time Warner Cable, Mike Dorman of GoDaddy, and Rik Harris of Telstra, who presented their real-world experiences with OpenStack in commercial settings.

One of the things that was very clear from the conference is that OpenStack is ready for the enterprise. This is probably not news to many who have been following the project for the past few years, but there definitely seems to have been a turning point recently, especially since enterprise versions of the Juno release (such as SUSE OpenStack Cloud 5) became available in the past few months.

The turning point I've really noticed is that OpenStack is becoming much less of a developer's toy or interesting plaything, where new features are the most important aspect of development, and is transitioning (in the core areas at least) to a more practical enterprise-class framework, where component stability, infrastructure high-availability, and robust support options are necessary.  This was reflected in the composition of the audience for the event, which included many more "enterprise architect" types than have been present in the past.  It was also reflected in the questions asked during the panel sessions, which often seemed focused as much on business & organisation as on implementation and technicalities.

So OpenStack is definitely ready for the enterprise, yet as with any complex system and organisational change, there are a few considerations to bear in mind (and many of these are relevant, regardless of the private cloud infrastructure software to be used):

  • Implementing cloud computing is an organisational as well as operational change, and should be handled carefully
  • Successful cloud implementations rely on an existing IT operations mindset that includes automation & policy-driven deployment (see my previous article).
  • Executive-level sponsorship (ideally CIO-level or above) is required to effectively marshal all the disparate IT disciplines towards making cloud deployment a success
  • Integration with the lines of business is critical – internal customers of the private cloud have to agree to use the standardised cloud resources, rather than expecting bespoke IT services at cloud prices.
  • Not every workload is suitable for cloud deployment right now, as applications really need to be well suited to that kind of environment; this is OK – it's all about using the right tool for the job, so it's best not to try to force the issue and encounter failure.
  • The OpenStack control plane must provide high availability (99.999%+ uptime – see the quick arithmetic after this list). Without a highly-available control plane, access to the cloud resources disappears, and even cloud-aware services can fail.
  • OpenStack deployment and management can be difficult without the right tools and support, so it makes sense to work with an open source infrastructure software vendor (such as SUSE) to provide the technology, integration, support and training necessary to build up your own capabilities.
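
On that five-nines figure: it's worth translating the percentage into concrete downtime to see why the control plane deserves special attention. The arithmetic is simple:

    # How much downtime per year does a given availability level allow?
    minutes_per_year = 365.25 * 24 * 60

    for label, availability in [("three nines", 0.999),
                                ("four nines", 0.9999),
                                ("five nines", 0.99999)]:
        downtime = minutes_per_year * (1 - availability)
        print(f"{label}: ~{downtime:.1f} minutes of downtime per year")

    # prints roughly:
    #   three nines: ~526.0 minutes of downtime per year
    #   four nines: ~52.6 minutes of downtime per year
    #   five nines: ~5.3 minutes of downtime per year

In other words, 99.999% leaves barely five minutes a year for the control plane to be unreachable – which is why HA there is non-negotiable.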

There is a growing number of enterprises (including PayPal and BMW) that have adopted OpenStack as a significant part of their IT infrastructure, and this trend will continue as the framework becomes more mature, and as the vendors and other members of the OpenStack Foundation build the body of knowledge in terms of documentation and training.

On that note – SUSE is hiring. At the time of writing there are at least 13 roles open at SUSE related to OpenStack & Cloud. Check https://suse.com/jobs for details.

Monday, 3 November 2014

Is Open Source Secure?


In the light of the recent Heartbleed and ShellShock vulnerabilities (both of which were apparently innocent coding mistakes rather than deliberate sabotage), I was approached for comment for an article about the security of open source software. The journalist (Anthony Caruana) wanted to know how code gets approved for inclusion in open source projects, and especially about the potential for the inclusion of "malicious" code. This is a subject I touched on in a previous posting, so it was good to be able to address it more directly.

Here are the questions & my responses:

With such a huge community of developers contributing code to open source applications, what steps are in place to prevent malicious code being injected into widely distributed applications?

I take the term "malicious code" to mean code that has been deliberately added for some nefarious purpose – to introduce a security back-door, or capture credentials and identities, or do some other kind of damage.
It's important to differentiate between this kind of deliberate attack and the kind of programming errors that can result in exploitation, such as the OpenSSL Heartbleed bug.
The way that open source projects are managed actively works against the inclusion of malicious code. Firstly, the fact that the source is open and available to be audited by a large audience means that there is a much greater opportunity to find deliberate attacks than there is in closed source. It was source code audits that found the bugs in OpenSSL and Bash. Malicious code would have to be carefully hidden – without leaving any trace that it is being hidden – to avoid detection. Obfuscated code immediately attracts attention.
The second factor is that major open source projects operate in a community where reputation is the critical factor: simply submitting code is not necessarily enough to have it included in the main codeline. The author must be known to, and trusted by, the project maintainer. It's human nature that when a new programmer joins a project, his or her code will be vetted more carefully than that of already-known contributors. Establishing and maintaining a high reputation requires significant effort, which works against the rapid or repeated insertion of malicious code.
In contrast, closed-source code is not open for wide review by a large audience, and simply requires a disgruntled employee or the instruction of some agency to add the malicious code (as has been alleged in the Snowden leaks).


What testing is done with open source software before it is released into the community? 

Testing varies from project to project – in this sense "open source" is not some homogeneous group with identical policies. One of the features of the open source community is the rapid turn-around and release of projects: with a major project there may be several different versions of the software available at once – for example a "stable" release, a "development" release, and a "nightly build". A stable release can be considered to have gone through the most testing; a development release is a version that developers and testers are working on; and a nightly build is a version of the code that has the latest changes integrated & is almost certainly still buggy. Of course, with commercial open source organisations like SUSE, only stable versions are released.
Since this is open source, it is the community itself that does the testing. The community may include commercial organisations like SUSE, but also includes researchers and hobbyists. Users of open source software have to decide what level of comfort they have for possible bugs when they choose which version to use. One of the biggest contributions non-programmers can make is to try development-release software & report back any problems. Again, the potentially wider audience than a closed-source project can reach means that this phase can be more effective and faster than in proprietary software (the "crowd-source" effect).
Hobbyists and non-programmers may only be able to investigate to a certain level, but commercial organisations and researchers can perform more involved tests. When SUSE contributes changes – either as new product features or patches – the software goes through our QA team to test functionality, stability, scalability and for regressions including vulnerabilities, as well as integration testing with our partners. SUSE makes heavy use of testing automation, and a lot of effort goes into maintaining & refining our automated methods. SUSE also relies on individual expertise to review results and find errors. 
At the end of the day for any software - open source, closed source or embedded - it's a question of confidence.  The attractive thing about open source is that there is potential for a much higher degree of confidence than for closed source.  It is much harder to hide mistakes, much harder to secretly introduce malicious code, and much more likely that a programmer wanting to make a name for him or herself will discover (and fix) problems when the source is open and available for review.  Ultimately, if the users of open source software want to perform rigorous and exhaustive examinations of code, they can;  the option is not even there for closed source software.

Wednesday, 3 September 2014

Bits and Bytes: SUSE® Cloud 4 OpenStack Admin Appliance – An Easier way to Start Your Cloud

If you used the SUSE Cloud 3 OpenStack Admin Appliance, you know it was a downloadable, OpenStack Havana-based appliance, which even a non-technical user could get off the ground to deploy an OpenStack cloud.