
Tuesday, 31 October 2017

How Open-Source Can Be the True Catalyst for Digital Change

[This article was first published in an abridged form in CIO Outlook magazine]

Twenty years ago, when the web was just starting to become available to consumers, a large proportion of non-tech websites were essentially advertising space: ephemeral virtual billboards promoting some aspect of an organisation's product or service, driven mostly by marketing departments with no solid connection to day-to-day business. Having been involved in developing websites in those days, and having often argued in vain with my customers that their web presence needed to be more than just a one-off disconnected experiment, it's still remarkable to me how quickly the online world has become intrinsic to our lives. Today it is the disconnected organisation that is the anomaly: the first thing anyone does when starting a new business is to check whether the proposed company name is available as a domain name, and change it if it is not. Customers expect an online-first and mobile-first experience: we all know people who will skip past providers that don't have an easy-to-navigate site letting them conduct business online, rather than having to telephone or, even worse, do something in person.

So, it's a truism that customers expect to be able to access business services on their own terms, and to do as much as possible online without having to resort to (potentially) slower methods of interaction. Businesses have to provide this ease of access or risk being passed by – but developing and maintaining the degree of interaction required is much more complex than the advertising websites of the 1990s. To be truly effective, a company's digital entry-point must reach directly into the workings of the organisation, handling transactions in real time and satisfying customers' demands. This is the crux of digital transformation: the business processes that used to be kept internal to an organisation have to be codified and expressed in a way that makes it easy for customers to interact with the company… or else they will simply go to a competitor who can offer that experience.

This approach can be relatively easy when starting from scratch, but for established businesses the need to unravel years (or decades) of business logic and interconnected systems can become a nightmare, especially when time-to-market is important to satisfy customers’ ever-growing demands. Even for new companies, the need to constantly refresh and update quickly brings its own challenges: the market is never static, and if a competitor finds a new and more attractive way of providing the service, then a response needs to be found quickly.

Many organisations are turning to agile methodology and the associated concepts of DevOps and continuous integration to attempt to address the need for speedy time-to-market, yet the challenge for IT organisations is how to provide the systems needed to support the new approaches when up to 70% of their resources are spent just keeping the lights on in their day-to-day operations.

This is where open-source technology can help. In recent years the vast majority of new systems innovation has arisen from the open-source arena; almost all cutting-edge technologies have an open-source aspect, with industry giants such as Google, Intel, IBM, and even Microsoft embracing open source as a way to accelerate development and spread the adoption of the technologies underlying the modern web beyond traditional proprietary boundaries. While open source software itself is not at all new, it has in recent years become the mainstream way of developing innovative ideas – based in large part on the foundation of that quintessential open source project: GNU/Linux. The freely-available nature of the Linux OS provides a level playing field for involvement, and the free GNU tools (compilers, libraries, and development environments) also lower the entry barrier for new developers who can contribute towards community projects. Some of these projects are tiny, perhaps with only a single contributor. Others, such as the OpenStack cloud infrastructure project, include thousands of developers, project managers, and technical writers from big corporations, research organisations, and individuals.

The open nature of development means that more eyes and more ideas are brought to bear on a project: performance and security issues have a higher chance of being observed and resolved, with individual developers eager to establish and maintain their personal reputations, and little to no opportunity for sweeping problems "under the carpet" to meet a specific deadline, as is the risk with closed, proprietary code. Open development also means that users aren't locked in to a particular vendor's technology, or exposed to the risk of unconscionable price hikes, or to the prospect of a product being "killed" due to an acquisition or other business imperative.

Of course, the challenge for IT managers when adopting open source technology is how to support it – and especially how to support it whilst maintaining their existing systems. The widely-available nature of open source may make it very cost-effective to acquire, but those savings can be quickly eroded if an organisation has to employ its own experts to build, manage, maintain, and integrate those technologies, and the approach becomes risky if those key employees leave the company for any reason (or even want to take a vacation). That's where open-source software companies such as SUSE come in: over 25 years ago the Germany-based software company produced the first ever enterprise-ready version of Linux, and it has been building and integrating "infrastructure software" for the enterprise ever since. This (profitable) longevity means that even though the technologies it produces may be cutting-edge, the engineering and support are solid and reliable. A lot of this success is based on the hugely experienced development team, which makes up over 50% of the company's employees. The egalitarian nature of open source development communities means that an individual developer's personal credibility is extremely important in influencing the direction of an upstream project, and since SUSE boasts some of the most experienced developers in the industry, their influence can be seen across a wide range of projects – with the added result that the real-world scenarios they observe in customers' workloads are considered when changes or improvements are made.

The message, then, for CIOs wanting help with digital transformation, is to look to the open source world for the reactive, adaptive, and innovative technologies that make it possible to deliver on consumer expectations at an affordable price point – and to do so with the help of an experienced open-source partner who can provide the enterprise-grade support needed for a stable, reliable business.

Thursday, 7 July 2016

n-1 isn't necessarily the wisest choice

BMW 328 - original and hommage versions

Ask any vendor & you will find that one of their greatest frustrations is when customers insist on implementing only the "n-1" release of a particular product. 

At almost every meeting, vendors are asked about the availability of new features, or new capabilities, or new supported configurations that will match what the customer is trying to achieve, and yet when these are finally made available after much development and testing, customers will wait, and stick with supposedly safer older versions.

The risk-management logic, of course, is that the latest release is untried, and may contain flaws and bugs. Unfortunately this misses the point that fixes for older flaws are delivered by the new release. It also brings up the laughable scenario of customers asking for new features to be "back-ported" to the older, "safe" release. Pro-tip: if you back-port all of your new features to the older release, then you end up with the new release anyway!

There are also times when you just can't take advantage of the latest technology unless you're up to date: getting the most benefit out of new CPUs, for example, requires the operating system software to be in sync. As SUSE VP of engineering Olaf Kirch points out in this article from 2012, when new features are introduced you can either back-port them to old code (possibly introducing errors) or take the new code and harden it.

Which brings me to the real point of this article – when we're talking about open source, the rate of change can be extremely rapid. This means that by the time you get a hardened, tested, enterprise version of software out the door, it is already at least version "n-1": the bleeding-edge stuff is happening at the forefront of the community project, where many eyes and many egos are working on improvements to correctness and performance as well as features. So there's really no reason to require an n-1 release of, say, enterprise Linux ... all you're doing in that case is hobbling your hardware, paying more for extended support, and missing out on access to improvements.

So when SUSE introduces a new kernel revision mid-way through a major release, as it is doing with SUSE Linux Enterprise 12 Service Pack 2 (SLE12SP2), don't fret about the risks: the bleeding edge has already moved forward, and what you're getting is just the best, hardened, QA'd, engineered version of Linux with the most functionality.

Saturday, 22 August 2015

Why Open Source Software Defined Storage Will Win

Last week NetApp posted its first quarterly loss in many years. It's not something I take pleasure in, since I have many friends still working at that company and it still has some very interesting technologies. The storm clouds are gathering though, and I can't help but liken NetApp's situation to that of Sun Microsystems, another former employer, as the tech bubble burst in the early 2000s – especially when I look at some of the comments reported from the NetApp earnings call.

Back in the day, Sun was making a lot of money and good margins with high-performance proprietary hardware.

Then along came Linux, running some simple tasks on commodity hardware. It was good enough to do the simple jobs, had a lot of the essential features & functionality that UNIX provided, and the hardware was at such a low price that the economics were impossible to ignore. Some of the first adopters were the same companies that had been the early Sun customers – those who had replaced mainframes & minicomputers with cheaper, yet effective, Sun hardware.

Unfortunately the lesson of their own early success wasn't remembered by Sun's senior management, who thought they could win customers back by offering Linux on Sun hardware. Of course, the software wasn't the point here – it was the low-cost hardware that was attracting attention. Some of us in the field & parts of engineering tried to convince Sun's management to put more effort behind Solaris for x86, but just at a critical juncture Solaris 9 was released on SPARC, along with the message that x86 support would be put on the back-burner ... the die was cast and Sun's fate sealed as commodity hardware pushed the boundaries of performance and drove the need for proprietary hardware into an upper niche. I still contend that if Sun had instead fully committed to x86 support & embraced a software-first approach, Solaris would have come to dominate the market & Linux would have been relegated to a position similar to the one FreeBSD occupies today.

What does this have to do with software defined storage & NetApp? Well it appears that there is a similar approach from NetApp's senior management:  they see that customers "want scale out and software defined storage functionality",  but seem to think that the only response is a solution running on NetApp hardware. Like Sun, they (and the other major storage vendors) are chained to their high-margin proprietary hardware. Breaking free of this kind of entanglement is at the crux of Christensen's Innovator's Dilemma.

Meanwhile open source, software-defined storage solutions running on commodity hardware, such as SUSE Enterprise Storage based on Ceph – the Linux of enterprise storage, if you like – are starting to gain attention, especially for large bulk data stores facing exponential growth and price sensitivity. For the moment these solutions are best suited to relatively simple (albeit large) deployments, but the technology is not standing still. Ceph already features high-end technologies like snapshots, zero-copy cloning, cache tiering and erasure coding, and enterprises are finding that it is "good enough" at a dramatically lower price point. The open source nature of the development means that progress is rapid, and the software-defined nature means that hardware costs are driven relentlessly down. These are the same dynamics we saw in the transition from UNIX to Linux, and they are likely to have the same impact on the proprietary enterprise storage vendors.
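To give a sense of how accessible those high-end features are, here is a minimal sketch using Ceph's Python bindings (python-rados and python-rbd) to snapshot a block image and create a zero-copy clone of it. It assumes a reachable cluster, a standard /etc/ceph/ceph.conf, and an existing pool named "rbd"; the image and snapshot names are purely illustrative.

    import rados
    import rbd

    # connect to the cluster described by ceph.conf (assumed path)
    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    ioctx = cluster.open_ioctx('rbd')   # assumes a pool named 'rbd' exists

    rbd_inst = rbd.RBD()
    rbd_inst.create(ioctx, 'base-image', 4 * 1024**3)   # 4 GiB block image

    # take a snapshot and protect it so it can be cloned
    with rbd.Image(ioctx, 'base-image') as image:
        image.create_snap('golden')
        image.protect_snap('golden')

    # zero-copy (copy-on-write) clone of the protected snapshot
    rbd_inst.clone(ioctx, 'base-image', 'golden', ioctx, 'dev-clone')

    ioctx.close()
    cluster.shutdown()

The clone shares its data with the protected snapshot until blocks are actually written, which is what makes it effectively instantaneous and space-efficient.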

So, change is here: I hope NetApp can navigate the course better than Sun did, and it will be interesting to see how they fare.

Meanwhile, enterprises looking to rein in the exponential costs of enterprise storage can now look to open source for answers, and take advantage of the power and economics of commodity hardware systems.

Monday, 3 November 2014

Is Open Source Secure?


In the light of the recent Heartbleed and ShellShock vulnerabilities (both of which were apparently unintentional coding errors rather than anything deliberate), I was approached for comment for an article about the security of open source software. The journalist (Anthony Caruana) wanted to know how code gets approved for inclusion in open source projects, and especially about the potential for the inclusion of "malicious" code. This is a subject I touched on in a previous posting, so it was good to be able to address it more directly.

Here are the questions & my responses:

With such a huge community of developers contributing code to open source applications, what steps are in place to prevent malicious code being injected into widely distributed applications?

I take the term "malicious code" to mean code that has been deliberately added for some nefarious purpose – to introduce a security back-door, capture credentials and identities, or do some other kind of damage.
It's important to differentiate between this kind of deliberate attack, and the kind of programming errors that can result in exploitation, such as the openSSL Heartbleed bug.
The way that open source projects are managed actively works against the inclusion of malicious code. Firstly, the fact that the source is open and available to be audited by a large audience means that there is a much greater opportunity to find deliberate attacks than there is in closed source. It was source code audits that found the bugs in openSSL and Bash. Malicious code would have to be carefully hidden – without leaving any trace that it is being hidden – to avoid detection. Obfuscated code immediately attracts attention.
The second factor is that major open source projects operate in a community where reputation is the critical factor: simply submitting code is not necessarily enough to have it included in the main codeline. The author must be known to, and trusted by, the project maintainer. It's human nature that when a new programmer joins a project, his or her code will be vetted more carefully than that of already-known contributors. Establishing and maintaining a high reputation requires significant effort, which works against the rapid or repeated insertion of malicious code.
In contrast, closed-source code is not open for wide review by a large audience, and simply requires a disgruntled employee, or the instruction of some agency, to add the malicious code (as has been alleged in the Snowden leaks).


What testing is done with open source software before it is released into the community? 

Testing varies from project to project – in this sense "open source" is not some homogeneous group with identical policies. One of the features of the open source community is the rapid turn-around and release of projects: a major project may have several different versions of software available at once – for example a "stable" release, a "development" release, and a "nightly build". A stable release can be considered to have gone through the most testing; a development release is a version that developers and testers are working on; and a nightly build is a version of the code that has the latest changes integrated & is almost certainly still buggy. Of course, with commercial open source organisations like SUSE, only stable versions are released.
Since this is open source, it is the community itself that does the testing. The community may include commercial organisations like SUSE, but also includes researchers and hobbyists. Users of open source software have to decide what level of comfort they have for possible bugs when they choose which version to use. One of the biggest contributions non-programmers can make is to try development-release software & report back any problems. Again, the potentially wider audience than a closed-source project can achieve means that this phase can be more effective and faster than in proprietary software (the "crowd-source" effect).
Hobbyists and non-programmers may only be able to investigate to a certain level, but commercial organisations and researchers can perform more involved tests. When SUSE contributes changes – either as new product features or patches – the software goes through our QA team to test functionality, stability and scalability, and to check for regressions, including vulnerabilities, as well as integration testing with our partners. SUSE makes heavy use of test automation, and a lot of effort goes into maintaining & refining our automated methods. SUSE also relies on individual expertise to review results and find errors.
At the end of the day for any software - open source, closed source or embedded - it's a question of confidence.  The attractive thing about open source is that there is potential for a much higher degree of confidence than for closed source.  It is much harder to hide mistakes, much harder to secretly introduce malicious code, and much more likely that a programmer wanting to make a name for him or herself will discover (and fix) problems when the source is open and available for review.  Ultimately, if the users of open source software want to perform rigorous and exhaustive examinations of code, they can;  the option is not even there for closed source software.

Saturday, 20 September 2014

Can Open Source Help Solve Unemployment?



The other day on the way from yet another airport to yet another hotel, I was chatting with the taxi driver who was interested in what I did. Inevitably, the taxi driver was not really a taxi driver, but just doing it as casual work while he looked for a real job.  He had a degree in electrical engineering, but like a lot of young people was finding it hard to get that first job since he lacked experience.

His story isn't unusual – according to The Smith Family's Dr Lisa O'Brien, Australia (like other countries) is facing record youth unemployment, with many candidates lacking job-ready skills.  Now having that university degree is probably going to help, but even this is no longer a guarantee of employment without experience.

So what has this got to do with Open Source?

Put simply, getting involved in an open source project is a great way for anyone to show that they can contribute in a meaningful way, work well with others, and develop skills and experience that can be directly transferred to a work environment.

The barrier to entry for open source projects is very low – you just need to show an interest in getting involved. In fact, it's not even necessary to be a proficient coder: open source projects often have more need of usability testers and documentation writers than programmers.  Although higher education can help with developing programming and project management skills, many open source projects have contributors who have not yet graduated or may not yet even be of university age.

The results can be quite dramatic: open source companies like SUSE frequently recruit new developers from the ranks of active contributors, and often look for open source experience & reputation rather than demanding formal qualifications.

What this means is that even without a particular degree or even paid work experience, involvement in open source can open doorways into an IT career, in a way that is relatively easy to access.

Just another reason why open source is increasingly important.



Wednesday, 3 September 2014

Bits and Bytes: SUSE® Cloud 4 OpenStack Admin Appliance – An Easier way to Start Your Cloud

Bits and Bytes: SUSE® Cloud 4 OpenStack Admin Appliance – An Easier way to Start Your Cloud: If you used the SUSE Cloud 3 OpenStack Admin Appliance, you know it was a downloadable, OpenStack Havana-based appliance, which even a non-technical user could get off the ground to deploy an OpenStack cloud.

Friday, 8 August 2014

Why is Open Source Important?



Recently I was asked by the IT manager of a customer – itself a software development company – what my position was on open source. His developers were arguing in both directions: some characterised open source as being risky due to the potential for people to see the code & discover security vulnerabilities (which really isn't the case); other developers asserted that using closed source also had risks, such as the vendor being acquired, going out of business, or dropping a particular product line.

My customer wanted to understand if I had a "religion" about open vs closed, so I told a story about my perceptions of how closed source & closed systems came back to hurt the company that initiated them....

Microsoft defined the market in the late 1990s: most people couldn't see past the desktop paradigm. Certainly when I was at Sun Microsystems trying to tell people that "the Network is the Computer" I was mostly given blank stares or told I was living in a fool's paradise with the idea of continual network access (now, however, some people even suffer from anxiety when they don't have network access). Meanwhile, Microsoft, having been late to the start of the Internet revolution, quickly used its market power to entrench Internet Explorer as the standard web browser for enterprise customers, so all software using a web interface had to conform to its particular quirks – and in particular the quirks of Internet Explorer 6. Given this practical requirement, and the dominance of the Windows desktop concept, many software developers would not support their web interfaces on anything else, which in turn meant people had to buy into the Windows desktop world, and so the vicious (or virtuous, depending on your point of view) circle continued.

When Microsoft tried to introduce new, better-performing versions of Windows and Internet Explorer, however, they found that adoption was poor. Despite their best efforts, and despite the end of support life for Windows XP (the last version of Windows that supports Internet Explorer 6), 30% or more of Windows deployments are STILL of this older version, due in part to the huge dependence on the old, closed Microsoft ecosystem of the early 2000s. In other words, Microsoft's own efforts to control the entire software stack actually ended up hurting them, with very poor adoption of Windows Vista, and initially slow adoption of Windows 7 and 8.

A critical factor here is that these old systems do not work with today's dominant paradigm: mobile computing on phone or tablet. Companies face a potentially huge transitional cost to catch up with the way people now access data online. There are a lot of reasons why Microsoft hasn't been as dominant in the mobile space as they'd like to be, but certainly the way they set themselves up at the beginning of the 21st century didn't help.

So how does this fit into the question of open source? Well, by definition open source uses standards for data storage & transfer that are open to scrutiny and available for all to use. This means that data and services can be reached from any device or system, not just from some tightly-coupled combination of software systems in a closed box owned by someone else. Companies who developed with open standards in mind found it much easier (and faster) to move to the mobile world.

This is just one reason why open source is important: it provides tools, systems and protocols that can be continuously adapted & developed in compatible ways, rather than heading down a one-way path towards a dead end.

A telling footnote to this story is that after many years of decrying open source, Microsoft is now embracing open source as a component of the way it operates, and actively collaborates with open source companies (such as SUSE) to improve interoperability and the effectiveness of their customers' IT environments.



Saturday, 7 December 2013

IP Longevity Through Open Source

You never forget working for your first vendor. I was reminded of this at the SUSEcon conference last November, when I met a young colleague who had recently started with the company and was overflowing with enthusiasm for the culture, workmates, and feel of the workplace compared to his previous experiences. I saw the same degree of enthusiasm in former colleagues at NetApp, who had grown up with the company and invested themselves completely in the company culture, even as it went through change. My own experience with Sun Microsystems was also a match: there was something special about working in a place which, especially for a technologist, offered access to such great ideas, people, equipment & opportunities. For me, this was compounded by being there during the dotcom boom.

While I was at Sun, the place seemed to be brimming with innovation – we were proud of our brilliant ideas & cutting-edge execution. There were a few less-enthusiastic – or perhaps better described as "more realistic" – people who didn't look through such rose-coloured glasses. These were the folk who had come to Sun from elsewhere, places like DEC (Digital Equipment Corporation), which had previously been one of the go-to places & hotbeds of innovation. They could see the good times wouldn't last, and in the end were proved correct.


It seems there has been a constant stream of "innovation" companies, each in turn attracting enthusiastic contributors, building great technology, and then folding or being consumed by some larger organisation that ultimately fails to capitalise on the innovations. The tragedy here is that as the smart people leave these companies, and the intellectual property gets buried under legal constraints, the innovations get lost, and the next generation effectively has to start from scratch. Sun's amazing technology & concepts of the late 1990s & early 2000s have only recently become part of the mainstream understanding (in 1997, no-one could understand what "the network is the computer" meant – modern smartphones demonstrate the concept completely), yet a lot of companies today are re-inventing the capabilities of the last generation of technology Sun developed before the decline & loss of personnel.

This is where free & open source is so interesting: once ideas are expressed in open source, they remain available forever, so the demise of a particular company doesn't bury the intellectual property (even if it does mean that many of the people working on a project may not be able to spend as much time on it). Perhaps as more and more development moves into open source, we'll have less re-invention of the wheel (or even less out-and-out ignorance of what has come before).

So long as we can work out a way to agree on licensing schemes....