1

The Whats and Whys of Open Source

When I’ve explained open source to people who aren’t in tech or related areas, I often find myself in a conversation that goes something like this:

Person: “So what is this open source thing?”

Me: “It basically is a way that multiple people and organizations can collaborate on building software out in the open.”

Person: “So, it’s free?”

Me: “I mean yes, but there are licenses involved that set the terms of reuse.”

Person: “Is this stuff valuable? If it was, wouldn’t someone sell it?”

Me: “Well, yeah, it is, but it’s often the software that is a base technology that people would build a product from. Or it’s something enough people feel strongly about being out there in the open for anyone to use.”

Person: “Okay, so people get paid to build this software?”

Me: “Often, yes, but sometimes people just do it because they want to.”

Person: “So, why would someone do this?”

Me: “Well, it could be a lot of reasons. Maybe they like the technology. Maybe the group of people they are working with is interesting and fun to work with. Maybe they are trying to get a foot in the door in software development.”

Person: “Okay, yeah, uh, sounds fun.”

This conversation isn't limited to people outside tech; I've had similar ones with people in business, as well as with friends and family, and they walk away worried about my future job prospects and how I will support my family ;-).

In all seriousness, describing open source requires some nuance. It is part licensing, part development methodology, part culture, and part ethos – and it is something that continues to ebb and flow over time. Despite millions of open source projects having been successful and just as many (if not more) not having been, there is no one right way to do it – thus, the point of this book!

This chapter will cover the following topics:

  • What is open source?
  • A brief history of open source
  • How is open source used?
  • Open source projects and why they are used

I believe that to understand a topic, you must understand its origin. In this chapter, we will do that by describing what open source is, how it came to be, how it’s used, and examining some open source projects to understand why they are open and where they are used.

What is open source?

Wikipedia (https://en.wikipedia.org/wiki/Open_source) defines open source as follows:

Open source is source code that is made freely available for possible modification and redistribution. Products include permission to use the source code, design documents, or content of the product. The open-source model is a decentralized software development model that encourages open collaboration.

If you search online, you will find a number of other definitions in use.

While the definitions are certainly different, some common themes align here.

The first is the concept of source code being made freely available, allowing anyone to view, modify, and share the source code with others. This is often where people will start with open source, thinking of it as software one can get for free. However, open source takes that one step further; it’s not just making software available for free (better known as freeware) but also giving the user the ability to see the source code, modify it for their own use, and share it with others.

A good way I’ve heard the difference described is to imagine that you have a car whose hood is sealed shut. Sure, you own the car and can drive it, but what if something breaks? What if you want to upgrade a component? What if something becomes outdated and needs to be changed to work in the future (such as switching from standard gasoline to E85)? With the sealed hood, only the manufacturer can make changes; with a hood that opens, the owner can make them too. That’s where the difference lies – it is, as is often said, not free as in beer but free as in freedom, or libre.

The second theme focuses on open collaboration, meaning that anyone can participate in how the code is built. This is an area of open source that isn’t always adhered to; many projects sponsored by a single organization can be a bit of a challenge to contribute to, and even single-maintainer projects struggle here. I’ve seen this most often due to maintainers being overwhelmed and not having a ton of time to dedicate to a project. Other times, it’s due to the project being more of a proof of concept that has been somewhat abandoned by the maintainer, or occasionally a maintainer not really wanting any help. I’ll dig into this more in later chapters as I cover governance and growth, but as we talk about what open source is in this chapter, open collaboration tends to be a key tenet of the expectations we have.

Finally, there is the theme of a decentralized community. This means open source projects are truly global; while a maintainer might start a project to solve a problem they have and pull in a few others who they know with similar goals, both the model of licensing (the code can be freely viewed, used, modified, and shared by anyone) and the distribution (thank you, internet!) mean that literally anyone in the world who finds this code and uses it is part of a community. This likely feels daunting and intimidating at first glance, but this is one of the best parts of open source; it is a connective thread across locales, cultures, genders, backgrounds, ages, and abilities. Again, a topic that we will chat about a bit more in later chapters, and one that is often a struggle point with projects, is that with this great ability to connect people globally comes the responsibility of supporting them.

The Open Source Initiative maintains The Open Source Definition (source: https://opensource.org/osd (Creative Commons BY 4.0)), which is considered the standard for measuring whether a piece of code or a project is truly open source.

The definition really focuses on the concept of open source from a licensing perspective and, for many, is where the definition of open source starts and stops. Licensing is what many would consider the table stakes of open source (and have no fear, we have a whole chapter dedicated to licensing – Chapter 3, Open Source Licensing and IP Management). What truly makes open source transformational is open collaboration and a decentralized community, bringing together a diverse group of people to build something greater than any one of them could alone. In other words, the license choice enables building community and collaboration, which, in turn, makes open source projects successful.

Now that we have defined open source and learned more about what the key parts of it are, let us look back at how we got to where we are today. In the next section, we will go back to history to trace the roots of open source.

A brief history of open source

Open source as a term dates back to February 3rd, 1998, but the ethos and ideals date back decades before that. Let’s take a look back in time.

The concepts of viewing, modifying, and sharing, along with open collaboration, can be traced to well before the internet and computers. Much of this was commonplace in hacker and maker cultures, both rooted in the artisan spirit. For hundreds, even thousands, of years, new technologies and innovations were born out of the sharing of ideas, each effort building on those that came before. The main challenge was the ability of ideas to travel, and Gutenberg’s invention of the printing press began the acceleration of knowledge that became the Renaissance.

There has always been a natural tension between the collaborative spirit and commercialization. The establishment of the patent system in the 1400s and 1500s had the intention of protecting inventors but, in many cases, created monopolies that stifled open collaboration. A classic example is in the automotive space, where a patent on the two-cycle engine was filed by and awarded to George B. Selden. Henry Ford challenged that patent and won, which opened up innovation and led to an association in which automotive engine knowledge could be shared amongst competitors (and one of the first patent-sharing agreements, under which members agreed to license patents freely to one another). This change sparked the automotive boom of the early 20th century.

Tracing the roots of open source to the mainframe community

In computing, the traces go back to 1955 in a room in the Los Angeles, California area. International Business Machines (IBM) had released what is considered to be the first mainframe a few years earlier – the IBM 701 Electronic Data Processing Machine. Early users of this machine came together to collaborate on how to use it, sharing information, insight, code, and knowledge with one another – much like what open source communities do today, but instead of sharing over the internet, it was over punch cards and magnetic tape. Thus was born the SHARE community – named most eloquently after its motto: SHARE – it’s not an acronym, it’s what we do.

These user group meetings continued for years, creating a commons known as the SHARE Operating System. This culture of sharing outgrew those commons, and there was a need for a place to collect this code, not just for sharing but also to have a central repository to track it. In 1975, Arnold (Arnie) Casinghino, then with Connecticut Bank and Trust (CBT) Company, began collecting this code and distributing it on tape (known in the community as the CBT Tape) to anyone who requested it. Additionally, if someone wanted something added to the tape, they could send it to Arnie, and it would be added. You could call this an early example of an open source project, complete with a maintainer (Arnie), open collaboration, and a decentralized community. Interestingly enough, this project continues to this day; Arnie has long since retired, but others in the mainframe community have stepped up to maintain the tape. It is now downloadable over the internet, but you can also mail the maintainers a few dollars and they will send you a tape.

In the 1950s and 1960s, with computing so new and generally focused on science, research, and academia, collaboration and decentralized community were the norms. At the same time, the cost of developing software was increasing as these computers became more complex. This is the point at which we saw the birth of software companies, which were at odds with hardware manufacturers such as IBM who bundled software with hardware at no cost, seeing it as a necessity to sell the hardware. The United States government saw things differently, filing an antitrust case against IBM in 1969. While the case was eventually withdrawn in 1982, it set in motion IBM unbundling software from hardware, which was a boon for software companies. This was aided by the US Commission on New Technological Uses of Copyrighted Works (CONTU), established in 1974, concluding that software was copyrightable, and later cases such as Apple v. Franklin holding that object code was copyrightable in the same way literary works are – thus, the idea of free, public domain, sharable software seemed to be a thing of the past.

The emergence of free software

The late 1970s into the 1980s saw the rise of Bulletin Board Systems (BBSes) where enthusiasts, now able to acquire computers to use in their own homes, began sharing software back and forth as was done in the 1950s and 1960s. Two individuals in particular were important at this time.

The first was Richard Stallman, who launched the GNU project in 1983 with the aim of writing a complete operating system free of the licensing constraints software companies placed on source code. Probably the most noteworthy of the projects launched include the following:

  • GNU Compiler Collection (GCC)
  • GNU Debugger
  • GNU Emacs

All of these projects are hugely popular to this day. This effort also produced one of the earliest open source licenses, the GNU General Public License (GPL), which captured the ideals Stallman had of creating a commons of software. Stallman has been quite outspoken over time and, at times, controversial within the realm of free and open source software, as he has leaned toward a free software approach (meaning the license should ensure the code and derivative works remain free software) rather than more permissive licensing approaches. I’ll dig more into these differences in Chapter 3, Open Source Licensing and IP Management.

The other individual is Linus Torvalds, who released a UNIX clone in 1991 called Linux (more on the history of UNIX at https://www.redhat.com/sysadmin/unix-linux-history). While Stallman’s work created the space in which today’s open source was built, Torvalds’ work brought open source and free software into the mainstream. It also opened up revenue streams and economic development on the order of billions of US dollars, from vendors such as Red Hat and SUSE creating distributions of Linux, to IBM, Sun, and Oracle building commercial applications on top of it, to today’s cloud and infrastructure vendors such as VMware, Amazon, Google, Microsoft, and many others. What has been unique is that alongside the commercial success, the hobbyist and enthusiast community has been just as strong; Debian and Slackware were early distributions that still have large user bases to this day.

Open source is coined as a term

In 1997, one of the main free software influencers, Eric Raymond (known by his initials, ESR), penned an essay, The Cathedral and the Bazaar, which spoke to his experiences as an early free software project maintainer and his observations of the Linux kernel community. This was one of the first pieces of writing about the hobbyist and hacker culture and ethos, drawing a contrast between the two models of free software development at the time. One model was called the Cathedral, where software was developed behind closed doors and then released publicly (used by the various GNU projects). The other model was called the Bazaar, where software was developed in the open over the internet (then still a new concept) in view of the public (the model of the Linux community). I will dig into the insights and learnings from this essay throughout the book.

From a historical perspective, this was considered the nudge for Netscape Communications to release the source code for Netscape Communicator and launch the Mozilla project in January 1998 (read more about the browser wars at https://thehistoryoftheweb.com/browser-wars/). Companies releasing their commercial products as open source is commonplace today, but back then, it drew the eye of the technology world (this also happened during the first browser wars, so there were additional factors that drew interest). Those involved in these early open development projects saw that they had an opportunity to build a larger movement and differentiate this emerging effort from the free software movement of the 1980s.

This led to a meeting on February 3, 1998, in Palo Alto, attended by Todd Anderson, Chris Peterson (of the Foresight Institute), John “maddog” Hall and Larry Augustin (both of Linux International), Sam Ockman (of the Silicon Valley Linux Users Group), and Eric Raymond. One thing this group strove to do was to make this movement distinct from free software and more inclusive of commercial software vendors – the ethos and licensing around the free software movement were considered off-putting and, by some, hostile to commercial use. As the brainstorming and discussion unfolded, the idea to label this software as open source came from Chris Peterson, and everyone in the room aligned around it – open source was officially born.

Giving open source a vendor-neutral home

In 1994 and 1995, as Linux was starting to get some early commercial traction, several companies attempted to trademark the term Linux. Torvalds fought these as best he could and was later awarded the trademark for Linux. He then partnered with Linux International to be the organization to hold the marks.

This raised an important question – how can open source projects be best legally protected? I’ll dig more into this in Chapter 3, Open Source Licensing and IP Management, but we saw the rise of non-profit entities to become vendor-neutral homes for these marks (and, in some cases, copyright holders). The initial motivation for these foundations focused on fiduciary homes for marks, copyrights, and other key legal assets but over time, grew to provide professional services for these projects, including but not limited to development and collaboration infrastructure, marketing and outreach support, fundraising, and event management.

One of the first open source foundations was the Apache Software Foundation (ASF), which was established in 1999 as a US 501(c)(3) charitable organization. The ASF had a funding model of accepting corporate and individual donations and sponsorships, which supported legal services for projects, along with extensive development and communication infrastructure. It is predominantly an all-volunteer effort, and one of the big innovations it brought was The Apache Way, which set a clear governance structure for hosted projects, built from the experiences of the more Bazaar-style open source projects of the 1990s. Many of the other key open source projects quickly launched foundations, including GNOME, KDE, Eclipse, and one around the newly open sourced Netscape Communicator source code called the Mozilla Foundation.

In the years that followed came the recognition that many of the functions of these foundations overlapped, and that there could be efficiency and cost savings in sharing infrastructure. This is where the Linux Foundation innovated by creating the foundation-of-foundations model, which gave rise to key foundations such as the Cloud Native Computing Foundation (CNCF). In addition to the CNCF, the Linux Foundation has enabled smaller foundations such as the Academy Software Foundation, Hyperledger, LF Energy, the Open Mainframe Project, and others to have top-notch professional staff supporting their growing communities more efficiently, reducing staff overhead costs and instead investing those savings in the communities themselves. I will dig more into these foundation models in Chapter 5, Governance and Hosting Models.

With a good background of the history of open source in hand, let’s look forward now to how we see open source used.

Implementing open source

You can see that there has been a long and winding history of open source, predominantly driven by enthusiasts who were passionate about the technologies they worked with and who, over time, brought in commercial investment while staying true to the ethos that grew these communities.

With those years of effort came many patterns of success and patterns that did not pan out as well. We have seen the concept of open source applied to different areas outside of computing, including quilting patterns, the home brewing of beer, genome patterns, and more. From these efforts we have seen a few patterns in how open source has been used with a degree of success – let us look at those.

Information sharing amongst enthusiasts

The earliest use we’ve seen of open source (and arguably the most pervasive) is simply sharing information and knowledge with others who have a common problem. It is the underlying motivation for open source in general and aligns with the historical ethos of open source being based on hacker and maker cultures.

When it comes to information sharing, what is shared comes in many different forms. While in open source we generally think of code, often, it can be a design, documentation for a tool or process, diagrams, datasets, or some other sort of medium. I will cover how licensing works in these non-code settings in Chapter 3, Open Source Licensing and IP Management, but know that there are licenses out there for just about every type of work and expectation of the community.

Some projects that focus on information sharing include the following:

  • Ubertooth (https://github.com/greatscottgadgets/ubertooth): This is an open source wireless development platform for Bluetooth experimentation. The project builds both a software stack and hardware designs that others can use to build the actual hardware for that stack (and cultivates an indie community that offers hardware kits, as well as fully built dongles).
  • PiFire (https://github.com/nebhead/PiFire): This provides a smart Wi-Fi-enabled controller for a pellet smoker or grill, including software as well as hardware designs based on the Raspberry Pi platform.
  • SecurityExplained (https://github.com/harsh-bothra/SecurityExplained): This provides informational content for the software security community.
  • Darwin Core (https://github.com/tdwg/dwc): This is a standard maintained by the Darwin Core Maintenance Interest Group, which includes a glossary of terms intended to facilitate the sharing of information about biological diversity.

Various Awesome Lists are another way for communities to collaborate on some of the best resources in a given topic area, and I’ve run across quite a few of them over the years.

Underlying technology

There is a concept called the UNIX way or, sometimes, the UNIX philosophy, which describes a minimal and compartmentalized approach to software, written about by several individuals, including Doug McIlroy and Peter H. Salus, and popularized in the writings of Ken Thompson and Dennis Ritchie. While there are several variations of the philosophy, one could largely boil it down to one phrase: Do one thing and do it well. As open source software communities come largely from a UNIX background, open source projects take on this mantra. Many of the basic command-line tools in Linux and other UNIX-derived systems that we depend on take this approach (and compose together, as the sketch after this list shows), such as the following:

  • grep: This is a command-line utility for searching plaintext datasets for lines that match a regular expression
  • sed: This stands for stream editor, which parses and transforms text
  • cat: This concatenates one or more files and writes them to standard output, commonly used to feed content into another program
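
To make the philosophy concrete, here is a minimal sketch in Python (standard library only) of how these small tools compose through pipes; the server.log filename and the ERROR pattern are illustrative assumptions, not anything taken from the tools’ documentation:

    import subprocess

    # Equivalent to the shell pipeline:
    #   cat server.log | grep ERROR | sed 's/ERROR/error/g'
    # Each process does one thing well; pipes compose them into a solution.
    cat = subprocess.Popen(["cat", "server.log"], stdout=subprocess.PIPE)
    grep = subprocess.Popen(["grep", "ERROR"], stdin=cat.stdout,
                            stdout=subprocess.PIPE)
    sed = subprocess.Popen(["sed", "s/ERROR/error/g"], stdin=grep.stdout,
                           stdout=subprocess.PIPE)
    cat.stdout.close()   # let cat receive SIGPIPE if grep exits early
    grep.stdout.close()  # likewise for grep if sed exits early
    output, _ = sed.communicate()
    print(output.decode(), end="")

Each program in the chain is replaceable and reusable on its own, which is exactly what Do one thing and do it well is after.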

Modern software is built from multiple layers of libraries and frameworks, each created with this same minimalist, integration-focused mindset, that together form a complete solution. Here are some of the open source projects we often see used:

  • Android Project (https://source.android.com/): This is the underlying operating system that powers over 3 billion active devices as of 2021 [1]
  • Ruby on Rails (https://rubyonrails.org/): This popularized the Model-View-Controller (MVC) approach for web development, which was a major influence on web development in the mid-2000s, with over 1.2 million sites globally using this framework as of 2022 [2]
  • Pandoc (https://pandoc.org/): This is the Swiss Army knife of document conversion tools, enabling the conversion of documents between dozens of different formats (and was super useful in the creation of this book).
  • Memcached (https://memcached.org/): This is a distributed, high-performance key-value store used for speeding up web applications by reducing database hits for data that doesn’t change frequently (see the sketch after this list)
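
As a concrete illustration of that caching pattern, here is a minimal cache-aside sketch in Python; the pymemcache client library, the localhost address, and the fetch_user_from_db placeholder are illustrative assumptions rather than anything prescribed by the Memcached project:

    from pymemcache.client.base import Client

    cache = Client(("localhost", 11211))  # memcached's default port

    def fetch_user_from_db(user_id):
        # Placeholder for a real (and much slower) database query.
        return f"user-record-{user_id}".encode()

    def get_user(user_id):
        key = f"user:{user_id}"
        record = cache.get(key)  # 1. check the cache first
        if record is None:       # 2. on a miss, fall back to the database
            record = fetch_user_from_db(user_id)
            cache.set(key, record, expire=300)  # 3. cache it for five minutes
        return record

    print(get_user("42"))

The database is only touched on a cache miss; every subsequent request within the expiry window is served straight from memory, which is exactly the reduction in database hits described above.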

You’ll notice these projects are predominantly developer tools, and that isn’t a coincidence. Open source has greatly reduced the cost of building software and, more importantly, made high-quality tooling, languages, and frameworks for software development accessible, which helped kickstart so many of the Web 2.0 era companies such as Google, Facebook, Netflix, and hundreds more.

Establishing technology ecosystems

There are some projects that would fit into the previous category of underlying technology based on where they sit in an application stack, but whose formation and motivations are more about ecosystem building. In other words, they are created with the intention that both open and commercial solutions will be built from them, with expectations of a certain level of interoperability or skills alignment between the various solutions. This might be done for a multitude of reasons, such as setting a standard in an industry horizontal or vertical market, trying to establish a new area of technology, or bringing together competing solutions where the active value and investment are higher up in the stack and this level of the stack has become commoditized.

We will talk a bit more about technology-ecosystem-building through open source in Chapter 4, Aligning the Business Value of Open Source for Your Employer. Here are some of the projects that fall into this category:

  • Kubernetes (https://kubernetes.io/): This is an open source system for automating the deployment, scaling, and management of containerized applications. It has built the Certified Kubernetes program (https://www.cncf.io/certification/software-conformance/) for solutions that use Kubernetes with over 130 offerings, along with the Kubernetes Service Provider Program (https://www.cncf.io/certification/kcsp/), with over 250 vendors providing support and services. These programs are built by the Kubernetes community and managed by the CNCF staff.
  • Anuket Assured (https://lfnetworking.org/verification/): This is an open source, community-led compliance and verification program to demonstrate the readiness and availability of commercial cloud-native and virtualized products and services, including NFVI, cloud-native infrastructure, VNFs, and CNFs, using Anuket and ONAP components.
  • The Zowe Conformance Program (https://www.openmainframeproject.org/projects/zowe/conformance): This establishes requirements for interoperability between solutions building or integrating with the Zowe (https://www.zowe.org) open source project. Again, this is a community-built program managed by the Open Mainframe Project staff, with over 70 unique solutions and service offerings as of 2022.

One thing to note is that while these programs intend to establish technology ecosystems, they have no impact on the open source licensing and reuse of the code base. The terms of the license itself are what establish the rules for reuse of the code and for other implementations. Programs such as these purely provide a vendor-neutral, community-run way of recognizing implementations. There will be more to come as I discuss the commercialization of open source in Chapter 10, Commercialization of Open Source.

Providing high-quality free software

While many of us are fortunate to have been born into and/or be living in an environment where software is easily affordable and accessible, that isn’t true for everyone. Even for those in wealthy regions, the cost of some software is prohibitive when providing it to large groups of people. Think of a startup company where you might be trying to keep costs low, or a school where you might need hundreds or thousands of copies of a piece of software; free software makes this accessible when it otherwise wouldn’t be.

However, one angle of equal importance is free not just as in beer but as in freedom. Having high-quality software that users can change to support their needs and workflows, or keep updated when the upstream project might have gone stale, is a key tenet of the free software movement.

Linux distributions such as Debian, Fedora, Ubuntu, Arch Linux, and many, many more have paved the way for the free desktop environment, giving users increased flexibility in how they work with their computers; in many cases, they have made it possible to reuse outdated hardware with modern software, which is especially valuable in areas of the world that lack easy access to modern hardware. On top of that, most of the key desktop applications have vibrant and active open source equivalents; here is just some of that list:

  • LibreOffice (https://libreoffice.org/): This provides a full office suite comparable to Microsoft Office
  • GNU Image Manipulation Program (GIMP) (https://www.gimp.org/): This enables image editing and manipulation similar to Adobe Photoshop
  • Inkscape (https://inkscape.org/): This is an open source vector graphics editor much like Adobe Illustrator
  • Mozilla Firefox (https://www.mozilla.org/en-US/firefox/): This draws its heritage from the 1998 open source release of Netscape Communicator, providing a modern and secure web browser

The list goes on and on and is one area that is more widely recognized when we speak about open source software. It’s also an example of where the community has often grown larger than just developers; with the projects in the preceding list, you see experienced project managers, user interface experts, and individuals with domain-specific knowledge and productivity expertise coming together to build high-quality software to be used in professional environments.

Now that we have seen how open source is implemented, let’s take a look at a few projects themselves and understand the motivations for why they have used open source as a model.

Open source projects and why they are used

Now that I’ve walked through the what of open source along with its historical roots and how open source is used, to complete The Golden Circle [3], let’s look at the why of open source.

I once heard Alan Clark of SUSE describe open source as “the ultimate scratch-your-own-itch model,” meaning that participation is tied to whatever motivates the participant. As you can imagine, this makes steering an open source project challenging (a topic we will dig into more when covering governance, bringing in new contributors, and growing contributors into maintainers in Chapter 5, Governance and Hosting Models). It also means the why open source question has no clear, universal answer.

The best way to answer the why is by looking at a few projects and understanding the motivations of those communities. Let’s take a look at some that hopefully will give you an idea of the value of and motivations behind open source projects.

PHP

If you did any sort of web development in the 1990s, you’ll be familiar with the concept of a CGI script (long live the cgi-bin directory!). Web pages were largely static, and any interaction, such as submitting a form, required a CGI script to process it on the backend. Most of these scripts were written in Perl, and others were executable binaries written in C. Yes, the web was a much simpler place then!

Rasmus Lerdorf wrote a number of these for maintaining his personal home page while he was a student at the University of Waterloo, releasing his implementation as Personal Home Page/Forms Interpreter (PHP/FI). These scripts grew over time, and with the help of Zeev Suraski and Andi Gutmans, were rewritten and renamed to a recursive acronym, PHP: Hypertext Preprocessor (one of many examples of maintainers with interesting senses of humor in naming projects in the early days of open source) [4]. As a project, this was a huge shift in web development, moving away from separate form processing displaying content on web pages to being able to embed things such as database calls and complex logic right into a web page as it was being processed.

What this also did was scratch the itch of making interactive web pages much easier to build, helping anyone with basic programming skills build a web application (although PHP is often known for producing web applications full of what is called spaghetti code, that can be the cost of progress).

One other thing I find fascinating about PHP is that Lerdorf, when asked, will humbly admit that he never intended to create a programming language when he started with PHP/FI. I’ve also heard interviews with him in which he expressed feeling the weight of being the single maintainer, with many individuals coming to him requesting new functionality or help getting it working. This same pressure falls on open source project maintainers today and is something I will explore more in Chapter 9, Handling Growth.

Blender

Computer graphics and other interactive display technology is one of the largest areas where open source is strong. In fact, most of the movies you see today and the video games you play have open source underpinnings.

Ton Roosendaal, a Dutch art director and self-taught software developer, started his own 3D animation studio, NeoGeo, in 1989. One of the software tools he wrote was called Blender, a combination of various scripts and tools for building 3D creations and visual art. It was software specifically targeted toward so-called creatives, and Roosendaal understood the struggle they had in delivering rapid changes to a 3D project amid complex customer requirements. Blender changed hands over the years that followed until, in 2002, Roosendaal formed the Blender Foundation and released the code under the GNU General Public License (GPL) to ensure the work would forever remain in the commons for creatives to use. To this day, Blender is used throughout special effects and visual effects workflows, as well as by others who do 3D model animation and development.

The why for Blender is very clear – creating a tool for and built by creatives. What is interesting about Blender is its model; while there is a foundation that sponsors grant work and has a small staff to manage operations for the project, the vast majority of development is done by volunteers from around the world. I will dig more into this model when I talk about governance models for open source projects in Chapter 5, Governance and Hosting Models.

Zowe

There is a joke amongst those in the mainframe community that every technology considered novel today was done on mainframes decades ago. For example, virtualization was introduced in 1972 with the IBM System/370 [5] but popularized in the early 2000s by VMware. And as I discussed earlier in this chapter, many of the roots of open source derive from early mainframe computer operators and developers. As is often said, what’s old is new!

One challenge the mainframe community was having is that the methodology for interoperating with mainframe applications and data was rooted in technology quite different from what modern developers were accustomed to using. Where a developer in 2018 might use Java, Node.js, or Python for building an application or integrating various applications, mainframe applications were often built on decades of COBOL or FORTRAN code. Some interfaces might use Representational State Transfer (REST), but others might be custom-coded or interact with an IBM 3270 terminal. This created a divide in organizations depending on mainframes, where there would be one set of individuals and skills maintaining the mainframe environment and a different set for the rest of the IT infrastructure.

Zowe was founded as a project with initial code contributions from CA Technologies (which was bought by Broadcom in 2018), IBM, and Rocket Software; it provides a framework for integrating mainframe applications and data with other applications and tools using development tools and methodologies more common to developers outside the mainframe world. The why for Zowe was twofold: one motive was solving the problem of mainframe users needing separate teams and approaches for managing mainframe applications and data versus the rest of their organizations’ computing infrastructure. Additionally, there was a growing skills gap, where the skills needed to be successful in the mainframe industry were quite different from those needed to develop software for other systems.

Open source projects that come out of companies can often have a bit of a learning curve as they adapt to the open source development methodology, and this is one project that went through that curve and has begun to embrace the model, even though the initial participants were not well versed in open source. I’ve often seen this in projects from vertical industries as well; later chapters will dig into getting your organization to launch an open source project, governance models, and growing contributors into maintainers.

PiSCSI

I grew up in a time when Apple Macintosh computers were commonplace in schools. These were innovative at the time, both for their capabilities and for their iconic design. There is a whole enthusiast community around these beige and platinum-colored machines, painstakingly restoring them to working order to preserve an important part of computing history. I happen to be one of those folks, currently working on restoring an Apple Macintosh IIsi circa 1990.

Apple Macintosh computers of that era, like those from other manufacturers, used hard disk drives with the then-popular Small Computer System Interface (SCSI). Over the decades, not only have hard drive sizes increased from the 20-megabyte or 40-megabyte drives of that era to the 1-terabyte drives of today, but the interface technology has moved on from SCSI to Parallel ATA and now Serial ATA, meaning that replacement hard drives for machines more than 30 years old are hard to come by. A developer under the name GIMONS built a solution called RaSCSI for the Sharp X68000 computer, which enabled an SD card and specialized adapter connected to a Raspberry Pi to take the place of a SCSI hard drive. A group of enthusiasts who saw the use case for applying this to other computers with SCSI hard drives took over the work and built the PiSCSI project. This project was created to expand on the original and add functionality, such as the ability to emulate an Ethernet interface, provide multiple disk images for a computer managed through a web interface, and other useful features that let these computers be used even with a failed hard drive.

Figure 1.1: My Mac IIsi

This is a great example of a community of enthusiasts motivated to solve a common problem (emulating SCSI hard drives using common, lower-cost components) that, over time, grew to scratch more itches, such as connecting vintage computers to modern networks. It provides both software built to run on the Raspberry Pi and diagrams for building the custom hardware needed (and if you don’t want to go to that trouble, you can buy kits to assemble or fully built devices). Being open source helps bring together this large community to improve the project and provide feedback, creating a commons that everyone can benefit from.

Summary

Open source, while driven by a multitude of motivations and a diverse group of enthusiasts, is tied together by a common spirit: the idea of freely sharing code and knowledge with others through open collaboration in decentralized communities. Open source has been built on decades of collaborative spirit, with the ideal of sharing information to advance humanity. I’ve often seen open source described as the next Renaissance, harking back to the same outpouring of knowledge and innovation that advanced society, and if you look at the last 3 to 4 decades, you can truly see how much our society has advanced in technology (which, we could agree, has opened up a new set of problems, but that is one of the aftereffects of progress, and we’ve tended to see society respond and correct over time).

This chapter was intended to give you a good foundation of the what and why of open source, which then lets us dig into the next key topic – what makes a good open source project.

Further reading

[1] https://www.theverge.com/2021/5/18/22440813/android-devices-active-number-smartphones-google-2021

[2] https://iwanttolearnruby.com/how-many-ruby-on-rails-developers-are-there/#1

[3] https://fourweekmba.com/golden-circle/

[4] https://www.php.net/history

[5] https://en.wikipedia.org/wiki/Hardware_virtualization#Hardware-assisted_virtualization
