In my November 4, 2015 keynote at the SynBioBeta conference, I talk about lessons from open source software and the internet that should shape our thinking about the bio revolution. Licenses are only part of the open source story. The architecture of interoperability may matter even more.
19. "The Law of Conservation
of Attractive Profits"
"When attractive profits disappear at one
stage in the value chain because a product
becomes modular and commoditized, the
opportunity to earn attractive profits with
proprietary products will usually emerge at an
adjacent stage."
-- Clayton Christensen
Author of The Innovator's Solution
In Harvard Business Review, February 2004
20.
21. @timoreilly #SynBioBeta@timoreilly #SynBioBeta
More open licenses are necessary,
but they are rarely sufficient.
We must fight restrictive licenses and
other forms of IP, but replacing them
with open licenses isn’t the answer.
22
24. @timoreilly #SynBioBeta@timoreilly #SynBioBeta
“Everyone applauds when Google goes after Microsoft’s Office monopoly, seeing it
simply as “turnabout’s fair play,” (and a distant underdog to boot), but when they start to
go after web non-profits like Wikipedia, you see where the ineluctable logic leads. As
Google’s growth slows, as inevitably it will, it will need to consume more and more of the
web ecosystem, trading against its former suppliers, rather than distributing attention to
them. We already take for granted that common searches, such as for weather or stock
prices, are satisfied directly on the search screen. Where does that process stop?
“Ultimately, I think we see this pattern in the economic development of every innovation.
When a new technology is introduced, there’s a lot of green-field opportunity, and so
much value is being created that there’s no need to capture it all. But as the technology
matures, the winners need to capture more of the total value being created. They
gradually crowd out suppliers as well as competitors.”
25
28. @timoreilly #SynBioBeta@timoreilly #SynBioBeta
Small Pieces Loosely Joined
Enabled by:
• Common, well-understood data formats
• A communications protocol
• A variety of tools for accessing and representing the
data
29
29. @timoreilly #SynBioBeta@timoreilly #SynBioBeta
Who sets the gauge rules the world
Sixty per cent of the world's
railways use 4 ft 8 1⁄2 inch
standard gauge, developed by
George Stephenson in 1822.
30
http://www.warwickshirerailways.com/lms/lnwrns305.htm
I was disappointed to discover that this great quote attributed to Mark Twain is actually a modern fabrication. But the sentiment is completely true.
When I look at the history of the computer industry, I see a recurring pattern: a huge Cambrian explosion of innovation happens when someone makes their breakthrough work available to the world for others to build on. The fundamental architecture of modern computing, developed by John von Neumann and his team at the Institute for Advanced Study in Princeton during and after WWII, was put into the public domain, and enabled the first wave of computing, led by IBM. Then Don Estridge, the head of IBM's PC division, published the specifications for the PC for everyone to copy, opening the door for Michael Dell to start his company from a university dorm room. And more recently, Tim Berners-Lee kicked off the web revolution by putting his work into the public domain.
We see this same pattern in biology. Back when Craig Venter and the Human Genome Project were racing to be the first to sequence an entire human genome, Venter was hoping to patent the work, but Jim Kent's genome assembler helped the public project keep the human genome in the public domain. And of course, Tom Knight and Drew Endy have worked tirelessly to build an open source culture and community around synthetic biology.
But all too many people still think that the freedoms of open source software come from licenses. Licenses are a necessary part of the story, but they are rarely sufficient. We must fight restrictive licenses and other forms of intellectual property lock-in, but simply replacing them with open licenses isn't the answer.
And the history of what became Linux teaches us that free software licenses came after, not before, the fundamental innovations that grew into it. Despite having a proprietary license and being owned by one company, Unix was developed collaboratively by small teams of independent developers, and much of the fundamental software of Unix and the internet was written at universities. The AT&T license was just open enough for sharing to happen. When AT&T tried to clamp down, the Berkeley Unix project carried on, and many of the utilities it had built were copied and added into Linux, which was just then emerging.
So what do these things have in common? Unix/Linux. The Internet. The World Wide Web. Wikipedia.
They all have an architecture of "small pieces loosely joined," enabled by:
• Common, well-understood data formats
• A communications protocol
• A variety of tools for accessing and representing the data
In the Wikipedia entry for the book The Unix Programming Environment, I wrote: “The book is perhaps most valuable for its exposition of the Unix philosophy of small cooperating tools with standardized inputs and outputs, a philosophy that also shaped the end-to-end philosophy of the Internet. It is this philosophy, and the architecture based on it, that has allowed open source projects to be assembled into larger systems such as Linux, without explicit coordination between developers.”
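To make that concrete, here is a minimal sketch of the same idea in Python (my illustration, not code from the talk or the book): three single-purpose filters with standardized line-oriented inputs and outputs, which compose like a shell pipeline even though each stage could have been written by a different author who never coordinated with the others.

```python
# Small cooperating tools with standardized I/O: each filter consumes
# and produces plain lines of text, so any of them can be chained with
# any other, exactly like programs in a Unix pipeline.
import sys
from typing import Iterable, Iterator


def lower(lines: Iterable[str]) -> Iterator[str]:
    """Normalize each line to lowercase (in the spirit of tr)."""
    return (line.lower() for line in lines)


def grep(pattern: str, lines: Iterable[str]) -> Iterator[str]:
    """Pass through only lines containing `pattern` (in the spirit of grep)."""
    return (line for line in lines if pattern in line)


def head(n: int, lines: Iterable[str]) -> Iterator[str]:
    """Pass through only the first `n` lines (in the spirit of head)."""
    for i, line in enumerate(lines):
        if i >= n:
            break
        yield line


if __name__ == "__main__":
    # Equivalent in shape to: cat - | tr '[:upper:]' '[:lower:]' | grep error | head -5
    for line in head(5, grep("error", lower(sys.stdin))):
        sys.stdout.write(line)
```

None of these stages knows anything about the others; the shared contract (lines in, lines out) is what lets them be loosely joined.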
And when we were putting together our 1999 book Open Sources, a collection of essays by free software and open source leaders, Linus Torvalds remarked, "I couldn't have built a new kernel for Windows even if I had access to the source code. The architecture just didn't support it."
One of the key tenets of the early internet design was something called "The Robustness Principle." The idea was that the internet would be robust if people followed what really amounts to the Golden Rule applied to networks: "Be conservative in what you do, be liberal in what you accept from others."
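As a small, hedged illustration of what that looks like in practice, here is a Python sketch (mine, not from the talk) applying the principle to a simple key=value record format: the reader is liberal about messy input, while the writer emits exactly one canonical form.

```python
# The Robustness Principle in miniature: parse_records() is liberal in
# what it accepts (stray whitespace, blank lines, comments, Windows line
# endings, mixed-case keys); emit_records() is conservative in what it
# sends, producing a single canonical output form.
from typing import Dict, Iterable


def parse_records(lines: Iterable[str]) -> Dict[str, str]:
    """Liberal reader: tolerate sloppy but recognizable input."""
    records: Dict[str, str] = {}
    for raw in lines:
        line = raw.strip()                 # absorb \r\n and stray spaces
        if not line or line.startswith("#"):
            continue                       # absorb blank lines and comments
        key, sep, value = line.partition("=")
        if not sep:
            continue                       # skip lines we cannot interpret
        records[key.strip().lower()] = value.strip()
    return records


def emit_records(records: Dict[str, str]) -> str:
    """Conservative writer: one predictable, canonical form."""
    return "".join(f"{key}={value}\n" for key, value in sorted(records.items()))


if __name__ == "__main__":
    messy = ["  HOST = example.org \r\n", "\n", "# a comment\n", "port=8080\n"]
    print(emit_records(parse_records(messy)), end="")
    # Prints:
    #   host=example.org
    #   port=8080
```

When every participant writes canonically but reads tolerantly, small deviations are absorbed at the edges instead of cascading through the network.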
This needs to be true of biotech as well.
But that began to change. Google didn't change the architecture of the web, but it did change the architecture of control over it.
I was on the board of Nutch, a non-profit formed by Doug Cutting, the original architect of Hadoop. The idea was to build an open source search engine, but Doug soon realized that Nutch could never reach the scale of Google, because search was no longer just a matter of software and algorithms, but of data and operations at scale.
Similarly, Don Estridge "freed the PC," but Bill Gates realized that there was a new form of lock-in via software.
Back in 2003, I gave a talk called "The Open Source Paradigm Shift," where I started by talking about the architecture of the PC industry, which looked something like this: a desktop application stack with proprietary software and hardware lock-in by a single-source supplier, controlled via APIs, sitting on top of a system assembled from standardized commodity components.
With their mindset shaped by the desktop application stack, open source developers imagined the pattern replaying itself. They accepted "Intel Inside" and loved the cheap commodity PCs, but they imagined proprietary software being replaced by free and open source applications at the top of the stack: Red Hat or maybe SuSE would displace Microsoft, MySQL would displace Oracle, and so on.
But instead, we got a world that looks like this: an internet application stack built as an integration of commodity components (free and open source software like Apache among them), with proprietary software as a service and subsystem-level lock-in emerging above. This is my slide from 2003; obviously, some of the companies highlighted would be different today.
Clayton Christensen described this pattern perfectly in a February 2004 Harvard Business Review article. He called it "the law of conservation of attractive profits": "When attractive profits disappear at one stage in the value chain because a product becomes modular and commoditized, the opportunity to earn attractive profits with proprietary products will usually emerge at an adjacent stage."
That's why well-meaning initiatives like the Open Source Seed Initiative, which is modeled on the Free Software Foundation's approach, may be missing the point.
Much as I love 23andMe and what they are doing, when I see a headline like this, it really worries me, because a future in which one company controls too much data is not a future that will keep the innovation engine going.
You see, the real risk is that, whatever their good intentions, companies with a monopoly position eventually tend to exploit it. In 2007, I wrote an article about lessons from Wall Street for the future of the Internet.
In that article, I wrote: "Everyone applauds when Google goes after Microsoft's Office monopoly, seeing it simply as "turnabout's fair play" (and a distant underdog to boot), but when they start to go after web non-profits like Wikipedia, you see where the ineluctable logic leads. As Google's growth slows, as inevitably it will, it will need to consume more and more of the web ecosystem, trading against its former suppliers, rather than distributing attention to them. We already take for granted that common searches, such as for weather or stock prices, are satisfied directly on the search screen. Where does that process stop?
“Ultimately, I think we see this pattern in the economic development of every innovation. When a new technology is introduced, there’s a lot of green-field opportunity, and so much value is being created that there’s no need to capture it all. But as the technology matures, the winners need to capture more of the total value being created. They gradually crowd out suppliers as well as competitors.”
The answer lies in architecture. I saw an article recently called "The Internet of DNA," and that is the right answer. How do we create an architecture for synthetic biology and genetic data that makes sharing of data the norm, and that builds applications on a common shared substrate?
That is: how can we build an architecture of "small pieces loosely joined," enabled by the same three things (sketched in code below)?
• Common, well-understood data formats
• A communications protocol
• A variety of tools for accessing and representing the data
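What might the first item on that list look like in code? Here is a minimal sketch using FASTA, a plain-text sequence format biologists already share widely; the parser, writer, and the little gc_content "tool" are my own illustrative examples, not any standard library.

```python
# A common, well-understood data format as the substrate for loosely
# joined tools: a tiny FASTA reader, a canonical FASTA writer, and one
# independent "tool" (GC content) built against the same simple contract.
from typing import Dict, Iterable


def parse_fasta(lines: Iterable[str]) -> Dict[str, str]:
    """Read FASTA records into a {header: sequence} mapping."""
    records: Dict[str, str] = {}
    header = None
    for raw in lines:
        line = raw.strip()
        if not line:
            continue
        if line.startswith(">"):
            header = line[1:]
            records[header] = ""
        elif header is not None:
            records[header] += line
    return records


def write_fasta(records: Dict[str, str], width: int = 60) -> str:
    """Emit records in one canonical form: fixed-width sequence lines."""
    out = []
    for header, seq in records.items():
        out.append(f">{header}\n")
        for i in range(0, len(seq), width):
            out.append(seq[i:i + width] + "\n")
    return "".join(out)


def gc_content(seq: str) -> float:
    """One small tool built on the shared format: GC fraction."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0


if __name__ == "__main__":
    sample = [">demo fragment\n", "ATGCGC\n", "GGTA\n"]
    records = parse_fasta(sample)
    for name, seq in records.items():
        print(name, f"GC={gc_content(seq):.2f}")   # demo fragment GC=0.60
    print(write_fasta(records), end="")
```

The point is not this particular format; it is the contract. Once the format is simple and shared, any number of independently written tools can be loosely joined on top of it, and a communications protocol for exchanging such records is the obvious next piece.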
The lesson I want to leave you with comes from British history and the design of real-world platforms: who sets the gauge rules the world. Sixty percent of the world's railways use the 4 ft 8 1⁄2 in standard gauge originally developed by George Stephenson in 1822. It was a foundational tool for the British Empire, and was eventually copied by other nations around the world.
In a very different context, Mike Bracken, the founder of the UK’s Government Digital Service, put it in one line: “The strategy is delivery.”
Build a system with the architecture of interoperability that you want, and insist on that interoperability.