Why client-server must die

October 29, 2015
I write this week from IBM’s Insight conference in Las Vegas. A former InfoWorld editor in chief, Stewart Alsop, predicted that the last mainframe would be unplugged in 1996. This week I'll attend a session where IBM runs Apache Spark on a mainframe, even as the mighty beast's luster finally fades.

I'm going to the Spark-on-the-mainframe session for the lolz. IBM loves its mainframes because they sustain one of the few noncompetitive hardware businesses in existence, where IBM can make nearly a 50 percent margin.

The mainframe business is also one of the only legitimate areas of computing where you'll see ©1980 on the startup screen. Client-server computing, by contrast, doesn't depend on specific hardware; it's simply a computing model that has evolved under various hardware and network constraints.

I'm sure we -- that is, me and the LinkedIn or Twitter spheres -- can quibble over the definition of client-server versus the model I'll call "purely distributed." So allow me to define client-server as one or more clients connected to a server that listens on a socket or pool of sockets, scales mainly vertically, and usually has a central data store. This is the model of the LAN.

I'll define the distributed model as N-clients or peers connected to a mesh of N servers that mainly scale horizontally and use a data store or stores that also shard and distribute processing. This model is built to tolerate failure and demand spikes, enabling you to add more nodes (often linearly) and relocate infrastructure at will. This is the model of the cloud.
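To make the contrast concrete, here's a minimal sketch, assuming nothing beyond the definition above: a toy hash ring in Python, with hypothetical node names, showing how a sharded store can spread keys across a mesh of nodes. It's illustrative only, not how any particular product implements sharding.

    import hashlib

    class HashRing:
        # Toy consistent-hash ring: each key maps to a node, and adding
        # a node moves only a fraction of the keys. Illustrative only.
        def __init__(self, nodes, vnodes=64):
            self.vnodes = vnodes
            self.ring = {}  # position on the ring -> node name
            for node in nodes:
                self.add_node(node)

        def _pos(self, value):
            # Hash a string to a position on a 2**32 ring
            return int(hashlib.md5(value.encode()).hexdigest(), 16) % (2 ** 32)

        def add_node(self, node):
            # Virtual nodes smooth out the key distribution
            for i in range(self.vnodes):
                self.ring[self._pos("%s#%d" % (node, i))] = node

        def node_for(self, key):
            # Walk clockwise to the first node position at or past the key
            pos = self._pos(key)
            positions = sorted(self.ring)
            for p in positions:
                if p >= pos:
                    return self.ring[p]
            return self.ring[positions[0]]  # wrap around the ring

    ring = HashRing(["node-a", "node-b", "node-c"])  # hypothetical hosts
    print(ring.node_for("user:1042"))
    ring.add_node("node-d")  # scale out; most keys stay where they were
    print(ring.node_for("user:1042"))

Contrast that with the client-server answer to more load, which is a bigger box.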

The power of this more distributed model goes beyond purely scaling up to include scaling down. This is important because one of the implied fallacies of client-server was that workloads are predictable.

From the start this has failed to be true. In the distant past, I've administered systems that were rendered useless for all other purposes during end-of-month reporting, then saw only light use throughout the rest of the month. Ironically, this same fallacy is also why mainframe TPC benchmark studies are nonsense. Remember when Slashdot was your browser home page and mere mention of your site caused an outage due to a spike in traffic called the Slashdot effect? The whole Internet is like that now.

Have you ever tried to set up a test database for a large, existing, Oracle-based project? You need to be able to scale up for unpredictable Internet-age data traffic and usage patterns, but you also need to scale down to conserve resources (read: a massive Amazon bill) and adapt nimbly (not to mention test the project on your laptop).

Workloads keep getting more unpredictable and in many cases more voluminous. Moreover, our expectations have increased. Waiting isn't really acceptable, and outages in the age of Google are considered major professional failures. Competition in many areas is fierce and global, while regulations have more bite (at least until President Trump takes office).

Our client-server systems won't scale to meet real-time demands. They are not resilient and, in many cases, not cloud-ready. Meanwhile, it has become much, much easier to write distributed systems. It takes no time to deploy a few MongoDB instances compared to Oracle or even SQL Server. Spark has a supersimple API. Node.js lends itself nicely to writing event-driven, resilient distributed systems. Plus, they're all easier to use than their predecessors.
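For a sense of what "supersimple" means, here's the canonical word count in Spark's Python API. The file name and the local master setting are my own assumptions for the sketch; only the API itself is Spark's.

    from pyspark import SparkContext

    # Run on all local cores; on a real cluster you'd point the
    # master at the cluster manager instead of "local[*]"
    sc = SparkContext("local[*]", "wordcount")

    counts = (sc.textFile("input.txt")              # assumed sample file
                .flatMap(lambda line: line.split()) # lines -> words
                .map(lambda word: (word, 1))        # word -> (word, 1)
                .reduceByKey(lambda a, b: a + b))   # sum counts per word

    for word, count in counts.take(10):
        print(word, count)

    sc.stop()

The same dozen lines run unchanged on a laptop or on a cluster, which is rather the point.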

Naysayers will point out that these new technologies have relatively small market penetration, but in truth, it’s growing. Some say a technology dies when its developers retire. You may have to pry Oracle out of those PL/SQL developers' cold dead hands, but it will happen. Today, millennials tend to feel more comfortable with MongoDB than even MySQL.

The client-server era will die in the cloud. In 20 years, as I start to eye retirement, no new client-server systems will be put into place for normal business use outside of very specialized areas. The new stuff is simply too much better. It doesn't require a specific deployment model, it's easier and cheaper, and it fits the expectations and use cases of the modern business world.

Will the last client-server system be unplugged in 20 years? No -- some sectors of business aren't growing very fast, are protected from competition, or aren't facing new regulations, and they don't need to write or buy much new software. They'll run what they have until the cows come home.

However, we as an industry don't care too much about them because they don’t pay our bills. Instead, we hope they all get Ubered.

(www.javaworld.com)

Andrew C. Oliver