Old School Web Applications

Perhaps it will soon be a shock to say that when I started programming, I built software for a computer – a very real, physical assembly of electronic parts I could touch and put together. When I was 10, we wrote BASIC applications on an Apple IIe; slightly older, I was making DOS apps for IBM clones we built ourselves; then came C++ projects in college on SPARC workstations, and C# business applications running on an Intel server in a data closet at a ‘real job’. Rarely did we bother abstracting code from the underlying physical machine. In retrospect, glossing over this distinction locked me into a particular notion of what a web application is.

When asked how to build a web application, I immediately pictured physical machines with Network Interface Cards (NICs) plugged into the Internet via cables – actual wires in a server room. Those NICs required networking experts to configure the communication stack for the server room’s network (Novell NetWare, anyone?). Once your servers were Internet-capable, web servers and middleware were yet another set of building blocks to assemble onto the machine. Only after all of that was in place could you craft and unleash your “Hello Internet” web application upon the world.

The building blocks of web application assembly are vivid for me because I have gone through the entire construction process several times. Building an application stack is not easy, and you spend a lot of time at each level trying to understand why things are or aren’t working as you expect. Everything must be configured and placed precisely, or the engine won’t deliver your application. Because of this, once you’re satisfied that a level is ready, it is useful to think of each layer as a solid brick for the upper levels to sit upon.

My Computing Evolution

My concrete connection between physical wires and code began to chip away after I joined Wily Technologies and started doing deep dives into Java. Code was still code running on servers in a data closet, but applications were now funneled through Wily interceptors into enhanced bytecode. As long as you were clever and respected the Java Virtual Machine’s rules, you could safely and dynamically gain deep, real-time visibility into what was happening as your app processed user requests.

This visibility was the first time that distinguishing the physical machine from the code it was running provided me with tangible, practical value. I relearned the obvious – a lot goes on between the pixels in a text editor where I write code and the server’s CPU that eventually runs it. And although it is useful to think of the computer’s software layers as bricks while you’re knee-deep in constructing applications, in certain contexts they behave more like jelly cubes.

Morphing these Java runtime environments into gelatinous bytecode cubes afforded clever new operational practices for production applications, but then VMware came along and added even more depth. By sparking a virtualization revolution on commodity hardware, VMware transmuted yet another brick into gelatin – the server itself.

VMware’s virtual software clones of physical machines were a big step forward in flexible computing, but it was sometimes hard to keep the details straight. Physical machines ran VMware virtual-machine clones, which ran Java virtual machines, and inside those Java virtual machines we ran Wily agents that dynamically changed the runtime itself. Code was still code, but it was far more abstract and elastic than when I assembled solid bricks into applications.

However, even after VMware, there were still times when I could not escape the drudgery of mucking around with physical components. When a hassle like virtual NICs keeps you up until way past your bedtime on a Sunday night, no amount of consoling can make you forget the pain.

Thankfully, the last physical impediments to freely running code proved short-lived thanks to Docker’s container technology. Now we conjure up container spaces anywhere in the world – easily reproducible buckets where computing resources chomp through whatever code we place in them. Hand in hand with cloud services that will run any container we provide, I believe we have truly reached a juncture where computing is a utility – an on-demand, pay-as-you-use resource for processing the instructions we assemble in a text editor.
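That “easily reproducible bucket” can be sketched as a minimal Dockerfile. This is only an illustration – the base image, file name, and port here are hypothetical choices, not from any particular project:

```dockerfile
# Hypothetical "Hello Internet" app: a base image plus our code,
# packaged so any machine (or cloud service) can run the same bucket.
FROM python:3.12-slim

WORKDIR /app
COPY app.py .

# The same image runs identically on a laptop or in any
# container-hosting cloud service.
EXPOSE 8000
CMD ["python", "app.py"]
```

Because the recipe pins everything above the kernel, the physical machine underneath stops mattering – which is the whole point.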

A Winding Journey

Fortuitously for me, it’s into this brave new world of utility computing that I’ve begun my Clojure journey. Before now, I believe my difficulty separating the physical CPU from the brick tiers above it would have led me to reject Clojure. If you are preoccupied with the physical, you won’t appreciate how ignoring it in certain situations can make things better. It turns out I needed to see many different systems at maximum complexity to realize the zen of coding in Clojure.

I am still a novice Clojure programmer, but I’ve begun to see glimpses of the power and simplicity it affords.  In my next post, I’ll discuss my perception of what Clojure brings to an application, and why I think grasping Clojure requires thinking about code differently.
