AJAX on the Enterprise
By: Kurt Cagle
Jan. 11, 2007 06:15 PM
In Star Trek, Scotty – James Montgomery Scott – was my favorite character, perhaps inevitably. Spock was always the cool and collected uber-genius, inscrutable and forced into an emotional straitjacket, and while the parallels to the realpolitik of the time are obvious, to me Spock has always been the epitome of the pure ivory-tower researcher. Scotty, on the other hand, was the engineer, in many ways the ultimate hacker. Spock may have been able to tell you what properties of dilithium would induce warp speed, but Scotty knew exactly how to crack the damn crystals in such a way as to eke out that last 0.5 warp factor necessary to escape the baddies chasing the Enterprise.
Scotty knew about estimates – and how much you could pad an estimate to ensure that you got the time necessary to complete your work, down to the minute. He was not above a brawl or two, but when it came right down to it, a vacation was the time it would take to finally get to that stack of Linux magazines from 2215 that you’d been putting aside for the last five years and just read.
The Enterprise needed Scotty far more than it needed Kirk or Spock or even McCoy, yet he was always little more than an odd bit player, the one who was never on the bridge…unless he was repairing one of the computer panels that everyone else kept falling into every time the gravitational system failed, which usually didn’t happen because of anything that Scotty did, but because Kirk seemed to have absolutely no sense of restraint, or of the cost involved in replacing one of those warp nacelles. And I…oh, I’m sorry…this piece should have been about AJAX, wasn’t it?
In the introduction to the early Star Trek episodes, the hope of NBC (or at least Gene Roddenberry) was fairly clear – the Enterprise was on a five-year mission to explore strange new worlds.
However, I believe that there was something about that five-year bit that’s actually pretty important in the here and now. In the 1960s, Central Planning was as much a part of the American economy as it was of the old pre-perestroika Soviet economy, and the five-year plan was often taken as a convenient metric for how far one could plan before things became too unpredictable.
Five years also seems to be about the lifespan that it takes for “major” technologies to go from being a good idea to becoming foundational. (Note that this differs fairly significantly from product marketing lifecycles, which seem to have about a three-year cycle from inception to obsolescence). I believe that we’re at one of those interesting transitional points where things are really changing in radical ways, the end of one “five-year mission” and the beginning of another, waiting only for Picard to make it so.
Five years ago, several very interesting things were happening, both in software and in business in general. The tech sector was collapsing, warp shields blowing left, right, and center. Now, to someone who’s weathered a few of these five-year plans, the tech sector collapsing was really nothing new – it’s an industry that’s built on promises of miracles, and every so often the bill comes due. People who invest in tech hoping for outsized gains are generally deluding themselves – tech always underperforms in the short term and overperforms in the long, but in ways that few people can really imagine.
However, in spite of, or more likely because of, this collapse, people who had been hoarding their cool ideas to capitalize on the next Bay Area VC suddenly found themselves unemployed, sitting in their parents’ spare bedroom with time on their hands while they waited for some response — ANY response — from an employer. So they did what computer people always do when the next boom becomes the next bust – they began to network.
Standards groups that had been rushing to get something out the door began to slow down and actually take some time thinking about those standards. Several good ones came out between 2000 and 2002 – XSLT, XPath, XML Schema (well, maybe not Schema), XForms, XHTML, DocBook (just for a break from the 24th letter of the alphabet), SVG, ebXML, RDF, and a whole host of specialized industry-specific languages from XBRL to MathML to HumanML (yup, it’s up there in OASIS – I was a member of that working group for a while).
Meanwhile, Linus Torvalds’ pet project went from an interesting hobbyist effort to looking like a standard itself, accreting stillborn commercial products that were given new life in the long tail, and reinforcing the notion believed by most programmers (and espoused quietly by Scotty himself more than once) that if you get two developers communicating with one another, you get something more than twice as good as what each could develop separately, that three tend to add value proportionately, and so forth.
In other words, those five years of “downtime” were a time of real research and development, done not in hopes of getting that next crucial patent (or the million-dollar payoff) but because the work represented real needs that had to be met, and it was to everyone’s benefit to meet them. Standards matured, projects started and worked and bloomed and died, and out of the remnants came new projects and further tinkering with standards.
One of those revenant projects was the ghost of Netscape. I’m going to speak what’s heresy here in some circles: the browser that lost the browser wars came back from the dead as the open source Mozilla project, and eventually as Firefox.
There’s a lesson in that, a lesson that emerged with HTML (and that occasionally needs to be relearned on the XML side).
Messaging and the XMLHttpRequest Object
The XMLHttpRequest object, introduced by Microsoft in Internet Explorer and subsequently adopted by Mozilla, Safari, and Opera, is at heart a message pump. That message pump means that you can send information from the client to the server and back from within a Web page. Of course, you can do that anyway, but the important distinction is that with the message pump you’re not necessarily forced into refreshing the entire page every time you need to change some aspect of it. In programming circles, this means that state management no longer has to be done exclusively within the server, but can in fact be significantly offloaded to the client.
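As a minimal sketch of that pump (browser-only code; the element id and the /status.xml URL here are hypothetical stand-ins, not from any particular application), fetching a fragment of XML and updating just one corner of the page looks something like this:

```javascript
// Minimal sketch: refresh one fragment of a page without reloading it.
// Browser-only; "status" and "/status.xml" are hypothetical placeholders.
function refreshStatus() {
  var req = new XMLHttpRequest();
  req.open("GET", "/status.xml", true);   // asynchronous request
  req.onreadystatechange = function () {
    if (req.readyState === 4 && req.status === 200) {
      // Replace just the status panel; the rest of the page is untouched.
      var panel = document.getElementById("status");
      var msg = req.responseXML.getElementsByTagName("message")[0];
      panel.textContent = msg.textContent;
    }
  };
  req.send(null);
}
```

Wired to a timer or a button, this is the whole trick: the page stays put, and only the status panel changes.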
Now, to take this back to the Star Trek metaphor again (hah, you thought I’d forgotten, hadn’t you!), imagine the bridge of the Enterprise if every status report and every command had to be routed through Kirk personally. The ship works because helm, navigation, and engineering each talk to the ship’s systems directly, keeping track of their own state and bothering the captain only when something actually needs his attention.
To get back to contemporary terminology, what this means in practice is that rather than creating a single page from fairly complex components on the server and needing to maintain this information on the server, you instead push the components onto the client, and each component in turn becomes responsible for its own interactivity with the server. The state of the application in turn either ends up residing in each component, or in a client-side “model” server which all of the other components interact with.
The distinction between these two forms is important (and I’ll get to them momentarily), but one of the immediate upshots is that the server can in fact become dumb – it doesn’t need, in either situation, to retain anywhere near as much state as it did before for that particular session or application. This has a number of immediate consequences: sessions become far lighter (or disappear altogether), responses become easier to cache, and servers can be load-balanced or restarted without destroying the state of a user’s application.
From the client standpoint, however, things tend to get potentially more complex. (The third law of programming thermodynamics: complexity never disappears, it only moves around.) Browsers are perhaps more homogeneous with regard to interfaces than they were five years ago (especially now that Internet Explorer 7 is on the horizon), which in turn means that the amount of specialized code necessary to write to the diversity of browsers has shrunk. It hasn’t disappeared entirely yet, though the good news is that the benefits of maintaining a uniform set of interfaces appear to have sunk in with just about all of the major players.
This point in turn raises another, and in many ways more crucial, one. The AJAX movement is not about calling home without refreshing the page, nor about cool widgets appearing in Web pages, displaying the latest feeds from Slashdot or neat drag-and-drop effects, though certainly all of these have a place. Instead, the primary driving motivation of AJAX is to turn the Web browser into a genuine application platform.
This shouldn’t be a radical point, but somehow it is. Your customers, the employees at your company, and you individually spend a huge amount of time in front of Web browsers, which are in turn becoming the primary interfaces for all modalities of communication. I haven’t used a standalone e-mail application in months, most of my IM communication occurs in a browser context, and increasingly my production tools exist as extensions to my Web browser. For many people, going from a Web interface to a standalone application seems a step backwards, forcing them from their primary point of contact for news, documentation, and communication into an isolated environment where they have to run the browser in the background and click back and forth to shift between the two.
Movements are funny things, especially in technology. No one takes them seriously at first — there are no press releases, no aging rock star singing the praises of the product; usually it’s just a handful of people who recognize that there is a problem and that the “market” isn’t rushing to solve it because there’s no immediate money in it.
Often there’s a single event that sparks the whole thing – a programmer gets frustrated because no one can find information about physics papers written at the research center where he works and puts up a small set of tools for free, a grad student who sends out a note saying that because he can’t afford to use the university’s Unix implementation he’s writing his own free one, and would anyone else like to help…events that occur almost daily now that are only important in hindsight. People pitch in not for glory or money (because there’s seldom much of either) but because most software developers are a lot like Scotty – they do things because they need to be done and the problems are interesting enough to them to make it worthwhile.
Yet these sparks are almost invariably observable only in retrospect – and what’s more, such sparks are much like those that start a forest fire – there may be dozens or hundreds of them flickering around a campfire that go nowhere because conditions aren’t right, but if the weather has been dry for too long, if the underbrush is overgrown and primed then any one of those sparks (or many of them) may be responsible for the raging conflagration.
(This same argument, by the way, is one of the most compelling I’ve seen against software patents, as important as they may seem to CEOs and investors – good ideas can only exist in a proper context – too early and there isn’t enough technology to support the concepts, too late and the ideas become obsolete. Because software developers live in a medium of common (and commonly available) ideas, it’s very rare for a truly unique idea to actually occur in this space.)
About now, conditions are ripe for just such a conflagration, and AJAX is one of the sparks.
It’s arguable as to whether this should be called Web 2.0. It’s a nice catch-phrase, and I’ve written a few articles myself on what Web 2.0 really means. However, I think that this tends to mask that what’s really going on here is essentially a continuity with what happened in the 1990s, after taking a few steps back to rejig some of the basics…most notably XML.
The argument has been made elsewhere, and I’ll repeat it here, that the end of the dot-com era occurred because we pushed the prototype phase of the Web too far and thought it was complete. I’m a software developer. I practice a form of development that likely wouldn’t be out of place at any of your companies – I start with an idea, a model of where I want to go, and build it up much as a sculptor would a maquette from clay — add a module here, rewrite a part of an API over there, building something up in pieces until I get about as far as I possibly can. However, and this is the important part, this maquette exists only as a prototype, for me to understand what I need to do in the final product. Functionally it’s a mess – the API may not be consistent from one class or structure to the next, the XML may be hideously non-optimal for either performance or updating, and the documentation consists mainly of whatever comments I’ve left for myself.
However, that maquette is important to the sculptor, just as the prototype is important to a software developer. It helps both of them shape their final vision, and having completed the prototype, the developer can then go in and rebuild it right, ensuring that there’s fundamental integrity between and within the components, that the application is able to integrate properly, and that the resulting product is not only functional but (this is the critical thing) maintainable.
Your applications will start to become obsolete the moment the programmers stop working on them, because the business cases that the software was intended to solve will change – in response to changes in the business environment, and because you’ve solved the immediate business cases, which in turn opens up possibilities that weren’t open before. What this means is that your applications will spend far more time in a phase of incipient obsolescence than they did in development, which means in turn that they should be designed to age well.
Given all that, we’ve developed the prototype with Web 1.0, and like all too many products out there, the prototype was shipped. Web 2.0 is not a new Web, it’s what happens after engineers take the crash test dummy from the 100 MPH collision with a tanker truck and examine what’s left.
XML Messaging/XML Presentation
The issue of XML is perhaps fundamental to this whole discussion. XML is more than just a replacement for HTML, and after a decade of XML being out there I’m not going to spend any time digging into what exactly it is. If you don’t know, ask your programmers. If your programmers don’t know…fire them. Seriously. All of your data will eventually be moving around in XML streams of one sort or another if it doesn’t already, your databases are likely increasingly speaking XQuery as well as SQL (and there are MANY MANY benefits to that), chances are that your middleware is increasingly tasked with transporting and manipulating XML, and of course your client applications are increasingly assuming one or another XML dialect to render content. That of course is not even beginning to talk about the XML services that are out there, the fact that in your verticals your customers, business partners, and competitors are already working with industry-specific XML schemas and will be expecting you to be too. If your programmers don’t know the basics of working with XML then chances are pretty good they’ll be a liability real soon now.
Keep in mind that XML is fundamentally a mechanism for abstraction. It’s not a product – it’s not even, technically speaking, a language. It’s simply a set of conventions for structuring data in particular ways and providing means to identify compositional elements in that data. I remember one client nervously viewing the medical landscape and getting alarmed that a particular hospital group was going to XML. I was personally ecstatic – it meant that the application I had developed for the client would be able to work more easily there than with those groups that were still dealing with patient records on paper (or even in SQL databases). It’s key to this whole Web 2.0 thing – the free flow of information requires a common structural language, and XML, for all its warts, is it.
However, that doesn’t mean that XML by itself is the answer, and more importantly it doesn’t mean that XML itself hasn’t been changing to reflect the evolution of the Web. In particular, there are several key aspects of XML that will likely loom large in the next five years:
Objective XML (E4X)
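E4X (ECMAScript for XML, standardized as ECMA-357) folds XML directly into JavaScript as a native datatype, so documents can be built and queried with ordinary dot notation instead of DOM calls. A brief sketch of the syntax, which at the time was supported in Firefox and Rhino (modern JavaScript engines have since dropped it, so treat this as historical):

```javascript
// E4X: XML literals are first-class values (historical ECMA-357 syntax;
// not supported in current JavaScript engines).
var ship = <ship class="Constitution">
             <name>Enterprise</name>
             <registry>NCC-1701</registry>
           </ship>;

ship.name;                    // the <name> element, via dot notation
ship.@class;                  // the class attribute
ship.name = "Enterprise-A";   // assignment rewrites the underlying XML
```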
Atom and XML Syndication
Syndication is for more than just blogs. Incredible amounts of information in your system – from red-shirt security types that are expendable to planets that serve Earl Grey tea – can be thought of as lists, and lists can be presented as syndicated information. Atom is an XML format designed to be a good mechanism for presenting lists of content, and it comes with its own (openly specified) publishing protocol.
The fact that each Atom entry can also contain a veritable forest of links of varying types and semantics also makes it a good lightweight alternative to RDF and other relational formats, especially once people start migrating to XQuery-enabled databases.
For instance, consider a set of schematic diagrams (of, say, starships, just to keep in theme here), with each ship being one entry in an Atom feed. Each schematic in turn contains a breakdown by section, and each section in turn contains a list of callouts that point to specific items of interest in that section. If each of these lists consists of entries defined with appropriate linkage structures, then this “application” essentially becomes a matter of pulling in external “news feeds” that contain enough data to describe particular nodes in a graph while providing unique links capable of pointing to subordinate “feeds.” Certainly such information can be expressed as RDF as well, but the fixed commonality of Atom feeds means that there’s typically enough there to populate generalized components without requiring that “semantics” seep into the equation.
What’s perhaps more compelling about such syndicated feeds is that the system for displaying them assumes that the information changes over time – that the hyperlinked lists are themselves ephemeral and have some form of temporal or thematic relevance. Obviously news fits this bill well, but so do weather information, the availability of computer systems, the list of students in a given course, and so forth. An Atom list is fundamentally a cohesive “editorial” unit, with all items in the list tied together by some relevant criterion.
One of the critical issues inherent in deploying Web Services has been determining how to designate list or array content. If you think of an Atom feed as an array in which each entry carries a minimal set of “metadata” providing context for the links it contains, then you can do things like build tools that display Atom without needing to know what the specific “payload” is, which in turn makes it much easier to componentize such viewers. This is discussed more in the final section of this article, on bindings and components.
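To make the starship example above concrete, a minimal Atom feed for it might look like the following (the URLs and identifiers are hypothetical; note that a valid feed also requires title, id, updated, and an author):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Fleet Schematics</title>
  <id>urn:example:fleet</id>
  <updated>2007-01-11T00:00:00Z</updated>
  <author><name>Engineering</name></author>
  <entry>
    <title>NCC-1701 Enterprise</title>
    <id>urn:example:fleet:ncc-1701</id>
    <updated>2007-01-11T00:00:00Z</updated>
    <!-- points to the subordinate feed listing this ship's sections -->
    <link rel="related" type="application/atom+xml"
          href="http://example.com/fleet/ncc-1701/sections.atom"/>
  </entry>
</feed>
```

A generic feed viewer needs nothing beyond the title, id, updated, and link elements to display this, which is precisely the point.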
XQuery and XML Databases
Every era in computing has defined its own paradigms for reading and updating data. If you’re reading a relational database in order to convert the results into XML, then sending XML up to the server and spending time with the DOM converting it back into SQL, XQuery is for you. XQuery is a lightweight (and non-XML) language for manipulating XML, based in large part on the XPath 2.0 specification, which goes gold this month.
I’ve written two books and perhaps a dozen articles and blog postings on XQuery. They were, admittedly, too far ahead of the curve – the specification for an XML-oriented query language has been underway since before 2000, and even today the formal specification strictly handles only the query (not the update) side of data management. However, one of the most interesting facets of XML databases has been that a number of different mechanisms for handling updates have been tried, and the most elegant of them tie into the notion of performing such updates in the same query space used for getting XML out in the first place.
I think it’s fair to talk about XQuery and XML databases in the same breath. The two are fundamentally tied together, and are further tied to the notion of data-provider abstraction. A significant amount of the work involved in putting together a Web application of any complexity goes into a translation layer that communicates between the database and the Web client. For the most part, such middle-tier services involve some sort of data abstraction service such as ODBC or JDBC.
Unfortunately, such code is remarkably fragile, very verbose, always deals with information at the atomic level even when the information may be coming in (or needs to be produced) at a more abstract, aggregate level, and is all too often spread out over several different functions or Web Services, making maintenance costly and cumbersome.
XQuery shifts the processing of such queries (and potentially updates as well) out of the server language and into XQuery scripts.
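As a sketch of what that looks like (the collection path and the &lt;ship&gt; vocabulary are hypothetical, in the style of eXist), a single FLWOR expression can pull matching records out of the database and return them already shaped for the client:

```xquery
(: hypothetical collection and element names; eXist-style collection() path :)
for $ship in collection("/db/fleet")//ship[@class = "Constitution"]
order by $ship/name
return
  <entry>
    <title>{ $ship/name/text() }</title>
    <id>urn:fleet:{ string($ship/@registry) }</id>
  </entry>
```

The query, the filtering, and the construction of the client-bound XML all happen in one place, with no DOM glue code in between.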
XML databases are becoming both fast and robust, and there are some interesting update extensions proposed (and integrated into open source projects such as eXist and Sleepycat’s Berkeley DB XML) that handle the update side of XML data query in a clean and seamless way. From personal experience, such databases can cut your development time significantly in the Web application space.
XSLT and Transformations
XSLT in the hands of a good programmer is a wonder tool, especially if you can use XSLT 2.0, which also goes gold this month. It provides a means to perform exhaustive transformations from one form of XML into another, can read from multiple XML streams and produce multiple forms of output, can easily be subclassed to handle variations in formats, and works incredibly well even in bindings (which I’ll talk about shortly).
One additional facet of eXist that I like is the ability to perform XSLT transformations from within an XQuery and then continue processing the results in the same query (including passing the results on to a conditional pipeline of further transformations). I can’t stress enough how important XSLT is even now, and how it may well become the dominant mechanism for manipulating XML in the future.
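As a small sketch (the &lt;fleet&gt;/&lt;ship&gt; vocabulary is again hypothetical), a stylesheet that renders such records as an XHTML list:

```xml
<!-- Transforms a hypothetical <fleet> of <ship> records into an XHTML list. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns="http://www.w3.org/1999/xhtml">

  <xsl:template match="/fleet">
    <ul>
      <xsl:apply-templates select="ship"/>
    </ul>
  </xsl:template>

  <xsl:template match="ship">
    <li>
      <xsl:value-of select="name"/>
      (<xsl:value-of select="@registry"/>)
    </li>
  </xsl:template>

</xsl:stylesheet>
```

Because each record type gets its own template, handling a new element is a matter of adding a template rather than rewriting a procedural loop.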
The Move to XHTML
The distinction may seem minor – XHTML is, for the most part, simply an expression of HTML using XML rules rather than the older SGML rules – but the effects are profound. By shifting to XHTML, you gain all the manipulative tools of XML, including the ability to create arbitrary tags that can be transformed or otherwise bound, the ability to incorporate other namespaces (from the graphically oriented SVG to MathML to RDF for metadata to XForms), and the means to validate such XHTML content quickly and easily.
What’s more, you can incorporate XHTML fragments into transport formats such as Atom, or as secondary documentation in many other formats. Finally, even browsers that don’t formally recognize XHTML (such as Internet Explorer) can still take XHTML as valid HTML with a minor change in the response header.
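The “minor change” in question is the Content-Type response header; a sketch of the two variants a server might send, depending on what the browser advertises in its Accept header:

```
Content-Type: application/xhtml+xml    (browsers with native XHTML support)
Content-Type: text/html                (Internet Explorer and other HTML-only browsers)
```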