Lecture Notes 22
One of the minor miracles of the World Wide Web is that it makes client/server network programming easy. With the Common Gateway Interface (CGI) anyone can become a network programmer, creating dynamic web pages, frontends for databases, and even complex intranet applications with ease. If you're like many web programmers, you started out by writing CGI scripts in Perl. With its powerful text-processing capabilities, forgiving syntax, and tool-oriented design, Perl lends itself to small programs that CGI was designed for.
Unfortunately the Perl/CGI love affair doesn't last forever. As your scripts get larger and your server more heavily loaded, you inevitably run into the performance wall. A 1,000-line Perl CGI script that runs fine on a lightly loaded web site becomes unacceptably slow when it increases to 10,000 lines and the hit rate triples. You may have tried switching to a different programming language and been disappointed. Because the main bottleneck in the CGI protocol is the need to relaunch the script every time it's requested, even compiled C won't give you the performance boost you expect.
If your application needs to go beyond simple dynamic pages, you may have run into the limitations of the CGI protocol itself. Many interesting things go on in the heart of a web server -- things like the smart remapping of URLs, access control and authentication, or the assignment of MIME types to different documents. The CGI protocol doesn't give you access to these internals. You can neither find out what's going on nor intervene in any meaningful way.
To go beyond simple CGI scripting, you must use an alternative protocol that doesn't rely on launching and relaunching an external program each time a script runs. Several such alternatives are described below.
The Apache server offers you a way out of this trap. It is a freely
distributed, full-featured web server that runs on Unix and Windows NT
systems. Derived from the popular NCSA
httpd server, Apache
dominates the web, currently accounting for more than half of the servers
reachable from the Internet. Like its commercial cousins from Microsoft
and Netscape, Apache supports an application programming interface (API),
allowing you to extend the server with extension modules of your own
design. Modules can behave like CGI scripts, creating interactive pages
on the fly, or they can make much more fundamental changes in the operation
of the server, such as implementing a single sign-on security system or
logging web accesses to a relational database. Regardless of whether they
are simple or complex, Apache modules provide performance many times
greater than the fastest conventional CGI scripts.
The best thing about Apache modules, however, is the existence of
mod_perl. mod_perl is a fully functional Perl
interpreter embedded directly in Apache. With mod_perl you
can take your existing Perl CGI scripts and plug them in, usually without
making any source code changes whatsoever. The scripts will run exactly
as before but many times faster (nearly as fast as fetching static HTML
pages in many cases). Better yet,
mod_perl offers a Perl
interface to the Apache API, allowing you full access to Apache internals.
Instead of writing CGI scripts, you can write Perl extension modules that
control every aspect of the Apache server.
Move your existing Perl scripts over to
mod_perl to get
the immediate performance boost. As you need to, add new features to your
scripts that take advantage of the Apache API (or don't, if you wish to
maintain portability with other servers). When you absolutely need to drag
out the last little bit of performance, you can bite the bullet and rewrite
your Perl modules as C modules. Surprisingly enough, the performance of
Apache/Perl is so good that you won't need to do this as often as you expect.
If you want to write Apache modules, I recommend reading this book.
It will show you how to write Apache modules. Because you can get so much done with Perl modules, the book focuses on the Apache API through the eyes of the Perl programmer. It covers techniques for creating dynamic HTML documents, interfacing to databases, maintaining state across multiple user sessions, implementing access control and authentication schemes, supporting advanced HTTP methods such as server publish, and implementing custom logging systems. If you are a C programmer, don't despair: two chapters on writing C-language modules point out the differences between the Perl and C APIs and lead you through the process of writing, compiling, and installing C-language modules. The book includes complete reference guides to both the Perl and C APIs and multiple appendixes covering the more esoteric aspects of writing Apache modules. I will also be using most of the book's first chapter to provide a bird's-eye view of the state of the art in web programming today. Developing Apache modules can be an eye-opening experience.
We will first talk about general issues of web application programming and show how web server APIs in general, and the Apache server API in particular, fit into the picture.
Server-Side Programming with Apache
Before the World Wide Web appeared, client/server network programming was a drag. Application developers had to develop the communications protocol, write the low-level network code to reliably transmit and receive messages, create a user interface at the client side of the connection, and write a server to listen for incoming requests, service them properly, and transmit the results back to the client. Even simple client/server applications ran to many thousands of lines of code, the development pace was slow, and programmers worked in C.
When the web appeared in the early '90s, all that changed. The web provided a simple but versatile communications protocol standard, a universal network client, and a set of reliable and well written network servers. In addition, the early servers provided developers with a server extension protocol called the Common Gateway Interface (CGI). Using CGI, a programmer could get a simple client/server application up and running in 10 lines of code instead of thousands. Instead of being limited to C or another "systems language," CGI allowed programmers to use whatever development environment they felt comfortable with, whether that be the command shell, Perl, Python, REXX, Visual Basic, or a traditional compiled language. Suddenly client/server programming was transformed from a chore into a breeze. The number of client/server applications increased 100-fold over a period of months, and a new breed of software developer, the "web programmer," appeared.
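To make the "10 lines of code" claim concrete, here is a sketch of such a script, written in Python rather than Perl or the shell; the greeting and parameter names are invented for illustration. Under CGI the server passes the request in environment variables such as QUERY_STRING, and the script writes a header block, a blank line, and a body to standard output:

```python
# Minimal CGI-style script sketch (illustrative names, no real server).
import os
import sys
from urllib.parse import parse_qs

def respond(environ, out=sys.stdout):
    """Build a complete CGI response from the request environment."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    name = params.get("name", ["world"])[0]
    # CGI requires a header block, then a blank line, then the body.
    out.write("Content-type: text/html\r\n\r\n")
    out.write(f"<html><body><h1>Hello, {name}!</h1></body></html>\n")

if __name__ == "__main__":
    respond(os.environ)
```

A request for `script?name=Fred` would set QUERY_STRING to `name=Fred` and produce a page greeting Fred; the entire client/server plumbing is handled by the browser and the web server.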
The face of network application development continues its rapid pace of change. Open the pages of a web developer's magazine today and you will be greeted by a bewildering array of competing technologies: CGI scripts, server APIs, embedded interpreters, co-processes, client-side scripting, and integrated development environments, all of which are surveyed below.
Web Programming Then and Now
In the beginning was the web server. Specifically, in the very very
beginning was CERN
httpd, a C-language server developed
at CERN, the European high-energy physics lab, by Tim Berners-Lee,
Ari Luotonen, and Henrik Frystyk Nielsen around 1991. CERN
httpd was designed to serve static web pages. The
server listened to the network for Uniform Resource Locator (URL)
requests using what would eventually be called the HTTP/0.9 protocol,
translated the URLs into file paths, and returned the contents of the
files to the waiting client. If you wanted to extend the functionality
of the web server -- for example to hook it up to a bibliographic
database of scientific papers -- you had to modify the server's source
code and recompile.
This was neither very flexible nor very easy to do. So early on,
httpd was enhanced to launch external programs to
handle certain URL requests. Special URLs, recognized with a complex
system of pattern matching and string transformation rules, would
invoke a command shell to run an external script or program. The output
of the script would then be redirected to the browser, generating a web
page on the fly. A simple scheme allowed users to pass argument lists
to the script, allowing developers to create keyword search systems
and other basic applications.
Meanwhile, Rob McCool, of the National Center for Supercomputing
Applications at the University of Illinois, was developing another web
server to accompany NCSA's browser product, Mosaic. NCSA
httpd was smaller than CERN
httpd, faster (or so common wisdom had
it), had a host of nifty features, and was easier than the CERN software to
configure and install. It quickly gained ground on CERN
httpd, particularly in the United States. Like CERN
httpd, the NCSA
product had a facility for generating pages on the fly with external
programs, but one that differed in detail from CERN's.
Scripts written to work with NCSA
httpd wouldn't work with CERN
httpd and vice versa.
The Birth of CGI
Fortunately for the world, the CERN and the NCSA groups did not cling tenaciously to "their" standards as certain latter-day software vendors do. Instead, the two groups got together along with other interested parties and worked out a common standard called the Common Gateway Interface.
CGI was intended to be the duct tape of the web -- a flexible glue that could quickly and easily bridge between the web protocols and other forms of information technology. And it worked. By following a few easy conventions, CGI scripts can place user-friendly web frontends on top of databases, scientific analysis tools, order entry systems, and games. They can even provide access to older network services, such as gopher, whois, or WAIS. As the web changed from an academic exercise into big business, CGI came along for the ride. Every major server vendor (with a couple of notable exceptions, such as some of the Macintosh server developers) has incorporated the CGI standard into its product. It comes very close to the "write once, run everywhere" development environment that application developers have been seeking for decades.
But CGI is not the highest-performance environment. The Achilles' heel of a CGI script is that every time a web server needs it, the server must set up the CGI environment, read the script into memory, and launch the script. The CGI protocol works well with operating systems that were optimized for fast process startup and many simultaneous processes, such as Unix dialects, provided that the server doesn't become very heavily loaded. However, as the load increases, the process creation bottleneck eventually turns formerly snappy scripts into molasses. On operating systems that were designed to run lightweight threads and where full processes are rather heavyweight, such as Windows NT, CGI scripts are a performance disaster.
Another fundamental problem with CGI scripts is that they exit as soon as they finish processing the current request. If the CGI script does some time-consuming operation during startup, such as establishing a database connection or creating complex data structures, the overhead of reestablishing the state each time it's needed is considerable -- and a pain to program around.
An early alternative to the CGI scripting paradigm was the invention of web server APIs (application programming interfaces), mechanisms that the developer can use to extend the functionality of the server itself by linking new modules directly to the server executable. For example, to search a database from within a web page, a developer could write a module that combines calls to web server functions with calls to a relational database library. Add a dash or two of program logic to transform the URLs into SQL, and the web server suddenly becomes a fancy database frontend. Server APIs typically provide extensive access to the innards of the server itself, allowing developers to customize how it performs the various phases of the HTTP transaction. Although this might seem like an esoteric feature, it's quite powerful.
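The "dash or two of program logic" that turns URLs into SQL can be sketched as follows. This is a hypothetical illustration in Python, not any real server API; the path layout, table, and column names are all invented, and a real API module would perform this mapping inside the server's URL-translation phase:

```python
# Hypothetical URL-to-SQL translation, in the spirit of the database
# frontend described above. A path like /papers/author/smith becomes
# a parameterized query against an invented "papers" table.
def url_to_sql(path):
    parts = [p for p in path.split("/") if p]
    if len(parts) == 3 and parts[0] == "papers":
        field, value = parts[1], parts[2]
        # Whitelist the searchable columns; never splice raw URL text
        # into SQL, and pass the value as a bound parameter.
        if field in ("author", "year", "title"):
            return f"SELECT * FROM papers WHERE {field} = ?", (value,)
    return None  # not a URL this module handles
```

A module registered for the `/papers` URL space would run queries like this and format the rows as HTML, making the web server behave as a database frontend.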
The earliest web API that we know of was built into the Plexus web server, written by Tony Sanders of BSDI. Plexus was a 100 percent pure Perl server that did almost everything that web servers of the time were expected to do. Written entirely in Perl version 4, Plexus allowed the webmaster to extend the server by adding new source files to be compiled and run on an as-needed basis.
APIs invented later include NSAPI, the interface for Netscape servers; ISAPI, the interface used by Microsoft's Internet Information Server and some other Windows-based servers; and of course the Apache web server's API, the only one of the bunch that doesn't have a cute acronym.
Server APIs provide performance and access to the guts of the server's software, giving them programming powers beyond those of mere mortal CGI scripts. Their drawbacks include a steep learning curve and often a certain amount of risk and inconvenience, not to mention limited portability. As an example of the risk, a bug in an API module can crash the whole server. Because of the tight linkage between the server and its API modules, it's never as easy to install and debug a new module as it is to install and debug a new CGI script. On some platforms, you might have to bring the server down to recompile and link it. On other platforms, you have to worry about the details of dynamic loading. However, the biggest problem of server APIs is their limited portability. A server module written for one API is unlikely to work with another vendor's server without extensive revision.
Another server-side solution uses server-side includes to embed
snippets of code inside HTML comments or special-purpose tags. NCSA
httpd was the first to implement server-side includes.
More advanced members of this species include Microsoft's Active
Server Pages, Allaire Cold Fusion, and PHP, all of which turn HTML
into a miniature programming language complete with variables, looping
constructs, and database access methods.
Netscape servers recognize HTML pages that have been enhanced with
scraps of server-side JavaScript and execute the code in the server's
process. Embperl, which runs on top of Apache's
mod_perl module, marries HTML
to Perl, as does PerlScript, an
extension for Microsoft
Internet Information Server.
The main problem with server-side includes and other HTML extensions is that they are ad hoc. No standards exist for server-side includes, and pages written for one vendor's web server will definitely not run unmodified on another's.
To avoid some of the problems of proprietary APIs and server-side includes, several vendors have turned to using embedded high-level interpretive languages in their servers. Embedded interpreters often come with CGI emulation layers, allowing script files to be executed directly by the server without the overhead of invoking separate processes. An embedded interpreter also eliminates the need to make dramatic changes to the server software itself. In many cases an embedded interpreter provides a smooth path for speeding up CGI scripts because little or no source code modification is necessary.
Examples of embedded interpreters include
mod_pyapache, which embeds a
Python interpreter. When a Python script is
requested, the latency between loading the
script and running it is dramatically reduced
because the interpreter is already in memory.
A similar module exists for the Tcl language.
Sun Microsystems' "servlet" API provides a standard way for web servers to run small programs written in the Java programming language. Depending on the implementation, a portion of the Java runtime system may be embedded in the web server or the web server itself may be written in Java. Apache's servlet system uses co-processes rather than an embedded interpreter. These implementations all avoid the overhead of launching a new external process for each request.
Much of the book from which this section is drawn is about
mod_perl, an Apache module that embeds the Perl
interpreter in the server. However, as you will see,
mod_perl goes well beyond providing an
emulation layer for CGI scripts; it gives programmers complete
access to the Apache API.
Another way to avoid the latency of CGI scripts is to keep them loaded and running all the time as a co-process. When the server needs the script to generate the page, it sends it a message and waits for the response.
The first system to use co-processing was the FastCGI protocol, released by Open Market in 1996. Under this system, the web server runs FastCGI scripts as separate processes just like ordinary CGI scripts. However, once launched, these scripts don't immediately exit when they finish processing the initial request. Instead, they go into an infinite loop that awaits new incoming requests, processes them, and goes back to waiting. Things are arranged so that the FastCGI process's input and output streams are redirected to the web server and a CGI-like environment is set up at the beginning of each request.
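The shape of such a script can be sketched as follows. This is a deliberate simplification in Python, not the real FastCGI protocol (which frames requests as binary records over a socket); a plain text line stands in for a request, and the shutdown sentinel exists only for this sketch:

```python
# FastCGI-style co-process sketch: launched once, the worker performs
# its setup, then loops -- wait for a request, handle it, wait again.
def fastcgi_style_loop(inbox, outbox):
    state = {"requests_served": 0}   # survives across requests
    for line in inbox:               # blocks awaiting each new request
        if line == "SHUTDOWN":       # sentinel used by this sketch only
            break
        state["requests_served"] += 1
        outbox.append(f"request {state['requests_served']}: echo {line}")
    return state["requests_served"]
```

Because the loop never exits between requests, any state built during setup (database handles, parsed configuration) is still there when the next request arrives, which is precisely what plain CGI cannot offer.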
Existing CGI scripts can be adapted to use FastCGI by making a few,
usually painless, changes to the script source code. Implementations of
FastCGI are available for Apache, as well as for Zeus, Netscape, Microsoft
IIS, and other servers. However, FastCGI has so far failed to win wide
acceptance in the web development community, perhaps because of Open
Market's retreat from the web server market. Fortunately, a group of
volunteers has picked up the Apache mod_fastcgi module
and is continuing to support and advance this freeware implementation.
You can find out more about
mod_fastcgi at the group's
website. Commercial implementations
of FastCGI are also available from Fast
Engines, Inc., which provides the Netscape and Microsoft IIS
versions of FastCGI.
Another co-processing system is an Apache module called
mod_jserv, which you can find
at the project homepage.
mod_jserv allows Apache
to run Java servlets using Sun's servlet API. However,
unlike most other servlet systems,
mod_jserv uses something called the
"JServ Protocol" to allow the web server to communicate with Java scripts
running as separate processes. You can also control these servlets via the
Apache Perl API using the
Apache::Servlet module.
An entirely different way to improve the performance of web-based applications is to move some or all of the processing from the server side to the client side. It seems silly to send a fill-out form all the way across the Internet and back again if all you need to do is to validate that the user has filled in the Zip Code field correctly. This, and the ability to provide more dynamic interfaces, is a big part of the motivation for client-side scripting.
However, although Java claims to solve client-side compatibility problems, the many slight differences in implementation of the Java runtime library in different browsers have given it a reputation for "write once, debug everywhere." Also, because of security concerns, Java applets are very much restricted in what they can do, although this is expected to change once Sun and the vendors introduce a security model based on unforgeable digital signatures.
Microsoft's ActiveX technology is a repackaging of its COM (Component Object Model) architecture. ActiveX allows dynamic link libraries to be packed up into "controls," shipped across the Internet, and run on the user's computer. Because ActiveX controls are compiled binaries, and because COM has not been adopted by other operating systems, this technology is most suitable for uniform intranet environments that consist of Microsoft Windows machines running a recent version of Internet Explorer.
Integrated Development Environments
Integrated development environments try to give software developers the best of both the client-side and server-side worlds by providing a high-level view of the application. In this type of environment, you don't need to worry much about the details of how web pages are displayed. Instead, you concentrate on the application logic and the user interface.
The development environment turns your program into some mixture of database access queries, server-side procedures, and client-side scripts. Some popular environments of this sort include Netscape's "Live" development systems (LiveWire for client-server applications and LiveConnect for database connectivity), NeXT's object-oriented WebObjects, Allaire's Cold Fusion, and the Microsoft FrontPage publishing system. These systems, although attractive, have the same disadvantage as embedded HTML languages: once you've committed to one of these environments, there's no backing out. There's not the least whiff of compatibility across different vendors' development systems.
Making the Choice
Your head is probably spinning with all the possibilities. Which tool should you use for your own application development? The choice depends on your application's requirements and the tradeoffs you're willing to accept. The table below gives the authors' highly subjective ranking of the different development systems' pros and cons.
Table: Comparison of Web Development Solutions
In this table, the "Portability" column indicates how easy it is to move a web application from one server to another in the case of server-side systems, or from one make of web browser to another in the case of client-side solutions. By "Performance," we mean the interactive speed of the application that the user perceives, more than the raw data-processing power of the system. "Simplicity" is our gut feeling for the steepness of the system's learning curve and how convenient the system is to develop in once you're comfortable with it. "Power" is an estimate of the capabilities of the system: how much control it provides over the way the application behaves and its flexibility to meet creative demands.
If your main concern is present and future portability, your best choice is vanilla CGI. You can be confident that your CGI scripts will work properly with all browsers, and that you'll be able to migrate scripts from one server to another with a minimum of hardship. CGI scripts are simple to write and offer a fair amount of flexibility, but their performance is poor.
If you want power and performance at all cost, go with a server API. The applications that you write will work correctly with all browsers, but you'll want to think twice before moving your programs to a different server. Chances are that a large chunk of your application will need to be rewritten when you migrate from one vendor's API to another's.
FastCGI offers a marked performance improvement but does require you to make some minor modifications to CGI script source code in order to use it.
If you need a sophisticated graphical user interface at the browser side, then some component of your application must be client-side Java or DHTML. Despite its compatibility problems, DHTML is worth considering, particularly when you are running an intranet and have complete control over your users' choice of browsers.
Java applets improve the compatibility situation. So long as you don't try to get too fancy, there's a good chance that an applet will run on more than one version of a single vendor's browser, and perhaps even on browsers from different vendors.
If you're looking for ease of programming and a gentle learning curve, you should consider a server-side include system like PHP or Active Server Pages. You don't have to learn the whole language at once. Just start writing HTML and add new features as you need them. The cost of this simplicity is portability once again. Pages written for one vendor's server-side include system won't work correctly with a different vendor's server-side system, although the HTML framework will still display correctly.
A script interpreter embedded in the web server has much better performance than a standalone CGI script. In many cases, CGI scripts can be moved to embedded interpreters and back again without source code modifications, allowing for portability among different servers. To take the most advantage of the features offered by embedded interpreters, you must usually write server-specific code, which sacrifices portability and adds a bit of complexity to the application code.
The Apache Project
The Apache project began in 1995 when a group of eight volunteers, seeing that web software was becoming increasingly commercialized, got together to create a supported open source web server. Apache began as an enhanced version of the public-domain NCSA server but steadily diverged from the original. Many new features have been added to Apache over the years: significant features include the ability for a single server to host multiple virtual web sites, a smorgasbord of authentication schemes, and the ability for the server to act as a caching proxy. In some cases, Apache is way ahead of the commercial vendors in the features wars.
Internally the server has been completely redesigned to use a
modular and extensible architecture, turning it into what the authors
describe as a "web server toolkit". In fact, there's very little of the
original NCSA httpd source code left within Apache. The main
NCSA legacy is the configuration files, which remain backward compatible
with those of NCSA httpd.
Apache's success has been phenomenal. In less than three years, Apache has risen from relative obscurity to the position of market leader. Netcraft, a British market research company that monitors the growth and usage of the web, estimates that Apache servers now run over 50 percent of the Internet's web sites, making it by far the most popular web server in the world. Microsoft, its nearest rival, holds a mere 22 percent of the market. This is despite the fact that Apache has lacked some of the conveniences that common wisdom holds to be essential, such as a graphical user interface for configuration and administration. (Impressive as they are, these numbers should be taken with a grain or two of salt. Netcraft's survey techniques count only web servers connected directly to the Internet. The number of web servers running intranets is not represented in these counts, which might inflate or deflate Apache's true market share.)
Apache has been used as the code base for several commercial server products. The most successful of these, C2Net's Stronghold, adds support for secure communications with Secure Socket Layer (SSL) and a form-based configuration manager. There is also WebTen by Tenon Intersystems, a Macintosh PowerPC port, and the Red Hat Secure Server, an inexpensive SSL-supporting server from the makers of Red Hat Linux.
Another milestone was reached in November of 1997 when the Apache Group announced its port of Apache to the Windows NT and 95 operating systems (Win32). A fully multithreaded implementation, the Win32 port supports all the features of the Unix version and is designed with the same modular architecture as its brother. Freeware ports to OS/2 and the AmigaOS are also available.
In the summer of 1998, IBM announced its plans to join with the Apache volunteers to develop a version of Apache to use as the basis of its secure Internet commerce server system, supplanting the servers that it and Lotus Corporation had previously developed.
Why use Apache? Many web sites run Apache by accident. The server software is small, free, and well documented, and can be downloaded without filling out pages of licensing agreements. The person responsible for getting his organization's web site up and running downloads and installs Apache just to get his feet wet, intending to replace Apache with a "real" server at a later date. But that date never comes. Apache does the job and does it well.
However there are better reasons for using Apache. Like other successful open source products such as Perl, the GNU tools, and the Linux operating system, Apache has some big advantages over its commercial rivals.
For example, questions posted to the Usenet newsgroup comp.infosystems.www.servers.unix are usually answered within hours. If you need a higher level of support, you can purchase Stronghold or another commercial version of Apache and get all the benefits of the freeware product, plus trained professional help.
In 1988, who would have thought the Digital Equipment whale would be gobbled up by the Compaq minnow just 10 years later? Good community software projects don't go away. Because the source code is available to all, someone is always there to pick up the torch when a member of the core developer group leaves.
This being said, Apache does provide simple web-based interfaces for viewing
the current configuration and server status. A number of people are working on
administrative GUIs, and there is already a web interface for remotely managing
web user accounts.