The primary function of a web server is to deliver web pages to clients on request. This means delivering HTML documents and any additional content a document may include, such as images, style sheets and scripts.
A client, commonly a web browser or web crawler, initiates communication by requesting a specific resource using HTTP, and the server responds with the content of that resource, or with an error message if it is unable to do so. The resource is typically a real file on the server’s secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
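As a minimal sketch of this request/response exchange, a client can speak raw HTTP/1.1 over a TCP socket; the host and path below are illustrative placeholders, not taken from any particular server:

```python
import socket

# Minimal sketch of the request/response cycle described above: the client
# opens a TCP connection, sends an HTTP/1.1 GET request for a resource and
# reads the server's reply. Host and path are illustrative placeholders.
HOST, PORT = "example.com", 80

with socket.create_connection((HOST, PORT)) as sock:
    request = (
        "GET /index.html HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # With "Connection: close" the server closes the socket when done,
    # so we can simply read until EOF.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The status line says whether the resource was served (200 OK) or an
# error such as 404 Not Found was returned instead.
print(response.split(b"\r\n", 1)[0].decode("ascii"))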
While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including the uploading of files.
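As a hedged sketch of this direction of transfer, the following submits form-encoded fields over HTTP POST using Python’s standard library; the endpoint URL and field names are hypothetical:

```python
from urllib import request, parse

# Form fields to submit; names and values are purely illustrative.
fields = {"name": "Ada", "comment": "Hello"}
data = parse.urlencode(fields).encode("ascii")

# Hypothetical endpoint; a real form's action URL would go here.
req = request.Request("http://example.com/submit", data=data, method="POST")
with request.urlopen(req) as resp:
    print(resp.status, resp.reason)
```

File uploads work the same way at the protocol level, but use the multipart/form-data encoding instead of the URL-encoded body shown here.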
Many generic web servers also support server-side scripting, e.g., Active Server Pages (ASP) and PHP. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to generate HTML documents “on-the-fly”, as opposed to returning fixed documents; these are referred to as dynamic and static content respectively. The former is primarily used for retrieving and/or modifying information in databases; the latter is, however, typically much faster and more easily cached.
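To make the static/dynamic distinction concrete, here is a minimal sketch (not any particular product’s implementation) of a server that generates an HTML document on-the-fly for each request, using Python’s standard library; the port is an arbitrary choice:

```python
from datetime import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal sketch of dynamic content: the HTML below is generated anew on
# every request instead of being read from a fixed file on disk.
class DynamicHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = f"<html><body><p>Generated at {datetime.now()}</p></body></html>"
        data = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    # Serve on localhost only; port 8000 is an arbitrary placeholder.
    HTTPServer(("127.0.0.1", 8000), DynamicHandler).serve_forever()
```

A static file, by contrast, could be read once and cached; the page above changes on every request, which is exactly what makes dynamic content slower and harder to cache.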
Web servers are not always used for serving the World Wide Web. They can also be found embedded in devices such as printers, routers and webcams, serving only a local network. The web server may then be used as part of a system for monitoring and/or administering the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which is now included with most operating systems).
History of web servers
In 1989 Tim Berners-Lee proposed to his employer CERN a new project with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990:
- A browser called WorldWideWeb
- The world’s first web server, later known as CERN httpd, which ran on NeXTSTEP
Between 1991 and 1994, the simplicity and effectiveness of the early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and to spread their use among socially diverse groups of people: first in scientific organizations, then in universities, and finally in industry.
In 1994 Tim Berners-Lee founded the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.
Load limits
A web server (program) has defined load limits: it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port), and it can serve only a certain maximum number of requests per second, depending on the following (a toy sketch of a connection limit follows the list):
- Its own settings
- The HTTP request type
- Content origin (static or dynamic)
- The fact that the served content is or is not cached
- The hardware and software limitations of the OS on which it runs
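As a toy illustration of such a concurrency limit, a server might bound concurrent connections with a fixed-size thread pool; the limit of 8 workers and the port are arbitrary choices for the sketch, far below real-world defaults:

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Toy illustration of a connection limit: at most MAX_WORKERS requests are
# handled concurrently; further connections queue until a worker is free.
MAX_WORKERS = 8  # assumed value for the sketch; real defaults are far higher

def handle(conn: socket.socket) -> None:
    with conn:
        conn.recv(1024)  # read (and ignore) the request
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")

with socket.create_server(("127.0.0.1", 8080)) as server, \
        ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    while True:
        conn, _addr = server.accept()
        pool.submit(handle, conn)
```

When all workers are busy, new connections wait in the accept queue; once that queue fills too, further clients see refused connections, which is the unresponsiveness described next.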
When a web server is near to or over its limits, it becomes unresponsive.
At any time web servers can be overloaded because of:
- Too much legitimate web traffic. Thousands or even millions of clients connecting to the web site in a short interval, e.g., Slashdot effect;
- Distributed Denial of Service attacks;
- Computer worms, which sometimes cause abnormal traffic because of millions of infected computers (not coordinated among themselves);
- Cross-site scripting (XSS) viruses, which can cause high traffic because of millions of infected browsers and/or web servers;
- Internet bots. Traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
- Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
- Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrades, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers receive too much traffic and become overloaded.
The symptoms of an overloaded web server are:
- Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
- 500, 502, 503 and 504 HTTP errors are returned to clients (sometimes an unrelated 404 or even 408 error may also be returned); a sketch of a 503 response follows this list.
- TCP connections are refused or reset (interrupted) before any content is sent to clients.
- In very rare cases, only partial contents are sent (this behaviour may well be considered a bug, even though it usually stems from exhausted system resources).
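As a hedged sketch of the 503 symptom mentioned above, an overloaded server typically returns only a status line and headers rather than content, optionally with a Retry-After header hinting when clients should try again; the one-second value and the port here are placeholders:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Sketch of an overload response: instead of serving content, the server
# answers 503 Service Unavailable and hints when to retry.
class OverloadedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(503)
        self.send_header("Retry-After", "1")  # placeholder retry hint
        self.send_header("Content-Length", "0")
        self.end_headers()

if __name__ == "__main__":
    # Port 8001 is an arbitrary placeholder.
    HTTPServer(("127.0.0.1", 8001), OverloadedHandler).serve_forever()
```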
To partially overcome these load limits and to prevent overload, most popular web sites use common techniques like:
- managing network traffic, by using firewalls to block unwanted traffic, HTTP traffic managers to drop, redirect or rewrite requests with bad HTTP patterns, and bandwidth management and traffic shaping to smooth down peaks in network usage;
- deploying Web cache techniques;
- using different domain names to serve different (static and dynamic) content from separate web servers, e.g., images.example.com versus example.com;
- using different domain names and/or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge (over 10–1,000 MB) files by using different settings;
- using many web servers (programs) per computer, each one bound to its own network card and IP address;
- using many web servers (computers) grouped together so that they act or are seen as one big web server (see also Load balancer); a round-robin sketch follows this list;
- adding more hardware resources (e.g., RAM, disks) to each computer;
- tuning OS parameters for hardware capabilities and usage;
- using more efficient computer programs for web servers, etc.;
- using other workarounds, especially if dynamic content is involved.
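To make the load-balancing idea above concrete, here is a minimal sketch of round-robin distribution, the simplest policy a load balancer can apply; the backend addresses are hypothetical:

```python
import itertools

# Requests are spread across a pool of web servers in round-robin order,
# so the group is seen from outside as one big web server.
# Backend addresses are hypothetical placeholders.
BACKENDS = ["10.0.0.1:80", "10.0.0.2:80", "10.0.0.3:80"]
next_backend = itertools.cycle(BACKENDS).__next__

for request_id in range(6):
    # A real balancer would forward the request; here we only show the choice.
    print(f"request {request_id} -> {next_backend()}")
```

Real load balancers add health checks and weighting on top of this, but the core idea is exactly this rotation across the pool.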
|Product|Vendor|Web Sites Hosted|Percent|
|---|---|---|---|
|Sun Java System Web Server|Oracle| | |