Computer Virus

A computer virus is a computer program that can replicate itself[1] and spread from one computer to another. The term “virus” is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance, because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.[2]

Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[3][4]

As stated above, the term “computer virus” is sometimes used as a catch-all phrase for all types of malware, even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system’s data or performance. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves.

Infection strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an infected program, the virus’s code may be executed as well. Viruses can be divided into two types based on their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect those targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

Vectors and hosts

Viruses have targeted various types of transmission media and hosts; the following examples are not exhaustive.

PDFs, like HTML, may link to malicious code. PDFs can also be infected with malicious code.

In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created named “picture.png.exe”, in which the user sees only “picture.png” and therefore assumes that this file is an image and most likely is safe, yet when opened runs the executable on the client machine.

An additional method is to generate the virus code from parts of existing operating system files by using the CRC16/CRC32 data. The initial code can be quite small (tens of bytes) and unpack a fairly large virus. This is analogous to a biological “prion” in the way it works, but it is vulnerable to signature-based detection. This attack has not yet been seen “in the wild”.

Methods to avoid detection

In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the MS-DOS platform, make sure that the “last modified” date of a host file stays the same when the file is infected by the virus. This approach does not fool anti-virus software, however, especially software that maintains and dates cyclic redundancy checks (CRCs) on file changes.

Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file.

Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them.

As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access.

Encryption with a variable key

A more advanced method is the use of simple encryption to encipher the virus. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus; but this is probably not required, since self-modifying code is rare enough that finding it may be reason for virus scanners to at least flag the file as suspicious.

An old but compact encryption method involves XORing each byte in a virus with a constant, so that the exclusive-or operation need only be repeated for decryption. Because it is suspicious for code to modify itself, the code that performs the encryption and decryption may itself be part of the signature in many virus definitions.
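The symmetry of this scheme can be shown with a short sketch. This is illustrative only (the key and payload are invented); the point is that XORing with the same constant twice restores the original bytes, so a single routine serves as both encryptor and decryptor.

```python
# Sketch of the single-byte XOR scheme described above (illustrative only).
# XOR with the same one-byte constant is its own inverse.

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte of `data` with the one-byte constant `key`."""
    return bytes(b ^ key for b in data)

plaintext = b"example payload"
ciphertext = xor_bytes(plaintext, 0x5A)

assert ciphertext != plaintext                    # bytes are obscured...
assert xor_bytes(ciphertext, 0x5A) == plaintext   # ...but trivially reversible
```

Because the operation is so simple and the decryption loop must appear in the clear, the loop itself becomes a reliable signature, as the paragraph above notes.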

Anti-virus software and other preventive measures

Many users install anti-virus software that can detect and eliminate known viruses after the computer downloads or runs an executable. There are two common methods that an anti-virus software application uses to detect viruses. The first, and by far the most common, method of virus detection is using a list of virus signature definitions. This works by examining the content of the computer’s memory (its RAM, and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives), and comparing those files against a database of known virus “signatures”. The disadvantage of this detection method is that users are only protected from viruses that pre-date their last virus definition update. The second method is to use a heuristic algorithm to find viruses based on common behaviors. This method has the ability to detect novel viruses that anti-virus security firms have yet to create a signature for.
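At its core, signature matching is a search for known byte patterns in file contents. The sketch below is a minimal illustration of that idea, not a real engine; the signature database is invented for the example (one entry is a prefix of the standard EICAR anti-virus test string).

```python
# Minimal illustration of signature-based scanning: search content for known
# byte patterns. Real engines use far more sophisticated matching and much
# larger databases; these signatures are invented for the example.

SIGNATURES = {
    "EICAR-Test": b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR",  # prefix of the standard test string
    "Demo-Sig":   b"\xde\xad\xbe\xef",
}

def scan(data: bytes) -> list[str]:
    """Return the names of all signatures found in the data."""
    return [name for name, pattern in SIGNATURES.items() if pattern in data]

clean = b"just an ordinary document"
suspect = b"header \xde\xad\xbe\xef trailer"

assert scan(clean) == []
assert scan(suspect) == ["Demo-Sig"]
```

The weakness the text describes follows directly: a file infected by a virus whose pattern is not yet in `SIGNATURES` scans as clean.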

Some anti-virus programs are able to scan opened files in addition to sent and received email messages “on the fly” in a similar manner. This practice is known as “on-access scanning”. Anti-virus software does not change the underlying capability of host software to transmit viruses. Users must update their software regularly to patch security holes. Anti-virus software also needs to be regularly updated in order to recognize the latest threats.

One may also minimize the damage done by viruses by making regular backups of data (and the operating system) on different media that are kept disconnected from the system most of the time, are read-only, or are inaccessible for other reasons (such as using different file systems). This way, if data is lost to a virus, one can start again using the backup (which should preferably be recent).

If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives.

Virus removal

One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool known as System Restore, which restores the registry and critical system files to a previous checkpoint. Often a virus will cause a system to hang, and a subsequent hard reboot will render a system restore point from the same day corrupt. Restore points from previous days should work provided the virus is not designed to corrupt the restore files or also exists in previous restore points.[33] Some viruses, however, disable System Restore and other important tools such as Task Manager and Command Prompt. An example of a virus that does this is CiaDoor. However, many such viruses can be removed by rebooting the computer, entering Windows safe mode, and then using system tools.

Administrators have the option to disable such tools from limited users for various reasons (for example, to reduce potential damage from and the spread of viruses). A virus can modify the registry to do the same even if the Administrator is controlling the computer; it blocks all users including the administrator from accessing the tools. The message “Task Manager has been disabled by your administrator” may be displayed, even to the administrator.

Users running a Microsoft operating system can access Microsoft’s website to run a free scan, provided they have their 20-digit registration number. Many websites run by anti-virus software companies provide free online virus scanning, with limited cleaning facilities (the purpose of the sites is to sell anti-virus products). Some websites allow a single suspicious file to be checked by many antivirus programs in one operation.



War Dialing

War dialing or wardialing is a technique of using a modem to automatically scan a list of telephone numbers, usually dialing every number in a local area code to search for computers, bulletin board systems and fax machines. The resulting lists are used for various purposes: hobbyists use them for exploration, and crackers (malicious hackers who specialize in breaching computer security) use them for password guessing.

A single wardialing call involves calling an unknown number and waiting for one or two rings, since answering computers usually pick up on the first ring. If the phone rings twice, the modem hangs up and tries the next number. If a modem or fax machine answers, the wardialer program makes a note of the number. If a human or answering machine answers, the wardialer program hangs up. Depending on the time of day, wardialing 10,000 numbers in a given area code might annoy dozens or hundreds of people, some of whom attempt and fail to answer a phone in two rings, and some who succeed, only to hear the wardialing modem’s carrier tone and hang up. The repeated incoming calls are especially annoying to businesses that have many consecutively numbered lines in the exchange, such as those used with a Centrex telephone system.
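The dialing loop above can be sketched in a few lines. Note that the modem interface here is a stub invented for the example (a real wardialer would drive a modem over a serial line); only the classification logic is shown: record numbers answered by a modem or fax, skip everything else.

```python
# Sketch of the wardialing loop described above. `dial` is a stub, not a real
# modem API; the fake directory below stands in for whatever answers the call.

def dial(number: str) -> str:
    """Stub for a dial attempt; returns who or what answered."""
    fake_directory = {"555-0142": "modem", "555-0199": "fax"}
    return fake_directory.get(number, "no-answer")

def wardial(prefix: str, start: int, end: int) -> list[str]:
    hits = []
    for n in range(start, end):
        number = f"{prefix}-{n:04d}"
        answer = dial(number)            # hang up after two rings if no carrier
        if answer in ("modem", "fax"):
            hits.append(number)          # note the number for later review
    return hits

assert wardial("555", 100, 200) == ["555-0142", "555-0199"]
```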

The popularity of wardialing in the 1980s and 1990s prompted some states to enact legislation prohibiting the use of a device to dial telephone numbers without the intent of communicating with a person.

The popular name for this technique originated in the 1983 film WarGames. In the film, the protagonist programmed his computer to dial every telephone number in Sunnyvale, California to find other computer systems. Prior to the movie’s release, this technique was known as “hammer dialing” or “demon dialing”. ‘WarGames Dialer’ programs became common on bulletin board systems of the time, with file names often truncated to wardial.exe and the like due to the 8-character file name limit on such systems. Eventually, the origin of the name faded as “war dialing” gained its own currency within computing culture.[1]

A more recent phenomenon is wardriving, the searching for wireless networks (Wi-Fi) from a moving vehicle. Wardriving was named after wardialing, since both techniques involve brute-force searches to find computer networks. The aim of wardriving is to collect information about wireless access points (not to be confused with piggybacking).

Similar to war dialing is a port scan under TCP/IP, which “dials” every TCP port of every IP address to find out what services are available. Unlike wardialing, however, a port scan will generally not disturb a human being when it tries an IP address, regardless of whether there is a computer responding on that address or not. Related to wardriving is warchalking, the practice of drawing chalk symbols in public places to advertise the availability of wireless networks.

The term is also used today by analogy for various sorts of exhaustive brute force attack against an authentication mechanism, such as a password. While a dictionary attack might involve trying each word in a dictionary as the password, “wardialing the password” would involve trying every possible password. Password protection systems are usually designed to make this impractical, by making the process slow and/or locking out an account for minutes or hours after some low number of wrong password entries.
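The contrast between the two search strategies above can be made concrete. The secret password and wordlist below are invented for the example: the dictionary attack tries only the listed words, while the exhaustive ("wardialing") search enumerates every string over an alphabet.

```python
# Toy comparison of a dictionary attack with exhaustive search, as described
# above. The password, wordlist, and alphabet are invented for the example.
import itertools

def dictionary_attack(check, wordlist):
    """Try only the words in the list."""
    return next((w for w in wordlist if check(w)), None)

def brute_force(check, alphabet, length):
    """Try every possible string of the given length over the alphabet."""
    for combo in itertools.product(alphabet, repeat=length):
        candidate = "".join(combo)
        if check(candidate):
            return candidate
    return None

secret = "cab"
check = lambda guess: guess == secret

assert dictionary_attack(check, ["admin", "letmein"]) is None  # not in the wordlist
assert brute_force(check, "abc", 3) == "cab"                   # exhaustive search finds it
```

The exhaustive search is guaranteed to succeed but its cost grows exponentially with password length, which is exactly why rate limiting and lockouts make it impractical in practice.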

War dialing is sometimes used as a synonym for demon dialing, a related technique which also involves automating a computer modem in order to repeatedly place telephone calls.

Web Crawler


A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots,[1] Web spiders,[2] Web robots,[2] or—especially in the FOAF community—Web scutters.[3]

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
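The seed/frontier loop just described can be sketched directly. To stay self-contained, the example crawls an in-memory "web" (a dict mapping each page to its outgoing links, invented for the example) rather than making real HTTP requests; a real crawler would fetch each URL and parse its HTML for hyperlinks.

```python
# Sketch of the seed/frontier crawl loop described above, run against an
# in-memory fake web instead of real HTTP fetches.
from collections import deque

FAKE_WEB = {
    "http://a.example/": ["http://b.example/", "http://c.example/"],
    "http://b.example/": ["http://c.example/"],
    "http://c.example/": ["http://a.example/"],   # a cycle: visited-set required
}

def crawl(seeds):
    frontier = deque(seeds)   # the crawl frontier: URLs waiting to be visited
    visited = set()
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        for link in FAKE_WEB.get(url, []):   # "identify all the hyperlinks"
            if link not in visited:
                frontier.append(link)        # add them to the frontier
    return order

assert crawl(["http://a.example/"]) == [
    "http://a.example/", "http://b.example/", "http://c.example/"]
```

A FIFO frontier gives breadth-first order; the selection, re-visit, politeness and parallelization policies discussed below all amount to refinements of how URLs enter and leave this queue.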

The large volume of pages implies that a crawler can only download a fraction of the Web within a given time, so it needs to prioritize its downloads. The Web’s high rate of change implies that some pages may already have been updated or even deleted by the time the crawler visits them.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
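The 48-URL figure above is simply the product of the option counts, which `itertools.product` makes concrete. The parameter names and values below are invented to match the gallery example.

```python
# The combinatorics behind the gallery example: 4 x 3 x 2 x 2 distinct URLs,
# all serving the same underlying content. Parameter names are invented.
import itertools

sort_orders  = ["date", "name", "size", "rating"]   # 4 ways to sort images
thumb_sizes  = ["small", "medium", "large"]         # 3 thumbnail sizes
file_formats = ["jpeg", "png"]                      # 2 file formats
user_content = ["on", "off"]                        # user-content toggle

urls = [
    f"/gallery?sort={s}&thumb={t}&fmt={f}&ugc={u}"
    for s, t, f, u in itertools.product(
        sort_orders, thumb_sizes, file_formats, user_content)
]

assert len(urls) == 4 * 3 * 2 * 2 == 48   # 48 URLs, one set of content
assert len(set(urls)) == 48               # and every URL is distinct
```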

As Edwards et al. noted, “Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained.”[4] A crawler must carefully choose at each step which pages to visit next.

The behavior of a Web crawler is the outcome of a combination of policies:[5]

  • a selection policy that states which pages to download,
  • a re-visit policy that states when to check for changes to the pages,
  • a politeness policy that states how to avoid overloading Web sites, and
  • a parallelization policy that states how to coordinate distributed Web crawlers.


Web Server

Web server can refer to either the hardware (the computer) or the software (the computer application) that helps to deliver content that can be accessed through the Internet.[1]

The most common use of web servers is to host web sites but there are other uses such as data storage or running enterprise applications.



The primary function of a web server is to deliver web pages to clients on request. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.

A client, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP, and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server’s secondary storage, but this is not necessarily the case and depends on how the web server is implemented.
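The request/response exchange just described is plain text on the wire. The sketch below builds an HTTP/1.1 request by hand and parses a status line, so both message shapes are visible; no network is used, and the "server" side is a canned response string invented for the example.

```python
# A minimal HTTP/1.1 exchange of the kind described above, built by hand.
# The hostname and canned response are invented for the example.

def build_request(host: str, path: str) -> bytes:
    """Assemble a bare GET request: request line, headers, blank line."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n\r\n").encode("ascii")

def parse_status(response: bytes) -> int:
    """Extract the status code from a status line like b'HTTP/1.1 200 OK'."""
    return int(response.split(b"\r\n", 1)[0].split(b" ")[1])

request = build_request("www.example.com", "/index.html")
canned_response = b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"

assert request.startswith(b"GET /index.html HTTP/1.1")
assert parse_status(canned_response) == 404   # "an error message if unable to do so"
```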

While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support server-side scripting, e.g., Active Server Pages (ASP) and PHP. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to create HTML documents “on-the-fly” as opposed to returning fixed documents. This is referred to as dynamic and static content respectively. The former is primarily used for retrieving and/or modifying information from databases. The latter is, however, typically much faster and more easily cached.
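The static/dynamic distinction can be sketched as a tiny dispatcher: a static resource is returned byte-for-byte (and is therefore easy to cache), while a dynamic one is generated per request. This is an illustration of the concept, not a real server; the paths and file contents are invented.

```python
# Sketch of static vs. dynamic content, as described above. The paths and
# documents are invented; a real server would read files and run scripts.
import datetime

STATIC_FILES = {"/about.html": "<html><body>About us</body></html>"}

def handle(path: str) -> str:
    if path in STATIC_FILES:
        return STATIC_FILES[path]   # fixed document: fast, easily cached
    if path == "/now":
        # generated "on-the-fly", e.g. from a database or, here, the clock
        return f"<html><body>{datetime.datetime.now():%Y-%m-%d}</body></html>"
    return "<html><body>404 Not Found</body></html>"

assert handle("/about.html") == STATIC_FILES["/about.html"]
assert "404" in handle("/missing")
```

The cost asymmetry the paragraph mentions is visible even here: the static branch is a dictionary lookup, while the dynamic branch does work on every request.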

Web servers are not always used for serving the world wide web. They can also be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring and/or administrating the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which now is included with most operating systems).

History of web servers


In 1989 Tim Berners-Lee proposed to his employer CERN a new project with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990: the first web browser, called WorldWideWeb, and the world’s first web server, later known as CERN httpd.

Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among socially diverse groups of people, first in scientific organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.

Load limits

A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port) and it can serve only a certain maximum number of requests per second depending on:

  • Its own settings
  • The HTTP request type
  • Content origin (static or dynamic)
  • The fact that the served content is or is not cached
  • The hardware and software limitations of the OS where it is working

When a web server is near to or over its limits, it becomes unresponsive.

Overload causes

At any time web servers can be overloaded because of:

  • Too much legitimate web traffic. Thousands or even millions of clients connecting to the web site in a short interval, e.g., Slashdot effect;
  • Distributed Denial of Service attacks;
  • Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
  • XSS viruses can cause high traffic because of millions of infected browsers and/or web servers;
  • Internet bots. Traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrade, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:

  • Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
  • 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned).
  • TCP connections are refused or reset (interrupted) before any content is sent to clients.
  • In very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques

To partially overcome the load limits above and to prevent overload, most popular Web sites use common techniques like:

  • managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • deploying Web cache techniques;
  • using different domain names to serve different (static and dynamic) content by separate web servers;
  • using different domain names and/or computers to separate big files from small and medium sized files; the idea is to be able to fully cache small and medium sized files and to efficiently serve big or huge (over 10–1000 MB) files by using different settings;
  • using many web servers (programs) per computer, each one bound to its own network card and IP address;
  • using many web servers (computers) that are grouped together so that they act or are seen as one big web server (see also Load balancer);
  • adding more hardware resources (i.e. RAM, disks) to each computer;
  • tuning OS parameters for hardware capabilities and usage;
  • using more efficient computer programs for web servers, etc.;
  • using other workarounds, especially if dynamic content is involved.

Below are market-share statistics for the top web servers on the Internet, from the Netcraft survey of March 2011.

Product                      Vendor        Web Sites Hosted  Percent
Apache                       Apache        179,720,332       60.31%
IIS                          Microsoft     57,644,692        19.34%
nginx                        Igor Sysoev   22,806,060        7.65%
GWS                          Google        15,161,530        5.09%
lighttpd                     lighttpd      1,796,471         0.60%
Sun Java System Web Server   Oracle

Source: Wikipedia

White-Box Testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White-box test design techniques include:

  • Control flow testing
  • Data flow testing
  • Branch testing
  • Path testing


In penetration testing, white-box testing refers to a methodology where an ethical hacker has full knowledge of the system being attacked. The goal of a white-box penetration test is to simulate a malicious insider who has some knowledge of, and possibly basic credentials for, the target system.



Wi-Fi

Wi-Fi is a mechanism for wirelessly connecting electronic devices. A device enabled with Wi-Fi, such as a personal computer, video game console, smartphone, or digital audio player, can connect to the Internet via a wireless network access point. An access point (or hotspot) has a range of about 20 meters (65 ft) indoors and a greater range outdoors. Multiple overlapping access points can cover large areas.

“Wi-Fi” is a trademark of the Wi-Fi Alliance and the brand name for products using the IEEE 802.11 family of standards. Wi-Fi is used by over 700 million people. There are over four million hotspots (places with Wi-Fi Internet connectivity) around the world, and about 800 million new Wi-Fi devices are sold every year.[citation needed] Wi-Fi products that complete Wi-Fi Alliance interoperability certification testing successfully may use the “Wi-Fi CERTIFIED” designation and trademark.


To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller. The combination of computer and interface controller is called a station. All stations share a single radio frequency communication channel. Transmissions on this channel are received by all stations within range. The hardware does not signal the user that the transmission was delivered and is therefore called a best-effort delivery mechanism. A carrier wave is used to transmit the data in packets, referred to as “Ethernet frames”. Each station is constantly tuned in on the radio frequency communication channel to pick up available transmissions.

Internet access

A Wi-Fi-enabled device, such as a personal computer, video game console, smartphone or digital audio player, can connect to the Internet when within range of a wireless network connected to the Internet. The coverage of one or more (interconnected) access points, called hotspots, comprises an area as small as a few rooms or as large as many square miles. Coverage in the larger area may depend on a group of access points with overlapping coverage. Wi-Fi technology has been used successfully in wireless mesh networks in London, UK, for example.[1]

Wi-Fi provides service in private homes and offices as well as in public spaces at Wi-Fi hotspots set up either free of charge or commercially. Organizations and businesses, such as airports, hotels, and restaurants, often provide free-use hotspots to attract or assist clients. Enthusiasts or authorities who wish to provide services or even to promote business in selected areas sometimes provide free Wi-Fi access. As of 2008, more than 300 city-wide Wi-Fi (Muni-Fi) projects had been created.[2] As of 2010, the Czech Republic had 1150 Wi-Fi based wireless Internet service providers.[3][4]

Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, often set up in homes and other buildings, provide Internet access and internetworking to all devices tuned into them, wirelessly or via cable. With the emergence of MiFi and WiBro (a portable Wi-Fi router) people can easily create their own Wi-Fi hotspots that connect to Internet via cellular networks. Now iPhone, Android, Bada and Symbian phones can create wireless connections.[5]

One can also connect Wi-Fi devices in ad-hoc mode for client-to-client connections without a router. Wi-Fi also connects places that normally lack network access, such as kitchens and garden sheds.


WEP Wireless

Wired Equivalent Privacy (WEP) is a weak security algorithm for IEEE 802.11 wireless networks. Introduced as part of the original 802.11 standard ratified in September 1999, its intention was to provide data confidentiality comparable to that of a traditional wired network.[1] WEP, recognizable by the key of 10 or 26 hexadecimal digits, is widely in use and is often the first security choice presented to users by router configuration tools.[2][3]

Although its name implies that it is as secure as a wired connection, WEP has been demonstrated to have numerous flaws and has been deprecated in favor of newer standards such as WPA2. In 2003 the Wi-Fi Alliance announced that WEP had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 “have been deprecated as they fail to meet their security goals”.

WEP was included as the privacy component of the original IEEE 802.11 standard ratified in September 1999.[5] WEP uses the stream cipher RC4 for confidentiality,[6] and the CRC-32 checksum for integrity.[7] It was deprecated in 2004 and is documented in the current standard.[8]

Basic WEP encryption: RC4 keystream XORed with plaintext

Standard 64-bit WEP uses a 40 bit key (also known as WEP-40), which is concatenated with a 24-bit initialization vector (IV) to form the RC4 key. At the time that the original WEP standard was drafted, the U.S. Government’s export restrictions on cryptographic technology limited the key size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).

A 64-bit WEP key is usually entered as a string of 10 hexadecimal (base 16) characters (0-9 and A-F). Each character represents four bits; 10 digits of four bits each give 40 bits, and adding the 24-bit IV produces the complete 64-bit WEP key. Most devices also allow the user to enter the key as five ASCII characters, each of which is turned into eight bits using the character’s byte value in ASCII; however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.

A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of four bits each give 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key. Most devices also allow the user to enter it as 13 ASCII characters.

A 256-bit WEP system is available from some vendors. As with the other WEP variants, 24 bits of that are for the IV, leaving 232 bits for actual protection. These 232 bits are typically entered as 58 hexadecimal characters: (58 × 4 bits =) 232 bits + 24 IV bits = 256-bit WEP key.
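The bit arithmetic across the three key sizes above can be checked mechanically. The hex strings below are invented example keys; the rule is the same in each case: four bits per hex digit plus the fixed 24-bit IV.

```python
# The WEP key-size arithmetic described above. Example keys are invented.

IV_BITS = 24  # the initialization vector is always 24 bits

def wep_total_bits(hex_key: str) -> int:
    """Total WEP key size: 4 bits per hex digit plus the 24-bit IV."""
    return len(hex_key) * 4 + IV_BITS

assert wep_total_bits("0123456789") == 64                    # WEP-40:  10 hex digits
assert wep_total_bits("0123456789ABCDEF0123456789") == 128   # WEP-104: 26 hex digits
assert wep_total_bits("AB" * 29) == 256                      # 256-bit: 58 hex digits

# An ASCII-entered key: 5 characters x 8 bits = 40 bits, the same size as
# 10 hex digits, but drawn from the much smaller space of printable bytes.
ascii_key = "hello"
assert len(ascii_key.encode("ascii")) * 8 == 40
```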

Key size is one of the security limitations in WEP.[9] Cracking a longer key requires interception of more packets, but there are active attacks that stimulate the necessary traffic. There are other weaknesses in WEP, including the possibility of IV collisions and altered packets,[6] that are not helped by using a longer key.


Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.

For the sake of clarity, we discuss WEP authentication in the Infrastructure mode (that is, between a WLAN client and an Access Point). The discussion applies to the ad hoc mode as well.

In Open System authentication, the WLAN client need not provide its credentials to the Access Point during authentication. Any client can authenticate with the Access Point and then attempt to associate. In effect, no authentication occurs. Subsequently WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.

In Shared Key authentication, the WEP key is used for authentication in a four-step challenge-response handshake:

  1. The client sends an authentication request to the Access Point.
  2. The Access Point replies with a clear-text challenge.
  3. The client encrypts the challenge-text using the configured WEP key, and sends it back in another authentication request.
  4. The Access Point decrypts the response. If it matches the challenge text, the Access Point sends back a positive reply.

After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.
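The four-step exchange above can be simulated in a few lines. This is a minimal sketch: a hash-based keystream is used as a stand-in for RC4(IV || key), and the challenge size and function names are illustrative, not taken from the standard:

```python
import hashlib
import os

def keystream(key: bytes, iv: bytes, length: int) -> bytes:
    """Stand-in stream cipher (hash counter mode); real WEP uses RC4(IV || key)."""
    out = b''
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(iv + key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return out[:length]

def shared_key_handshake(client_key: bytes, ap_key: bytes) -> bool:
    """Simulate the four-step Shared Key handshake; True means access granted."""
    # Step 1: the client's authentication request is implicit here.
    # Step 2: the AP replies with a clear-text challenge.
    challenge = os.urandom(128)
    # Step 3: the client encrypts the challenge with its configured WEP key.
    iv = os.urandom(3)
    response = bytes(c ^ k for c, k in
                     zip(challenge, keystream(client_key, iv, len(challenge))))
    # Step 4: the AP decrypts with its own key and compares.
    decrypted = bytes(r ^ k for r, k in
                      zip(response, keystream(ap_key, iv, len(response))))
    return decrypted == challenge
```

The handshake succeeds only when both sides hold the same pre-shared key.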

At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication, since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication.[10] Hence, it is advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication. (Note that both authentication mechanisms are weak.)
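The weakness follows directly from the XOR construction: the response is plaintext XOR keystream, and the eavesdropper sees both the clear-text challenge and the encrypted response. A minimal sketch (the function names are mine, and a real attack must also replay the captured IV):

```python
def recover_keystream(challenge: bytes, response: bytes) -> bytes:
    """C = P XOR KS, so KS = P XOR C: the keystream falls out of one capture."""
    return bytes(c ^ r for c, r in zip(challenge, response))

def forge_response(new_challenge: bytes, keystream: bytes) -> bytes:
    """Answer a fresh challenge with the recovered keystream, never knowing the key."""
    return bytes(c ^ k for c, k in zip(new_challenge, keystream))
```

This is why Shared Key authentication leaks more than Open System authentication: it hands an eavesdropper a valid (IV, keystream) pair for free.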



Telephone tapping (also wire tapping or wiretapping in American English) is the monitoring of telephone and Internet conversations by a third party, often by covert means. The wire tap received its name because, historically, the monitoring connection was an actual electrical tap on the telephone line. Legal wiretapping by a government agency is also called lawful interception. Passive wiretapping monitors or records the traffic, while active wiretapping alters or otherwise affects it.



In 1995, Peter Garza, a Special Agent with the Naval Criminal Investigative Service, conducted the first court-ordered Internet wiretap in the United States while investigating Julio Cesar Ardita (“El Griton“).

As technologies emerge, including VoIP, new questions are raised about law enforcement access to communications (see VoIP recording). In 2004, the Federal Communications Commission was asked to clarify how the Communications Assistance for Law Enforcement Act (CALEA) related to Internet service providers. The FCC stated that “providers of broadband Internet access and voice over Internet protocol (“VoIP”) services are regulable as “telecommunications carriers” under the Act.”[10] Those affected by the Act will have to provide access to law enforcement officers who need to monitor or intercept communications transmitted through their networks. As of 2009, warrantless surveillance of internet activity has consistently been upheld in FISA court.[11]

The Internet Engineering Task Force has decided not to consider requirements for wiretapping as part of the process for creating and maintaining IETF standards.[12]

Typically, illegal Internet wiretapping is conducted over a Wi-Fi connection to someone’s network by cracking the WEP or WPA key, using a tool such as Aircrack-ng or Kismet. Once in, the intruder relies on a number of tactics, for example an ARP spoofing attack, which allows the intruder to view packets in a tool such as Wireshark or Ettercap.

One issue that Internet wiretapping has yet to overcome is steganography, whereby a user encodes, or “hides”, one file inside another (usually a larger, dense file such as an MP3 or JPEG image). With modern encoding techniques, the resulting combined file is essentially indistinguishable from the original to anyone attempting to view it, unless they have the necessary protocol to extract the hidden file.[13][14] US News reported that this technique was commonly used by Osama bin Laden to communicate with his terrorist cells.
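The "hide one file inside another" idea can be illustrated with the simplest possible technique: appending data after the JPEG end-of-image marker. Real steganography tools typically use subtler methods such as least-significant-bit embedding; this sketch, with hypothetical function names, is illustrative only:

```python
def hide_after_eoi(jpeg_bytes: bytes, payload: bytes) -> bytes:
    """Append payload after the JPEG end-of-image marker (FF D9).

    Most viewers render the image normally and ignore the trailing bytes.
    """
    if not jpeg_bytes.endswith(b'\xff\xd9'):
        raise ValueError('not a complete JPEG stream')
    return jpeg_bytes + payload

def extract_after_eoi(stego_bytes: bytes) -> bytes:
    """Everything after the last EOI marker is treated as the hidden payload.

    Fragile if the payload itself contains FF D9; real tools parse the
    JPEG segment structure instead of scanning for the marker.
    """
    end = stego_bytes.rindex(b'\xff\xd9') + 2
    return stego_bytes[end:]
```

To a casual viewer the stego file opens as an ordinary image; only someone who knows to look past the end-of-image marker recovers the payload.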





A computer worm is a self-replicating malware program that uses a computer network to send copies of itself to other nodes (computers on the network), and it may do so without any user intervention, owing to security shortcomings on the target computer. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.


Many worms that have been created are only designed to spread, and don’t attempt to alter the systems they pass through. However, as the Morris worm and Mydoom showed, even these “payload free” worms can cause major disruption by increasing network traffic and other unintended effects. A “payload” is code in the worm designed to do more than spread the worm: it might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a cryptoviral extortion attack, or send documents via e-mail. A very common payload for worms is to install a backdoor in the infected computer to allow the creation of a “zombie” computer under control of the worm author. Networks of such machines are often referred to as botnets and are very commonly used by spam senders for sending junk email or to cloak their website’s address.[1] Spammers are therefore thought to be a source of funding for the creation of such worms,[2][3] and worm writers have been caught selling lists of IP addresses of infected machines.[4] Others try to blackmail companies with threatened DoS attacks.[5]

Backdoors can be exploited by other malware, including worms. Examples include Doomjuice, which spreads better using the backdoor opened by Mydoom, and at least one instance of malware taking advantage of the rootkit and backdoor installed by the Sony/BMG DRM software shipped on millions of music CDs prior to late 2005.

Protecting against dangerous computer worms

Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates[7] (see “Patch Tuesday“), and if these are installed, the majority of worms are unable to spread to the machine. If a vulnerability is disclosed before the vendor releases a security patch, a zero-day attack is possible.

Users need to be wary of opening unexpected email,[8] and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end user into running malicious code.

Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.

In the April–June, 2008, issue of IEEE Transactions on Dependable and Secure Computing, computer scientists describe a potential new way to combat internet worms. The researchers discovered how to contain the kind of worm that scans the Internet randomly, looking for vulnerable hosts to infect. They found that the key is for software to monitor the number of scans that machines on a network send out. When a machine starts sending out too many scans, it is a sign that it has been infected, allowing administrators to take it offline and check it for viruses.
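The scan-rate containment idea can be sketched as a sliding-window monitor. The thresholds and the quarantine rule here are hypothetical, and the published algorithm differs in detail; this only illustrates the principle of flagging hosts that scan too fast:

```python
from collections import defaultdict, deque

class ScanRateMonitor:
    """Flag hosts whose outbound scans exceed a threshold within a time window."""

    def __init__(self, max_scans: int = 30, window_seconds: float = 60.0):
        self.max_scans = max_scans          # illustrative threshold
        self.window = window_seconds        # illustrative window length
        self.events = defaultdict(deque)    # host -> timestamps of recent scans

    def record(self, host: str, timestamp: float) -> bool:
        """Record one outbound scan; return True if the host should be quarantined."""
        q = self.events[host]
        q.append(timestamp)
        # Drop events that have aged out of the window.
        while q and q[0] <= timestamp - self.window:
            q.popleft()
        return len(q) > self.max_scans
```

A host probing many addresses in a burst trips the threshold quickly, while normal traffic spread over time stays below it, which is what lets administrators take the machine offline for inspection.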