Announcing Exchange 2010 Service Pack 3

The Exchange Team is pleased to announce that in the first half of calendar year 2013 we will be releasing Exchange Server 2010 Service Pack 3 (SP3) to our customers. With SP3, the following new features and capabilities will be included:

Coexistence with Exchange 2013: Customers who want to introduce Exchange Server 2013 into their existing Exchange 2010 infrastructure will need the coexistence changes shipping in SP3.

Support for Windows Server 2012: With Service Pack 3, you will have the ability to install and deploy Exchange Server 2010 on machines running Windows Server 2012.

Customer Requested Fixes: All fixes contained within update rollups released prior to Service Pack 3 will also be contained within SP3. Details of our regular Exchange 2010 release rhythm can be found in Exchange 2010 Servicing.

To support these new features, customers will be required to update their Active Directory schema. We are communicating the required changes ahead of the release date to help our customers plan their upgrade path ahead of time.
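If you want to stage the directory changes separately from the server upgrades, the usual Exchange 2010 pattern looks like the following (a hedged sketch; we are assuming SP3 keeps the same setup switches as earlier Exchange 2010 releases, run from the extracted SP3 binaries with Schema Admins and Enterprise Admins rights):

    # A planning sketch, assuming SP3 uses the standard Exchange 2010 setup switches
    .\Setup.com /PrepareSchema    # extends the Active Directory schema
    .\Setup.com /PrepareAD        # updates Exchange objects and permissions in AD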

We hope these announcements come as welcome news to you. It is our custom to provide ongoing improvements to features, functionality and security of Exchange Server, based largely on customer feedback, and to provide continual innovation on an already great messaging product. We look forward to receiving your comments and announcing more detailed information as we continue to develop the features that will be included in SP3.

Source: http://blogs.technet.com/b/exchange/archive/2012/09/25/announcing-exchange-2010-service-pack-3.aspx

iSCSI Target for Windows Server 2008 R2

Microsoft iSCSI Software Target 3.3 for Windows Server 2008 R2 available for public download

Introduction

For the last few years, I’ve been blogging about the Microsoft iSCSI Software Target and its many uses related to Windows Server Failover Clustering, Hyper-V and other server scenarios. Today, Microsoft has made this software publicly available to all users of Windows Server 2008 R2.

The Microsoft iSCSI Software Target has been available for production use as part of Windows Storage Server since early 2007. It has also been available for development and test use by MSDN and TechNet subscribers starting in May 2009. However, until now, there was no way to use the Microsoft iSCSI Software Target in production on a regular server running Windows Server 2008 R2. This new download offers exactly that.

Now available as a public download, the software is essentially the same software that ships with Windows Storage Server 2008 R2. Windows Storage Server 2008 R2 and the public download package will be refreshed (kept in sync) with any software fixes and updates. Those updates are described at http://technet.microsoft.com/en-us/library/gg232597.aspx.

This release was preceded by intense testing by the Microsoft iSCSI Target team, especially in scenarios where the iSCSI Target is used with Hyper-V and with Windows Server Failover Clusters, which we expect to be among the most common deployment scenarios.

Testing included running the Microsoft iSCSI Software Target in a two-node Failover Cluster and configuring 92 individual Hyper-V VMs, each running a data intensive application and storing data on a single node of that iSCSI Target cluster. The exciting part of the test was to force an unplanned failure of the iSCSI Target node being used by all the VMs and verify that we had a successful failover to the other node with all 92 VMs continuing to run the application without any interruption.

How to download and install

To download the Microsoft iSCSI Software Target 3.3 for Windows Server 2008 R2, go to http://www.microsoft.com/downloads/en/details.aspx?FamilyID=45105d7f-8c6c-4666-a305-c8189062a0d0 and download a single file called “iSCSITargetDLC.EXE”. (Note: This was just released at 10AM PST on 04/04/2011, so the download might still be replicating to your closest download server. If the link does not work, try again later). This is a self-extracting archive that will show this screen when run:

Select a destination folder and click “Install”. Once it finishes, you will find a few files available to you in that folder:

If you click on the index.htm file on the main folder, you will see the welcome page with a few links to the items included:

To install the iSCSI Target on a computer running Windows Server 2008 R2, simply run the “iscsitarget_public.msi” MSI file from a command line or right-click it on Windows Explorer and choose “Install”.
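For an unattended installation, the standard Windows Installer switches should also work (a sketch; the log file name is arbitrary):

    # Quiet install with verbose logging (run from an elevated prompt)
    msiexec /i iscsitarget_public.msi /qn /l*v iscsitarget_install.log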

Frequently Asked Questions (FAQ)

Q: Can I install the Microsoft iSCSI Software Target 3.3 on Windows Server 2008 or Windows Server 2003? A: No. The Microsoft iSCSI Software Target 3.3 can only be installed on Windows Server 2008 R2.

Q: Can I install the Microsoft iSCSI Software Target on Windows Server 2008 R2 with Service Pack 1 (SP1)? A: Yes. In fact, that’s what is recommended.

Q: Can I install the Microsoft iSCSI Software Target on a Core install of Windows Server 2008 R2? A: No. The Microsoft iSCSI Software Target 3.3 is only supported in a Full install.

Q: I don’t have a copy of Windows Server 2008 R2. Where can I get an evaluation copy? A: You can download an evaluation version of Windows Server 2008 R2 with Service Pack 1 from http://technet.microsoft.com/en-us/evalcenter/dd459137.aspx

Q: Where is the x86 (32-bit) version of the Microsoft iSCSI Software Target 3.3? A: The Microsoft iSCSI Software Target 3.3 is provided only in an x64 (64-bit) version, as is Windows Server 2008 R2.

Q: What are these “iSCSITargetClient” MSI files included in the download? A: Those are the optional VSS and VDS providers for the Microsoft iSCSI Software Target 3.3. You should install them on the same computer that runs the iSCSI Initiator if you intend to use VSS or VDS. For details on VSS, see http://blogs.technet.com/b/josebda/archive/2007/10/10/the-basics-of-the-volume-shadow-copy-service-vss.aspx. For details on VDS, see http://blogs.technet.com/b/josebda/archive/2007/10/25/the-basics-of-the-virtual-disk-services-vds.aspx.

Q: Where is the Windows Storage Server 2008 R2 documentation? A: There is some documentation inside the package. Additional documentation is available on the web at http://technet.microsoft.com/en-us/library/gg232606.aspx

Q: Can I use the Microsoft iSCSI Software Target 3.3 as shared storage for a Windows Server Failover Cluster? A: Yes. That is one of its most common uses.

Q: Can I install the Microsoft iSCSI Software Target 3.3 in a Hyper-V virtual machine? A: Yes. We do it all the time.

Q: Can I use the downloaded Microsoft iSCSI Software Target 3.3 in my production environment? A: Yes. Make sure to perform the proper evaluation and testing before deploying any software in a production environment. But you knew that already…

Q: What are the support policies for the Microsoft iSCSI Software Target 3.3 on Windows Server 2008 R2? A: The support policies are listed at http://technet.microsoft.com/en-us/library/gg983493.aspx

Links

I recommend that you read my previous blog posts about the Microsoft iSCSI Software Target. Here are some of the most popular ones.

Please keep in mind that some of these posts mention previous versions of the Microsoft iSCSI Software Target that ran on different Windows Server versions. The overall guidance, however, still applies.

Conclusion

I hope you are as excited as we are about this release. Download it and experiment with it. And don’t forget to post a comment about your experience or send us your feedback.

Source: blogs.technet.com

Quick Install Exchange 2010

System Requirements

First, you need to make sure that your Active Directory (AD) environment and your Exchange server meet the minimum requirements:

  • AD forest functional level is Windows Server 2003 (or higher)   
  • AD Schema Master is running Windows Server 2003 w/SP1 or later   
  • Full installation of Windows Server 2008 w/SP2 or later OR Windows Server 2008 R2 for the Exchange server itself   
  • Exchange server is joined to the domain (except for the Edge Transport server role)

Prerequisites

In this example we are going to install Exchange 2010 on a Windows Server 2008 R2 operating system.  Before installing Exchange we need to install some Windows components.  It’s important that you don’t miss anything here because the Exchange 2010 installer does not provide very good feedback if Server 2008 R2 is missing required components.

In an elevated Windows PowerShell session (Exchange is not yet installed, so this is plain PowerShell rather than the Exchange Management Shell), run the following command: Import-Module ServerManager

For a typical install with the Client Access, Hub Transport, and Mailbox roles run the following command:

Add-WindowsFeature NET-Framework,RSAT-ADDS,Web-Server,Web-Basic-Auth,Web-Windows-Auth,Web-Metabase,Web-Net-Ext,Web-Lgcy-Mgmt-Console,WAS-Process-Model,RSAT-Web-Server,Web-ISAPI-Ext,Web-Digest-Auth,Web-Dyn-Compression,NET-HTTP-Activation,RPC-Over-HTTP-Proxy -Restart

If your Exchange server will have the Client Access Server role, set the Net.Tcp Port Sharing Service to start automatically.

Open PowerShell via the icon on the task bar or Start >> All Programs >> Accessories >> Windows PowerShell >> Windows PowerShell.  Be sure that PowerShell opened with an account that has rights to modify service startup settings.

Run the following command: Set-Service NetTcpPortSharing -StartupType Automatic
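To confirm the change took effect, one option is to query the service through WMI (a quick check that works on a default PowerShell 2.0 install):

    # StartMode should now read "Auto"
    Get-WmiObject Win32_Service -Filter "Name='NetTcpPortSharing'" |
        Select-Object Name, StartMode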

 

 

 

Then simply follow the setup wizard to finish the installation of Exchange.

Computer Virus

A computer virus is a computer program that can replicate itself[1] and spread from one computer to another. The term “virus” is also commonly but erroneously used to refer to other types of malware, including but not limited to adware and spyware programs that do not have the reproductive ability. A true virus can spread from one computer to another (in some form of executable code) when its host is taken to the target computer; for instance because a user sent it over a network or the Internet, or carried it on a removable medium such as a floppy disk, CD, DVD, or USB drive.[2]

Viruses can increase their chances of spreading to other computers by infecting files on a network file system or a file system that is accessed by another computer.[3][4]

As stated above, the term “computer virus” is sometimes used as a catch-all phrase to include all types of malware, even those that do not have the reproductive ability. Malware includes computer viruses, computer worms, Trojan horses, most rootkits, spyware, dishonest adware and other malicious and unwanted software, including true viruses. Viruses are sometimes confused with worms and Trojan horses, which are technically different. A worm can exploit security vulnerabilities to spread itself automatically to other computers through networks, while a Trojan horse is a program that appears harmless but hides malicious functions. Worms and Trojan horses, like viruses, may harm a computer system’s data or performance. Some viruses and other malware have symptoms noticeable to the computer user, but many are surreptitious or simply do nothing to call attention to themselves. Some viruses do nothing beyond reproducing themselves.

Infection strategies

In order to replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves to executable files that may be part of legitimate programs. If a user attempts to launch an infected program, the virus’ code may be executed simultaneously. Viruses can be divided into two types based on their behavior when they are executed. Nonresident viruses immediately search for other hosts that can be infected, infect those targets, and finally transfer control to the application program they infected. Resident viruses do not search for hosts when they are started. Instead, a resident virus loads itself into memory on execution and transfers control to the host program. The virus stays active in the background and infects new hosts when those files are accessed by other programs or the operating system itself.

Vectors and hosts

Viruses have targeted various types of transmission media or hosts. This list is not exhaustive:

PDFs, like HTML, may link to malicious code. PDFs can also be infected with malicious code.

In operating systems that use file extensions to determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created named “picture.png.exe”, in which the user sees only “picture.png” and therefore assumes that this file is an image and most likely is safe, yet when opened runs the executable on the client machine.
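As a small illustration of this trick, the following PowerShell sketch lists files whose names hide an executable behind a harmless-looking extension (the path and extension lists are just examples):

    # Flag files such as "picture.png.exe" that masquerade as images or documents
    Get-ChildItem -Path C:\Users -Recurse -ErrorAction SilentlyContinue |
        Where-Object { $_.Name -match '\.(png|jpe?g|gif|pdf|docx?)\.(exe|scr|com|bat)$' }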

An additional method is to generate the virus code from parts of existing operating system files by using the CRC16/CRC32 data. The initial code can be quite small (tens of bytes) and unpack a fairly large virus. This is analogous to a biological “prion” in the way it works but is vulnerable to signature based detection. This attack has not yet been seen “in the wild”.

Methods to avoid detection

In order to avoid detection by users, some viruses employ different kinds of deception. Some old viruses, especially on the MS-DOS platform, make sure that the “last modified” date of a host file stays the same when the file is infected by the virus. This approach does not fool anti-virus software, however, especially those which maintain and date cyclic redundancy checks on file changes.

Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are called cavity viruses. For example, the CIH virus, or Chernobyl Virus, infects Portable Executable files. Because those files have many empty gaps, the virus, which was 1 KB in length, did not add to the size of the file.

Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them.

As computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access.

Encryption with a variable key

A more advanced method is the use of simple encryption to encipher the virus. In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is in fact entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that it may be reason for virus scanners to at least flag the file as suspicious.

An old but compact method of encryption involves XORing each byte in a virus with a constant, so that the exclusive-or operation has only to be repeated for decryption. It is suspicious for code to modify itself, so the code that performs the encryption/decryption may be part of the signature in many virus definitions.
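A minimal sketch of that XOR scheme (the key byte and plaintext below are arbitrary placeholders) shows why the operation is its own inverse:

    # XOR with a constant key byte; running the same loop twice restores the original
    $key   = 0x5A
    $plain = [System.Text.Encoding]::ASCII.GetBytes("sample payload")
    $enc   = $plain | ForEach-Object { $_ -bxor $key }    # "encrypt"
    $dec   = $enc   | ForEach-Object { $_ -bxor $key }    # "decrypt" is the same operation
    [System.Text.Encoding]::ASCII.GetString([byte[]]$dec) # -> "sample payload"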

Anti-virus software and other preventive measures

Many users install anti-virus software that can detect and eliminate known viruses after the computer downloads or runs the executable. There are two common methods that an anti-virus software application uses to detect viruses. The first, and by far the most common method of virus detection is using a list of virus signature definitions. This works by examining the content of the computer’s memory (its RAM, and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives), and comparing those files against a database of known virus “signatures”. The disadvantage of this detection method is that users are only protected from viruses that pre-date their last virus definition update. The second method is to use a heuristic algorithm to find viruses based on common behaviors. This method has the ability to detect novel viruses that anti-virus security firms have yet to create a signature for.
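A toy sketch of the signature approach (the byte pattern below is made up, not a real signature) scans a folder for files containing a known byte sequence:

    # Compare each file's bytes against one "signature"; real scanners use
    # databases of many signatures plus heuristics
    $signature = "DE-AD-BE-EF-13-37"   # hypothetical pattern
    Get-ChildItem C:\Samples -ErrorAction SilentlyContinue |
        Where-Object { -not $_.PSIsContainer } |
        ForEach-Object {
            $bytes = [System.IO.File]::ReadAllBytes($_.FullName)
            if ([System.BitConverter]::ToString($bytes) -match $signature) {
                Write-Output ("Possible infection: " + $_.FullName)
            }
        }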

Some anti-virus programs are able to scan opened files in addition to sent and received email messages “on the fly” in a similar manner. This practice is known as “on-access scanning”. Anti-virus software does not change the underlying capability of host software to transmit viruses. Users must update their software regularly to patch security holes. Anti-virus software also needs to be regularly updated in order to recognize the latest threats.

One may also minimize the damage done by viruses by making regular backups of data (and the operating system) on different media, kept either disconnected from the system most of the time, read-only, or inaccessible for other reasons (such as using different file systems). This way, if data is lost through a virus, one can start again using the backup (which should preferably be recent).

If a backup session on optical media like CD and DVD is closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto the CD/DVD). Likewise, an operating system on a bootable CD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removable flash drives.

Virus removal

One possibility on Windows Me, Windows XP, Windows Vista and Windows 7 is a tool known as System Restore, which restores the registry and critical system files to a previous checkpoint. Often a virus will cause a system to hang, and a subsequent hard reboot will render a system restore point from the same day corrupt. Restore points from previous days should work provided the virus is not designed to corrupt the restore files or also exists in previous restore points.[33] Some viruses, however, disable System Restore and other important tools such as Task Manager and Command Prompt. An example of a virus that does this is CiaDoor. However, many such viruses can be removed by rebooting the computer, entering Windows safe mode, and then using system tools.

Administrators have the option to disable such tools from limited users for various reasons (for example, to reduce potential damage from and the spread of viruses). A virus can modify the registry to do the same even if the Administrator is controlling the computer; it blocks all users including the administrator from accessing the tools. The message “Task Manager has been disabled by your administrator” may be displayed, even to the administrator.

Users running a Microsoft operating system can access Microsoft’s website to run a free scan, provided they have their 20-digit registration number. Many websites run by anti-virus software companies provide free online virus scanning, with limited cleaning facilities (the purpose of the sites is to sell anti-virus products). Some websites allow a single suspicious file to be checked by many antivirus programs in one operation.

 

 

War Dialing

War dialing or wardialing is a technique of using a modem to automatically scan a list of telephone numbers, usually dialing every number in a local area code to search for computers, bulletin board systems and fax machines. Hackers use the resulting lists for various purposes: hobbyists for exploration, and crackers (malicious hackers who specialize in breaking into computer systems) for password guessing.

A single wardialing call would involve calling an unknown number, and waiting for one or two rings, since answering computers usually pick up on the first ring. If the phone rings twice, the modem hangs up and tries the next number. If a modem or fax machine answers, the wardialer program makes a note of the number. If a human or answering machine answers, the wardialer program hangs up. Depending on the time of day, wardialing 10,000 numbers in a given area code might annoy dozens or hundreds of people, some who attempt and fail to answer a phone in two rings, and some who succeed, only to hear the wardialing modem’s carrier tone and hang up. The repeated incoming calls are especially annoying to businesses that have many consecutively numbered lines in the exchange, such as used with a Centrex telephone system.

The popularity of wardialing in 1980s and 1990s prompted some states to enact legislation prohibiting the use of a device to dial telephone numbers without the intent of communicating with a person.

The popular name for this technique originated in the 1983 film WarGames. In the film, the protagonist programmed his computer to dial every telephone number in Sunnyvale, California to find other computer systems. Prior to the movie’s release, this technique was known as “hammer dialing” or “demon dialing“. ‘WarGames Dialer’ programs became common on bulletin board systems of the time, with file names often truncated to wardial.exe and the like due to length restrictions of 8 characters on such systems. Eventually, the etymology of the name fell behind as “war dialing” gained its own currency within computing culture.[1]

A more recent phenomenon is wardriving, the searching for wireless networks (Wi-Fi) from a moving vehicle. Wardriving was named after wardialing, since both techniques involve brute-force searches to find computer networks. The aim of wardriving is to collect information about wireless access points (not to be confused with piggybacking).

Similar to war dialing is a port scan under TCP/IP, which “dials” every TCP port of every IP address to find out what services are available. Unlike wardialing, however, a port scan will generally not disturb a human being when it tries an IP address, regardless of whether there is a computer responding on that address or not. Related to wardriving is warchalking, the practice of drawing chalk symbols in public places to advertise the availability of wireless networks.
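A bare-bones sketch of that idea in PowerShell (the target address is a placeholder; a real scanner would use asynchronous connects with timeouts rather than blocking calls):

    # Try a TCP connect to each port; open ports succeed, closed ones throw
    $target = "192.0.2.10"
    foreach ($port in 1..1024) {
        $client = New-Object System.Net.Sockets.TcpClient
        try {
            $client.Connect($target, $port)
            Write-Output ("Port {0} open" -f $port)
        }
        catch { }   # closed or filtered port
        finally { $client.Close() }
    }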

The term is also used today by analogy for various sorts of exhaustive brute force attack against an authentication mechanism, such as a password. While a dictionary attack might involve trying each word in a dictionary as the password, “wardialing the password” would involve trying every possible password. Password protection systems are usually designed to make this impractical, by making the process slow and/or locking out an account for minutes or hours after some low number of wrong password entries.

War dialing is sometimes used as a synonym for demon dialing, a related technique which also involves automating a computer modem in order to repeatedly place telephone calls.

Web Crawler


A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner or in an orderly fashion. Other terms for Web crawlers are ants, automatic indexers, bots,[1] Web spiders,[2] Web robots,[2] or—especially in the FOAF community—Web scutters.[3]

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
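In outline, the seed/frontier loop looks like the sketch below, where Get-PageLinks is a hypothetical helper standing in for the real fetch-and-parse step:

    # Breadth-first crawl skeleton: seeds go into the frontier, visited URLs are tracked
    $frontier = New-Object System.Collections.Queue
    $visited  = @{}
    $frontier.Enqueue("http://example.com/")          # the seed list
    while ($frontier.Count -gt 0) {
        $url = $frontier.Dequeue()
        if ($visited.ContainsKey($url)) { continue }
        $visited[$url] = $true
        foreach ($link in (Get-PageLinks $url)) {     # hypothetical fetch + link extraction
            if (-not $visited.ContainsKey($link)) { $frontier.Enqueue($link) }
        }
    }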

The Web’s large volume implies that the crawler can download only a fraction of the pages within a given time, so it needs to prioritize its downloads. Its high rate of change implies that pages might already have been updated or even deleted by the time the crawler gets back to them.

The number of possible crawlable URLs being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 4 × 3 × 2 × 2 = 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.

As Edwards et al. noted, “Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained.”[4] A crawler must carefully choose at each step which pages to visit next.

The behavior of a Web crawler is the outcome of a combination of policies:[5]

  • a selection policy that states which pages to download,
  • a re-visit policy that states when to check for changes to the pages,
  • a politeness policy that states how to avoid overloading Web sites, and
  • a parallelization policy that states how to coordinate distributed Web crawlers.

 

Web Server

Web server can refer to either the hardware (the computer) or the software (the computer application) that helps to deliver content that can be accessed through the Internet.[1]

The most common use of web servers is to host web sites, but there are other uses such as data storage or running enterprise applications.

The inside and front of a Dell PowerEdge Web server

Overview

The primary function of a web server is to deliver web pages to clients on request. This means delivery of HTML documents and any additional content that may be included by a document, such as images, style sheets and scripts.

A client, commonly a web browser or web crawler, initiates communication by making a request for a specific resource using HTTP and the server responds with the content of that resource or an error message if unable to do so. The resource is typically a real file on the server’s secondary memory, but this is not necessarily the case and depends on how the web server is implemented.

While the primary function is to serve content, a full implementation of HTTP also includes ways of receiving content from clients. This feature is used for submitting web forms, including uploading of files.

Many generic web servers also support server-side scripting, e.g., Active Server Pages (ASP) and PHP. This means that the behaviour of the web server can be scripted in separate files, while the actual server software remains unchanged. Usually, this function is used to create HTML documents “on-the-fly” as opposed to returning fixed documents. This is referred to as dynamic and static content respectively. The former is primarily used for retrieving and/or modifying information from databases. The latter is, however, typically much faster and more easily cached.

Web servers are not always used for serving the world wide web. They can also be found embedded in devices such as printers, routers, webcams and serving only a local network. The web server may then be used as a part of a system for monitoring and/or administrating the device in question. This usually means that no additional software has to be installed on the client computer, since only a web browser is required (which now is included with most operating systems).

History of web servers

The world’s first web server

In 1989 Tim Berners-Lee proposed to his employer CERN a new project with the goal of easing the exchange of information between scientists by using a hypertext system. The project resulted in Berners-Lee writing two programs in 1990:

  • a browser called WorldWideWeb;
  • the world’s first web server, later known as CERN httpd, which ran on NeXTSTEP.

Between 1991 and 1994, the simplicity and effectiveness of early technologies used to surf and exchange data through the World Wide Web helped to port them to many different operating systems and spread their use among socially diverse groups of people, first in scientific organizations, then in universities and finally in industry.

In 1994 Tim Berners-Lee decided to constitute the World Wide Web Consortium (W3C) to regulate the further development of the many technologies involved (HTTP, HTML, etc.) through a standardization process.

Load limits

A web server (program) has defined load limits, because it can handle only a limited number of concurrent client connections (usually between 2 and 80,000, by default between 500 and 1,000) per IP address (and TCP port) and it can serve only a certain maximum number of requests per second depending on:

  • Its own settings
  • The HTTP request type
  • Content origin (static or dynamic)
  • The fact that the served content is or is not cached
  • The hardware and software limitations of the OS where it is working

When a web server is near to or over its limits, it becomes unresponsive.

Overload causes

At any time web servers can be overloaded because of:

  • Too much legitimate web traffic. Thousands or even millions of clients connecting to the web site in a short interval, e.g., Slashdot effect;
  • Distributed Denial of Service attacks;
  • Computer worms that sometimes cause abnormal traffic because of millions of infected computers (not coordinated among them);
  • XSS viruses can cause high traffic because of millions of infected browsers and/or web servers;
  • Internet bots. Traffic not filtered/limited on large web sites with very few resources (bandwidth, etc.);
  • Internet (network) slowdowns, so that client requests are served more slowly and the number of connections increases so much that server limits are reached;
  • Partial unavailability of web servers (computers). This can happen because of required or urgent maintenance or upgrade, hardware or software failures, back-end (e.g., database) failures, etc.; in these cases the remaining web servers get too much traffic and become overloaded.

Overload symptoms

The symptoms of an overloaded web server are:

  • Requests are served with (possibly long) delays (from 1 second to a few hundred seconds).
  • 500, 502, 503, 504 HTTP errors are returned to clients (sometimes also unrelated 404 error or even 408 error may be returned).
  • TCP connections are refused or reset (interrupted) before any content is sent to clients.
  • In very rare cases, only partial contents are sent (but this behavior may well be considered a bug, even if it usually depends on unavailable system resources).

Anti-overload techniques

To partially overcome the load limits above and to prevent overload, most popular Web sites use common techniques like:

  • managing network traffic, by using:
    • Firewalls to block unwanted traffic coming from bad IP sources or having bad patterns;
    • HTTP traffic managers to drop, redirect or rewrite requests having bad HTTP patterns;
    • Bandwidth management and traffic shaping, in order to smooth down peaks in network usage;
  • deploying Web cache techniques;
  • using different domain names to serve different (static and dynamic) content by separate web servers;
  • using different domain names and/or computers to separate big files from small and medium-sized files; the idea is to be able to fully cache small and medium-sized files and to efficiently serve big or huge (over 10 – 1000 MB) files by using different settings;
  • using many web servers (programs) per computer, each one bound to its own network card and IP address;
  • using many web servers (computers) that are grouped together so that they act or are seen as one big web server (see also Load balancer);
  • adding more hardware resources (i.e. RAM, disks) to each computer;
  • tuning OS parameters for hardware capabilities and usage;
  • using more efficient computer programs for web servers, etc.;
  • using other workarounds, especially if dynamic content is involved.

Below are the most recent statistics on the market share of the top web servers on the Internet, from the Netcraft survey of March 2011.

Product                      Vendor        Web Sites Hosted   Percent
Apache                       Apache        179,720,332        60.31%
IIS                          Microsoft     57,644,692         19.34%
nginx                        Igor Sysoev   22,806,060         7.65%
GWS                          Google        15,161,530         5.09%
lighttpd                     lighttpd      1,796,471          0.60%
Sun Java System Web Server   Oracle

Source: Wikipedia

White-Box Testing

White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of testing software that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing an internal perspective of the system, as well as programming skills, are required and used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

White-box test design techniques include:

  • Control flow testing
  • Data flow testing
  • Branch testing
  • Path testing

Hacking

In penetration testing, white-box testing refers to a methodology where an ethical hacker has full knowledge of the system being attacked. The goal of a white-box penetration test is to simulate a malicious insider who has some knowledge of, and possibly basic credentials for, the target system.

Source: wikipedia.org

Wi-Fi


Wi-Fi is a mechanism for wirelessly connecting electronic devices. A device enabled with Wi-Fi, such as a personal computer, video game console, smartphone, or digital audio player, can connect to the Internet via a wireless network access point. An access point (or hotspot) has a range of about 20 meters (65 ft) indoors and a greater range outdoors. Multiple overlapping access points can cover large areas.

“Wi-Fi” is a trademark of the Wi-Fi Alliance and the brand name for products using the IEEE 802.11 family of standards. Wi-Fi is used by over 700 million people. There are over four million hotspots (places with Wi-Fi Internet connectivity) around the world, and about 800 million new Wi-Fi devices are sold every year. Wi-Fi products that complete Wi-Fi Alliance interoperability certification testing successfully may use the “Wi-Fi CERTIFIED” designation and trademark.

Uses

To connect to a Wi-Fi LAN, a computer has to be equipped with a wireless network interface controller. The combination of computer and interface controller is called a station. All stations share a single radio frequency communication channel. Transmissions on this channel are received by all stations within range. The hardware does not signal the user that the transmission was delivered and is therefore called a best-effort delivery mechanism. A carrier wave is used to transmit the data in packets, referred to as “Ethernet frames”. Each station is constantly tuned in on the radio frequency communication channel to pick up available transmissions.

Internet access

A Wi-Fi-enabled device, such as a personal computer, video game console, smartphone or digital audio player, can connect to the Internet when within range of a wireless network connected to the Internet. The coverage of one or more (interconnected) access points—called hotspots—comprises an area as small as a few rooms or as large as many square miles. Coverage in the larger area may depend on a group of access points with overlapping coverage. Wi-Fi technology has been used successfully in wireless mesh networks in London, UK, for example.[1]

Wi-Fi provides service in private homes and offices as well as in public spaces at Wi-Fi hotspots set up either free-of-charge or commercially. Organizations and businesses, such as airports, hotels, and restaurants, often provide free-use hotspots to attract or assist clients. Enthusiasts or authorities who wish to provide services or even to promote business in selected areas sometimes provide free Wi-Fi access. As of 2008 more than 300 city-wide Wi-Fi (Muni-Fi) projects had been created.[2] As of 2010 the Czech Republic had 1150 Wi-Fi based wireless Internet service providers.[3][4]

Routers that incorporate a digital subscriber line modem or a cable modem and a Wi-Fi access point, often set up in homes and other buildings, provide Internet access and internetworking to all devices tuned into them, wirelessly or via cable. With the emergence of MiFi and WiBro (a portable Wi-Fi router) people can easily create their own Wi-Fi hotspots that connect to Internet via cellular networks. Now iPhone, Android, Bada and Symbian phones can create wireless connections.[5]

One can also connect Wi-Fi devices in ad-hoc mode for client-to-client connections without a router. Wi-Fi also connects places normally without network access, such as kitchens and garden sheds.

Source: wikipedia.org

Windows 8 Developer Preview Download

The Windows 8 Developer Preview is a pre-beta version of Windows 8 for developers. These downloads include prerelease software that may change without notice. The software is provided as is, and you bear the risk of using it. It may not be stable, operate correctly or work the way the final version of the software will. It should not be used in a production environment. The features and functionality in the prerelease software may not appear in the final version. Some product features and functionality may require advanced or additional hardware, or installation of other software.

Windows 8 Developer Preview English, 64-bit

DOWNLOAD (3.6 GB)

SHA-1 hash: 79DBF235FD49F5C1C8F8C04E24BDE6E1D04DA1E9

Includes a disk image file (.iso) to install the Windows 8 Developer Preview and Metro style apps on a 64-bit PC.

Note: This download does not include developer tools. You must download the Windows 8 Developer Preview with developer tools 64-bit (x64) to build Metro style apps.

Source: http://msdn.microsoft.com/en-us/windows/apps/br229516

WEP Wireless

Wired Equivalent Privacy (WEP) is a weak security algorithm for IEEE 802.11 wireless networks. Introduced as part of the original 802.11 standard ratified in September 1999, its intention was to provide data confidentiality comparable to that of a traditional wired network.[1] WEP, recognizable by the key of 10 or 26 hexadecimal digits, is widely in use and is often the first security choice presented to users by router configuration tools.[2][3]

Although its name implies that it is as secure as a wired connection, WEP has been demonstrated to have numerous flaws and has been deprecated in favor of newer standards such as WPA2. In 2003 the Wi-Fi Alliance announced that WEP had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 “have been deprecated as they fail to meet their security goals”.

WEP was included as the privacy component of the original IEEE 802.11 standard ratified in September 1999.[5] WEP uses the stream cipher RC4 for confidentiality,[6] and the CRC-32 checksum for integrity.[7] It was deprecated in 2004 and is documented in the current standard.[8]

Basic WEP encryption: RC4 keystream XORed with plaintext

Standard 64-bit WEP uses a 40-bit key (also known as WEP-40), which is concatenated with a 24-bit initialization vector (IV) to form the RC4 key. At the time that the original WEP standard was drafted, the U.S. Government’s export restrictions on cryptographic technology limited the key size. Once the restrictions were lifted, manufacturers of access points implemented an extended 128-bit WEP protocol using a 104-bit key size (WEP-104).

A 64-bit WEP key is usually entered as a string of 10 hexadecimal (base 16) characters (0-9 and A-F). Each character represents four bits; 10 digits of four bits each gives 40 bits, and adding the 24-bit IV produces the complete 64-bit WEP key. Most devices also allow the user to enter the key as five ASCII characters, each of which is turned into eight bits using the character’s byte value in ASCII; however, this restricts each byte to be a printable ASCII character, which is only a small fraction of possible byte values, greatly reducing the space of possible keys.
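The five-character form is just the byte values of the text; for example (the key string here is arbitrary):

    # Five ASCII characters yield ten hex digits = 40 key bits; WEP appends a 24-bit IV
    $key = [System.Text.Encoding]::ASCII.GetBytes("MyKey")
    [System.BitConverter]::ToString($key)   # -> 4D-79-4B-65-79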

A 128-bit WEP key is usually entered as a string of 26 hexadecimal characters. 26 digits of four bits each gives 104 bits; adding the 24-bit IV produces the complete 128-bit WEP key. Most devices also allow the user to enter it as 13 ASCII characters.

A 256-bit WEP system is available from some vendors. As with the other WEP variants, 24 bits of that are for the IV, leaving 232 bits for actual protection. These 232 bits are typically entered as 58 hexadecimal characters: (58 × 4 bits =) 232 bits + 24 IV bits = 256-bit WEP key.

Key size is one of the security limitations in WEP.[9] Cracking a longer key requires interception of more packets, but there are active attacks that stimulate the necessary traffic. There are other weaknesses in WEP, including the possibility of IV collisions and altered packets,[6] that are not helped by using a longer key.

Authentication

Two methods of authentication can be used with WEP: Open System authentication and Shared Key authentication.

For the sake of clarity, we discuss WEP authentication in Infrastructure mode (that is, between a WLAN client and an Access Point). The discussion applies to ad hoc mode as well.

In Open System authentication, the WLAN client need not provide its credentials to the Access Point during authentication. Any client can authenticate with the Access Point and then attempt to associate. In effect, no authentication occurs. Subsequently WEP keys can be used for encrypting data frames. At this point, the client must have the correct keys.

In Shared Key authentication, the WEP key is used for authentication in a four step challenge-response handshake:

  1. The client sends an authentication request to the Access Point.
  2. The Access Point replies with a clear-text challenge.
  3. The client encrypts the challenge-text using the configured WEP key, and sends it back in another authentication request.
  4. The Access Point decrypts the response. If this matches the challenge-text the Access Point sends back a positive reply.

After the authentication and association, the pre-shared WEP key is also used for encrypting the data frames using RC4.

At first glance, it might seem as though Shared Key authentication is more secure than Open System authentication, since the latter offers no real authentication. However, it is quite the reverse. It is possible to derive the keystream used for the handshake by capturing the challenge frames in Shared Key authentication.[10] Hence, it is advisable to use Open System authentication for WEP authentication, rather than Shared Key authentication. (Note that both authentication mechanisms are weak.)
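To see why, consider this sketch (the byte values are arbitrary placeholders): both the clear-text challenge and the encrypted response cross the air, so an eavesdropper can XOR them together and recover the RC4 keystream without ever learning the WEP key:

    # keystream = challenge XOR response, because response = challenge XOR keystream
    $challenge = [byte[]](0x11, 0x22, 0x33, 0x44)   # sent by the AP in the clear
    $response  = [byte[]](0xA5, 0x0F, 0x96, 0xE1)   # sent back by the client
    $keystream = 0..3 | ForEach-Object { $challenge[$_] -bxor $response[$_] }
    # The recovered keystream can then be replayed to pass the handshake.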

Source: Wikipedia.org

Wiretapping

Telephone tapping (also wire tapping or wiretapping in American English) is the monitoring of telephone and Internet conversations by a third party, often by covert means. The wire tap received its name because, historically, the monitoring connection was an actual electrical tap on the telephone line. Legal wiretapping by a government agency is also called lawful interception. Passive wiretapping monitors or records the traffic, while active wiretapping alters or otherwise affects it.

 

Internet

In 1995, Peter Garza, a Special Agent with the Naval Criminal Investigative Service, conducted the first court-ordered Internet wiretap in the United States while investigating Julio Cesar Ardita (“El Griton“).

As technologies emerge, including VoIP, new questions are raised about law enforcement access to communications (see VoIP recording). In 2004, the Federal Communications Commission was asked to clarify how the Communications Assistance for Law Enforcement Act (CALEA) related to Internet service providers. The FCC stated that “providers of broadband Internet access and voice over Internet protocol (“VoIP”) services are regulable as “telecommunications carriers” under the Act.”[10] Those affected by the Act will have to provide access to law enforcement officers who need to monitor or intercept communications transmitted through their networks. As of 2009, warrantless surveillance of internet activity has consistently been upheld in FISA court.[11]

The Internet Engineering Task Force has decided not to consider requirements for wiretapping as part of the process for creating and maintaining IETF standards.[12]

Typically, illegal Internet wiretapping is conducted by connecting to someone’s Wi-Fi network after cracking the WEP or WPA key, using a tool such as Aircrack-ng or Kismet. Once in, the intruder relies on a number of potential tactics, for example an ARP spoofing attack, which allows the intruder to view packets in a tool such as Wireshark or Ettercap.

One issue that Internet wiretapping has yet to overcome is steganography, whereby a user encodes, or “hides”, one file inside another (usually a larger, dense file like an MP3 or JPEG image). With modern advancements in encoding technologies, the resulting combined file is essentially indistinguishable to anyone attempting to view it, unless they have the necessary protocol to extract the hidden file.[13][14] US News reported that this technique was commonly used by Osama bin Laden as a way to communicate with his terrorist cells.

 

Source: wikipedia.org

Worm

 

A computer worm is a self-replicating malware computer program, which uses a computer network to send copies of itself to other nodes (computers on the network) and it may do so without any user intervention. This is due to security shortcomings on the target computer. Unlike a computer virus, it does not need to attach itself to an existing program. Worms almost always cause at least some harm to the network, even if only by consuming bandwidth, whereas viruses almost always corrupt or modify files on a targeted computer.

Payloads

Many worms that have been created are only designed to spread, and don’t attempt to alter the systems they pass through. However, as the Morris worm and Mydoom showed, even these “payload free” worms can cause major disruption by increasing network traffic and other unintended effects. A “payload” is code in the worm designed to do more than spread the worm–it might delete files on a host system (e.g., the ExploreZip worm), encrypt files in a cryptoviral extortion attack, or send documents via e-mail. A very common payload for worms is to install a backdoor in the infected computer to allow the creation of a “zombie” computer under control of the worm author. Networks of such machines are often referred to as botnets and are very commonly used by spam senders for sending junk email or to cloak their website’s address.[1] Spammers are therefore thought to be a source of funding for the creation of such worms,[2][3] and the worm writers have been caught selling lists of IP addresses of infected machines.[4] Others try to blackmail companies with threatened DoS attacks.[5]

Backdoors can be exploited by other malware, including worms. Examples include Doomjuice, which spreads better using the backdoor opened by Mydoom, and at least one instance of malware taking advantage of the rootkit and backdoor installed by the Sony/BMG DRM software utilized by millions of music CDs prior to late 2005.

Protecting against dangerous computer worms

Worms spread by exploiting vulnerabilities in operating systems. Vendors with security problems supply regular security updates[7] (see “Patch Tuesday”), and if these are installed on a machine then the majority of worms are unable to spread to it. If a vulnerability is disclosed before the vendor releases a security patch, a zero-day attack is possible.

Users need to be wary of opening unexpected email,[8] and should not run attached files or programs, or visit web sites that are linked to such emails. However, as with the ILOVEYOU worm, and with the increased growth and efficiency of phishing attacks, it remains possible to trick the end user into running malicious code.

Anti-virus and anti-spyware software are helpful, but must be kept up-to-date with new pattern files at least every few days. The use of a firewall is also recommended.

In the April–June 2008 issue of IEEE Transactions on Dependable and Secure Computing, computer scientists described a potential new way to combat internet worms. The researchers discovered how to contain the kind of worm that scans the Internet randomly, looking for vulnerable hosts to infect. They found that the key is for software to monitor the number of scans that machines on a network send out. When a machine starts sending out too many scans, it is a sign that it has been infected, allowing administrators to take it offline and check it for viruses.
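In pseudo-PowerShell, the detection rule reduces to a simple threshold over per-host scan counts (the records below are invented sample data; a real monitor would read flow logs or firewall counters):

    # Flag any machine whose outbound connection attempts exceed the threshold
    $threshold = 100
    $scanCounts = @(
        New-Object PSObject -Property @{ Source = "10.0.0.5"; Attempts = 12  }
        New-Object PSObject -Property @{ Source = "10.0.0.9"; Attempts = 480 }
    )
    $scanCounts | Where-Object { $_.Attempts -gt $threshold } |
        ForEach-Object { Write-Output ("Possible worm infection on " + $_.Source) }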

Source: Wikipedia.org

Hibernation Windows Server/Client

Hibernation Enable/Disable

 

The hiberfil.sys file came into existence when Windows introduced a feature called hibernation. Many Windows users were initially confused by the unusual size of this mysterious file, but the size is explained by how hibernation works. Hibernation lets you power the computer off completely without losing your session: Windows writes the contents of RAM to disk, then restores it on the next start so you resume where you left off. That memory image is stored in the hiberfil.sys file, which is why the file is roughly as large as the installed RAM. The file can be removed from Windows Server 2008 by disabling hibernation with a command.

To Delete Hiberfil.sys From Windows Server 2008 or Windows 7

Click on “Start,” then “Run.”

To disable hibernation (this also deletes hiberfil.sys), type “powercfg.exe /hibernate off” and then press “Enter.”

To enable hibernation again, type “powercfg.exe /hibernate on” and then press “Enter.”

Windows Sysinternals Process Explorer

Introduction

Ever wondered which program has a particular file or directory open? Now you can find out. Process Explorer shows you information about which handles and DLLs processes have opened or loaded.

The Process Explorer display consists of two sub-windows. The top window always shows a list of the currently active processes, including the names of their owning accounts, whereas the information displayed in the bottom window depends on the mode that Process Explorer is in: if it is in handle mode you’ll see the handles that the process selected in the top window has opened; if Process Explorer is in DLL mode you’ll see the DLLs and memory-mapped files that the process has loaded. Process Explorer also has a powerful search capability that will quickly show you which processes have particular handles opened or DLLs loaded.

The unique capabilities of Process Explorer make it useful for tracking down DLL-version problems or handle leaks, and provide insight into the way Windows and applications work.

 

 

Process Explorer can be used to track down problems. For example, it provides a means to list or search for named resources that are held by a process or all processes. This can be used to track down what is holding a file open and preventing its use by another program. Or as another example, it can show the command lines used to start a program, allowing otherwise identical processes to be distinguished. Or like Task Manager, it can show a process that is maxing out the CPU, but unlike Task Manager it can show which thread (with the callstack) is using the CPU – information that is not even available under a debugger.

Features

  • Hierarchical view of processes.
  • Ability to display an icon and company name next to each process.
  • Live CPU activity graph in the task bar.
  • Ability to suspend selected process.
  • Ability to raise the window attached to a process, thus “unhiding” it.
  • Complete process tree can be killed.
  • Interactively alter a service process’ access security
  • Interactively set the priority of a process
  • Disambiguates service executables which perform multiple service functions. For example, when the pointer is placed over a svchost.exe, it will tell if it is the one performing automatic updates/secondary logon/etc., or the one providing RPC, or the one performing terminal services, and so on.

 

Download Process Explorer

Source: http://technet.microsoft.com/en-us/sysinternals

The Easiest, Fastest Way to Update or Install Software

About Ninite

Ninite was founded by Patrick Swieskowski and Sascha Kuzins. Investors include Y Combinator and a small collection of helpful angels.

Ninite is one of the best websites for installing programs.

You just choose the programs you want to install on your computer, click “Get Installer”, and install all the selected programs at once.

www.ninite.com

Source: Ninite.com, Google.

HTTP Flood Denial of Service (DoS) Testing Tool for Windows

For testing purposes only

  • DoSHTTP is an easy to use and powerful HTTP Flood Denial of Service (DoS)
    Testing Tool for Windows. DoSHTTP includes URL Verification, HTTP Redirection,
    Port Designation, Performance Monitoring and Enhanced Reporting.

  • DoSHTTP uses multiple asynchronous sockets to perform an effective HTTP
    Flood. DoSHTTP can be used simultaneously on multiple clients to emulate a
    Distributed Denial of Service (DDoS) attack.

  • DoSHTTP can help IT Professionals test web server performance and evaluate
    web server protection software. DoSHTTP was developed by certified IT Security
    and Software Development professionals.

Features

  • Easy to use and powerful HTTP Flood Denial of Service (DoS) Testing Tool
  • Uses multiple asynchronous sockets to perform an effective HTTP Flood
  • Allows multiple clients to emulate a Distributed Denial of Service (DDoS) Attack
  • Allows target port designation within the URL [http://host:port/]
  • Supports HTTP Redirection for automatic page redirection (optional)
  • Includes URL Verification that displays the response header and document
  • Includes Performance Monitoring and Enhanced Reporting
  • Allows customized User Agent header fields
  • Allows user defined Socket and Request settings
  • Supports numeric addressing for Target URLs
  • Includes a comprehensive User Guide
  • Clear Target URLs and Reset All options
  • Now supports 15,000 simultaneous connections

For testing purposes only

 

Forwarding Rules Not Working Outlook 2007/2010

Automatic forwarding and Remote Domains

Remote Domains define a bunch of settings, such as message formats, character sets, and OOFs, for messages sent to specified domains outside your Exchange organization. The default Remote Domain setting for the address space * (the asterisk character) applies to all external domains except the ones for which you’ve created a Remote Domain.

The Allow automatic forward setting for remote domains applies only to client-side forwarding using mechanisms like Inbox Rules. For instance, if a user creates a rule in Microsoft Outlook to automatically forward mail to an external email address, the default setting (for address space *) doesn’t allow it. To enable automatic client-side forwarding of mail to external addresses, select the Allow automatic forward checkbox in a remote domain’s properties.

Alternatively, you can do this from the Exchange Management Shell;

Set-RemoteDomain -Identity Default -AutoForwardEnabled $true
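To verify the change, or to review how all of your remote domains are configured, you can list the setting from the same shell; a quick sketch:

Get-RemoteDomain | Format-List Name, DomainName, AutoForwardEnabled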

DHCP Server MAC Address based filtering

Source: blogs.technet.com

 

The DHCP Server team is excited to announce that the much appreciated and loved feature, MAC address based filtering (previously provided by this callout DLL), is now part of the Windows Server 2008 R2 DHCP Server. Check out the blog. The MAC address filtering feature in Windows Server 2008 R2 provides both Allow and Deny lists, with support for wildcards. The Allow and Deny lists can be managed from within the DHCP MMC.

This DHCP Server callout DLL helps administrators filter DHCP requests to the DHCP server based on MAC address. When a device or computer tries to connect to the network, it first tries to obtain an IP address from the DHCP server. The callout DLL checks whether the device’s MAC address is present in the list of known MAC addresses configured by administrators. If it is present, the device is either allowed to obtain an IP address or its requests are ignored, depending on the action configured by the administrator.

MAC address based filtering allows network administrators to ensure that only a known set of devices on the network can get an IP address from the DHCP server. The DLL helps administrators enforce additional security on the network.

This callout DLL helps solve either of the following problems:

  1. Allow only machines belonging to a set of MAC addresses to get an IP address from the DHCP server.
  2. Deny machines belonging to a set of MAC addresses from getting an IP address from this server.

This callout DLL works on Windows Server 2003 and Windows Server 2008.

The usage is pretty simple and is explained in the setup document included with the tool.

Both the DLL (MacFilterCallout.dll) and the setup document (SetupDHCPMacFilter.rtf) are copied to the %SystemRoot%\system32 folder after installation.

Updates done since the initial version:

  1. Support for 32-bit and 64-bit OSs: works on Windows Server 2003 and Windows Server 2008.
  2. Ease of setup: you do not have to copy the DLLs to obscure locations or edit registry entries by hand. The installer copies the files into the appropriate locations and makes the necessary registry changes.
  3. Improved documentation, along with a sample file.
  4. You can now specify upper-case MAC addresses in the config file.
  5. You can now check the information log file to see which addresses were allowed or denied while the DHCP Server service is running.

Known Issue:

  1. The callout DLL may not work on localized (non-English) builds.

Source: blogs.technet.com

Windows Thin PC “MSWTPC”

 

Windows Thin PC enables customers to repurpose existing PCs as thin clients
by providing a smaller footprint, locked down version of Windows 7. This
provides organizations with significant benefits:

Reduced end point costs for VDI: Windows Thin PC empowers enterprises to leverage their end point hardware investments to access virtual desktops delivered using VDI or session virtualization. Windows Thin PC is available as a benefit of Software Assurance (SA), and hence does not represent any additional cost for SA customers. Windows Thin PC also provides IT with the flexibility to revert to full PCs if necessary, in case the thin client computing model does not provide the benefits they were after.

Excellent Thin Client experience: Windows Thin PC offers
many of the benefits of a thin client. Organizations can improve security and
compliance on their repurposed PCs, by using write filters to prevent data from
being written to disk. Additionally, Windows Thin PC ensures a rich remote
desktop experience through RemoteFX, enabling
delivery of high fidelity multimedia content from centralized desktops.

Enterprise Ready platform: Windows Thin PC is built on the
proven Windows 7 platform. Organizations can leverage existing management
strategy and tools such as System Center to centrally manage Windows Thin PC,
including accelerated role based deployment of applications, security patches,
updates, and data. Enterprise features such as BitLocker and AppLocker further help IT secure their devices, while DirectAccess helps customers securely access their corporate data on repurposed laptops.

Microsoft recommends that customers who are currently evaluating thin client computing begin their journey by repurposing existing PCs as thin clients with Windows Thin PC and evaluating the benefits they would get from this architecture. Once the Windows Thin PC device hardware gets decommissioned, customers can then purchase new Windows Embedded Thin Clients from our OEM partners without having to make changes to their existing management and security policies.

 

Windows Thin PC Quick Demo;

Windows Thin PC 2

Source: Microsoft, Youtube.

Enable GodMode in Windows 7

Windows 7’s so-called GodMode is actually a shortcut to
accessing the operating system’s various control settings.

Although its name suggests perhaps even grander capabilities, Windows
enthusiasts are excited over the discovery of a hidden “GodMode” feature that
lets users access all of the operating system’s control panels from within a
single folder.

By creating a new folder in Windows 7 and renaming it with a certain text string at
the end, users are able to have a single place to do everything from changing
the look of the mouse pointer to making a new hard-drive partition.

Enable ‘GodMode’ in Windows 7

To enter “GodMode,” one need only create a new folder and then rename the
folder to the following:

GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

Once that is done, the folder’s icon will change to resemble a control panel
and will contain dozens of control options.
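If you prefer to create the folder from a script, here is a one-line PowerShell sketch that creates it on the current user’s desktop (the desktop location is just an example; any folder name ending in that GUID works):

New-Item -ItemType Directory -Path "$env:USERPROFILE\Desktop\GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"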


Win7, Services.msc Error prompt ActiveX

When you open Services.msc you might get an ActiveX error message.

Solution:
Open the Registry Editor and navigate to:
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones\

Create a backup of (export) the first subkey that is not a number (usually a garbled, box-like character, just above the key named 0) and then delete it.

That’s it. Open Services.msc again and the problem should be solved.
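If you want to see the offending key before deleting it, and keep a backup first, the following sketch exports the whole Zones branch and then lists its subkeys; everything except the rogue entry should be named 0 through 4:

reg export "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones" "$env:USERPROFILE\zones-backup.reg"
Get-ChildItem "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\Zones" | Select-Object PSChildName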

Case Conficker Worm F-secure

 

Conficker, also known as Downup, Downadup and Kido, is a computer worm targeting the Microsoft Windows operating system that was first detected in November 2008.[1] It uses flaws in Windows software and dictionary attacks on administrator passwords to propagate while forming a botnet. Conficker has since spread rapidly into what is now believed to be the largest computer worm infection since the 2003 SQL Slammer,[2] with more than seven million government, business and home computers in over 200 countries now under its control. The worm has been unusually difficult to counter because of its combined use of many advanced malware techniques.

Case Conficker / Downadup
Mikko Hypponen & Patrik Runald
F-Secure Corporation

Species Conference
February 2, 2009
Amsterdam

Part 2

Brain: Searching for the first PC virus

Below is a video made by F-Secure (Mikko Hypponen).

A 10-minute video reportage about Mikko Hypponen’s trip to Lahore, Pakistan, to find the authors of the first PC virus, “Brain”. This is the first time Amjad Farooq Alvi and Basit Farooq Alvi have given a video interview about the virus, which spread around the world via floppy disks in 1986.

Enjoy watching…

Hide some file in your image (jpeg,png,…)!

!!! This tutorial is for testing purposes only !!!

 

So, how do you hide files in a JPEG or any other image format?

1. First you need a program called Saint Andrew’s File in Image Hide.exe

 

Or you can find the program using google.com 🙂

 

2. Run Saint Andrew’s File in Image Hide.exe and choose the picture and the file/program you want to hide inside it;

 

3. When you have chosen both, click “Add File To Image”. Your file (in this case hosts.bat) is now hidden inside TestPicture.png. In the picture you still see a normal file description, but hosts.bat is now hidden inside the picture!

4. If you want to extract the file back from the picture, the process is very straightforward: choose the picture that contains the hidden file and click “Extract File From Image”.

5. And that’s it; you now have your picture and your program/file back.
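By the way, the simplest form of this trick needs no special tool at all: you can append an archive to the end of an image, because most image viewers ignore trailing data while most archivers can still find the archive inside. This is a cruder technique than what the tool above uses, shown only to illustrate the idea; the file names are examples. A PowerShell sketch:

# Append secret.zip to the end of TestPicture.png; the result still opens as an image,
# and an archiver such as 7-Zip can usually still open the embedded zip
$bytes = [IO.File]::ReadAllBytes("TestPicture.png") + [IO.File]::ReadAllBytes("secret.zip")
[IO.File]::WriteAllBytes("PictureWithHiddenFile.png", $bytes)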


Remote Desktop Services

Remote Desktop Services in Windows Server 2008 R2, formerly known as Terminal Services in Windows Server 2008 and previous versions, is one of the components of Microsoft Windows (both server and client versions) that allows a user to access applications and data on a remote computer over a network, using the Remote Desktop Protocol (RDP). Terminal Services is Microsoft’s implementation of thin-client terminal server computing, where Windows applications, or even the entire desktop of the computer running Terminal Services, are made accessible to a remote client machine. The client can either be a full-fledged computer, running any operating system as long as the terminal services protocol is supported, or a barebones machine powerful enough to support the protocol (such as Windows FLP). With terminal services, only the user interface of an application is presented at the client. Any input to it is redirected over the network to the server, where all application execution takes place.[1] This is in contrast to application streaming systems, like Microsoft Application Virtualization, in which the applications, while still stored on a centralized server, are streamed to the client on demand and then executed on the client machine. Microsoft changed the name from Terminal Services to Remote Desktop Services with the release of Windows Server 2008 R2 in October 2009.[2] RemoteFX is being added to Remote Desktop Services as part of Windows Server 2008 R2 SP1.

Overview

Terminal Services was first introduced in Windows NT 4.0 Terminal Server Edition. It was significantly improved for Windows 2000 and Windows Server 2003. All versions of Windows XP, except Home edition, also include a Remote Desktop server. Both the underlying protocol as well as the service was again overhauled for Windows Vista and Windows Server 2008.[3] Windows includes two client applications which utilize terminal services: the first, Remote Assistance is available in all versions of Windows XP and successors and allows one user to assist another user. The second, Remote Desktop, allows a user to log in to a remote system and access the desktop, applications and data on the system as well as control it remotely. However, this is only available in certain Windows editions. These are Windows NT Terminal Server; subsequent Windows server editions, Windows XP Professional, and Windows Vista Business, Enterprise and Ultimate. In the client versions of Windows, Terminal Services supports only one logged in user at a time, whereas in the server operating systems, concurrent remote sessions are allowed.

Microsoft provides the client software Remote Desktop Connection (formerly called Terminal Services Client), available for most 32-bit versions of Windows, including Windows Mobile, and Apple‘s Mac OS X, that allows a user to connect to a server running Terminal Services. On Windows, both Terminal Services client and Remote Desktop Protocol (RDP) use TCP port 3389 by default, which is editable[4] in the Windows registry. It also includes an ActiveX control to embed the functionality in other applications or even a web page.[5] A Windows CE version of the client software is also available.[1] Server versions of Windows OSs also include the Remote Desktop for Administration client (a special mode of the Remote Desktop Connection client), which allows remote connection to the traditional session 0 console of the server. In Windows Vista and later this session is reserved for services, and users always log onto session >0. The server functionality is provided by the Terminal Server component, which is able to handle Remote Assistance, Remote Desktop as well as the Remote Administration clients.[1] Third-party developers have created client software for other platforms, including the open source rdesktop client for common Unix platforms.
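For reference, the port mentioned above is stored in the registry, so you can read it (or change it, with care, a service restart, and a matching firewall rule) from PowerShell; a sketch:

# Read the RDP listening port (3389 by default)
Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" -Name PortNumber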

For an enterprise, Terminal Services allows IT departments to install applications on a central server. For example, instead of deploying database or accounting software on all desktops, the applications can simply be installed on a server and remote users can log on and use them via the Internet. This centralization makes upgrading, troubleshooting, and software management much easier. As long as employees have Remote Desktop software, they will be able to use enterprise software. Terminal Services can also integrate with Windows authentication systems to prevent unauthorized users from accessing the applications or data.

Microsoft has a long-standing agreement with Citrix to facilitate sharing of technologies and patent licensing between Microsoft Terminal Services and Citrix XenApp (formerly Citrix MetaFrame and Citrix Presentation Server). In this arrangement, Citrix has access to key source code for the Windows platform enabling their developers to improve the security and performance of the Terminal Services platform. In late December, 2004 the two companies announced a five-year renewal of this arrangement to cover Windows Vista.

 

Architecture

The server component of Remote Desktop Services is Terminal Server (termdd.sys), which listens on TCP port 3389. When an RDP client connects to this port, it is tagged with a unique SessionID and associated with a freshly spawned console session (Session 0, keyboard, mouse and character mode UI only). The login subsystem (winlogon.exe) and the GDI graphics subsystem are then initiated, which handle the job of authenticating the user and presenting the GUI. These executables are loaded in a new session, rather than the console session. When creating the new session, the graphics and keyboard/mouse device drivers are replaced with RDP-specific drivers: RdpDD.sys and RdpWD.sys. RdpDD.sys is the device driver; it captures the UI rendering calls into a format that is transmittable over RDP. RdpWD.sys acts as the keyboard and mouse driver; it receives keyboard and mouse input over the TCP connection and presents them as keyboard or mouse inputs. It also allows creation of virtual channels, which allow other devices, such as disk, audio, printers, and COM ports to be redirected, i.e., the channels act as replacements for these devices. The channels connect to the client over the TCP connection; as the channels are accessed for data, the client is informed of the request, which is then transferred over the TCP connection to the application. This entire procedure is done by the terminal server and the client, with the RDP protocol mediating the correct transfer, and is entirely transparent to the applications.[6] RDP communications are encrypted using 128-bit RC4 encryption. From Windows Server 2003 onwards, it can use FIPS 140 compliant encryption schemes.[1]

Once a client initiates a connection and is informed of a successful invocation of the terminal services stack at the server, it loads up the device as well as the keyboard/mouse drivers. The UI data received over RDP is decoded and rendered as UI, whereas the keyboard and mouse inputs to the Window hosting the UI is intercepted by the drivers, and transmitted over RDP to the server. It also creates the other virtual channels and sets up the redirection. RDP communication can be encrypted; using either low, medium or high encryption. With low encryption, user input (outgoing data) is encrypted using a weak (40-bit RC4) cipher. With medium encryption, UI packets (incoming data) are encrypted using this weak cipher as well. The setting “High encryption (Non-export)” uses 128-bit RC4 encryption and “High encryption (Export)” uses 40-bit RC4 encryption.

 

Terminal Server

Terminal Server is the server component of Terminal services. It handles the job of authenticating clients, as well as making the applications available remotely. It is also entrusted with the job of restricting the clients according to the level of access they have. The Terminal Server respects the configured software restriction policies, so as to restrict the availability of certain software to only a certain group of users. The remote session information is stored in specialized directories, called Session Directory which is stored at the server. Session directories are used to store state information about a session, and can be used to resume interrupted sessions. The terminal server also has to manage these directories. Terminal Servers can be used in a cluster as well.[1]

In Windows Server 2008, it has been significantly overhauled. While logging in, if the user logged on to the local system using a Windows Server Domain account, the credentials from the same sign-on can be used to authenticate the remote session. However, this requires Windows Server 2008 to be the terminal server OS, while the client OS is limited to Windows Server 2008, Windows Vista and Windows 7. In addition, the terminal server can provide access to only a single program, rather than the entire desktop, by means of a feature named RemoteApp. Terminal Services Web Access (TS Web Access) makes a RemoteApp session invocable from the web browser. It includes the TS Web Access Web Part control which maintains the list of RemoteApps deployed on the server and keeps the list up to date. Terminal Server can also integrate with Windows System Resource Manager to throttle resource usage of remote applications.[3]

Terminal Server is managed by the Terminal Server Manager Microsoft Management Console snap-in. It can be used to configure the sign in requirements, as well as to enforce a single instance of remote session. It can also be configured by using Group Policy or Windows Management Instrumentation. It is, however, not available in client versions of Windows OS, where the server is pre-configured to allow only one session and enforce the rights of the user account on the remote session, without any customization.

 

Terminal Services Gateway

The Terminal Services Gateway service component, also known as TS Gateway, can tunnel the Remote Desktop Protocol session through an HTTPS channel.[8] This increases the security of Remote Desktop Services by encapsulating the session with Transport Layer Security (TLS).[9] This also allows the option to use Internet Explorer as the RDP client.

This feature was introduced in the Windows Server 2008 and Windows Home Server products.

It is important to note that, at the time of writing (April 2011), there are no Mac OS or Linux clients that support connecting through a Terminal Services Gateway.

 

Remote Desktop Connection

Remote Desktop Connection (RDC, also called Remote Desktop, formerly known as Microsoft Terminal Services Client, or mstsc) is the client application for Remote Desktop Services. It allows a user to remotely log in to a networked computer running the terminal services server. RDC presents the desktop interface (or application GUI) of the remote system, as if it were accessed locally.[1] With version 6.0, if the Desktop Experience component is plugged into the remote server, the chrome of the applications will resemble the local applications, rather than the remote ones. In this scenario, the remote applications will use the Aero theme if a Windows Vista machine running Aero is connected to the server.[3] Later versions of the protocol also support rendering the UI in full 24-bit color, as well as resource redirection for printers, COM ports, disk drives, mice and keyboards. With resource redirection, remote applications are able to use the resources of the local computer. Audio is also redirected, so that any sounds generated by a remote application are played back at the client system.[1][3] In addition to the regular username/password for authorizing the remote session, RDC also supports using smart cards for authorization.[1] With RDC 6.0, the resolution of a remote session can be set independently of the settings at the remote computer. In addition, a remote session can also span multiple monitors at the client system, independent of the multi-monitor settings at the server. It also prioritizes UI data as well as keyboard and mouse inputs over print jobs or file transfers so as to make the applications more responsive. It also redirects plug and play devices such as cameras, portable music players, and scanners, so that input from these devices can be used by the remote applications as well.[3] RDC can also be used to connect to WMC remote sessions; however, since WMC does not stream video using Remote Desktop Protocol, only the applications can be viewed this way, not any media. RDC can also be used to connect to computers that are exposed via Windows Home Server RDP Gateway over the Internet. RDC can be used to reboot the remote computer with the CTRL-ALT-END key combination.
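The client can also be driven from the command line; for example (the server name is a placeholder):

mstsc /v:server01:3389 /f     # connect to server01 on the default port, full screen
mstsc /v:server01 /admin      # connect to the administrative session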

 

RemoteApp

RemoteApp (or TS RemoteApp) is a special mode of Remote Desktop Services, available only in Remote Desktop Connection 6.1 and above (with Windows Server 2008 being the RemoteApp server), where remote session configuration is integrated into the client operating system. The RDP 6.1 client ships with Windows XP SP3, KB952155 for Windows XP SP2 users,[12] Windows Vista SP1 and Windows Server 2008. The UI for the RemoteApp is rendered in a window over the local desktop, and is managed like any other window for local applications. The end result of this is that remote applications behave largely like local applications. The task of establishing the remote session, as well as redirecting local resources to the remote application, is transparent to the end user.[13] Multiple applications can be started in a single RemoteApp session, each with their own windows.[14]

A RemoteApp can be packaged either as a .rdp file or distributed via an .msi Windows Installer package. When packaged as an .rdp file (which contains the address of the RemoteApp server, authentication schemes to be used, and other settings), a RemoteApp can be launched by double clicking the file. It will invoke the Remote Desktop Connection client, which will connect to the server and render the UI. The RemoteApp can also be packaged in a Windows Installer database, installing which can register the RemoteApp in the Start Menu as well as create shortcuts to launch it. A RemoteApp can also be registered as handler for filetypes or URIs. Opening a file registered with RemoteApp will first invoke Remote Desktop Connection, which will connect to the terminal server and then open the file. Any application which can be accessed over Remote Desktop can be served as a RemoteApp.[13]
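For a sense of what such a file contains, here is a trimmed sketch of the RemoteApp-relevant lines of an .rdp file; the server name and program alias are placeholders:

full address:s:rds01.example.com
remoteapplicationmode:i:1
remoteapplicationprogram:s:||wordpad
remoteapplicationname:s:WordPad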

Windows 7 includes built-in support for RemoteApp publishing, but it has to be enabled manually in the registry, since there is no RemoteApp management console in client versions of Microsoft Windows.

 

Windows Desktop Sharing

From Windows Vista onwards, Terminal Services also includes a multi-party desktop sharing capability known as Windows Desktop Sharing. Unlike Terminal Services, which creates a new user session for every RDP connection, Windows Desktop Sharing can host the remote session in the context of the currently logged in user without creating a new session, and make the Desktop, or a subset of it, available over Remote Desktop Protocol.[16] Windows Desktop Sharing can be used to share the entire desktop, a specific region, or a particular application.[17] Windows Desktop Sharing can also be used to share multi-monitor desktops. When sharing applications individually (rather than the entire desktop), the windows are managed (whether they are minimized or maximized) independently at the server and the client side.[17]

The functionality is only provided via a public API, which can be used by any application to provide screen sharing functionality. Windows Desktop Sharing API exposes two objects: RDPSession for the sharing session and RDPViewer for the viewer. Multiple viewer objects can be instantiated for one Session object. A viewer can either be a passive viewer, who is just able to watch the application like a screen cast, or an interactive viewer, who is able to interact in real time with the remote application.[16] The RDPSession object contains all the shared applications, represented as Application objects, each with Window objects representing their on-screen windows. Per-application filters capture the application Windows and package them as Window objects.[18] A viewer must authenticate itself before it can connect to a sharing session. This is done by generating an Invitation using the RDPSession. It contains an authentication ticket and password. The object is serialized and sent to the viewers, who need to present the Invitation when connecting.[16][18]

Windows Desktop Sharing API is used by Windows Meeting Space for providing application sharing functionality among peers; however, the application does not expose all the features supported by the API.[17] It is also used by Remote Assistance.

 

Source: Wikipedia

 

10 reasons to migrate to Exchange 2010

 

1: Continuous replication

International research suggests that companies lose $10,000 an hour to email downtime. This version of Exchange enables continuous replication of data, which can minimise disruptions dramatically and spare organisations from such loss. Moreover, Microsoft reckons the costs of deploying Exchange 2010 can be recouped within six months thanks in part to the improvements in business continuity.

2: Virtualisation

Exchange 2010 supports virtualisation, allowing consolidation. Server virtualisation is not only a cost cutter, reducing expenditure related to maintenance, support staff, power, cooling, and space. It also improves business continuity: when one virtualisation host is down, the virtual machines can run on another host with minimal downtime.

3: Cost savings on storage

Exchange 2010 has, according to Microsoft, 70% less disk I/O (input/output) than Exchange 2007. For this reason, the firm recommends moving away from SAN storage solutions and adopting less expensive direct attached storage. This translates to real and significant cost savings for most businesses.

4: Larger mailboxes

Coupling the ability to use larger, slower SATA (or SAS) disks with changes to the underlying mailbox database architecture means that far larger mailbox sizes will become the norm.

5: Voicemail transcription

Unified Messaging, first introduced with Exchange 2007, offers the concept of the “universal inbox,” where email and voice mail are available from a single location and can consequently be accessed from any of the following clients:

  • Outlook 2007 and later
  • Outlook Web App
  • Outlook Voice Access — access from any phone
  • Windows Mobile 6.5 or later devices

A new feature in Exchange 2010, Voicemail Preview, delivers text transcripts of received voicemails, saving the time it takes to listen to the message. Upon receiving a voice message, the recipient can glance at the preview and decide whether it is an urgent matter. This and other improvements, such as managing voice and email from a single directory (using AD), offer organisations the opportunity to discard third-party voicemail solutions in favour of Exchange 2010.

6: Help desk cost reduction

Exchange 2010 offers the potential to reduce help desk costs by enabling users to perform common tasks that would normally require a help desk call. Role Based Access Control (RBAC) allows delegation based on job function which, coupled with the web-based Exchange Control Panel (ECP), enables users to assume responsibility for distribution lists, update personal information held in AD, and track messages, as the sketch below shows. This reduces the call volumes placed on the help desk, with obvious financial benefits.
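As one concrete example, letting users manage their own distribution groups through ECP amounts to assigning the built-in MyDistributionGroups role to the default assignment policy; a one-line Exchange Management Shell sketch (the policy name is the out-of-the-box default):

New-ManagementRoleAssignment -Role MyDistributionGroups -Policy "Default Role Assignment Policy"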

7: High(er) Availability

Exchange 2010 builds upon the continuous replication technologies first introduced in Exchange 2007. The technology is far simpler to deploy than in Exchange 2007, as the complexities of a cluster install are taken away from the administrator. It incorporates easily with existing mailbox servers and offers protection at the database level (with Database Availability Groups) rather than the server level. By supporting automatic failover, this feature allows faster recovery times than previously possible.
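As a sketch of how much simpler this has become, standing up a DAG and adding a database copy is a handful of Exchange Management Shell commands; the server, witness and database names here are placeholders:

New-DatabaseAvailabilityGroup -Name DAG1 -WitnessServer HUB01
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX01
Add-DatabaseAvailabilityGroupServer -Identity DAG1 -MailboxServer MBX02
Add-MailboxDatabaseCopy -Identity DB01 -MailboxServer MBX02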

8: Native archiving

A large hole in previous Exchange offerings was the lack of a native managed archive solution. This saw either the proliferation of unmanaged PSTs or the expense of deploying third-party solutions. With the advent of Exchange 2010 — and in particular the upcoming arrival of SP1 this year — a basic archiving suite is now available out-of-the-box.

9: Running on-premise or in the cloud

Exchange 2010 offers organisations the option to run Exchange on-premise or in the cloud. This approach even allows organisations to run some mailboxes in the cloud and some on locally held Exchange resources. Companies can take advantage of very competitive cloud rates for some mailboxes, while deciding how much control to relinquish by continuing to host most mailboxes on local servers.

10: Easier calendar sharing

With Federation for Exchange 2010, employees can share calendars and distribution lists with external recipients more easily. The application allows them to schedule meetings with partners and customers as if they belonged to the same organisation. This might not appeal to every organisation, but those investing in collaboration technologies will see the value Exchange 2010 offers.

 

MS Forefront Threat Management Gateway

What is TMG?

 

Microsoft Forefront Threat Management Gateway (Forefront TMG), formerly known as Microsoft Internet Security and Acceleration Server (ISA Server), is a network security and protection solution for Microsoft Windows, described by Microsoft as a product that “enables businesses by allowing employees to safely and productively use the Internet for business without worrying about malware and other threats”.

 

Features

Microsoft Forefront TMG offers a set of features which include:

  1. Routing and remote access features: Microsoft Forefront TMG can act as a router, an Internet gateway, a virtual private network (VPN) server, a network address translation (NAT) server and a proxy server.
  2. Security features: Microsoft Forefront TMG is a firewall which can inspect network traffic (including web content, secure web content and emails) and filter out malware, attempts to exploit security vulnerabilities, and content that does not match a predefined security policy. In a technical sense, Microsoft Forefront TMG offers application layer protection, stateful filtering, content filtering and anti-malware protection.
  3. Network performance features: Microsoft Forefront TMG can also improve network performance: it can compress web traffic to improve communication speed, and it offers web caching: it can cache frequently accessed web content so that users can access it faster from the local network cache. Microsoft Forefront TMG 2010 can also cache data received through the Background Intelligent Transfer Service, such as software updates published on the Microsoft Update website.

 

Microsoft Forefront TMG 2010

Microsoft Forefront Threat Management Gateway 2010 (Forefront TMG 2010) was released on 17 November 2009.[1] It is built on the foundation of ISA Server 2006 and provides enhanced web protection, native 64-bit support, support for Windows Server 2008 and Windows Server 2008 R2, malware protection and BITS caching. Service Pack 1 for this product was released on 23 June 2010.[14] It includes several new features to support the Windows Server 2008 R2 and Microsoft SharePoint 2010 product lines.


What’s New in Hyper-V in Windows Server 2008 R2


What are the major changes?

The Hyper-V™ role enables you to create and manage a virtualized server computing environment by using a technology that is part of Windows Server® 2008 R2. The improvements to Hyper-V include new live migration functionality, support for dynamic virtual machine storage, and enhancements to processor and networking support.

The following changes are available in Windows Server 2008 R2:

  • Live migration
  • Dynamic virtual machine storage
  • Enhanced processor support
  • Enhanced networking support

What does Hyper-V do?

Hyper-V is a role in Windows Server 2008 R2 that provides you with the tools and services you can use to create a virtualized server computing environment. This virtualized environment can be used to address a variety of business goals aimed at improving efficiency and reducing costs. This type of environment is useful because you can create and manage virtual machines, which allows you to run multiple operating systems on one physical computer and isolate the operating systems from each other.

Who will be interested in this feature?

The Hyper-V role is used by IT professionals who need to create a virtualized server computing environment.

What new functionality does Hyper-V provide?

Improvements to Hyper-V include new live migration functionality.

Live migration

Live migration allows you to transparently move running virtual machines from one node of the failover cluster to another node in the same cluster without a dropped network connection or perceived downtime. Live migration requires the failover clustering role to be added and configured on the servers running Hyper-V. In addition, failover clustering requires shared storage for the cluster nodes. This can include an iSCSI or Fibre Channel Storage Area Network (SAN). All virtual machines are stored in the shared storage area, and the running virtual machine state is managed by one of the nodes.

On a given server running Hyper-V, only one live migration (to or from the server) can be in progress at a given time. This means that you cannot use live migration to move multiple virtual machines simultaneously.

We recommend using the new Cluster Shared Volumes (CSV) feature of Failover Clustering in Windows Server 2008 R2 with live migration. CSV provides increased reliability when used with live migration and virtual machines, and also provides a single, consistent file namespace so that all servers running Windows Server 2008 R2 see the same storage.
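On Windows Server 2008 R2 a live migration can be triggered from the Failover Cluster Manager or from PowerShell with the FailoverClusters module; a sketch, assuming a clustered VM named VM1 and a destination node named NODE2:

Import-Module FailoverClusters
# Move the running VM to the other node without perceived downtime
Move-ClusterVirtualMachineRole -Name "VM1" -Node "NODE2"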

Why is this change important?

Live migration does the following to facilitate greater flexibility and value:

  • Provides better agility. Datacenters with multiple servers running Hyper-V can move running virtual machines to the best physical computer for performance, scaling, or optimal consolidation without affecting users.
  • Reduces costs. Datacenters with multiple servers running Hyper-V can service their servers without causing virtual machine downtime or the need to schedule a maintenance window. Datacenters will also be able to reduce power consumption by dynamically increasing consolidation ratios and turning off unused servers during times of lower demand.
  • Increases productivity. It is possible to keep virtual machines online, even during maintenance, which increases productivity for both users and server administrators.

Are there any dependencies?

Live migration requires the failover clustering role to be added and configured on the servers running Hyper-V.

What existing functionality is changing?

The following list briefly summarizes the improvements to existing functionality in Hyper-V:

  • Dynamic virtual machine storage. Improvements to virtual machine storage include support for hot plug-in and hot removal of the storage on a SCSI controller of the virtual machine. By supporting the addition or removal of virtual hard disks and physical disks while a virtual machine is running, it is possible to quickly reconfigure virtual machines to meet changing requirements. Hot plug-in and removal of storage requires the installation of Hyper-V integration services (included in Windows Server 2008 R2) on the guest operating system.
  • Enhanced processor support. You can now have up to 64 physical processor cores. The increased processor support makes it possible to run even more demanding workloads on a single host. In addition, there is support for Second-Level Address Translation (SLAT) and CPU Core Parking. CPU Core Parking enables Windows and Hyper-V to consolidate processing onto the fewest possible processor cores, and suspends inactive processor cores. SLAT adds a second level of paging below the architectural x86/x64 paging tables in x86/x64 processors. It provides an indirection layer from virtual machine memory access to the physical memory access. In virtualization scenarios, hardware-based SLAT support improves performance. On Intel-based processors, this is called Extended Page Tables (EPT), and on AMD-based processors, it is called Nested Page Tables (NPT).
  • Enhanced networking support. Support for jumbo frames, which was previously available in nonvirtual environments, has been extended to be available on virtual machines. This feature enables virtual machines to use jumbo frames up to 9,014 bytes in size, if the underlying physical network supports it.

Which editions include this role?

This role is available in all editions of Windows Server 2008 R2, except for Windows Server® 2008 R2 for Itanium-Based Systems and Windows® Web Server 2008 R2.

Source: Microsoft.com

Anonymous Browsing with TOR Windows 7

What is Tor?

Tor is free software and an open network that helps you defend against a form of network surveillance known as traffic analysis, which threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Why Anonymity Matters

Tor protects you by bouncing your communications around a distributed network of relays run by volunteers all around the world: it prevents somebody watching your Internet connection from learning what sites you visit, and it prevents the sites you visit from learning your physical location. Tor works with many of your existing applications, including web browsers, instant messaging clients, remote login, and other applications based on the TCP protocol.

Why we need Tor

Using Tor protects you against a common form of Internet surveillance known as “traffic analysis.” Traffic analysis can be used to infer who is talking to whom over a public network. Knowing the source and destination of your Internet traffic allows others to track your behavior and interests. This can impact your checkbook if, for example, an e-commerce site uses price discrimination based on your country or institution of origin. It can even threaten your job and physical safety by revealing who and where you are. For example, if you’re travelling abroad and you connect to your employer’s computers to check or send mail, you can inadvertently reveal your national origin and professional affiliation to anyone observing the network, even if the connection is encrypted.

How does traffic analysis work? Internet data packets have two parts: a data payload and a header used for routing. The data payload is whatever is being sent, whether that’s an email message, a web page, or an audio file. Even if you encrypt the data payload of your communications, traffic analysis still reveals a great deal about what you’re doing and, possibly, what you’re saying. That’s because it focuses on the header, which discloses source, destination, size, timing, and so on.

A basic problem for the privacy minded is that the recipient of your communications can see that you sent it by looking at headers. So can authorized intermediaries like Internet service providers, and sometimes unauthorized intermediaries as well. A very simple form of traffic analysis might involve sitting somewhere between sender and recipient on the network, looking at headers.

But there are also more powerful kinds of traffic analysis. Some attackers spy on multiple parts of the Internet and use sophisticated statistical techniques to track the communications patterns of many different organizations and individuals. Encryption does not help against these attackers, since it only hides the content of Internet traffic, not the headers.

Staying anonymous

Tor can’t solve all anonymity problems. It focuses only on protecting the transport of data. You need to use protocol-specific support software if you don’t want the sites you visit to see your identifying information. For example, you can use Torbutton while browsing the web to withhold some information about your computer’s configuration.

Also, to protect your anonymity, be smart. Don’t provide your name or other revealing information in web forms. Be aware that, like all anonymizing networks that are fast enough for web browsing, Tor does not provide protection against end-to-end timing attacks: If your attacker can watch the traffic coming out of your computer, and also the traffic arriving at your chosen destination, he can use statistical analysis to discover that they are part of the same circuit.

Configuring Windows 7 to browse with TOR:

1. Go to the website of the Tor Project: https://www.torproject.org/

2. Click to download the stable Tor release;

3. Open the downloaded .exe file to start the setup;

4. During the install, leave the default “Full” installation selected;

5. Open the newly installed program “Vidalia” and click “Start Tor”.

Within 10-15 seconds you will be connected to Tor;
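Under the hood, Tor exposes a local SOCKS proxy, by default on 127.0.0.1 port 9050, and TorButton points Firefox at it (directly or through the bundle’s local HTTP proxy). You can confirm the proxy is listening from a command prompt; a quick sketch:

netstat -an | findstr "9050"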

6. If you have problems with the default TorButton, try this:

Go to https://www.torproject.org/torbutton/ and install the stable TorButton for Firefox 5.0!

7. When the TorButton install has finished, restart Firefox and you’ll see the new TorButton;

8. Click this new button and choose “Toggle Tor Status”;

Tor is now enabled in your browser; the TorButton turns green;

9. Tor is now started, as you can see in step 5, and enabled for your Firefox browser via TorButton.

Now you can start browsing the internet and change your identity by clicking “Use a New Identity”, which changes your IP address.

Every time you click “Use a New Identity” you will get a different IP address in Firefox, as long as Tor is enabled.

If you want to check your IP address, go to http://www.whatismyip.com/ and you will see the current IP address you present when browsing the internet with Firefox.

For more information; https://www.torproject.org/

Disable NetBIOS over TCP/IP in Windows 7 ent.

NetBIOS over TCP/IP (NBT, or sometimes NetBT) is a networking protocol that allows legacy computer applications relying on the NetBIOS API to be used on modern TCP/IP networks.

NetBIOS was developed in the early 1980s, targeting very small networks (about a dozen computers). Some applications still use NetBIOS, and do not scale well in today’s networks of hundreds of computers when NetBIOS is run over NBF. When properly configured, NBT allows those applications to be run on large TCP/IP networks (including the whole Internet, although that is likely to be subject to security problems) without change.

NetBIOS provides three distinct services:

  • Name service for name registration and resolution (port: 137)
  • Datagram distribution service for connectionless communication (port: 138)
  • Session service for connection-oriented communication (port: 139)

If you want to disable NetBIOS over TCP/IP, take the following steps:

1. Right-click the network icon in the lower right corner of the screen:

2. Choose “Open Network And Sharing Center”

3. Next, click “Change Adapter Settings”

4. Right-click your network adapter and choose “Properties”

5. In the adapter’s properties, select “Internet Protocol Version 4” and click “Properties”.

6. In the Properties window, click the “Advanced” button

7. Windows opens the “Advanced TCP/IP Settings” window

In that last window, click the “WINS” tab and choose “Disable NetBIOS over TCP/IP”

NetBIOS over TCP/IP is now disabled.
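The same setting can also be applied to every TCP/IP-enabled adapter at once through WMI, which is handy on machines with many adapters; a PowerShell sketch (2 = disable, 1 = enable, 0 = use the DHCP-supplied default). Afterwards, nbtstat -n should report no active names:

# Disable NetBIOS over TCP/IP on all IP-enabled adapters
Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled = true" |
    ForEach-Object { $_.SetTcpipNetbios(2) }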

LM Password and NTLMv2 Password

Introduction:

Passwords tend to be our main and sometimes only line of defense against intruders. Even if attackers do not have physical access to a machine, they can often access a server through the Remote Desktop Protocol or authenticate to a service via an outward-facing web application.

The purpose of this article is to educate you on how Windows creates and stores password hashes, and how those hashes are cracked. After demonstrating how to crack Windows passwords I will provide some tips for ensuring you are not vulnerable to these types of attacks.

How Windows Stores Passwords:

Windows-based computers utilize two methods for hashing user passwords, each with drastically different security implications: LAN Manager (LM) and NT LAN Manager version 2 (NTLMv2). A hash is the result of a cryptographic function that takes an arbitrarily sized string of data, performs a one-way mathematical function on it, and returns a fixed-size string.

LM Password Hashes:

The LAN Manager hash was one of the first password hashing algorithms to be used by Windows operating systems, and the only version supported until the advent of NTLMv2, used in Windows 2000, XP, Vista, and 7. These newer operating systems still support LM hashes for backwards compatibility purposes; however, LM is disabled by default in Windows Vista and Windows 7.

The LM hash of a password is computed using a six step process:

  1. The user’s password is converted into all uppercase letters.
  2. The password has null characters added to it until it equals 14 characters.
  3. The new password is split into two 7-character halves.
  4. These values are used to create two DES encryption keys, one from each half, with a parity bit added to each to create 64-bit keys.
  5. Each DES key is used to encrypt a preset ASCII string (KGS!@#$%), resulting in two 8-byte ciphertext values.
  6. The two 8-byte ciphertext values are combined to form a 16-byte value, which is the completed LM hash.

In practice, the password “PassWord123” would be converted as follows:

    1. PASSWORD123
    2. PASSWORD123000
    3. PASSWOR and D123000
    4. PASSWOR1 and D1230001
    5. E52CAC67419A9A22 and 664345140A852F61
    6. E52CAC67419A9A22664345140A852F61

LM stored passwords have a few distinct disadvantages. The first of these is that the encryption is based on the Data Encryption Standard (DES). DES originated from a 1970s IBM project that was eventually modified by NIST, sponsored by the NSA, and released as an ANSI standard in 1981. DES was considered secure for many years but came under scrutiny in the nineties due to its small key size of only 56 bits. This came to a head in 1998 when the Electronic Frontier Foundation was able to crack DES in about 23 hours. Since then, DES has been considered insecure and has been replaced with Triple DES and AES. In short, it is another encryption standard that has fallen victim to modern computing power and can be cracked in no time at all.

Perhaps the biggest weakness in the LM hash is in the creation of the DES keys. In this process, a user-supplied password is automatically converted to all uppercase, padded to fourteen characters (the maximum length for an LM hashed password), and split into two seven-character halves. Consider that there are 95 to the power of 14 different possible passwords made up of 14 printable ASCII characters. This decreases to 95 to the power of 7 possible passwords per half once the password is split, and then to 69 to the power of 7 possible passwords when only uppercase characters are allowed. Essentially, this makes the use of varying character cases and increased password length nearly useless when the password is stored as an LM hash, which makes LM passwords incredibly vulnerable to brute force cracking attempts.
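Because of these weaknesses, a common hardening step is to stop Windows from storing LM hashes altogether. This is controlled by the NoLmHash value under the Lsa registry key (1 means newly changed passwords no longer get an LM hash stored); a PowerShell sketch to check it:

# The value may not exist if it has never been set explicitly
Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" -Name NoLmHash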

NTLMv2 Password Hashes:

NT LAN Manager (NTLM) is the Microsoft authentication protocol created to be the successor of LM. Eventually enhanced, NTLMv2 was accepted as the new authentication method of choice and was implemented with Windows NT 4.

The creation of an NTLMv2 hash (henceforth referred to as the NT hash) is actually a much simpler process in terms of what the operating system does, relying on the MD4 hashing algorithm to create the hash from a series of mathematical calculations. The MD4 algorithm is used three times in order to produce the NT hash. In practice, the password “PassWord123” would be represented by the MD4 hash “94354877D5B87105D7FEC0F3BF500B33”.


MD4 is considered to be significantly stronger than DES, as it allows for longer password lengths, distinguishes between uppercase and lowercase letters, and does not split the password into smaller, easier-to-crack chunks.

Perhaps the biggest complaint with NTLMv2 hashes is that Windows does not utilize a technique called salting. Salting is a technique in which a random value is combined with the password when computing the hash. This means that the same password could have two completely different hash values, which would be ideal.

With this being the case, it is possible for a user to generate what are called rainbow tables. Rainbow tables are not just coffee tables painted with bright colors; they are tables containing hash values for every possible password up to a certain number of characters. Using a rainbow table, you can simply take the hash value you have extracted from the target computer and search for it. Once it is found in the table, you have the password. As you can imagine, a rainbow table for even a small number of characters can grow to be very large, meaning that their generation, storage, and indexing can be quite a task.

More information:

http://en.wikipedia.org/wiki/LM_hash

http://en.wikipedia.org/wiki/NTLM

Exchange Server 2010 Service Pack 2 Announced

Exchange Server 2010 SP2 is due for release in the second half of this year and Microsoft has released some more detailed information on what to expect in this update.

With SP2, the following new features and capabilities will be included:

  • Outlook Web App (OWA) Mini: A browse-only version of OWA designed for low-bandwidth and low-resolution devices. Based on the existing Exchange 2010 SP1 OWA infrastructure, this feature provides a simple text-based interface for navigating the user’s mailbox and accessing the global address list from a wide range of mobile devices.
  • Cross-Site Silent Redirection for Outlook Web App: With Service Pack 2, you will have the ability to enable silent redirection when a CAS must redirect an OWA request to CAS infrastructure located in another Active Directory site. Silent redirection can also provide a single sign-on experience when Forms-Based Authentication is used.
  • Hybrid Configuration Wizard: Organizations can choose to deploy a hybrid scenario where some mailboxes are on-premises and some are in Exchange Online with Microsoft Office 365. Hybrid deployments may be needed for migrations taking place over weeks, months or indefinite timeframes. This wizard helps simplify the configuration of Exchange sharing features, such as calendar and free/busy sharing, secure mail flow, mailbox moves, and online archives.
  • Address Book Policies: Allows organizations to segment their address books into smaller, scoped subsets of users, providing a more refined user experience than the previous manual configuration approach. We also blogged about this new feature recently in GAL Segmentation, Exchange Server 2010 and Address Book Policies.
  • Customer Requested Fixes: All fixes contained within update rollups released prior to Service Pack 2 will also be contained within SP2. Details of our regular Exchange 2010 release rhythm can be found in Exchange 2010 Servicing.

One thing to note is that SP2 will require an Active Directory schema update.

In order to support these newly added features, there will be a requirement for customers to update their Active Directory schema. We’ve heard the feedback loud and clear from our customers regarding the deployment delays that a schema update can cause, so with the release of Exchange 2010 SP2 we are communicating the required changes ahead of release in order to assist our customers with planning their upgrade path ahead of time.
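When the time comes, the schema update itself is applied with the familiar setup switches, run once per forest from the SP2 binaries by an account with Schema Admins and Enterprise Admins rights; a sketch:

setup.com /PrepareSchema
setup.com /PrepareAD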

Microsoft Office 365

Microsoft Office 365 is a commercial software plus services offering from Microsoft Corporation, with the initial plans including a Professional subscription (for organizations of 25 users or smaller) and an Enterprise subscription (for larger organizations).[1] Office 365 was announced in the autumn of 2010, and was made available to the public on 28 June 2011.[2]

Office 365 includes the Microsoft Office suite of desktop applications and hosted versions of Microsoft’s Server products (including Exchange Server, SharePoint Server, and Lync Server), delivered and accessed over the Internet,[3] in effect, the next version of Business Productivity Online Services (BPOS).[4]

Microsoft Office 365

On June 28, 2011, Steve Ballmer, Microsoft’s chief executive officer, announced the availability of Office 365, Microsoft’s next generation productivity service. Office 365 is the culmination of more than 20 years of experience delivering world class productivity solutions to people and businesses of all sizes. It brings together Office, SharePoint, Exchange, and Lync in an always-up-to-date cloud service. Customers may try it and buy it at www.office365.com.

Exchange 2007 Enterprise SP1 IMAP service, load is 100%

If you have a problem where the IMAP service on Exchange 2007 SP1 is taking 100% of your CPUs, you should check how many items your users have in their folders.

The problem occurs when a user clicks on a folder that contains a large number of items (300,000 or more) and then runs some filtering or rules in that folder.

In our organization the Microsoft.Exchange.Imap4.exe service loaded 100% of the CPU on our quad-core machine, and the IMAP4 service had to be restarted.

I know that Microsoft recommends a maximum of 20,000 items per folder with Exchange 2007, but you may still run into this problem.

If you want to check how many items your users have, you can simply open the Exchange Management Shell (PowerShell) and type:

Get-Mailbox | Get-MailboxFolderStatistics | Where {$_.ItemsInFolder -gt 20000} | fl identity, itemsinfolder

This will list any user in your org that has more than 20,000 items in any single folder (trash, inbox, …).

You will get something like this:

Identity : yourdomain.local/Users/Sales/Franz Kafka\Inbox

ItemsInFolder : 35851

Identity : yourdomain.local/Users/Sales/John Derre\Trash

ItemsInFolder : 50851

and so on…
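Note that Get-Mailbox returns only the first 1,000 mailboxes by default, so in a larger organization a variant like this covers everyone and lists the worst offenders first; a sketch:

Get-Mailbox -ResultSize Unlimited | Get-MailboxFolderStatistics |
    Where {$_.ItemsInFolder -gt 20000} |
    Sort ItemsInFolder -Descending | fl Identity, ItemsInFolder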

This affects only IMAP users, not MAPI based clients (Outlook).

In our Exchange organization, more than 40,000-50,000 items in a folder is a problem (for IMAP users!).

You can limit this with policies on folder item counts, etc.

ExMon is also a very good tool for the Exchange Mailbox server role:
http://www.msexchange.org/tutorials/Microsoft-Exchange-Server-User-Monitor.html

With ExMon you can check in real time which user is consuming a lot of CPU and then check that user’s mailbox.

End