Saturday, May 31, 2008

Finding the geographical location of client web requests

Nowadays, we see many sites that show our geographical location on the screen when we hit the page.
So how do the sites find out this information? Well, on the server side, we can obtain the IP address of the client (or of the Internet provider's proxy) from the request object.

From this IP address, we can query a database (typically a set of CSV files) and obtain the location information. The country-level information is free and available for download here.
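The lookup itself is simple: the CSV maps ranges of IP addresses (stored as 32-bit integers) to country codes, so we convert the client's address to an integer and scan for the matching range. A minimal sketch, using a made-up three-row sample in place of the real downloaded file (the exact column layout varies by provider):

```python
import csv
import io

# Hypothetical sample of a GeoIP country CSV: range start, range end
# (both as 32-bit integers), and the country code for that range.
SAMPLE_CSV = """16777216,16777471,AU
134744064,134744319,US
3232235520,3232301055,ZZ
"""

def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to a 32-bit integer."""
    a, b, c, d = (int(part) for part in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def lookup_country(ip, csv_text=SAMPLE_CSV):
    """Scan the CSV ranges and return the country code containing the IP."""
    ip_num = ip_to_int(ip)
    for start, end, country in csv.reader(io.StringIO(csv_text)):
        if int(start) <= ip_num <= int(end):
            return country
    return None

print(lookup_country("8.8.8.8"))  # falls inside the second sample range
```

A real implementation would keep the ranges sorted and binary-search them instead of scanning, since the country file has tens of thousands of rows.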

For state and city information, you have to shell out some moolah.. :)
More information can be found here.

Wednesday, May 28, 2008

Kernel mode and User mode

In IIS 6.0, the core HTTP engine (HTTP.SYS) runs in kernel mode and all worker processes run in user mode. So what exactly is the difference between kernel mode and user mode programs?

User mode and kernel mode refer to the privilege level a process has with respect to the system hardware. The closer a process runs to the hardware, the more damage a bug in it can do to the whole system. In any OS, you want to separate applications from OS services because you want the OS to remain functional even if an application crashes.

A typical OS architecture has two rings: one running in kernel (system) mode, and one running in user mode. The kernel has full control of the hardware and provides abstractions for the processes running in user mode. A process running in user mode cannot access the hardware directly, and must use the abstractions provided by the kernel. It can request certain kernel services by making "system calls" or kernel calls. The kernel offers only the basic services; all others are provided by programs running in user mode.
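This split is easy to see even from a high-level language: every time a program needs the hardware, it goes through a system call into the kernel. A minimal sketch in Python, whose os module is a thin wrapper over those calls (the file name demo.txt is just an example):

```python
import os

# Each line below is user-mode code that traps into the kernel via a
# system call; the program itself never touches the disk or the process
# table directly.

pid = os.getpid()  # getpid() syscall: ask the kernel for our process ID

# open()/write()/close() syscalls: the kernel, not our process, drives
# the actual disk hardware on our behalf.
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
os.write(fd, b"written via the kernel\n")
os.close(fd)

print(pid)
```

On Linux you can watch exactly these transitions by running the same program under strace.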

Kernel mode programs also run faster than user mode programs, since they talk to the hardware directly and avoid the overhead of crossing the user/kernel boundary on every system call.

Wednesday, May 21, 2008

Overcoming the 2 connection limit in browsers

The HTTP 1.1 specification recommends that browsers open no more than 2 simultaneous connections per host name. This limitation can cause slow page loads for pages containing a lot of images or other references to external resources.
To overcome this limitation, we can create sub-domains and serve the static content from these sub-domains, so that the browser can open 2 extra connections for each sub-domain.
This trick is explained in the following links:
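One detail worth getting right when sharding assets across sub-domains: each asset should always be served from the same sub-domain, otherwise the browser cache is defeated. A minimal sketch, hashing the asset path to pick a host (the sub-domain names are made up for illustration):

```python
import hashlib

# Hypothetical sub-domains, all pointing at the same static content.
SHARDS = ["static1.example.com", "static2.example.com"]

def shard_url(path):
    """Deterministically map an asset path to one sub-domain, so the
    same image always comes from the same host across page loads."""
    digest = hashlib.md5(path.encode("utf-8")).digest()
    host = SHARDS[digest[0] % len(SHARDS)]
    return "http://%s%s" % (host, path)

print(shard_url("/images/logo.png"))
```

Because the choice depends only on the path, every page that references /images/logo.png emits the same URL, and the browser spreads its connections across the shards roughly evenly.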

Actually, the above technique is only useful if there are a lot of external resources that need to be loaded by the page. HTTP 1.1 also brought in the concept of HTTP pipelining. This means that over those two connections the browser can send the requests back-to-back, even before receiving a single response. This completely eliminates the dead time between receiving the last packet of the previous response and sending the next request. Instead, all the requests are queued at the server, which sends out responses as fast as TCP/IP will allow. If HTTP pipelining is enabled, page load speed improves dramatically.
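On the wire, pipelining just means several complete requests concatenated into one burst on a single keep-alive connection. A small sketch that builds such a burst (example.com and the paths are placeholders; no network I/O here, only the bytes a pipelining browser would send):

```python
def build_pipelined_requests(host, paths):
    """Concatenate HTTP/1.1 GET requests so they can be written
    back-to-back on one connection, before any response arrives."""
    requests = []
    for path in paths:
        requests.append(
            "GET %s HTTP/1.1\r\n"
            "Host: %s\r\n"
            "Connection: keep-alive\r\n"
            "\r\n" % (path, host)
        )
    # One contiguous buffer: the server reads the queued requests in
    # order and must reply in the same order.
    return "".join(requests).encode("ascii")

payload = build_pipelined_requests("example.com", ["/index.html", "/style.css"])
print(payload.count(b"GET "))  # both requests sit in a single buffer
```

Note that HTTP 1.1 requires the server to return the responses in the same order the requests were sent, which is why one slow response can stall the whole pipeline.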

But unfortunately, HTTP pipelining is disabled by default in IE and Firefox, because not all proxies and servers support it yet. To enable pipelining in Firefox, type "about:config" in Firefox's location bar and enable the "network.http.pipelining" preference.