Wednesday, December 10, 2008

Zachman Framework

It's been some while since I attended John Zachman's seminar in Mumbai...

Recently someone asked me what the Zachman framework was. I tried to explain it in simple terms as follows:

The Zachman framework is a classification structure for expressing the basic elements of an enterprise architecture. It is a powerful tool for organizing architectural artifacts (design documents, specs, models, etc.) based on context or perspective.

The Zachman Framework is a matrix formed by the intersection of two classifications.
The columns represent the primitive interrogatives - what, how, where, who, when and why. The rows represent six different perspectives, which relate to stakeholder groups (Planner, Owner, Designer, Builder, Implementer and Worker).
The intersecting cells of the Framework correspond to models which, if documented, can provide a holistic view of the enterprise.
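
The two classifications above can be sketched as a simple grid. This is only a toy illustration of how the 6x6 structure is formed (the class and cell labels are my own, not part of the framework):

```java
import java.util.ArrayList;
import java.util.List;

// A toy representation of the Zachman grid: each cell pairs a
// stakeholder perspective (row) with an interrogative (column).
public class ZachmanGrid {
    static final String[] PERSPECTIVES =
        {"Planner", "Owner", "Designer", "Builder", "Implementer", "Worker"};
    static final String[] INTERROGATIVES =
        {"What", "How", "Where", "Who", "When", "Why"};

    // Build all 36 cells; a real repository would attach the
    // models/artifacts documented for each cell.
    public static List<String> cells() {
        List<String> cells = new ArrayList<>();
        for (String row : PERSPECTIVES)
            for (String col : INTERROGATIVES)
                cells.add(row + " x " + col);
        return cells;
    }

    public static void main(String[] args) {
        System.out.println(cells().size() + " cells, e.g. " + cells().get(0));
    }
}
```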

To view the model, see the links below:

Saturday, November 15, 2008

Updatable views using the 'INSTEAD OF' trigger

Not all views in a database are updatable - especially if the underlying query is a complex one. This article gives interesting examples of the challenges faced in updating views.

But the latest versions of databases support an INSTEAD OF trigger that enables us to make such views updatable. INSTEAD OF triggers fire in place of the triggering action.

The links below show how the 'INSTEAD OF' trigger can be used in different databases:

Tuesday, November 11, 2008

Cold start Vs Warm start of programs

We often notice that if we start a Windows Forms application for the first time after a system reboot, the start-up time is quite long. But if we close and open the WinForms application again, the start-up time is reduced drastically. This is true for all programs - Java, .NET, VB... not just WinForms.

For example, open a 5 MB Word doc right after reboot and then reopen it. There can be a 5x-10x difference in startup times. This difference is what distinguishes a cold start from a warm start.

This happens because of the 'disk cache' - the part of RAM that is dedicated to caching disk pages in memory. The next time a program is accessed, the disk cache is hit first, and if the required pages are found there, they are loaded into the program's address space. The expensive disk access is eliminated, and hence the start-up time is much faster. The pages remain in the disk cache as long as possible and are evicted according to the cache's replacement policy when memory is needed.
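
The effect is easy to observe. A rough sketch (timings vary wildly per machine, and the first read may itself be partly cached by the preceding write, so only the mechanism is illustrated):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Rough illustration of cold vs warm reads: the second read of the same
// file is usually served from the OS disk cache.
public class WarmReadDemo {
    static long timeRead(Path p) throws IOException {
        long start = System.nanoTime();
        Files.readAllBytes(p);
        return System.nanoTime() - start;
    }

    // Returns {firstReadNanos, secondReadNanos} for a freshly written file.
    public static long[] compare() {
        try {
            Path p = Files.createTempFile("cache-demo", ".bin");
            Files.write(p, new byte[8 * 1024 * 1024]); // 8 MB scratch file
            long first = timeRead(p);   // may already be partly cached by the write
            long second = timeRead(p);  // almost always a cache hit
            Files.delete(p);
            return new long[]{first, second};
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        long[] t = compare();
        System.out.printf("first read: %d ns, second read: %d ns%n", t[0], t[1]);
    }
}
```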

Operating systems have started using this technique to make programs appear faster.
For example, the SuperFetch service on Windows Vista (XP has Prefetch) does this by pre-loading your most frequently used applications into RAM, so when you launch them, it is effectively a warm start rather than a cold one.

Tuesday, November 04, 2008

Disabling startup programs

In Windows XP, if you are annoyed by the number of programs that start up when the OS loads, there is an easy way to disable the ones you don't need.

Just type "msconfig" in the 'Run' window. A window titled 'System Configuration Utility' will come up. Navigate to the 'Startup' tab and uncheck those programs that need not be started at boot time.

Friday, October 03, 2008

MVP Vs MVC pattern

I was recently working on a .NET Smart Client application and came across the 'MVP' (Model View Presenter) design pattern. Having spent years developing applications on the MVC (Model View Controller) pattern (e.g. Struts, WebWork, RoR, etc.), I was intrigued to find out the difference between MVC and MVP (if any).
The core principle behind MVP and MVC remain the same - clean separation of concerns between the presentation tier and the business tier.
Some consider MVP to be a subset of MVC, and there are tons of articles on the web giving differences between the two. I found most of the information muddled and confusing. Finally I came across Phil Haack's blog, which explains it in simple and lucid words.
Snippets from his blog:

The two patterns are similar in that they both are concerned with separating concerns and they both contain Models and Views. Many consider the MVP pattern to simply be a variant of the MVC pattern. The key difference is suggested by the problem that the MVP pattern sought to solve with the MVC pattern. Who handles the user input?

With MVC, it’s always the controller’s responsibility to handle mouse and keyboard events. With MVP, GUI components themselves initially handle the user’s input, but delegate the interpretation of that input to the presenter.

In modern GUI systems, GUI components themselves handle user input such as mouse movements and clicks, rather than some central controller. Thus the MVP pattern is widely used in WinForms, the .NET Smart Client Factory, etc.

In most web architectures, the MVC pattern is used (e.g. Struts, ASP.NET MVC, ROR, Webworks), whereas MVP pattern is mostly used in thick client architectures such as WinForms.
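
The delegation described above can be sketched in a few lines. This is a minimal illustration (the class names are mine, not from any framework): the view receives the raw click event, but hands interpretation to the presenter, which talks back to the view only through an interface.

```java
// Minimal MVP sketch: the view delegates input to the presenter.
public class MvpDemo {
    public interface LoginView {
        String userName();
        void showGreeting(String text);
    }

    public static class LoginPresenter {
        private final LoginView view;
        public LoginPresenter(LoginView view) { this.view = view; }

        // Called by the view when the user clicks "Login"; the presenter,
        // not the GUI component, interprets what the click means.
        public void onLoginClicked() {
            String user = view.userName();
            view.showGreeting(user.isEmpty() ? "Please enter a name" : "Hello, " + user);
        }
    }

    // A stub view standing in for a WinForms/Swing form.
    public static class StubView implements LoginView {
        public String name = "";
        public String shown = "";
        public String userName() { return name; }
        public void showGreeting(String text) { shown = text; }
    }

    public static void main(String[] args) {
        StubView view = new StubView();
        LoginPresenter presenter = new LoginPresenter(view);
        view.name = "Ana";
        presenter.onLoginClicked();   // the view would call this from its click handler
        System.out.println(view.shown);
    }
}
```

Because the presenter only sees the `LoginView` interface, the same presenter logic can be unit-tested with a stub view, with no GUI toolkit involved.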

Monday, August 04, 2008

What the hell are all those processes running in the taskbar?

Ever opened the Task Manager on Windows and wondered why there are so many processes running?
How-To Geek gives valuable information on some of the nagging questions:

1. jusched.exe
2. ctfmon.exe
3. rundll32.exe
4. svchost.exe

Wednesday, July 16, 2008

Comparison of iBatis, Hibernate and JPA

As architects, we often need to decide on what OR mapping tool to use. I came across this article on JavaLobby that compares iBatis, Hibernate and JPA.

Snippets from the article:

iBATIS is best used when you need complete control of the SQL. It is also useful when the SQL queries need to be fine-tuned. iBATIS should not be used when you have full control over both the application and the database design, because in such cases the application could be modified to suit the database, or vice versa. In such situations, you could build a fully object-relational application, and other ORM tools are preferable. As iBATIS is more SQL-centric, it is generally referred to as inverted -- fully ORM tools generate SQL, whereas iBATIS uses SQL directly. iBATIS is also inappropriate for non-relational databases, because such databases do not support transactions and other key features that iBATIS uses.

Hibernate is best used to leverage end-to-end OR mapping. It provides a complete ORM solution, but leaves you no control over queries. Hibernate is an ideal solution for situations where you have complete control over both the application and the database design. In such cases you may modify the application to suit the database, or vice versa. In these cases you could use Hibernate to build a fully object-relational application. Hibernate is the best option for object-oriented programmers who are less familiar with SQL.

JPA should be used when you need a standard Java-based persistence solution. JPA supports inheritance and polymorphism, both features of object-oriented programming. The downside of JPA is that it requires a provider that implements it. These vendor-specific tools also provide certain other features that are not defined as part of the JPA specification. One such feature is support for caching, which is not clearly defined in JPA but is well supported by Hibernate, one of the most popular frameworks that implements JPA. Also, JPA is defined to work with relational databases only. If your persistence solution needs to be extended to other types of data stores, like XML databases, then JPA is not the answer to your persistence problem.

iBATIS does not provide a complete ORM solution, and does not provide any direct mapping of objects and relational models. However, iBATIS provides you with complete control over queries. Hibernate provides a complete ORM solution, but offers you no control over the queries. Hibernate is very popular and a large and active community provides support for new users. JPA also provides a complete ORM solution, and provides support for object-oriented programming features like inheritance and polymorphism, but its performance depends on the persistence provider.

Sunday, July 06, 2008

iRise visualization software

Recently, at the request of one of our clients, we were asked to evaluate iRise visualization products.
A quick perusal of what the product offered made me smile. There are so many projects that fail because the end product is not what the business required. Sometimes the business itself is not sure what it wants or how the end product should look.
It's estimated that 60% of projects fail because of misunderstanding of requirements.

Using iRise visualization products, business analysts can quickly build UI elements that have the same look-and-feel and navigation as the final end product.
More information at

P.S: Just got a mail from Tom Humbarger (iRise Project Manager) that an evaluation copy of iRise Studio PE is now available as a free 30-day download. Would highly recommend giving it a try.

Monday, June 23, 2008

Open Source IDS, Firewall, VPN Gateways

A few years back, if an SMB (small and medium business) shop wanted to install firewalls, network intrusion detection systems, VPN gateways, etc., it needed to shell out hundreds of dollars for commercial software from giants such as Cisco, Juniper, etc.

But in the last few years, a slew of options has become available from the open-source world. I have been closely watching two products in this space:

1. Snort

Snort is an open source network intrusion prevention system, capable of performing real-time traffic analysis and packet logging on IP networks. It can perform protocol analysis, content searching/matching and can be used to detect a variety of attacks and probes, such as buffer overflows, stealth port scans, CGI attacks, SMB probes, OS fingerprinting attempts, and much more.
Snort has three primary uses. It can be used as a straight packet sniffer like tcpdump, a packet logger (useful for network traffic debugging, etc), or as a full blown network intrusion prevention system.

2. Untangle

Untangle delivers an integrated family of applications that help you simplify and consolidate the network and security products you need, in one place at the network gateway. The most popular applications let businesses block spam, spyware, viruses, and phishing, filter out inappropriate web content, control unwanted protocols like instant messaging, and provide remote access and support options to their employees.

Today Snort has become the de-facto standard for IDS. Even Untangle uses Snort for its IDS application. I was impressed with the range of applications available on the Untangle Gateway Platform. It includes a SPAM blocker, Web Filter, Firewall, IDS based on Snort, VPN Gateway based on OpenVPN, a patented attack blocker, etc. A must-try product :)

Saturday, June 14, 2008

File Watcher programs

In the early years, we often had to write our own component to monitor changes to files or directories. Recently a friend of mine introduced me to the FileSystemWatcher class in .NET. We needed just such a solid component and were contemplating writing it ourselves.
I was impressed with the wide range of features available in the FileSystemWatcher class. Not only will it detect changes in files, but it can monitor folders and sub-folders too.
And it can monitor a wide range of attributes - not just whether a file exists or not. For example, it can detect whether a file's size has changed, or whether it has been renamed, deleted, etc.

I started wondering why there was no equivalent component in the Java SDK. The answer lies in the fact that this kind of event-raising is not uniformly available across Unix platforms. So polling is the only option that works cross-platform. When we poll, we cannot detect directory changes like renaming or moving. Also, polling a directory and all its sub-directories has a big performance hit!

There are a few programs available in Java that a developer can use instead of reinventing the wheel. Here are a few links that provide file watcher programs in Java:
Jahia File watcher
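
The polling approach mentioned above is simple to sketch. A minimal watcher (my own toy code, not from any of the linked libraries) just remembers each file's size and modification time and reports when the snapshot changes between polls:

```java
import java.io.File;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.util.HashMap;
import java.util.Map;

// Minimal polling-based file watcher: compare (size, lastModified)
// snapshots between polls.
public class PollingWatcher {
    private final Map<String, Long> snapshot = new HashMap<>();

    // Returns true if the file changed (or appeared/disappeared) since
    // the last poll; the first poll just records the baseline.
    public boolean poll(File f) {
        long sig = f.exists() ? f.length() * 31 + f.lastModified() : -1;
        Long prev = snapshot.put(f.getPath(), sig);
        return prev != null && prev != sig;
    }

    // Self-contained demo: baseline poll, modify the file, poll again.
    public static boolean demo() {
        try {
            File f = File.createTempFile("watch", ".txt");
            PollingWatcher w = new PollingWatcher();
            boolean first = w.poll(f);                    // baseline: false
            Files.write(f.toPath(), "hello".getBytes());  // size changes
            boolean changed = w.poll(f);                  // detected: true
            f.delete();
            return !first && changed;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("change detected: " + demo());
    }
}
```

In a real watcher the `poll` loop would run on a timer over a whole directory tree, which is exactly where the performance hit mentioned above comes from.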

Tuesday, June 10, 2008

VSTS 2008 has a memory profiler

Visual Studio Team System 2005 had a profiler that was useful in obtaining response times and diagnosing time taken by each component/method.

Now in v2008, the profiler comes equipped to do heap analysis. Information on how to enable heap analysis can be found at this blog.
At the end of it, we get a report of the top methods allocating the most memory, the types occupying the most memory, etc.
This report is not as comprehensive as that provided by Numega DevPartner Studio.

Monday, June 09, 2008

Numega DevPartner Studio

Recently I installed DevPartner Studio and have explored the features provided by the tool. The tool integrates seamlessly into VS.NET and has the following high-level features:

- Static Analysis (Similar to FxCop or Code Analysis in VS Team Studio)

- Error Detection (You actually run the application/program and the tool gives a report of possible errors. This is more valuable in the case of COM usage through .NET, and in C/VC++ projects for detecting out-of-bounds errors, dangling pointers, etc.)

- Code Coverage (You run the application and at the end a report is generated showing the code paths that were not executed. Helpful in finding dead code.)

- Memory Analysis (Heap analysis; shows the object graph up to the root objects, the number of objects created/destroyed, etc.)

- Performance Analysis (E.g. you can create snapshots while running the application and the tool will give you the time spent in each method and many other stats, e.g. the top 20 methods consuming the most time, the call graph, etc. Another cool feature is the source code window with time stats for each line on the left side. No more guessing what is taking so much time.)

- In Depth Performance Analyzer (CPU stats, Disk IO, Network IO, etc)

Visual Studio Team System provides us with many similar features. A VS product comparison can be found here.

Monday, June 02, 2008

Interesting features in Web 2.0 feature pack of Websphere v6.1

IBM has recently released a new feature pack for WAS 6.1 that has a host of cool features to build Web 2.0 applications.
AJAX support is provided by the DOJO Toolkit. There are a host of gadgets available (some added by IBM on top of DOJO) for RIA screens. Some features that were of particular interest to me are:
  • A javascript SOAP client that will enable web clients to make webservices requests directly.
  • Web-remoting RPC component that would enable a JS client to call an EJB method or a POJO method directly.
  • Apache Abdera library for manipulating ATOM/RSS feeds
  • JSON4J library on the server side to convert between JSON text and Java objects
  • AJAX messaging to implement server-side push. This is totally cool. I always wanted to experiment with the CometD functionality available in DOJO, and here the interaction between the DOJO Message Bus (client) and the WebSphere Service Integration Bus is provided out of the box :)
A good example showing Stock Quote Streaming can be found here.

Saturday, May 31, 2008

Finding the geographical location of client web requests

Nowadays, we see many sites that show our geographical location on the screen when we hit the page.
So how do the sites find out this information? Well, on the server side, we can obtain the IP address of the client (or of the Internet provider's proxy) from the request object.

From this IP address, we can query a database (typically CSV files) and obtain the location information. The country information is free and available for download here.
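
The lookup itself is straightforward. A sketch of the idea (the IP ranges below are made up for illustration, not from any real GeoIP database): convert the dotted IP to a 32-bit number and search the sorted (start, end, country) ranges loaded from the CSV.

```java
// Toy GeoIP lookup: dotted IP -> 32-bit value -> range search.
public class GeoLookup {
    // Made-up {rangeStart, rangeEnd} pairs, sorted by range start;
    // a real GeoIP CSV has (start, end, country) rows in the same spirit.
    static final long[][] RANGES = {
        {ip("3.0.0.0"), ip("4.255.255.255")},
        {ip("81.0.0.0"), ip("81.0.255.255")},
    };
    static final String[] COUNTRIES = {"US", "CZ"};

    // Dotted quad -> unsigned 32-bit value, e.g. 1.0.0.0 -> 16777216.
    static long ip(String dotted) {
        String[] p = dotted.split("\\.");
        return (Long.parseLong(p[0]) << 24) | (Long.parseLong(p[1]) << 16)
             | (Long.parseLong(p[2]) << 8) | Long.parseLong(p[3]);
    }

    public static String lookup(String dotted) {
        long n = ip(dotted);
        for (int i = 0; i < RANGES.length; i++)
            if (n >= RANGES[i][0] && n <= RANGES[i][1]) return COUNTRIES[i];
        return "unknown";
    }

    public static void main(String[] args) {
        System.out.println(lookup("3.14.15.9"));
        System.out.println(lookup("81.0.42.1"));
    }
}
```

With a real database holding millions of rows, the linear scan would be replaced by a binary search over the sorted range starts.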

For state and city information, you have to shell out some moolah.. :)
More information can be found here -

Wednesday, May 28, 2008

Kernel mode and User mode

In IIS 6.0, the core HTTP engine (HTTP.SYS) runs in kernel mode and all worker processes run in user mode. So what exactly is the difference between kernel mode and user mode programs?

User mode and kernel mode refer to the privilege level a process has over the system hardware. The closer a process gets to the hardware, the more sensitive the system is to any failure it may provoke. In any OS, you want to separate applications from OS services because you want the OS to remain functional if an application crashes.

Typical OS architecture has two rings: one ring running in system mode, and a ring running in user mode. The kernel has full control of the hardware and provides abstractions for the processes running in user mode. A process running in user mode cannot access the hardware directly, and must use the abstractions provided by the kernel. It can call certain services of the kernel by making "system calls" or kernel calls. The kernel only offers the basic services; all others are provided by programs running in user mode.

Kernel-mode programs also run much faster than user-mode programs, as they are much closer to the hardware.

Wednesday, May 21, 2008

Overcoming the 2 connection limit in browsers

The HTTP 1.1 spec recommends that browsers make no more than 2 connections per domain name. This limitation can cause slow page loads for pages containing a lot of images or other references to external resources.
To overcome this limitation, we can create sub-domains and serve static content from these sub-domains, so that the browser can create 2 extra connections for each sub-domain.
This trick is explained in the following links:

Actually, the above technique is only useful if there are a lot of external resources that need to be loaded by the page. HTTP 1.1 also brought in the concept of HTTP pipelining. This means that over those two connections the browser can send requests back-to-back, even before receiving a single response. This completely eliminates the dead time between getting back the last packet of the previous response and sending the next request. Instead, all the requests are queued at the server, which sends out responses as fast as TCP/IP will allow. If HTTP pipelining is enabled, page load speed improves dramatically.

But unfortunately, HTTP pipelining is disabled by default in IE/Firefox - because not all proxies and servers support it yet. To enable pipelining in Firefox, type "about:config" in Firefox's location bar and enable the "network.http.pipelining" preference.
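
The sub-domain trick boils down to a stable mapping from each static resource to one of N shard hosts, so that every page render emits the same URL per resource and browser caching still works. A sketch (the "staticN.example.com" host names are hypothetical):

```java
// Domain sharding sketch: deterministically assign each resource path
// to one of N sub-domains.
public class DomainShard {
    public static String shardHost(String resourcePath, int shards) {
        // Hash the path so the assignment is stable across page renders.
        int i = Math.floorMod(resourcePath.hashCode(), shards);
        return "static" + i + ".example.com";
    }

    public static String url(String resourcePath, int shards) {
        return "http://" + shardHost(resourcePath, shards) + resourcePath;
    }

    public static void main(String[] args) {
        System.out.println(url("/img/logo.png", 3));
        System.out.println(url("/img/banner.png", 3));
    }
}
```

A random shard choice would spread load too, but would defeat caching: the same image would be fetched again under a different host name on the next page view.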

Friday, March 21, 2008

Oracle RAC and Dataguard

A lot of IT managers get confused between what Oracle RAC does and what Dataguard does.

Oracle RAC provides us with a cluster of Oracle instances for the same database/datastore. This enables massive scalability and availability. But it does not provide protection against failure of the database or corruption of the data, e.g. a natural disaster that results in data loss or data corruption.

Oracle Dataguard enables the creation of a standby database at a DR site and thus provides data protection.

Friday, March 14, 2008

ViewState in ASP.NET

There is so much confusion over what ViewState does and why it is required. Finally the following link helped me understand a lot of things:

Snippets from the above article:
Server Controls utilize ViewState as the backing store for most, if not all their properties. That means when you declare an attribute on a server control, that value is usually ultimately stored as an entry in that control's ViewState StateBag.
ASP.NET calls TrackViewState() on the StateBag during the OnInit phase of the page/control lifecycle. This little trick ASP.NET uses to populate properties allows it to easily detect the difference between a declaratively set value and dynamically set value.
When the StateBag is asked to save and return its state (StateBag.SaveViewState()), it only does so for the items contained within it that are marked as dirty. That is why StateBag has the tracking feature. In order for data to be serialized, it must be marked as dirty. In order to be marked as dirty, its value must be set after TrackViewState() is called.

When the page first begins to load during a postback (even prior to initialization), all the properties are set to their declared natural defaults. Then OnInit occurs. During the OnInit phase, ASP.NET calls TrackViewState() on all the StateBags. Then LoadViewState() is called with the deserialized data that was dirty from the previous request. The StateBag calls Add(key, value) for each of those items. Since the StateBag is tracking at this point, the value is marked dirty, so that it may be persisted once again for the next postback.

ViewState is only one way controls maintain values across postbacks. Regular good old HTML forms play a role, too. For example, disable ViewState on a TextBox, and it will still maintain its value, because the value is POSTed with the form. Make that TextBox invisible and then do a post, and the value is lost. That's where ViewState helps: it allows the control to maintain the value even when it is invisible.

Setting control properties in the OnPreInit event is a good way to keep those values from being entered into the ViewState. To keep DataGrids from storing values in the ViewState, rebind the data to the grid on each page load.
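
The dirty-tracking described above is easy to model. This is a toy StateBag (in Java, for illustration only - not the real ASP.NET implementation): values added before TrackViewState() are treated as declarative defaults and skipped; values set afterwards are marked dirty and are the only ones serialized.

```java
import java.util.HashMap;
import java.util.Map;

// Toy StateBag illustrating TrackViewState()/SaveViewState() dirty tracking.
public class StateBag {
    private final Map<String, Object> values = new HashMap<>();
    private final Map<String, Boolean> dirty = new HashMap<>();
    private boolean tracking = false;

    // ASP.NET calls this during the OnInit phase.
    public void trackViewState() { tracking = true; }

    public void add(String key, Object value) {
        values.put(key, value);
        dirty.put(key, tracking);  // dirty only if set after tracking began
    }

    // Returns only the entries that must round-trip in __VIEWSTATE.
    public Map<String, Object> saveViewState() {
        Map<String, Object> out = new HashMap<>();
        for (Map.Entry<String, Object> e : values.entrySet())
            if (dirty.get(e.getKey())) out.put(e.getKey(), e.getValue());
        return out;
    }

    public static void main(String[] args) {
        StateBag bag = new StateBag();
        bag.add("Text", "declared default");  // declarative: not dirty
        bag.trackViewState();                 // OnInit
        bag.add("Text", "set in code");       // dynamic: dirty
        System.out.println(bag.saveViewState());
    }
}
```

This is why a declaratively set attribute costs nothing in ViewState, while the same property set from code-behind after OnInit does.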

Thursday, March 06, 2008

Security Principles

Browsing through the OWASP site, I came across the following security principles that are of interest while designing a security architecture. Every security policy has 3 objectives:
a) Confidentiality b) Integrity c) Availability

Examples of security principles:
1. Securing the weakest link (the chain is only as strong as its weakest link) - e.g. attackers will not target the firewall, but the applications accessible through the firewall.

2. Minimize Attack Surface Area - design the system such that the potential areas for intrusion are reduced.

3. Principle of Least Privilege - only give those permissions to the user that are required.

4. Principle of Defense in Depth - e.g. Do validations at the front-end using Javascript, in the web-tier using validation logic, in the database using constraints and triggers.

5. Fail securely - If an application/program fails, then it should not leave the system in an insecure state.

6. Separation of Duties - an administrator should be able to turn the system on or off and set the password policy, but shouldn’t be able to log on to the storefront as a super-privileged user, such as being able to buy goods on behalf of other users.

7. Don't rely on security by obscurity alone. Use security by design. Use elements of both strategies.

The following links provide interesting material for reading:

Wednesday, March 05, 2008

XSS and CSRF attacks

Of late, I have been doing a lot of study to understand 'Cross-Site Scripting' (XSS) attacks and 'Cross-Site Request Forgery' (CSRF) attacks.
XSS flaws occur whenever an application takes user supplied data and sends it to a web browser without first validating or encoding that content. XSS allows attackers to execute script in the victim's browser which can hijack user sessions, deface web sites, possibly introduce worms, etc.

How to prevent XSS attacks? Tips from the site:
1. Use proper input validation techniques
2. Encode the output. This includes data read from files and databases.
For input validation, it's better to go for a 'positive' security policy that specifies what is allowed, rather than a 'negative' or attack-signature-based policy, as such policies are difficult to maintain and are likely to be incomplete.

The following link at MSDN contains some good info about preventing XSS attacks:
Microsoft even has an "Anti-Cross Site Scripting Library" available at:
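
The output-encoding tip above comes down to escaping the handful of characters that let user data break out of an HTML context. A minimal sketch (a real application should use a vetted library such as the one linked above, which also handles attribute, URL and JavaScript contexts):

```java
// Minimal HTML output encoder for untrusted data placed in element content.
public class HtmlEncoder {
    public static String encode(String s) {
        StringBuilder out = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // Injected script becomes inert text once encoded.
        System.out.println(encode("<script>alert('x')</script>"));
    }
}
```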

The OWASP site defines CSRF as follows:
Cross-Site Request Forgery (CSRF) is an attack that tricks the victim into loading a page that contains a malicious request. It is malicious in the sense that it inherits the identity and privileges of the victim to perform an undesired function on the victim's behalf, like change the victim's e-mail address, home address, or password, or purchase something. CSRF attacks generally target functions that cause a state change on the server but can also be used to access sensitive data.

One technique to prevent CSRF attacks is to use the 'Token Synchronization' pattern. Sample filters for JEE and .NET are available on the OWASP site at the following links:
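
The core of the pattern fits in a few lines. This is my own sketch of the idea (not the OWASP sample filter): a random token is stored in the session and embedded in every form, and state-changing requests are rejected unless they echo the token back. A forged cross-site request cannot know the token, so it fails the check.

```java
import java.security.SecureRandom;
import java.util.Base64;

// Token-synchronization sketch: one random token per session.
public class CsrfGuard {
    private final String sessionToken;

    public CsrfGuard() {
        byte[] raw = new byte[16];
        new SecureRandom().nextBytes(raw);
        sessionToken = Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
    }

    // Value to render into a hidden form field on every page.
    public String formToken() { return sessionToken; }

    // Called by a filter before any state-changing request is processed.
    public boolean isValid(String submittedToken) {
        return sessionToken.equals(submittedToken);
    }

    public static void main(String[] args) {
        CsrfGuard guard = new CsrfGuard();
        System.out.println("legitimate: " + guard.isValid(guard.formToken()));
        System.out.println("forged:     " + guard.isValid("attacker-guess"));
    }
}
```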

Storing sensitive information in properties file

In JEE and .NET applications we often store data access configuration parameters such as the username, password and datasource URL in properties files. But how do we protect this sensitive information? The obvious answer is to encrypt the file or the 'string' properties. But then the question is - where do you store the key? If you encrypt the key, then you would require another key to decrypt this key... a classic chicken-and-egg problem.

Browsing through the OWASP site pages, I came across this page that contained some interesting ideas for this problem. Snippet from the site:

"Some environments have protected locations, such as the WebSphere configuration files, or the system registry. You can use a file in a protected location, that uses OS access control to limit access to only the application. You could put the key in the code, but that makes it difficult to change and deploy securely. You can also force the master key to be entered when the system boots up so it's only in memory, but that means you lose automatic reboot. You can even split the key and put parts in more than one of these locations."

The page also contains a sample implementation in Java for encrypting a properties file.
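
The encryption step itself is the easy part; a sketch with the JDK's javax.crypto (my own illustration, not the OWASP sample). Note that the key handling is exactly the hard part discussed above: here the key is simply generated in memory, whereas in practice it would be loaded from one of the protected locations the snippet mentions. The default "AES" transformation (ECB) is used only for brevity; real code should prefer an authenticated mode.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Encrypt/decrypt a single property value with AES, Base64-encoded so it
// can be written back into a .properties file.
public class PropertyCrypto {
    public static SecretKey newKey() {
        try { return KeyGenerator.getInstance("AES").generateKey(); }
        catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static String encrypt(SecretKey key, String plain) {
        try {
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.ENCRYPT_MODE, key);
            return Base64.getEncoder()
                         .encodeToString(c.doFinal(plain.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static String decrypt(SecretKey key, String encoded) {
        try {
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.DECRYPT_MODE, key);
            return new String(c.doFinal(Base64.getDecoder().decode(encoded)),
                              StandardCharsets.UTF_8);
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    public static void main(String[] args) {
        SecretKey key = newKey();  // in practice: loaded from a protected location
        String cipherText = encrypt(key, "db.password=s3cret");
        System.out.println(cipherText + " -> " + decrypt(key, cipherText));
    }
}
```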

Tuesday, March 04, 2008

Various types of performance testing

Came across this site that lists the various types of testing that can be done to performance test a system. Given below are snippets from the above page.
  1. Load Testing: Load Tests are end to end performance tests under anticipated production load. The primary objective of this test is to determine the response times for various time critical transactions and business processes and that they are within documented expectations (or Service Level Agreements - SLAs). The test also measures the capability of the application to function correctly under load, by measuring transaction pass/fail/error rates.
  2. Stress Testing: Stress Tests determine the load under which a system fails, and how it fails. This is in contrast to Load Testing, which attempts to simulate anticipated load. It is important to know in advance if a ‘stress’ situation will result in a catastrophic system failure, or if everything just 'goes really slow'. There are various varieties of Stress Tests, including spike, stepped and gradual ramp-up tests. Catastrophic failures require restarting various infrastructure and contribute to downtime, a stressful environment for support staff and managers, as well as possible financial losses.
  3. Volume Testing: Volume Tests are often most appropriate to Messaging, Batch and Conversion processing type situations. In a Volume Test, typically the Response times are not measured. Instead, the Throughput is measured. A key to effective volume testing is the identification of the relevant capacity drivers. A capacity driver is something that directly impacts on the total processing capacity. For a messaging system, a capacity driver may well be the size of messages being processed. For batch processing, the type of records in the batch as well as the size of the database that the batch process interfaces with will have an impact on the number of batch records that can be processed per second.
  4. Network Testing: Network sensitivity tests are tests that set up scenarios of varying types of network activity (traffic, error rates...), and then measure the impact of that traffic on various applications that are bandwidth dependent. Very 'chatty' applications can appear to be more prone to response time degradation under certain conditions than other applications that actually use more bandwidth. For example, some applications may degrade to unacceptable levels of response time when a certain pattern of network traffic uses 50% of available bandwidth, while other applications are virtually unchanged in response time even with 85% of available bandwidth consumed elsewhere. This is a particularly important test for deployment of a time critical application over a WAN.
  5. Failover Testing: Failover Tests verify redundancy mechanisms while under load. For example, such testing determines what will happen if multiple web servers are being used under peak anticipated load, and one of them dies. Does the load balancer react quickly enough? Can the other web servers handle the sudden dumping of extra load? This sort of testing allows technicians to address problems in advance, in the comfort of a testing situation, rather than in the heat of a production outage.
  6. Soak Testing: Soak testing is running a system at high levels of load for prolonged periods of time. A soak test would normally execute several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed. Also, due to memory leaks and other defects, it is possible that a system may ‘stop’ working after a certain number of transactions have been processed. It is important to identify such situations in a test environment.
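
The measurement side of (1) is simple to sketch. A toy single-threaded driver (real tools drive many concurrent virtual users, which this deliberately omits): run a transaction N times, record each response time, and report a percentile that could be checked against an SLA.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Toy load-test driver: record per-transaction latencies and report percentiles.
public class MiniLoadTest {
    // Runs the transaction 'iterations' times; returns sorted latencies in ns.
    public static List<Long> run(Runnable transaction, int iterations) {
        List<Long> latencies = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            transaction.run();
            latencies.add(System.nanoTime() - start);
        }
        Collections.sort(latencies);
        return latencies;
    }

    // Nearest-rank percentile over an already-sorted latency list.
    public static long percentile(List<Long> sorted, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sorted.size()) - 1;
        return sorted.get(Math.max(idx, 0));
    }

    public static void main(String[] args) {
        // A trivial stand-in for a real business transaction.
        List<Long> lat = run(() -> { Math.sqrt(42); }, 1000);
        System.out.println("p50 = " + percentile(lat, 50) + " ns, p95 = "
                + percentile(lat, 95) + " ns");
    }
}
```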

Friday, February 29, 2008

Schema validation in webservices

Recently a friend of mine was working on a .NET SOA project and was facing a peculiar problem. The XML Schema he was using had a lot of restrictions such as minOccurs, maxOccurs, int ranges, string regular expression patterns, etc.

But unfortunately in .NET 2.0 webservices, there was no easy way to validate input messages based on this schema. The reason for this behavior has to do with XmlSerializer, the underlying plumbing that takes care of object deserialization. XmlSerializer is very forgiving by design. It happily ignores XML nodes that it didn't expect and will use default CLR values for expected but missing XML nodes. It doesn't perform XML Schema validation and the consequence is that details like structure, ordering, occurrence constraints, or simple type restrictions are not enforced during deserialization.

Unfortunately there is no switch in .NET 2.0 that can turn on Schema validations. The only way to validate schemas is to use SOAP extensions and validate the message in the extension code block before deserialization occurs. The following links show us how to do it:
1. MSDN-1
2. MSDN-2

In the J2EE world, JAX-WS 2.0 has become the default standard for developing webservices. In most toolkits, we essentially have 2 options: let the SOAP engine do the data-binding and validation, or separate the data-binding and schema validation from the SOAP engine. For example, in the Sun Metro project, validation can be enabled for the SOAP engine using the @SchemaValidation annotation on a webservice.
In Axis 2, there is support to enable schema validation using the XMLBeans binding framework.

I found this whitepaper quite valuable in understanding the various options for schema validation in JEE webservices.
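
Outside of any SOAP engine, the validation step itself looks like this with the JDK's javax.xml.validation API (the schema and instance documents below are made up to show a simple type restriction; a SOAP extension or handler would run the same check on the raw message before deserialization):

```java
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;
import java.io.StringReader;

// Validate an XML document against an inline XSD with a range restriction.
public class SchemaCheck {
    static final String XSD =
        "<xs:schema xmlns:xs='http://www.w3.org/2001/XMLSchema'>" +
        "  <xs:element name='qty'>" +
        "    <xs:simpleType><xs:restriction base='xs:int'>" +
        "      <xs:minInclusive value='1'/><xs:maxInclusive value='10'/>" +
        "    </xs:restriction></xs:simpleType>" +
        "  </xs:element>" +
        "</xs:schema>";

    // Returns true when the document satisfies the schema's restrictions.
    public static boolean isValid(String xml) {
        try {
            SchemaFactory f = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Validator v = f.newSchema(new StreamSource(new StringReader(XSD))).newValidator();
            v.validate(new StreamSource(new StringReader(xml)));
            return true;
        } catch (Exception e) {
            return false;  // the SAXException carries the violation details
        }
    }

    public static void main(String[] args) {
        System.out.println(isValid("<qty>5</qty>"));   // within range
        System.out.println(isValid("<qty>99</qty>"));  // violates maxInclusive
    }
}
```

This is exactly the kind of check that XmlSerializer skips: the second document deserializes happily but fails schema validation.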

Wednesday, February 27, 2008

Problems with Struts SSLext

One of the projects I was consulting on, was using the SSL-ext plugin module of Struts to redirect all secure resources to SSL. More information on how to mix HTTP and HTTPS in a web-flow can be found in my blog post here.

The SSL plug-in module used a simple technique - a custom request processor/servlet filter would check the URL of each request, and then check in the configuration file whether the requested resource was secure and needed to be accessed using SSL. So if a plain HTTP request arrived for a secure resource, an HTTP 302 redirect response would be sent back to the user, with the HTTPS scheme in the redirect URL.

Now, in the production environment, it was decided to move the SSL encryption/decryption to a Cisco Content Switch (at the hardware level). The Content switch would decrypt the content and then forward the request to the webserver as plain HTTP.
But this caused a problem at the AppServer level, as SSL-ext in Struts still sent redirect responses for all secure resources. This resulted in an infinite loop of request-response traffic between the browser and the servers, which made the browser appear to hang.
I was given the task of finding an innovative solution to this problem. The challenge was that we wanted to retain the SSL-ext plug-in functionality, so that it would remain impossible to access secure resources over plain HTTP. All we needed was some way for the content switch to tell the AppServer that the request it received was HTTPS.

I decided to do this using HTTP headers. We configured the content switch to set the 'Via' HTTP header to "HTTPS" whenever an SSL request reached the content switch. We then changed the source code of SSL-ext to check for this header instead of the URL scheme and port number, and send a redirect only if necessary. This simple solution worked just fine and earned me kudos from the team :)
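The essence of the change can be sketched as a small decision function (the class and method names here are mine, not SSL-ext's): a secure resource triggers a redirect only when the request is plain HTTP and the content switch has not flagged it as originally HTTPS via the 'Via' header.

```java
public class SslRedirectCheck {
    /**
     * Decide whether a request for a resource must be redirected to HTTPS.
     * 'viaHeader' is the header the content switch sets after terminating SSL.
     */
    public static boolean needsRedirect(String scheme, String viaHeader,
                                        boolean resourceIsSecure) {
        if (!resourceIsSecure) return false;                  // plain resource: no redirect
        if ("https".equalsIgnoreCase(scheme)) return false;   // already SSL end-to-end
        if ("HTTPS".equalsIgnoreCase(viaHeader)) return false; // SSL terminated at the switch
        return true;                                          // plain HTTP: send 302 to HTTPS
    }

    public static void main(String[] args) {
        System.out.println(needsRedirect("http", null, true));    // true: must redirect
        System.out.println(needsRedirect("http", "HTTPS", true)); // false: switch decrypted it
    }
}
```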

P.S: I got the idea of using an HTTP header after reading the below URL. A must read :)

Tuesday, February 26, 2008

Architecture Principles

A member of my team recently asked me a simple question - "What are Architecture principles? And how are they different from the guidelines, best practices and NFRs that we already follow?"

Well the line of difference between the two is very blurred and in fact there could be a lot of overlap between them.

Principles are general rules and guidelines (that seldom change) that inform and support the way in which an organization sets about fulfilling its mission.
Architecture principles are principles that relate to architecture work. They embody the spirit and thinking of the enterprise architecture. These principles govern the architecture process, affecting the design, development and maintenance of applications in an enterprise.

The typical format in which a principle is jotted down is -
1. Statement - explaining what the principle is
2. Rationale - explaining why it matters

The following 2 links contain good reading material.

Examples of some interesting principles from the above sites:

Principle: Total cost of ownership design
In an atmosphere where complex and ever-changing systems are supporting all aspects of our business every hour of the day, it is easy to lose track of costs and benefits. And yet, these critical measures are fundamental to good decision-making. The Enterprise Architecture can and must assist in accounting for use, for change, for costs, and for effectiveness.
Rationale: Total costs of present and proposed alternatives, including unintended consequences and opportunities missed, must be a part of our decisions as we build the architecture of the future.

Principle: Mainstream technology use
Production IT solutions must use industry-proven, mainstream technologies except in those areas where advanced higher-risk solutions provide a substantial benefit. Mainstream is defined to exclude unproven technologies not yet in general use and older technologies and systems that have outlived their effectiveness.
Rationale: The enterprise may not want to be on the leading edge for its core service systems. Risk will be minimized.

Principle: Interoperability and reusability
Systems will be constructed with methods that substantially improve interoperability and the reusability of components.
Rationale: Enables the development of new inter-agency applications and services.

Principle: Open systems
Design choices prioritized toward open systems will provide the best ability to create adaptable, flexible and interoperable designs.
Rationale: An open, vendor-neutral policy provides the flexibility and consistency that allows agencies to respond more quickly to changing business requirements. This policy allows the enterprise to choose from a variety of sources and select the most economical solution without impacting existing applications. It also supports implementation flexibility because technology components can be purchased from many vendors, insulating the enterprise from unexpected changes in vendor strategies and capabilities.

Principle: Scalability
The underlying technology infrastructure and applications must be scalable in size, capacity, and functionality to meet changing business and technical requirements.
Rationale: Reduces total cost of ownership by reducing the amount of application and platform changes needed to respond to increasing or decreasing demand on the system. Encourages reuse. Leverages the continuing decline in hardware costs.

Principle: Integrated reliability, availability, maintainability
All systems, subsystems, and components must be designed with the inclusion of reliability and maintainability as an integral part. Systems must contain high-availability features commensurate with business availability needs. An assessment of business recovery requirements is mandatory when acquiring, developing, enhancing, or outsourcing systems. Based on that assessment, appropriate disaster recovery, and business continuity planning, design and testing must take place.
Rationale: Business depends upon the availability of information and services. To assure this, reliability and availability must be designed in from the beginning; they cannot be added afterward. The ability to manage and maintain all service resources must also be included in the design to assure availability.

Principle: Technological diversity is controlled to minimize the non-trivial cost of maintaining expertise in and connectivity between multiple processing environments.
Rationale: There is a real, non-trivial cost of infrastructure required to support alternative technologies for processing environments. There are further infrastructure costs incurred to keep multiple processor constructs interconnected and maintained. Limiting the number of supported components will simplify maintainability and reduce costs.

Monday, February 25, 2008

Tower Servers -> Rack Servers -> Blade Servers

The early servers were tower servers, so called because they were tall and took up a lot of space, resulting in 'server-sprawl' in data-centers. The 90's saw the introduction of rack servers that were compact and saved a lot of 'real-estate' in the data-centers. A standard rack unit is 19 inches wide and 1.75 inches high; this dimension is called 1U. A server component may occupy 1U, 2U, 4U, or even half a U. With the most common rack form-factor being 42U high, this configuration allows for 42 1U servers to be mounted on a single rack. Each server has its own power supply, network and switch configuration.
In the past few years, blade servers have been gaining a lot of popularity. The advantage of blade servers is that instead of having a number of separate servers, each with its own power supply, many blades are plugged into one chassis, like books in a bookshelf, each containing processors, memory, hard drives and other components. The blades share the hardware, power and cooling supplied by the rack-mounted chassis -- saving energy and, ultimately, money.
Another advantage is that enterprises can buy what they need today, and plug in another blade when their processing needs increase, thus spreading the cost of capital equipment over time.

In a rack server environment, typically 44% of the electricity consumption is by components such as power supplies and fans. In a blade server, that 44 percent is reduced to about 10 percent because these components are shared. This gives the blade server a tremendous advantage when it comes to electricity consumption and heat dissipation.

The only current disadvantage of blade servers is the vendor lock-in that comes in when you buy a blade environment and the system management software. In a rack system, it is possible to mix and match servers inside of a rack and across racks at will.

Friday, February 22, 2008

Are filters invoked when I do a RequestDispatcher.forward() ?

Before the Servlet 2.4 specification, it was not clear whether filters should be invoked for forwarded requests, included requests, and requests to the error page defined in the <error-page> element of web.xml.

But in the Servlet 2.4 spec, there is an extra <dispatcher> element inside the <filter-mapping> tag.

Possible values are REQUEST, FORWARD, INCLUDE, and ERROR (REQUEST is the default).
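For example, to have a filter apply both to direct requests and to RequestDispatcher.forward() calls, the Servlet 2.4 deployment descriptor would look something like this (the filter name is made up for illustration):

```xml
<filter-mapping>
    <filter-name>AuditFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
    <dispatcher>FORWARD</dispatcher>
</filter-mapping>
```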

First steps for diagnosis of memory leaks in Websphere Application Server v6.1

- Enable verbose GC.

- If using Sun JVM, use the -XX:+HeapDumpOnOutOfMemoryError option to tell the VM to generate a heap dump if OutOfMemoryError is thrown. If using IBM JVM, then enable automatic heap dump generation.

- Start the lightweight memory leak detection

- Generate heap dumps manually if required using the wsadmin tool.

- Use the MDD4J tool (Memory Dump Diagnostic for Java) for diagnosing root causes behind memory leaks in the Java heap. The heap dump collected in the above steps will be the input to this tool. More information about this tool can be found here, here and here.
The MDD4J tool supports the following heap dump formats:
1. IBM Portable Heap Dump (.phd) format (for WebSphere Application Server Versions 6.x on most platforms)
2. IBM Text heap dump format (for WebSphere Application Server Versions 5.0 and 4.0 on most platforms)
3. HPROF heap dump format (for WebSphere Application Server on the Solaris® and HP-UX platforms)
4. SVC Dumps (for WebSphere on the IBM zSeries)

On the Solaris platform, starting with Java 1.5, Sun has been shipping a handy tool called jmap, which allows you to attach to any 1.5+ JVM and obtain heap layout information, class histograms and complete heap snapshots. The nice thing is that you don’t have to configure the JVM with any special options, so it runs exactly as it does during normal operation.

jmap -dump:format=b,file=snapshot2.jmap PID_OF_PROCESS
jhat snapshot2.jmap

What causes memory leaks in Java?

We know that a memory leak can occur in Java applications when object references are unintentionally held on to after they are no longer needed. Typical examples are large collection objects, an unbounded cache, a large number of session objects, infinite loops, etc. A memory leak ultimately results in an OutOfMemoryError.
Besides this core reason, there are other factors that may also result in an OutOfMemoryError:

1. Java heap fragmentation: Heap fragmentation occurs when no contiguous chunk of free Java heap space is available from which to allocate Java objects. Various causes exist, including the repeated allocation of large objects (no single free fragment is large enough to satisfy the request). In this case, even if we see a good amount of free heap, memory allocation fails.

2. Memory leaks in the native heap: This problem occurs when a native resource, such as a database connection, is leaking.

3. Perm space is exhausted: The permanent generation of the heap contains class objects. If your code uses a lot of reflection/introspection, a number of temporary class objects are created that can exhaust the perm space. More info can be found here.
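The 'unbounded cache' case above deserves a concrete sketch: a static collection that only ever grows keeps every object it holds reachable, so the garbage collector can never reclaim them (the class and method names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class LeakyCache {
    // Classic leak pattern: a static, unbounded collection that only grows.
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        // Each request parks 1 MB in the cache; nothing ever evicts it,
        // so these buffers stay strongly reachable forever.
        CACHE.add(new byte[1024 * 1024]);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) handleRequest();
        // Heap usage grows linearly with requests; eventually: OutOfMemoryError.
        System.out.println("Entries retained: " + CACHE.size());
    }
}
```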

Wednesday, February 13, 2008

Can you request Google, Yahoo to not index your site?

I knew how web crawlers/bots work to index your website. In fact, Google also has a feature for submitting a Sitemap to better index the pages in your site. But what if you don't want some pages to be crawled? Well, today I learned that there is a way to request crawlers to ignore certain pages. The trick is to place a 'robots.txt' file in the root directory of the site. This text file lists folders and URLs that should not be crawled.
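A minimal robots.txt might look like this (the paths are purely illustrative):

```
User-agent: *
Disallow: /admin/
Disallow: /drafts/
```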
The protocol, however, is purely advisory. It relies on the cooperation of the web robot, so marking an area of a site out of bounds with robots.txt does not guarantee privacy.

Friday, February 08, 2008

Loosely typed vs Strongly typed webservices

This article gives a good overview of the difference between a loosely typed and a strongly typed webservice.

Jotting down some points from the article here:

A loosely typed webservice is one where the WSDL (interface definition) does not expose the XML schema of the message to be transferred. One example of a loosely typed description is a Web service that encodes the actual content of a message as a single string. The service interface describes one input parameter, of type xsd:string, and one output parameter.

Another way of indicating that no strict definition of a message format exists in a WSDL file is the use of the xsd:any element. Its occurrence in a schema simply indicates that any kind of XML can appear in its place at runtime. In that respect, it behaves much like the case where single string parameters are defined. The difference is that here, real XML goes into the message, as opposed to an encoded string.
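The two styles, side by side, as illustrative schema fragments (element names invented):

```xml
<!-- Loosely typed: anything goes at runtime -->
<xsd:element name="processMessage">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:any processContents="lax"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>

<!-- Strongly typed: structure and types are enforced by the contract -->
<xsd:element name="placeOrder">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="orderId" type="xsd:string"/>
      <xsd:element name="quantity" type="xsd:int"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>
```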

Pros and Cons:
The service consumer and service provider need to agree on a common format that they both understand, and they both need to develop code that builds and interprets that format properly. There is not much a tool can do to help here, since the exact definition of the message content is not included in the WSDL definition.

On the positive side, if the message structure changes, you do not have to update the WSDL definition. You just have to ensure that all the participants are aware of such a change and can handle the updated format.

Examples where Loosely coupled services make sense:
1. For example, assume you have a set of coarse-grained services that take potentially large XML documents as input. These documents might have different structures, depending on the context in which they are used. And this structure might change often throughout the lifetime of the service. A Web service can be implemented in a way that it can handle all these different types of messages, possibly parsing them and routing them to their eventual destination. Changes to message formats can be made so that they are backward compatible, that is, so that existing code does not have to be updated.

2. Another example is the use of intermediaries. They have a Web service interface and receive messages. But many times, they provide some generic processing for a message before routing it to its final destination. For example, an intermediary that provides logging of messages does not need a strongly typed interface, because it simply logs the content of any message that it receives.

3. Finally, a message format might exist that can not be described in an XML schema, or the resulting schema cannot be handled by the Web service engine of choice.

A strongly typed service contains a complete definition of its input and output messages in XML Schema, a schema that is either included in the WSDL definition or referred to by that WSDL definition.

Strongly typed Web services provide both consumers and providers of services with a complete definition of the data structure that is part of a service. Tools can easily generate code from such a formal contract that makes the use of SOAP and XML transparent to the client and server-side code.

Smart clients....Has the pendulum swung back?

Before the emergence of the web, most UI development was 'rich-thick-client' based, on MS platforms such as VB, MFC, etc. VB development enjoyed a lot of popularity because of its component-based model and the ready availability of rich UI widgets and development environments such as Visual Studio.
Such rich clients suffered from two major drawbacks: DLL hell and application upgrade/deployment.
Maintaining the correct version of the application across all clients distributed across locations was a big pain.

To address this problem, many thick clients were migrated to a web-based UI with a centralized server and repository. But this 'silver bullet' had two main drawbacks of its own - the user experience was nowhere close to that of a rich client, and even with the emergence of RIA (Rich Internet Applications) and AJAX, the web UI had serious limitations for applications requiring tons of data input and fast keyboard-based navigation across screens.

And it is here that .NET smart clients come into the picture. A .NET smart client has the following features:
- Uses the 'ClickOnce' platform feature to automatically update DLLs on the client machine. Thus deployment is no longer a problem.
- Smart clients built on .NET resolve the DLL hell problem using metadata versioning of assemblies.
- Smart clients can talk with webservices to obtain data and fulfill other business requirements.
- Smart clients can work in offline mode - offering either limited or full functionality.
- Smart clients can easily integrate with other applications on the desktop providing an integrated approach to the user.
- Using WPF, users can be given the rich user interface they demand.

Tuesday, February 05, 2008

Deep linking in AJAX applications

AJAX applications face the challenge of deep-linking because often the page URL does not change - only the content changes via AJAX.
And without deep linking, spiders and bots may not be able to index your page.

Currently there are 2 strategies for deep-linking AJAX sites:
- Using anchors. On load, JavaScript checks location.href for an anchor and then calls the existing AJAX methods to add and remove the appropriate content.
- Placing the stateful URL ("Link to this page") somewhere within the page, at a location and using a style that will become well-known conventions. Both Google Maps and MSN Virtual Earth follow this technique.

"Neither Google Maps nor MSN Virtual Earth records the state of a particular map view on the browser’s URL-line. Google Maps hides all the parameters. MSN Virtual Earth presents only a GUID whose purpose I don’t yet understand, but which doesn’t record the map’s state. In both cases, the stateful URL is found elsewhere than on the URL-line. Google Maps presents it directly, with a link icon and a Link to this page label. MSN Virtual Earth presents it indirectly — clicking Permalink leads to a popup window containing the stateful URL."

Sunday, February 03, 2008

Technical Architect responsibilities defined

This weekend, I was going through some of the old books in my library and found one that was my favourite during my developer days - the J2EE Architect's Handbook. I perused the first chapter, which tries to define the role of a technical architect. The tasks given in the book were things I was doing day in and day out on projects. The simple and lucid language used in that chapter prompted me to blog some snippets from it.

Technical Architect

  • Identifies the technologies that would be used for the project.
  • Recommends the development methodologies and frameworks for the project.
  • Provides the overall design and structure to the application.
  • Ensures that the project is adequately defined.
  • Ensures that the design is adequately documented.
  • Establishes design/coding guidelines and best practices. Drives usage of design patterns.
  • Mentors developers for difficult tasks.
  • Enforces compliance with coding guidelines using code reviews etc.
  • Assists the project manager in estimating project costs and efforts.
  • Assists management in assessing technical competence of developers.
  • Provides technical advice and guidance to the project manager.
  • Ensures that the data model is adequate.
  • Guides the team in doing POCs and early risk assessments.

Saturday, February 02, 2008

Basic tools for Solaris Performance monitoring

One of the development teams I was consulting for was facing some performance problems on the Solaris platform. The Websphere JVM was hanging and the performance of the application dropped.

The three basic tools on the Solaris platform that are invaluable at this time are:
1. vmstat - gives CPU stats and memory utilization
2. iostat - gives I/O stats
3. netstat - gives network stats

This link gives good info on the rules of thumb to apply when analyzing the output of these commands.

Another command that is very powerful on Solaris is 'prstat'. prstat is more versatile than the 'top' command present on most Unix systems.

To find out the top 5 processes consuming CPU time:
prstat -s cpu -a -n 5

To find out the top 5 processes consuming most memory:
prstat -s size -n 5

To dump the CPU usage of a process every 15 secs to a file
prstat -p 2443 15 > server.out &

A very nice feature of prstat is that by using the -L switch, prstat will report statistics for each thread of a process.
prstat -L -p 3295

The following links also provide good info about some basic troubleshooting tips:

Friday, February 01, 2008

Capabilities of an ESB

Just saw Mark Richards' presentation on ESBs. The presentation is simple and easy to understand, without all the fancy jargon.
The streaming video is available at:

Capabilities of an ESB:

Routing: This is the ability to route a request to a particular service provider based on some criteria. Note that there are many different types of routing - e.g. static, content-based, policy-based - and not all ESB providers offer them all. This is a core capability that an ESB has to offer (at least static routing); without it, a product cannot be considered an ESB.
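Content-based routing can be illustrated with a toy dispatcher that inspects the message payload and picks an endpoint. All the names below are invented for illustration - real ESBs do this declaratively, not in hand-written code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

public class ContentBasedRouter {
    // Ordered routing table: the first matching predicate wins.
    private final Map<Predicate<String>, String> routes = new LinkedHashMap<>();

    ContentBasedRouter route(Predicate<String> matches, String endpoint) {
        routes.put(matches, endpoint);
        return this;
    }

    /** Inspect the message content and return the target endpoint. */
    String dispatch(String message) {
        for (Map.Entry<Predicate<String>, String> r : routes.entrySet()) {
            if (r.getKey().test(message)) return r.getValue();
        }
        return "dead-letter"; // no route matched
    }

    public static void main(String[] args) {
        ContentBasedRouter router = new ContentBasedRouter()
            .route(m -> m.contains("<order>"), "OrderService")
            .route(m -> m.contains("<invoice>"), "BillingService");
        System.out.println(router.dispatch("<order>42</order>")); // routed by content
        System.out.println(router.dispatch("<refund/>"));          // falls to dead-letter
    }
}
```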

Message transformation: Ability to convert the incoming business request to a format understood by the service provider (can include re-structuring the message as well).

Message enhancement: We don't want to change the client and in order to do that we often need to enhance (removing or adding information, rules based enhancement) the message sent to the service provider.

Protocol transformation: The ability to accept one type of protocol as input and communicate to the service provider through a different protocol (XML/HTTP->RMI, etc).

Message Processing: Ability to perform request management by accepting an input request and ensuring delivery back through message synchronization i.e queue management.

Service orchestration: Low-level coordination of different service implementations. Hopefully it has nothing to do with BPEL. It's usually implemented through aggregate services or interprocess communication.

Transaction management: The ability to provide a single unit of work for a service request for the coordination of multiple services. Usually very limited since it's difficult to propagate the transaction between different services. But at least the ESB should provide a framework for compensating transactions.

Security: Provides the 'A's of security (authentication, authorization, auditing). There are no "silos" in SOA, so the ESB has to be able to provide this.

Thursday, January 31, 2008

yWorks - products

Came across this company that calls itself a 'diagramming company'.
A list of their product offerings can be found here.

Two interesting 'free' products that are included are:
- ANT script visualizer
- Java obfuscator

Monday, January 28, 2008

Ready to go templates

Believe it or not! There is an open-source project that aims to provide ready-to-go templates for all phases of a product or an application.

Monday, January 21, 2008

Difference between CPU time and elapsed time

In performance metrics jargon, we often come across two measurements: CPU time and total elapsed time. So what's the difference between the two?

CPU time is the time for which the CPU was busy executing the task. It does not include time spent waiting for I/O (disk I/O or network I/O). Since I/O operations, such as reading files from disk, are performed by the OS, these operations may involve a noticeable amount of waiting for the I/O subsystems to complete. This waiting time is included in the elapsed time, but not in the CPU time. Hence CPU time is usually less than elapsed time.

But in certain cases, the CPU time may be more than the elapsed time!
When multiple threads run on a multi-processor or multi-core system, more than one CPU may be used to complete a task. In this case, the total CPU time (summed across cores) may exceed the elapsed time.
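The usual case (CPU time < elapsed time) is easy to observe from within a Java program using ThreadMXBean, which reports per-thread CPU time. The helper name below is mine; a sleeping task elapses wall-clock time while consuming almost no CPU:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsElapsed {
    /** Returns {cpuMillis, elapsedMillis} for the given task, run on this thread. */
    static long[] measure(Runnable task) {
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long cpu0 = bean.getCurrentThreadCpuTime(); // nanoseconds of CPU actually consumed
        long wall0 = System.nanoTime();             // wall-clock start
        task.run();
        return new long[] {
            (bean.getCurrentThreadCpuTime() - cpu0) / 1_000_000,
            (System.nanoTime() - wall0) / 1_000_000
        };
    }

    public static void main(String[] args) {
        long[] t = measure(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
        });
        // Sleeping counts toward elapsed time but burns almost no CPU.
        System.out.println("CPU: " + t[0] + " ms, elapsed: " + t[1] + " ms");
    }
}
```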

Monday, January 14, 2008

Difference between different editions of VS 2008

I was looking for an official comparison between the different VS2008 editions and thankfully found it here.

The Express editions are separate downloads for each language, i.e. one each for C#, VB.NET, etc.
So if you want to open projects in different languages, go for Standard or Professional. Also, the Express editions do not have visual designers for WPF and WF.

Some of the differences between the Standard and Professional editions: you can use the Professional edition to build MS Office applications, and the Professional edition has support for Crystal Reports, unit testing, etc.

More information can be found at the following links:

ETag response header

HTTP 1.1 introduced a new kind of cache validator: the ETag.

ETags are unique identifiers, generated by the server, that change every time the resource is updated. An ETag is a string that uniquely identifies a specific version of a component; the only format constraint is that the string be quoted.

ETag: "10c24bc-4ab-457e1c1f"

The problem with ETags is that they are typically constructed using attributes that make them unique to the specific server hosting a site. So in a typical clustered environment, if the next request for a cached resource goes to a different server, the ETag won't match and the resource will be downloaded again.

Most modern Web servers will generate both ETag and Last-Modified validators for static content automatically; you won't have to do anything.

More info about ETag can be found at the following links:

The advantage of ETags over the Last-Modified validator is that an ETag doesn't require a date - it can be any string. So you can compose an ETag using any custom logic and source.
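One custom scheme that also sidesteps the clustered-server problem mentioned above is to derive the ETag from a hash of the content, so every server computes the same tag for the same bytes. A sketch (the class and method names are mine):

```java
import java.security.MessageDigest;

public class EtagExample {
    /** Content-hash ETag: identical on every server in a cluster, quoted as required. */
    static String etagFor(byte[] content) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        StringBuilder hex = new StringBuilder("\"");
        for (byte b : md5.digest(content)) hex.append(String.format("%02x", b & 0xff));
        return hex.append('"').toString();
    }

    /** Status code for a conditional GET carrying an If-None-Match header. */
    static int respond(String ifNoneMatch, byte[] currentContent) throws Exception {
        // 304 Not Modified if the client's cached validator still matches.
        return etagFor(currentContent).equals(ifNoneMatch) ? 304 : 200;
    }

    public static void main(String[] args) throws Exception {
        byte[] v1 = "<html>v1</html>".getBytes();
        String cached = etagFor(v1);
        System.out.println(respond(cached, v1));                         // 304: cache fresh
        System.out.println(respond(cached, "<html>v2</html>".getBytes())); // 200: changed
    }
}
```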

Parallel download in browsers

We all know that browsers make a separate request to the server for each resource contained in a page. But what is the maximum number of persistent connections that a browser can make to download all the elements of a page?

Well, to my surprise, the W3C specification for HTTP 1.1 actually states this:
"A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. "

What this essentially means is that browsers can download no more than two components in parallel per hostname. You always discover new things :)

YSlow for Firebug - cool cool extension

I have been a die-hard fan of the 'Firebug' add-on to Firefox. Firebug has proved to be invaluable to me for diagnosing front-end performance issues.
Now Yahoo has extended Firebug with an extension called 'YSlow' that shows good aggregate data and also recommends changes to be made to the page.

More information can be found at