Tuesday, April 26, 2011

Remoting in .NET

.NET Remoting is an enabler for application communication. It is a generic system for different applications to use to
communicate with one another. .NET objects are exposed to remote processes, thus allowing interprocess communication.
The applications can be located on the same computer, different computers on the same network, or even computers across
separate networks.
Independent Devices

Rules of Remoting:

  1. There must be two independent devices. 
  2. One device provides the functionality and the other accesses it. 
  3. The functionality must be running (active); for example, a television must be switched on for its remote to work.
    An example of remoting is SQL Server and the SQL Server client edition: the client edition tool works as
    the remote. The same applies to Oracle.

    If we need to implement such functionality, we use Remoting. It can be used in 1-tier or other architectures.
Remoting process


When a call is made from the client, a marshal object is created at the server.
Marshal object: a special type of object that catches the request, analyzes it, executes the functionality, and sends the data/
information back.

There are two types of communication between server and client:
Singleton - a single marshal object (SMO) handles multiple requests (MR).
SingleCall - multiple marshal objects (MMO) handle multiple requests (MR), one per call.
During communication the server does not know where the client is, and likewise the client does not know
the server's location. Communication is not done directly; it goes through a proxy.
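The choice between the two modes is made when the server registers the marshal object. As a sketch (the type name BusinessLogic.BL and the endpoint name "xyz" are taken from the example later in this post; in practice you register one mode or the other, not both):

```csharp
// Singleton: one marshal object serves every client request, so state is shared.
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(BusinessLogic.BL), "xyz", WellKnownObjectMode.Singleton);

// SingleCall: a fresh marshal object is created for each request (no shared state).
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(BusinessLogic.BL), "xyz", WellKnownObjectMode.SingleCall);
```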

The proxy contains two parts:
Stub - implements the skeleton on the client side.
Skeleton - contains the rules of communication.

The server contains the skeleton and the client contains the stub; the stub communicates with the skeleton.

Three things are required to implement Remoting:
Business logic - the marshal object
Skeleton at the server
Stub at the client

Step 1: Create the business logic that will be served to the client.
Create a class that inherits from MarshalByRefObject. To do this, create a new project and select Class Library, as in the figure below.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace BusinessLogic
{
    public class BL:MarshalByRefObject
    {
        public string Greet(string strName)
        {
            return ("Hello " + strName);
        }
    }
}
Build the solution. After this, it will create a library file (.dll) that you will use in the next step, the creation of the server.
Step 2: Server code
Create a new project, select Console Application, and add a reference to System.Runtime.Remoting, as in Fig 2.


Now add the BusinessLogic dll to your solution using Add Reference, and write the following code:
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
namespace Server
{
    class Program
    {
        static void Main(string[] args)
        {
            TcpChannel tc = new TcpChannel(8030);
            ChannelServices.RegisterChannel(tc, false);
            RemotingConfiguration.RegisterWellKnownServiceType(typeof(BusinessLogic.BL), "xyz", WellKnownObjectMode.SingleCall);
            Console.Write("Server is Ready!");
            Console.ReadKey();
        }
    }
}
Here you register the channel and declare that the type of information travelling on the channel is the BusinessLogic object.
Now move on to the client.
Step 3: Client
Create a new project and select Windows Forms Application, with two text boxes and one button, as in Figure 6.
Here you add an interface with the same namespace as the business logic, declaring all the method names
exactly as they are in the business logic.
Here is the code of the class file that contains the interface:
namespace BusinessLogic
{
    public interface BL
    {
        string Greet(string strName);
    }
}

Figure 6
Now write the following code to communicate with the server:
using System;
using System.Windows.Forms;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;
namespace Client
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }
        BusinessLogic.BL obj;
        private void Form1_Load(object sender, EventArgs e)
        {
            TcpChannel tc = new TcpChannel();
            ChannelServices.RegisterChannel(tc, false);
            object o = Activator.GetObject(typeof(BusinessLogic.BL), "tcp://Niranjan-PC:8030/xyz");
            obj = (BusinessLogic.BL)o;
        }
        private void btnSend_Click(object sender, EventArgs e)
        {
            txtReceivedMessage.Text = obj.Greet(txtMessage.Text);
        }
    }
}
Here the client gets an object typed to our interface structure; the server name is specified in the URL, together with the
service that we placed on the server under the name "xyz".

.NET Remoting versus Distributed COM

In the past interprocess communication between applications was handled through Distributed COM, or DCOM. DCOM
works well and the performance is adequate when applications exist on computers of similar type on the same network.
However, DCOM has its drawbacks in the Internet connected world. DCOM relies on a proprietary binary protocol that
not all object models support, which hinders interoperability across platforms. In addition, have you tried to get DCOM to
work through a firewall? DCOM wants to communicate over a range of ports that are typically blocked by firewalls. There
are ways to get it to work, but they either decrease the effectiveness of the firewall (why bother to even have the firewall
if you open up a ton of ports on it), or require a firewall that supports binary traffic over port 80.
.NET Remoting eliminates the difficulties of DCOM by supporting different transport protocol formats and communication
protocols. This allows .NET Remoting to be adaptable to the network environment in which it is being used.
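As a sketch of that flexibility (using the same BusinessLogic.BL and "xyz" endpoint as the server example earlier in this post; the port number is arbitrary), only the channel registration needs to change to expose the service over HTTP instead of TCP:

```csharp
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Http;

namespace Server
{
    class HttpProgram
    {
        static void Main(string[] args)
        {
            // Same service as before, but listening over HTTP rather than TCP.
            HttpChannel hc = new HttpChannel(8031);
            ChannelServices.RegisterChannel(hc, false);
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(BusinessLogic.BL), "xyz", WellKnownObjectMode.SingleCall);
            Console.Write("HTTP server is ready!");
            Console.ReadKey();
        }
    }
}
```

The client would then activate the object with an "http://" URL instead of "tcp://".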

.NET Remoting versus Web Services


Unless you have been living in a cave, or are way behind in your reading, you have probably read something about Web
services. When you read the description of .NET Remoting it may remind you a lot of what you've read about Web services.
That is because Web services fall under the umbrella of .NET Remoting, but have a simplified programming model and are
intended for a wide target audience.
Web services involve allowing applications to exchange messages in a way that is platform, object model, and programming
language independent. Web services are stateless and know nothing about the client that is making the request. The clients
communicate by transferring messages back and forth in a specific format known as the Simple Object Access Protocol, or
SOAP. (Want to get some funny looks in the hallway? Stand around in the hallway near the marketing department with your
colleagues and discuss the benefits of using SOAP).

The following list outlines some of the major differences between .NET Remoting and Web services that will help you to
decide when to use one or the other:
  • ASP.NET based Web services can only be accessed over HTTP. .NET Remoting can be used across any protocol.

  • Web services work in a stateless environment where each request results in a new object created to service the request.
    .NET Remoting supports state management options and can correlate multiple calls from the same client and support
    callbacks.

  • Web services serialize objects through XML contained in the SOAP messages and can thus only handle items that can
    be fully expressed in XML. .NET Remoting relies on the existence of the common language runtime assemblies that
    contain information about data types. This limits the information that must be passed about an object and allows objects
    to be passed by value or by reference.

  • Web services support interoperability across platforms and are good for heterogeneous environments. .NET Remoting
    requires the clients be built using .NET, or another framework that supports .NET Remoting, which means a
    homogeneous environment.

Wednesday, April 20, 2011

jQuery Rocks

This is a basic tutorial, designed to help you get started using jQuery. If you don't have a test page set up yet, start by creating
a new HTML page with the following contents:

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Demo</title>
</head>
<body>
<a href="http://jquery.com/">jQuery</a>
<script src="jquery.js"></script>
<script>
</script>
</body>
</html> 
Edit the src attribute in the script tag to point to your copy of jquery.js. For example, if jquery.js is in the same directory as
your HTML file, you can use:

<script src="jquery.js"></script> 
You can download your own copy of jQuery from the Downloading jQuery page.


Launching Code on Document Ready

The first thing that most JavaScript programmers end up doing is adding some code to their program, similar to this:
window.onload = function(){ alert("welcome"); } 
Inside of it goes the code that you want to run right when the page is loaded. Problematically, however, that code
isn't run until all images are finished downloading (this includes banner ads). And the reason for using window.onload in the first
place is that the HTML document isn't finished loading yet when you first try to run your code.

To circumvent both problems, jQuery has a simple statement that checks the document and waits until it's ready to be
manipulated.

$(document).ready(function(){
    // Your code here
});
Inside the ready event, add a click handler to the link:
$(document).ready(function(){
    $("a").click(function(event){
        alert("Thanks for visiting!");
    });
});
Save your HTML file and reload the test page in your browser. Clicking the link on the page should make a browser alert
pop up before leaving to go to the main jQuery page.

For click and most other events, you can prevent the default behaviour - here, following the link to jquery.com - by calling
event.preventDefault() in the event handler:

$(document).ready(function(){
    $("a").click(function(event){
        alert("As you can see, the link no longer took you to jquery.com");
        event.preventDefault();
    });
});
 
Use the following code to see a small part of jQuery's capabilities. Create an HTML file and run it:
<html>
<head>
    <title></title>
    <script src="scripts/jquery-1.5.2.min.js"></script>
</head>
<body>
<!--
<script type="text/javascript" language="javascript">
    window.onload = function () { alert("Welcome"); }
</script>
-->
<!-- this is the jQuery onload equivalent -->
<!--
<script>
    $(document).ready(function () { alert("welcome"); })
</script>
-->
<!-- jQuery script for the link's click event action -->
<script>
    $(document).ready(function () {
        $("a").click(function (event) {
            // event.preventDefault();
            // alert("Thanks for visiting!");
            event.preventDefault();
            $(this).hide("slow");
        });
        $("#jai").mouseenter(function (event) {
            event.preventDefault();
            $(this).hide("slow");
        });
    });
</script>
<div id="jai" style="width:200px; height:200px; margin:0px auto; background-color:Blue;"></div>
<a href="#" id="ram">Ram</a>
</body>
</html>

Thursday, April 14, 2011

Session - State Management in ASP.net


To manage information on the server side, Session is the best method. It provides:
* Security
* Reliability
* It can store any type of data
How a session works and is created on the server:
When the browser sends a request to your website, the server generates a unique ID called the session id, and
this session id is managed in a session table. A table is created on the server for managing the
session ids of the various web clients (browsers).
This session id is sent to the client, and after the first request the client sends it back to the server with every request.
Figure 1
When the server assigns a session id, an event called 'Session_Start' is also fired.
The generated session id is stored in a cookie: the server sends the session id to the client, and the client sends
this session id every time, so the mechanism is based on cookies. The cookie is created
on the client side with the name 'ASP.NET_SessionId'. It is a non-persistent cookie.
A simple program that demonstrates how session management works:
1. Create an empty web application.
2. Add a web page and name it a.aspx.
3. Add Global.asax (Global Application Class).
4. Add a button on a.aspx
and write the following code:
protected void Page_Load(object sender, EventArgs e)
   {
       Response.Write("No of cookies: " + Request.Cookies.Count.ToString());
   }
   protected void btnSend_Click(object sender, EventArgs e)
   {
       Response.Write("No of cookies: " + Request.Cookies.Count.ToString());
   }
When you run it the first time, it will show the number of cookies = 0, but when you click the button
it will show count = 1. The session id is sent by the server after the first request, and when you click the button
the request carries the cookie that stores the session id.
Transferring information from one page to another:
1. Add a text box on a.aspx and add a new page named b.aspx.
2. Write the code below in the button's click event.
Figure 2 Form a.aspx

protected void btnSend_Click(object sender, EventArgs e)
   {
      Session.Add("msg",txtMessage.Text);
       Response.Redirect("b.aspx");
   }
3. Now write the following code in b.aspx's Page_Load event.
protected void Page_Load(object sender, EventArgs e)
   {
       Response.Write("Message From A:- " +Session["msg"].ToString());
       Response.Write("<br/>Cookie Name:- "+Request.Cookies[0].Name);
       Response.Write("<br/>Session Id:- "+Request.Cookies[0].Value);
   }
The output after clicking the Send button, with a message in the text box:

Figure 3 Output on another page

Case
What happens if someone accesses b.aspx directly?

Figure 4 Error Message: if someone accesses b.aspx directly

It will give an error because we have not assigned a value to "msg" in the session. To avoid this, set a
default value for "msg" at the start of the session:
Go to Global.asax, find the event named Session_Start, and initialize the session there
using the Add method:
Session.Add("msg", "null");
Then, in the button's click event, just update the value in the session instead of adding it again:
Session["msg"] = txtMessage.Text;
Response.Redirect("b.aspx");
Some session methods and properties:
Abandon() - used to kill the session by force, as in Session.Abandon();
CookieMode - property controlling cookie or cookieless operation
IsCookieless - checks whether the client supports cookies
IsNewSession - checks whether this is the client's first request to the server
Mode - InProc or OutProc
           InProc - session data is kept in the web server's own process
           OutProc - information is saved in an external application such as the state server or a SQL Server database
SessionID - returns the session id
Timeout - specifies the number of minutes that a session can remain idle before the server terminates it
automatically. The default is 20 minutes. Session.Timeout has no hard-coded limit. It should not be set
higher than 20 minutes (except in special cases) because every open session holds onto memory. It
should also not be set lower than 4 minutes, because clients rarely respond within that time, resulting in a
loss of session state. Most of the books quote 20 minutes; according to me it depends upon IIS.

IIS 6.0: The minimum allowed value is 1 minute and the maximum is 1440 minutes.
Changing session-related information
If you want to change session-related settings, go to Web.config and use the following sessionState
tag inside the <system.web> section:
<sessionState timeout="1" cookieName="Niranjan" regenerateExpiredSessionId ="true"  />
regenerateExpiredSessionId makes the server generate a new session id when the session expires after the fixed time. It will not work
on the local machine; if you request from a remote machine, it will work.
To Implement CookieLess
Use the cookieless attribute in sessionState to implement this.
It has four options:
1. AutoDetect - first detects whether the browser supports cookies; if it does, cookies are used, otherwise cookieless mode.
2. UseCookies - always use cookies.
3. UseDeviceProfile - decide based on the device's browser profile.
4. UseUri - the session id is managed in the URL. This is not good: if you use that URL in another browser it will get the same
    session, which is unsafe in the case of online transactions.
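For example, a cookieless configuration in Web.config might look like this (the attribute values here are illustrative):

```xml
<system.web>
  <!-- the session id travels in the URL instead of the ASP.NET_SessionId cookie -->
  <sessionState cookieless="UseUri" timeout="20" />
</system.web>
```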
Session mode:
InProc
OutProc
If you want the information to be saved even if the server restarts or anything else happens to the web application, then
use the OutProc mode of session state.
You can store the information in two external applications:
1. StateServer
To use the state server, do the following:
<sessionState timeout="1" cookieName="Niranjan" regenerateExpiredSessionId ="true"  mode=
"StateServer" stateConnectionString ="tcpip=localhost:42424" stateNetworkTimeout="5"/>


If you restart IIS while a web application is running, the session information is normally lost. If you have made the
settings above, your session values will not be lost when your app resets or anything else happens.
One thing: if you run your website right after making these settings, having just restarted IIS,
it will give an error, because the state server has not been started. To start the state server, go to Control Panel,
open Services, and start the ASP.NET State Service.
Figure 5 - Enabling ASP.NET State Service
Now it will work fine.
2. SQLServer
To use SQL Server, do the following:
<sessionState  mode="SQLServer" sqlConnectionString="data source=.; integrated security=true"
sqlCommandTimeout="5"/>
To manage session information in SQL Server, you have to create a database and tables in SQL Server.
If you want to use temporary storage, tempdb provides the tables for storing the information; you create
them using the SQL scripts supplied with the .NET Framework.
These scripts are stored in the directory C:\Windows\Microsoft.NET\Framework\v2.0.50727.
To create the tables in tempdb, the script file name is InstallSqlState.sql.
If you want to save state information permanently, use the other script, InstallPersistSqlState.sql.
It will create a new database with the required tables to store the state information.
Open Query Analyzer and run these scripts in the database.
Remember:
* If the tempdb tables are installed, state will not be saved to the ASPState db (the permanent storage database
created by running InstallPersistSqlState.sql): the tempdb tables are checked first, then your permanent storage database.
* Your connection string does not specify a database name, because .NET knows the DB name.
 

Monday, April 11, 2011

AJAX

Ajax the acronym stands for Asynchronous JavaScript and XML. Here's a curveball: Ajax doesn't
have to use XML, and neither does it have to be asynchronous. Ajax applications can use XML,
and they can be updated asynchronously. These are quite common tricks and techniques used
to update the page, but they are not tied to these technologies.

Partial post back
We send specific information to the server, and according to this information we get a response.
For example:
we have country and state drop-down lists. If we select a country, the state drop-down list is
filled according to that country.
This is done with JavaScript, but we do not need to know JavaScript: in .NET we have AJAX controls. When
their events fire, the controls' values are sent to the server and the fetched data is applied.
We do not need to write the script ourselves; these controls generate the JavaScript code and hand it to the browser.
To implement AJAX:
-> First add a ScriptManager control.
-> To do a partial post back, there is a control called UpdatePanel.
-> Now add controls to the page inside the UpdatePanel.
A small example:
* Add ScriptManager and UpdatePanel controls to your page
* Add a label and a button inside the UpdatePanel
* Add a label and a button on the page outside the UpdatePanel

protected void btnUpdatePanelButton_Click(object sender, EventArgs e)
{
    lblUpdatePanelLabel.Text = DateTime.Now.ToString();
}
protected void Button1_Click(object sender, EventArgs e)
{
    lblWebFormButton.Text = DateTime.Now.ToString();
}
When you click the button outside the panel (Button1), the page does a full post back and then displays the date; but
when you click btnUpdatePanelButton, the page does not post back; only that control's
information is sent to the server for the changes.
Run it and check the difference.
One more thing:
put these buttons' code in the form's Load event and check the behavior of the controls.
The ScriptManager generates the JavaScript events, etc., for the controls that are inside the UpdatePanel.
Synchronous post back
By default, a post back is a synchronous post back.
Asynchronous post back
If your control is outside the UpdatePanel and you want to implement AJAX on a particular
event of that control, it is done by an asynchronous post back.
To do this, open the properties of the UpdatePanel; there is a property named Triggers.
* Here you can specify the name of a particular control, and the event of that control on which the asynchronous
   post back takes place.
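The same trigger can also be declared in the page markup. As a sketch (the control names match the example above; the markup is illustrative):

```aspx
<asp:ScriptManager ID="ScriptManager1" runat="server" />
<asp:UpdatePanel ID="UpdatePanel1" runat="server">
    <ContentTemplate>
        <asp:Label ID="lblUpdatePanelLabel" runat="server" />
    </ContentTemplate>
    <Triggers>
        <%-- Button1 lives outside the panel; its click now updates the panel asynchronously --%>
        <asp:AsyncPostBackTrigger ControlID="Button1" EventName="Click" />
    </Triggers>
</asp:UpdatePanel>
```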

protected void btnUpdatePanelButton_Click(object sender, EventArgs e)
    {
        lblUpdatePanelLabel.Text = DateTime.Now.ToString();
    }
    protected void Button1_Click(object sender, EventArgs e)
    {
        //lblWebFormButton.Text = DateTime.Now.ToString();
        lblUpdatePanelLabel.Text = "Asynchronous PostBack";
    }
Things to remember:
* Controls whose information is to be fetched partially must be inside the UpdatePanel.
* The event of the control that fires can be inside or outside the UpdatePanel.
* If the control is outside the UpdatePanel and AJAX is implemented for that outside control,
it is called an asynchronous partial post back.

Sunday, April 10, 2011

Sharepoint Object Model

The Microsoft.Office.Server namespace is the root namespace of all Office Server objects, and Microsoft.SharePoint
is the root namespace for all WSS objects.

Figure 1 illustrates some of the key classes contained in each of these namespaces, as well as to which functional
area they belong. The namespace of each group of classes is shown in parentheses next to the functional area.
For example, all list-related functionality is handled within the
Microsoft.SharePoint namespace.

FIGURE 1 Key classes in the SharePoint object model, segmented by functional area.

Object Model                Corresponding Microsoft Class
Web application
Site Collections            SPSiteCollection
Site Collection             SPSite
Sites                       SPWebCollection
Site                        SPWeb
Lists                       SPListCollection
List/Library                SPList
Items/Rows                  SPListItemCollection
Item/Row                    SPListItem

A Simple Program to demonstrate SharePoint Object Model.

Create a list in your website and create three columns named: Title, Desc, Status

1. Create a Windows application and a form in it.
Figure :  Object Model Form
2. Now here is the code for this application to work with the list items of your SharePoint website:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using Microsoft.SharePoint;

namespace SPObjectModel
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            LoadData();
        }
/* this is like a bind function in ASP.NET: data binding with a grid view */
        private void LoadData()
        {
            try
            {
// first open your main site - in the yellow box write your SharePoint website URL
                using (SPSite spSite = new SPSite("http://s-e0912c86c85d4:40/"))
                {   // now get the object of the website that contains the list
                    using (SPWeb spWeb = spSite.OpenWeb())
                    {
                        lstItems.Items.Clear();
                        // get your list from the site by name
                        SPList list = spWeb.Lists["Employee"];
                        foreach (SPListItem item in list.Items)
                        {
                            lstItems.Items.Add(item.Title);
                        }
                    }
                }

            }
            catch (Exception ex)
            {
                MessageBox.Show("Error : " + ex.Message);
            }
        }

        private void lstItems_SelectedIndexChanged(object sender, EventArgs e)
        {
            try
            {
                using (SPSite spSite = new SPSite("http://s-e0912c86c85d4:40/"))
                {
                    using (SPWeb spWeb = spSite.OpenWeb())
                    {
                        SPList list = spWeb.Lists["Employee"];

                        SPQuery spQuery = new SPQuery();
                        spQuery.Query = "<Where><Eq><FieldRef Name='Title' />"
                            + "<Value Type='Text'>" + lstItems.SelectedItem + "</Value></Eq></Where>";

                        SPListItemCollection items= list.GetItems(spQuery);
                        SPListItem item = items[0];

                        txtTitle.Text = Convert.ToString(item["Title"]);
                        txtDesc.Text = Convert.ToString(item["Desc"]);
                        txtStatus.Text = Convert.ToString(item["Status"]);
                    }
                }

            }
            catch (Exception ex)
            {
                MessageBox.Show("Error : " + ex.Message);
            }
        }

        private void btnAdd_Click(object sender, EventArgs e)
        {
            try
            {
                using (SPSite spSite = new SPSite("http://s-e0912c86c85d4:40/"))
                {
                    using (SPWeb spWeb = spSite.OpenWeb())
                    {
                        SPList list = spWeb.Lists["Employee"];

                        SPListItem item = list.Items.Add();

                        item["Title"] = txtTitle.Text;
                        item["Desc"] = txtDesc.Text;
                        item["Status"] = txtStatus.Text;

                        item.Update();

                        txtTitle.Text = "";
                        txtDesc.Text = "";
                        txtStatus.Text = "";

                        LoadData();
                    }
                }

            }
            catch (Exception ex)
            {
                MessageBox.Show("Error : " + ex.Message);
            }
        }

        private void btnUpdate_Click(object sender, EventArgs e)
        {
            try
            {
                using (SPSite spSite = new SPSite("http://s-e0912c86c85d4:40/"))
                {
                    using (SPWeb spWeb = spSite.OpenWeb())
                    {
                        SPList list = spWeb.Lists["Employee"];

                        SPQuery spQuery = new SPQuery();
                        spQuery.Query = "<Where><Eq><FieldRef Name='Title' />"
                            + "<Value Type='Text'>" + lstItems.SelectedItem + "</Value></Eq></Where>";

                        SPListItemCollection items = list.GetItems(spQuery);
                        SPListItem item = items[0];

                        item["Title"] = txtTitle.Text;
                        item["Desc"] = txtDesc.Text;
                        item["Status"] = txtStatus.Text;

                        item.Update();

                        LoadData();
                    }
                }

            }
            catch (Exception ex)
            {
                MessageBox.Show("Error : " + ex.Message);
            }
        }

        private void btnDelete_Click(object sender, EventArgs e)
        {
            try
            {
                using (SPSite spSite = new SPSite("http://s-e0912c86c85d4:40/"))
                {
                    using (SPWeb spWeb = spSite.OpenWeb())
                    {
                        SPList list = spWeb.Lists["Employee"];

                        SPQuery spQuery = new SPQuery();
                        spQuery.Query = "<Where><Eq><FieldRef Name='Title' />"
                    + "<Value Type='Text'>" + lstItems.SelectedItem + "</Value></Eq></Where>";

                        SPListItemCollection items = list.GetItems(spQuery);
                        SPListItem item = items[0];

                        item.Delete();

                        LoadData();
                    }
                }

            }
            catch (Exception ex)
            {
                MessageBox.Show("Error : " + ex.Message);
            }
        }
    }
}


Monday, April 4, 2011

Creating cross tab queries and pivot tables in SQL

Sometimes, you just absolutely have to generate a cross tab in SQL. It won't do to have the reporting system do it, nor is it feasible to build that functionality into the application. For example:

  • You may be using a reporting solution that doesn't provide this functionality.
  • You are using a legacy application that you'd rather not fiddle with.
  • You'd like to export some data, already set out in the required format, to a text file.

It is for these exceptional cases that I decided to write a dynamic cross tab stored procedure.

The exception rather than the rule

There is a general rule which states that data manipulation of this sort is best left to the application or reporting levels of the system, and for good reason. The SQL database engine's primary role is the storage and retrieval of information, not the complex processing of it. Anyone who has tried to pound data in SQL into a meaningful set of information, using a complicated set of business rules, will probably agree that SQL tends to discourage you from doing so, and the more fancy and creative you try to make your solution, the stronger that discouragement becomes.

It has also been said that just because you can do something, it doesn't mean you should. True, but I for one think that the opposite is also applicable. Just because it seems that you can't do something, it doesn't mean you shouldn't. It's a balancing act that demands careful consideration. I have found some applications for which this stored procedure was the ideal solution – I hinted at these in the first paragraph. However, there are just as many, if not more, where it shouldn't be used. The stored procedure can have an adverse effect on performance if not used correctly, or used on an expensive or large data source. I leave you with the advice that the script described here should be used carefully and sparingly, and not sprinkled willy-nilly about your databases.

Requirements

All of my demonstration code will use the trusty Northwind sample database. It comes with SQL Server 2000 by default, but if you've gotten rid of it, or if you're running SQL Server 2005, you can download it from the Microsoft website.

Once Northwind has been downloaded and attached, create the sys_CrossTab stored procedure in the database and you're on your way.

A simple cross tab query

The Northwind database has a table called Categories, which is used to partition the full complement of products into eight distinct groups, namely Beverages, Condiments, Confections, Dairy Products, Grains/Cereals, Meat/Poultry, Produce and Seafood. If the North Wind Trading Company were a real entity, it would not be inconceivable for one of the bean counters to request a report listing the total value of orders placed, by year, by category. This would be the perfect opportunity to try out a cross tab query. The simplest way to do this is to use the CASE function.

SELECT      YEAR(ord.OrderDate) YEAR, 
            SUM(CASE prod.CategoryID WHEN 1 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) Beverages,
            SUM(CASE prod.CategoryID WHEN 2 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) Condiments,
            SUM(CASE prod.CategoryID WHEN 3 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) Confections,
            SUM(CASE prod.CategoryID WHEN 4 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) [Dairy Products],
            SUM(CASE prod.CategoryID WHEN 5 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) [Grains/Cereals],
            SUM(CASE prod.CategoryID WHEN 6 THEN   
                    det.UnitPrice * det.Quantity ELSE 0 END) [Meat/Poultry],
            SUM(CASE prod.CategoryID WHEN 7 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) Produce,
            SUM(CASE prod.CategoryID WHEN 8 THEN
                    det.UnitPrice * det.Quantity ELSE 0 END) Seafood

FROM        Orders ord

INNER JOIN  [Order Details] det
ON          det.OrderID = ord.OrderID

INNER JOIN  Products prod
ON          prod.ProductID = det.ProductID

GROUP BY    YEAR(ord.OrderDate)

ORDER BY    YEAR(ord.OrderDate)

This will return one row per year, with the total order value in each category column.

So, you quickly type up the query, you show the accountant how to import the data into an Excel spreadsheet, and you're off for a pint to celebrate your ingenuity.

Shortly thereafter, the chap decides that the data report is not quite granular enough, and would like a similar report split by product name rather than category. There are 77 products, so it involves a few more CASE statements. You grumble to yourself quietly while demonstrating your cut-and-paste proficiency and generate the new report, showing the breakdown by product.

Thanks to your report, the company decides that a few of the product lines are not generating the revenue that they should, so they drop those products and add a few new ones. The accountant is dismayed to discover that the report you wrote for him still shows the old products, and has not included the new products into the report. This is where your quick solution starts to go south.

Enter the dynamic cross tab

There comes a point when maintaining all of these 'hard-coded' cross tabs is more effort than spending some time developing a more generic, permanent solution. The solution I arrived at still essentially uses the CASE function to cross tab the data. The only real difference is that the list of CASE statements is built up dynamically, based on the data that you wish to use to describe the columns.

The stored procedure I created started as a simple dynamic CASE statement builder, using sp_executesql. It immediately became useful and soon people were asking, "How do I get it to do..." questions. Bit by bit, it evolved to the monster it is today. The intention has always been to have a procedure that was so generic and portable, that it could be added to anyone's database and cross tabs could be created immediately without any further setup or change in SQL code. Although simplicity of use may have suffered a little, I feel that the primary objective has been achieved.
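To make the core idea concrete, here is a minimal sketch of such a dynamic CASE builder (not the actual procedure, and the variable names are mine), rebuilding the category report from earlier so that new or dropped categories are picked up automatically:

DECLARE @Cases nvarchar(4000), @SQL nvarchar(4000)
SET @Cases = ''

-- Build one SUM(CASE...) expression per category, straight from the data
SELECT @Cases = @Cases + ',
    SUM(CASE prod.CategoryID WHEN ' + CAST(CategoryID AS varchar(10)) +
    ' THEN det.UnitPrice * det.Quantity ELSE 0 END) [' + CategoryName + ']'
FROM Categories
ORDER BY CategoryName

SET @SQL = 'SELECT YEAR(ord.OrderDate) [Year]' + @Cases + '
FROM Orders ord
INNER JOIN [Order Details] det ON det.OrderID = ord.OrderID
INNER JOIN Products prod ON prod.ProductID = det.ProductID
GROUP BY YEAR(ord.OrderDate)
ORDER BY YEAR(ord.OrderDate)'

EXEC sp_executesql @SQL

Add or drop a category and the next run reflects it, with no cut-and-paste required. The full procedure follows the same pattern, just with all of the field names parameterised.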

Using the stored procedure

For starters, let's generate a cross tab result set giving a list of companies in the first column, the name of the contact at the company in the second column and a list of the stocked products from column three onwards. Inside the grid, we'll give the total value of the orders placed by that company, for that product. It must be sorted by company name.

The SQL query that returns the source data that we require is

SELECT          cus.CompanyName, cus.ContactName, prod.ProductID, 
                prod.ProductName, det.UnitPrice, det.Quantity
      
FROM            Orders ord

INNER JOIN      [Order Details] det
ON              det.OrderID         = ord.OrderID

INNER JOIN      Products prod
ON              prod.ProductID      = det.ProductID

INNER JOIN      Customers cus
ON              cus.CustomerID      = ord.CustomerID

 

And here's how we'll do it:

 

EXEC sys_CrossTab
    'Orders ord
    inner join      [Order Details] det
    on              det.OrderID         = ord.OrderID
    inner join      Products prod
    on              prod.ProductID      = det.ProductID
    inner join      Customers cus
    on              cus.CustomerID      = ord.CustomerID', --  @SQLSource
    'prod.ProductID',                                      --  @ColFieldID
    'prod.ProductName',                                    --  @ColFieldName
    'prod.ProductName',                                    --  @ColFieldOrder
    'det.UnitPrice * det.Quantity',                        --  @CalcFieldName
    'cus.CompanyName, cus.ContactName',                    --  @RowFieldNames
    NULL,                                                  --  @TempTableName
    'sum',                                                 --  @CalcOperation
    0,                                                     --  @Debug
    NULL,                                                  --  @SourceFilter
    0,                                                     --  @NumColOrdering
    'Total',                                               --  @RowTotals
    NULL,                                                  --  @ColTotals
    'CompanyName',                                         --  @OrderBy
    'int'                                                  --  @CalcFieldType

The first few rows and columns returned will show the company and contact names, followed by the order totals for each product.

Structure of the stored procedure

If you wish to fine-tune the procedure, make it more efficient, maybe adapt it to your individual needs and cut out some of the functionality you'll never use, you may be interested in how it was put together. If you have ideas of a better way of doing things, then please do share them with all of us. The stored procedure is fairly well documented and you should be able to find your way around the code.

You'll notice that there are a good few varchar(8000) variable declarations right up front. Very early into the project, I found that one varchar(8000) just wasn't large enough for anything beyond the most trivial query. The only way around this storage problem was to create a range of these variables, and as the first one filled up, I'd start adding information into the next. A range of variables has been declared for each portion of the final query that we are building, namely the CASE statements, the select field list, the totals and so on.

The first order of business is to determine the names of the columns of the cross tab. This will be the first of two queries on your source data. We insert all distinct column names into a temporary table (#Columns), in the order in which they should appear in the cross tab. If you've chosen to show column totals, these will be calculated and stored at this point.

Next, any prefixes from the row fields are stripped out. This is important, as we'll be grouping by these fields and the aliases, or table references, can complicate the generated query.

I then define a cursor that runs over the items that were inserted into the #Columns temporary table. This generates the CASE statements that are used to perform the aggregate functions on the source data. Some work is also done on the generation of the SQL statement portions for row and column totals, as well as the insert statement into the target temporary table, if these options were selected.

Once we've built up the bits and pieces, we string them together and run the query. If you look into the stored procedure code, you'll see that I've identified eight different scenarios, based on whether or not we've elected to save to a temp table or use row and column totals. The applicable scenario is determined and the final SQL statement is then pieced together appropriately, along with the debug version if debugging was enabled. This will be the second query on your data source.

It would be difficult to describe the stored procedure in more detail than this, without getting terribly long winded about it. However, I do feel that the code is adequately commented and you shouldn't have too much hassle making modifications should you choose to do so. The best advice I have to offer is to make use of the debugging facility, as you'll immediately see the effect of your change on the generated SQL code.

The stored procedure parameters, explained

The prototype of the stored procedure is as follows:

CREATE PROC [dbo].[sys_CrossTab]
    @SQLSource        varchar(8000),
    @ColFieldID       varchar(8000),
    @ColFieldName     varchar(8000),
    @ColFieldOrder    varchar(8000),
    @CalcFieldName    varchar(8000),
    @RowFieldNames    varchar(8000),
    @TempTableName    varchar(200)  = null,
    @CalcOperation    varchar(50)   = 'sum',
    @Debug            bit           = 0,
    @SourceFilter     varchar(8000) = null,
    @NumColOrdering   bit           = 0,
    @RowTotals        varchar(100)  = null,
    @ColTotals        varchar(100)  = null,
    @OrderBy          varchar(8000) = null,
    @CalcFieldType    varchar(100)  = 'int'

My original application didn't have need of nvarchars, and I really needed the extra storage space, so I decided to use the varchar data type. I would recommend that you alter these to nvarchars if you want code that is culture-safe.

Some detail of the purpose and usage of each parameter is given. If my description is a little too vague for you, have a look at the example script above and the output it generated, or even better, run the script for yourself and experiment with it.

@SQLSource

The first parameter, @SQLSource, is just that; the source of the data you wish to generate the cross tab from. This can be a table name, view name, function name or even the FROM clause of a SELECT statement, as we've used in the example. Have another look at the SQL statement I presented, and compare it to the text used for the @SQLSource parameter. It's basically the portion of the SQL statement from after the FROM keyword, up to but not including the WHERE clause, if one exists. If you wish to use a table, view or function, use just the name and possibly its alias - leave out the SELECT keyword.
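To illustrate, either of the following would be an acceptable value for @SQLSource (the second assumes you only need a single table):

-- A FROM clause fragment, as in the example above:
'Orders ord
 inner join [Order Details] det on det.OrderID = ord.OrderID'

-- A bare table, view or function name, optionally with an alias:
'Orders ord'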

@ColFieldID

We need to decide, for each row in the source data, which column to assign the values to. The @ColFieldID parameter is used to select the column to be used for this function. The ProductID field is used in our example. The number of distinct values that this column has in the source data will tell you how many columns will be used in the cross tab. This is an important consideration, especially if you wish to use the results of the cross tab in an Excel spreadsheet, as Excel puts an upper limit on the number of columns that it can handle.

@ColFieldName

Use @ColFieldName to provide the name of the field that will contain the captions for each column of the cross tab. It can be the same field as used for @ColFieldID.

@ColFieldOrder

If you require the columns to be sorted, you can specify a field by which the ordering should occur. The @ColFieldOrder parameter should hold the name of this ordering field. This too can be the same field as @ColFieldID. You might also want to set the @NumColOrdering parameter if the ordering is important. By default, the columns will be sorted alphanumerically. If you require them to be sorted numerically, set @NumColOrdering to 1. The description of that parameter will give a little more detail.

@CalcFieldName

@CalcFieldName should contain the name of the field that will be used to create the data within the cross tab grid. This will be the base data of the count, sum, average or whichever aggregate function you choose. Naturally, you should ensure that the data type of this field matches the operation you wish to perform. You cannot perform a SUM operation on a varchar field, although a COUNT operation is perfectly acceptable.

@RowFieldNames

Here you will provide a comma-separated list, consisting of one or more field names, to be used as the first few columns of the grid. The aggregate function that you intend to perform will be carried out as a function of the grouping of the fields you specify here, so choose them wisely.

@TempTableName

Occasionally, the cross tab is not the final result, but a means to an end. Maybe you'd like to perform further queries on the cross tab data generated, or you'd like to join it to other tables. The @TempTableName parameter was added for this reason. It provides a way for the cross tab data to be inserted into a temporary table that you can then use for further processing.

There are a number of caveats here though. Firstly, you'll need to create the temp table before you call the cross tab stored procedure (because of SQL's scoping rules). When creating a table, you'll need to provide at least one column though. The simplest is to do something like

CREATE TABLE #CrossTab (Dummy TINYINT NULL)

You will then pass in the name of the temp table (#CrossTab in this case) to the stored procedure. Once the cross tab generation has completed, your temp table will contain the cross tab information in addition to your Dummy field. If, like me, you feel that the dummy field is 'wasted', you can declare it as an identity field, thereby adding a sequence number to your table.

CREATE TABLE #CrossTab (Sequence INT IDENTITY(1,1))

The users of your query are a lot less likely to be perturbed by a sequence number than an empty, useless column at the front of the result set.
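Putting the pieces together, the round trip might look like this. It is essentially the same call as in the earlier example (with the trailing parameters left at their defaults), differing only in the @TempTableName argument:

CREATE TABLE #CrossTab (Sequence INT IDENTITY(1,1))

EXEC sys_CrossTab
    'Orders ord
    inner join [Order Details] det on det.OrderID = ord.OrderID
    inner join Products prod on prod.ProductID = det.ProductID
    inner join Customers cus on cus.CustomerID = ord.CustomerID',
    'prod.ProductID', 'prod.ProductName', 'prod.ProductName',
    'det.UnitPrice * det.Quantity', 'cus.CompanyName, cus.ContactName',
    '#CrossTab', 'sum'

-- The cross tab is now an ordinary temp table, open to further queries
SELECT * FROM #CrossTab ORDER BY Sequence

DROP TABLE #CrossTab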

@CalcOperation

Here we tell the stored procedure what to do with the source data we're providing. Acceptable values for this parameter are any of SQL's aggregate functions, namely AVG, SUM, COUNT, MIN, MAX and their ilk. Make sure that you match the operation to the data type, i.e. no SUMming of varchar data.

@Debug

The @Debug parameter, switched off by default, can be quite handy. When enabled (set to 1), it will print out the SQL code used to generate the cross tab. If you're not expecting the columns of your cross tab to alter, you can run the SQL printed out by the debugging code instead of using the stored procedure, which will be considerably more efficient. In this way, you can use the stored procedure as a SQL generation tool.

Take note that the row totals will not be calculated by the debug SQL. The stored procedure will 'hardcode' the totals that it calculated at the time that it was run.

@SourceFilter

@SourceFilter lets you supply some SQL code to filter the source data prior to it being cross tabbed. This is the body of the WHERE clause matching the FROM clause you gave to the @SQLSource parameter. There is no reason why you can't include a WHERE clause as part of the data given to @SQLSource, although I find it easier and more maintainable to specify it separately.
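As a quick illustration, restricting the earlier example to a single year would mean passing something like this for @SourceFilter (there is no WHERE keyword, and the ord alias must match the one used in @SQLSource):

-- The WHERE clause body only
'YEAR(ord.OrderDate) = 1997'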

@NumColOrdering

If you intend for your columns to be arranged in a particular order, you'll give the field name to order them by to the @ColFieldOrder parameter, and you'll use the @NumColOrdering field to specify how the ordering is to take place. A value of 0 (the default) will cause the data to be sorted alphanumerically, and a value of 1 will sort it numerically.

If you're not sure about the difference between the two, consider the following list: 2, 1, 10, 11, 20, 100. When this is sorted numerically, it will be 1, 2, 10, 11, 20, 100. However, sorting it alphanumerically will result in 1, 10, 100, 11, 2, 20. Naturally, alphanumeric sorting will also handle A's, B's and C's, whereas numeric sorting will cause a type mismatch error to be raised.
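You can see the difference for yourself with a throwaway test (the table and column names here are made up):

CREATE TABLE #SortDemo (Val varchar(10))
INSERT INTO #SortDemo
SELECT '2' UNION SELECT '1' UNION SELECT '10'
UNION SELECT '11' UNION SELECT '20' UNION SELECT '100'

-- Alphanumeric ordering: 1, 10, 100, 11, 2, 20
SELECT Val FROM #SortDemo ORDER BY Val

-- Numeric ordering: 1, 2, 10, 11, 20, 100
-- (raises a conversion error if Val contains non-numeric data)
SELECT Val FROM #SortDemo ORDER BY CAST(Val AS int)

DROP TABLE #SortDemo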

@RowTotals

If this parameter is set to something other than NULL, an additional column will be added as the final column of the result set (the column name being the value given here), and will contain the sum of the cross tab values for each row.

@ColTotals

If set to something other than NULL, an additional row will be added as the final row of the result set, and will contain the sum of the cross tab values for each column. There are a number of things to look out for with this one though. Firstly, you'll need to pass the field names already wrapped in quotes into the parameter. For example, if you wish the line to be marked as Total, you'll need to set @ColTotals to '''Total'''. Secondly, you'll need to provide as many values as fields that you've specified in the @RowFieldNames parameter. In our example, we've used two fields, so we need to provide two values to @ColTotals. Lastly, this total row may not necessarily appear at the bottom of the result set, depending on whether you've given an @OrderBy parameter value. The totals are added prior to the cross tab being sorted.
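For the example's two row fields (CompanyName and ContactName), a matching @ColTotals value would therefore look like this; I'm assuming an empty string is an acceptable label for the second field:

-- Two pre-quoted values, one per row field; the doubled-up quotes
-- mean the procedure receives the text:  'Total', ''
'''Total'', '''''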

If you've enabled the debug printing option, the SQL code given to you will also not calculate the column totals dynamically. The totals will have been determined during the initial execution of the stored procedure, and these fixed values are then joined onto the rest of the result set.

@OrderBy

The @OrderBy parameter allows you to provide an ORDER BY clause. If used, this must be one or more of the fields used in the @RowFieldnames parameter. If you're using @ColTotals, keep in mind that the column totals row will be considered part of the cross tab data, and will be ordered along with the other rows.

@CalcFieldType

The data type of the calculated fields in the cross tab grid can be specified by the @CalcFieldType parameter, which defaults to INT. Set the type to one that is appropriate for the operation being performed, and the type of data you expect to see in the cross tab.

The challenge!

If you're going to try the stored procedure out, you may as well get something for your effort. The Simple-Talk editor, Tony Davis, has kindly offered to sponsor a prize for the first three correct responses to the challenge. It is also based on the Northwind database, and you'll need to do the following:

Compile a cross tab report that displays the order value by customer, by quarter. You should also group the clients by the country in which they are based. Sort the list by country, and then by company name. Show both row and column totals, to appear at the right and bottom of the report respectively. I've included a screen shot so that you can see what the report should look like.

Post the source code for your solution in the comments to this article, (or send it to Tony at editor@simple-talk.com).

In conclusion

You'll find that once you've done one cross tab, you've pretty much done them all. The greatest difficulty is in actually deciding what you want displayed, and then collecting the source data for the stored procedure. The actual generation of the cross tab is then simply a matter of matching the field names to the input parameters.

I hope that you'll find this stored procedure as helpful as I have - it's one of the more valuable items in my toolbox. If you discover some novel use for it, or a new idea on how to improve it a little, please share it with us. I for one would be interested to hear about it.