Friday, March 25, 2011

C++ Programming Lab - Program 2

/* Program to find the sum of two surface points */

#include <iostream>
using namespace std;

int main()
{
    struct point
    {
        int x, y;
    };

    point p1, p2, p3;

    cout << "Enter coordinates for P1: ";
    cin >> p1.x >> p1.y;
    cout << "Enter coordinates for P2: ";
    cin >> p2.x >> p2.y;

    p3.x = p1.x + p2.x;
    p3.y = p1.y + p2.y;

    cout << "Coordinates of P1 + P2: " << p3.x << "," << p3.y << endl;

    return 0;
}

C++ Programming Lab - Program 1

/* Raising a number n to a power p */

#include <iostream>
using namespace std;

void power(double n, int p)
{
    double pow = 1;
    int i = 1;
    while (i <= p)
    {
        pow = pow * n;
        i = i + 1;
    }
    cout << "n raised to power p = " << pow << endl;
}

int main()
{
    double n;
    int p;
    cout << "Enter any number (n): ";
    cin >> n;
    cout << "Enter power (p): ";
    cin >> p;
    power(n, p);

    return 0;
}

Thursday, March 24, 2011

Convert number to words.

Add a button and a textbox to the design page, and in the button's Click event handler on the code page write this code.

private void button1_Click(object sender, EventArgs e)
{
Class2 obj = new Class2();
MessageBox.Show(obj.NumberToWords(Convert.ToInt32(textBox1.Text)) + " rupees");
}

Create a class named Class2, add the conversion method shown below, and call it as above.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace WindowsFormsApplication2
{
class Class2
{
private string[] _smallNumbers = new string[]
{"Zero", "One", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight",
"Nine", "Ten", "Eleven", "Twelve", "Thirteen", "Fourteen", "Fifteen",
"Sixteen", "Seventeen", "Eighteen", "Nineteen"};

// Tens number names from twenty upwards
private string[] _tens = new string[]
{"", "", "Twenty", "Thirty", "Forty", "Fifty", "Sixty", "Seventy", "Eighty",
"Ninety"};

// Scale number names for use during recombination
private string[] _scaleNumbers = new string[]
{"", "Thousand", "Million", "Billion"};



public String NumberToWords(int number)
{
if (number == 0)
{
return _smallNumbers[0];
}
int[] digitGroups = new int[4];

// Ensure a positive number to extract from
int positive = Math.Abs(number);

// Extract the three-digit groups
for (int i = 0; i < 4; i++)
{
digitGroups[i] = positive % 1000;
positive = positive / 1000;
}

// Convert each three-digit group to words
string[] groupText = new string[4];
for (int i = 0; i < 4; i++)
{
groupText[i] = ThreeDigitGroupToWords(digitGroups[i]);
}

// Recombine the three-digit groups with the scale names
string combined = groupText[0];
bool appendAnd = (digitGroups[0] > 0) && (digitGroups[0] < 100);
for (int i = 1; i < 4; i++)
{
if (digitGroups[i] != 0)
{
string prefix = groupText[i] + " " + _scaleNumbers[i];
if (combined.Length != 0)
{
prefix += appendAnd ? " and " : ", ";
}
appendAnd = false;
combined = prefix + combined;
}
}

if (number < 0)
{
combined = "Negative " + combined;
}
return combined;
}

// Converts a single three-digit group (0 to 999) to words
private string ThreeDigitGroupToWords(int threeDigits)
{
string groupText = "";

// Determine the hundreds and the remainder
int hundreds = threeDigits / 100;
int tensUnits = threeDigits % 100;
if (hundreds != 0)
{
groupText += _smallNumbers[hundreds] + " Hundred";
if (tensUnits != 0)
{
groupText += " and ";
}
}

// Determine the tens and units
int tens = tensUnits / 10;
int units = tensUnits % 10;
if (tens >= 2)
{
groupText += _tens[tens];
if (units != 0)
{
groupText += " " + _smallNumbers[units];
}
}
else if (tensUnits != 0)
{
groupText += _smallNumbers[tensUnits];
}

return groupText;
}

}
}

Wednesday, March 23, 2011

How to use Microsoft SQL Server Reporting Services in ASP.NET

There are times in a developer's life when you get to use great new tools.  Recently, at a client site, I was
asked to create a few web based reports that would fit into the application that I was working on.  I was
given my options; create them using a repeater control, Crystal Reports or Microsoft SQL Server
Reporting Services (SSRS).
Each of these comes with its own benefits and drawbacks right from the start.
The repeater control makes it easy to format the data, but getting the pages to break in the correct spots is
notoriously painful.
Crystal Reports is standard, a little painful, but easy to develop with.  However, in production there would have
been some licensing difficulties.  I was "encouraged" not to use Crystal.
SSRS is great when it is installed on a SQL Server.  However, this client did not have it installed
and had no intention of installing it on the SQL Server 2008 instance that the application would run
against.
SSRS was the preferred option, not only for the client, but for me as well. Then the dilemma struck me:
I had never used SSRS locally. Thus began my local SSRS adventure.

The Scenario
Before we get started on actually writing our local report, we should delve into the database that we will be
using. As with most example applications we will be working with the Northwind database.  We will write
a report based on the Customers and Orders Tables. I will be demonstrating this using parameters and
without using parameters. The basic query that I will be using in both instances is below.
Listing 1
Select OrderID,
       OrderDate,
       ShippedDate,
       CompanyName,
       ContactName,
       ShipName,
       ShipAddress,
       ShipCity,
       ShipRegion,
       ShipPostalCode,
       ShipCountry
from   Customers cst
       inner join Orders od
               on od.CustomerID = cst.CustomerID
First, I will create a new website called SSRSExample.  I will be using Visual Basic 2010 as my
development language. Also, since this is an example, I am going to go light on the design and focus
more on the process of creating the report.
Figure 1
The first thing that we want to do, now that the project is created, is to add the Bin and App_Code folders
to the project.
Figure 2
Now that the Bin and App_Code folders exist in the project, we need to add a dataset in the App_Code
folder.  Do this by right clicking on the App_Code folder and selecting Add New Item.  Select DataSet,
change the name, and click Add.
Figure 3



The TableAdapter Configuration Wizard will pop up and ask you to define the data that will be
used to populate the dataset.  First, select your database server from the list or create a new connection to
the server, then click Next.
Figure 4

You will be asked to save your connection in the web.config file. I will say yes; it will make adding the
next DataSet a little easier. Then Click Next.
Figure 5


This first example will use the Use SQL Statements option. As you can see you can create a stored
procedure from here or use an existing one.  In my development I use existing stored procedures. They
give me more flexibility and control over the application. For the example though, we will just "Use SQL
Statements."  Click Next.
Figure 6

This next page of the wizard will allow you to enter the SQL Statement with which you want to populate
the DataSet.  Click Next, Next, then Finish.  No options need to be changed on the following two screens.
Figure 7

After finishing you will then see the following screen.
Figure 8

Now we will open up the Default.aspx page and add a ReportViewer control to the page.
Figure 9

A blank report viewer will be added to the page; resize it to fit.  I will make this one 800x600 using the properties
of the ReportViewer control.
Figure 10
Now, right click on the project name and Add New Item.  Select Report and change the name.  Click Add.
Figure 11

A blank report will be added to the project and a new control tab will be added with the Toolbox and Server
Explorer tabs.  This tab holds the controls for the dataset we just created.
Figure 12

Now that the report is open, add a Table to the report.
Figure 13




Start adding boxes to accommodate the data that we want to display. Some formatting may need to take
place to fit everything into the 800 x 600 format that we have created on the Report Viewer that we placed
on Default.aspx.  I have added 5 boxes and dragged and dropped the OrderID, CompanyName, OrderDate
, ShippedDate and ShipAddress into the fields.  I have added formatting to the two date fields and some
concatenation to the Address field. The next few images will show how the formatting was done.
Figure 14


Now that the report has been created, we need to link it to the ReportViewer on Default.aspx.  Click on
the smart tag to open up the ReportViewer Tasks box and select the report from the drop-down list. This will
add an ObjectDataSource to the page.
Figure 15
Rename the ObjectDataSource (odsExample1) and click on the smart tag again to choose the data source to use
with the report.
Figure 16
At this point you can run the application! So hit F5 and watch the magic happen.
Figure 17
Now, granted, this is a simplistic report; no filtering has taken place, so it is just a mess of a report. 
The question now becomes, how do we create a report that will accept parameters and give a little more
meaningful results? That is a good question, and I will show you.
We will use the same project; just add two new pages, a new report and a new DataSet.  I will only
highlight the differences between the two.  The two pages that will be added will be a criteria selection
page and a report page. The new DataSet will accept a parameter (CustomerID).
Here is the new query in the Example2 DataSet, notice that I added a where clause and a parameter
@CustomerID.
Figure 18
The report that I created for this one looks identical to the one in example 1.  However, it is now pointing
to the new DataSet.
This is the basic, very basic form for the Example2Select.aspx page.  Another ASP.NET feature
that I find very useful is cross-page postbacks.  I will explain a little about them now,
since I am using them for the page submissions.
Figure 19 
To do a cross page post back, you have to set the PostBackURL property of the Submit Button.  I have
set the property to ~/Example2.aspx.  When the button is clicked this will send the post back straight to
the page referenced in the PostBackURL property.  In order to make this as easy and as powerful as
possible I have also added the following public read only property to the Example2Select.aspx page:
Listing 2
Public ReadOnly Property CustID() As String
    Get
        Return tCustID.Text
    End Get
End Property
When setting up the page that is being submitted to, the following code needs to be placed in the .aspx page markup:
Listing 3
<%@ PreviousPageType VirtualPath="~/Example2Select.aspx" %>
This tag sets up a reference to the page that is posting to this page, in this case, Example2Select.aspx. The
power of this tag will become clear with the next piece of code, which runs before this page is rendered. This
code's job is to reference Example2Select.aspx, get the CustID property that was entered, and then pass
it into the odsExample2 ObjectDataSource. The power in setting the above tag is in how the CustID
property is referenced: PreviousPage.CustID.  Without the PreviousPageType being explicitly set,
conversions would have to be done just to get the CustomerID.  This allows for a strongly typed reference
to the CustID property.
Listing 4
Protected Sub odsExample2_Selecting(ByVal sender As Object, _
        ByVal e As System.Web.UI.WebControls.ObjectDataSourceSelectingEventArgs) _
        Handles odsExample2.Selecting
    e.InputParameters("CustomerID") = PreviousPage.CustID
End Sub
These are the only real differences between the two processes.  So, without further ado, here are the screenshots of the
application running with parameters.

Tuesday, March 22, 2011

Applying Themes in ASP.net At RunTime

Step 1: Under the App_Code folder, we add a class file named Theme.cs:

public class Theme
{
private string _name;
public string Name
{
get { return _name; }
set { _name = value; }
}
public Theme(string name)
{
Name = name;
}
}

Step 2: Under the App_Code folder, we add a ThemeManager class file named ThemeManager.cs. This will list all the available themes under the /App_Themes folder.

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using System.Collections;
using System.Collections.Generic;
using System.IO;
public class ThemeManager
{
#region Theme-Related Methods
public static List<Theme> GetThemes()
{
DirectoryInfo dInfo = new DirectoryInfo(System.Web.HttpContext.Current.Server.MapPath("App_Themes"));
DirectoryInfo[] dArrInfo = dInfo.GetDirectories();
List<Theme> list = new List<Theme>();
foreach (DirectoryInfo sDirectory in dArrInfo)
{
Theme temp = new Theme(sDirectory.Name);
list.Add(temp);
}
return list;
}
#endregion
}

Step 3: Comment out any pre-defined theme setting (such as the theme attribute of the pages element) in the web.config. You don't need it, because the application-level default theme will be specified in the BasePage class in Step 6.

Step 4: In your master page, such as Default.master, add a data source bound to ThemeManager.GetThemes() and a RadioButtonList (named strTheme here, with AutoPostBack enabled) whose DataBound and SelectedIndexChanged events are wired to the handlers in the next step. You can use a DropDownList if you would prefer that.

Step 5: In the master page code-behind, such as Default.master.cs, add these methods:

protected void strTheme_DataBound(object sender, EventArgs e)
{
strTheme.SelectedValue = Page.Theme;
}

protected void strTheme_SelectedIndexChanged(object sender, EventArgs e)
{
Session.Add("MyTheme", strTheme.SelectedValue);
Server.Transfer(Request.FilePath);
}


Step 6: Add the BasePage class under App_Code, and specify the default theme. Here, we use "White".

using System;
using System.Data;
using System.Configuration;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
public class BasePage : System.Web.UI.Page
{
protected override void OnPreInit(EventArgs e)
{
base.OnPreInit(e);
if (Session["MyTheme"] == null)
{
Session.Add("MyTheme", "White");
}
Page.Theme = (string)Session["MyTheme"];
}
}

Step 7: Inherit all the pages that use the dynamic theme from BasePage instead of System.Web.UI.Page, for example: public partial class _Default : BasePage

Saturday, March 19, 2011

CAST Data-Type Conversions in SQL

 
Maybe you have already covered the data types that SQL recognizes and supports. Ideally, each column in a
database table has a perfect choice of data type. In this non-ideal world, however, exactly what that
perfect choice may be isn't always clear. In defining a database table, suppose you assign a data type
to a column that works perfectly for your current application. Later, you want to expand your
application's scope or write an entirely new application that uses the data differently. This new use
could require a data type different from the one you originally chose.
You may want to compare a column of one type in one table with a column of a different type in a
different table. For example, two columns may contain the same things (dates, for example), yet
the fact that their types are different may prevent you from making the comparison. In SQL-86
and SQL-89, type incompatibility posed a big problem. SQL-92, however, introduced an easy-to-use
solution in the CAST expression. The CAST expression converts table data or host variables of one type
to another type. After you make the conversion, you can proceed with the operation or analysis that you
originally envisioned.
Naturally, you face some restrictions when using the CAST expression. You can't just indiscriminately
convert data of any type into any other type. The data that you're converting must be compatible with
the new data type. You can, for example, use CAST to convert the CHAR(10) character string '1998-04-26'
to the DATE type. But you can't use CAST to convert the CHAR(10) character string 'rhinoceros' to the
DATE type. You can't convert an INTEGER to the SMALLINT type if the former exceeds the maximum
size of a SMALLINT.
You can convert an item of any character type to any other type (such as numeric or date) provided that
the item's value has the form of a literal of the new type. Conversely, you can convert an item of any
type to any of the character types, provided that the value of the item has the form of a literal of the
original type.
The following list describes some additional conversions you can make:
• Any numeric type to any other numeric type. If converting to a type of less fractional precision, the
    system rounds or truncates the result.
• Any exact numeric type to a single component interval, such as INTERVAL DAY or INTERVAL SECOND.
• Any DATE to a TIMESTAMP. The time part of the TIMESTAMP fills in with zeros.
• Any TIME to a TIME with a different fractional-seconds precision or a TIMESTAMP. The date part of the
    TIMESTAMP fills in with the current date.
• Any TIMESTAMP to a DATE, a TIME, or a TIMESTAMP with a different fractional-seconds precision.
• Any year-month INTERVAL to an exact numeric type or another year-month INTERVAL with different
    leading-field precision.
• Any day-time INTERVAL to an exact numeric type or another day-time INTERVAL with different leading-
    field precision.
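As a quick, hedged illustration of a few of the conversions above, here is a small T-SQL sketch (the literal values are made up, and T-SQL's DATETIME stands in for the standard DATE/TIMESTAMP types):

SELECT CAST(123.456 AS DECIMAL(5,1))        -- numeric to a less precise numeric: rounds to 123.5
SELECT CAST('1998-04-26' AS DATETIME)       -- character string to a date/time type
SELECT CAST(GETDATE() AS CHAR(30))          -- date/time back to a character string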
Using CAST within SQL
Suppose that you work for a sales company that keeps track of prospective employees as well as
employees whom you've actually hired. You list the prospective employees in a table named PROSPECT,
and you distinguish them by their Social Security numbers, which you store as a CHAR(9) type. You list
the employees in a table named EMPLOYEE, and you distinguish them by their Social Security numbers,
which are of the INTEGER type. You now want to generate a list of all people who appear in both tables.
You can use CAST to perform the task, as follows:
SELECT * FROM EMPLOYEE, PROSPECT
WHERE EMPLOYEE.SSN =
CAST(PROSPECT.SSN AS INTEGER) ;

Using CAST between SQL and the host language
The key use of CAST is to deal with data types that are in SQL but not in the host language that you use.
The following list offers some examples of these data types:
• SQL has DECIMAL and NUMERIC, but FORTRAN and Pascal don't.
• SQL has FLOAT and REAL, but standard COBOL doesn't.
• SQL has DATETIME, which no other language has.

Suppose that you want to use FORTRAN or Pascal to access tables with DECIMAL(5,3) columns, and
you don't want the inaccuracies that result from converting those values to the REAL data type of
FORTRAN and Pascal. You can perform this task by CASTing the data to and from character-string
host variables. You retrieve a numeric salary of 198.37 as a CHAR(10) value of '0000198.37'. Then if
you want to update that salary to 203.74, you can place that value in a CHAR(10) as '0000203.74'.
First, you use CAST to change the SQL DECIMAL(5,3) data type to the CHAR(10) type for the employee
whose ID number you're storing in the host variable :emp_id_var, as follows:
SELECT CAST(Salary AS CHAR(10)) INTO :salary_var
FROM EMP
WHERE EmpID = :emp_id_var ;


Then the application examines the resulting character string value in :salary_var, possibly sets the string
to a new value of '0000203.74', and then updates the database by using the following SQL code:
UPDATE EMP
SET Salary = CAST(:salary_var AS DECIMAL(5,3))
WHERE EmpID = :emp_id_var ;

Dealing with character-string values like '000198.37' is awkward in FORTRAN or Pascal, but you can write
a set of subroutines to do the necessary manipulations. You can then retrieve and update any SQL data
from any host language and get and set exact values.
The general idea is that CAST is most valuable for converting between host types and the database rather
than for converting within the database.
 
A simple example of such a conversion in T-SQL is:
use NIR
declare @as int;
select @as= dbo.emp.Salary from dbo.emp where dbo.emp.Id=1;
 
declare @ds varchar(22);
set @ds= CAST(@as as varchar(22));
print @ds;

 

Thursday, March 17, 2011

Triggers in SQL Server

DDL Triggers
DDL triggers respond to an event associated with a Data Definition Language
(DDL) statement. These DDL statements are
• CREATE
• ALTER
• DROP
You use DDL triggers for the following purposes:
• To prevent changes being made to a schema
• To log who makes changes to a schema
• To respond in some desired way to changes made in a schema

 
Preventing undesired changes
The following example shows you how to prevent undesired changes being made to the tables of the
TriggerImpl database.
1. Use SQL Server Management Studio to connect to the desired SQL Server 2005 instance.
2. Click the New Database Query button.
3. Ensure you're using the TriggerImpl database:
USE TriggerImpl
4. Create a trigger to prevent changes being made to the TriggerImpl
database:
CREATE TRIGGER PreventChanges
ON DATABASE
FOR DROP_TABLE, ALTER_TABLE, CREATE_TABLE
AS
PRINT 'Making alterations to the TriggerImpl database
is not permitted.'
PRINT 'To make changes you must disable this DDL
trigger.'
ROLLBACK

The first line provides a name for the trigger. The second line specifies that the trigger applies at the database
level; a trigger is bound to a database object, in this case the current database, TriggerImpl. The third
line specifies that the trigger executes for DROP TABLE, ALTER TABLE, and CREATE TABLE statements. For DDL
triggers, the FOR keyword is equivalent to AFTER: the trigger fires after the DDL statement, and the ROLLBACK
then undoes the attempted change.
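The trigger's own message mentions disabling it. If you need to make a legitimate schema change without dropping the trigger, a sketch along these lines (using the trigger name from above) should work; re-enable it when you are done:

DISABLE TRIGGER PreventChanges ON DATABASE
-- ... make the schema change here ...
ENABLE TRIGGER PreventChanges ON DATABASE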
5. Attempt to create a new table called DDLTriggerTest:
CREATE TABLE DDLTriggerTest
(ID int PRIMARY KEY,
SomeColumn varchar(30))

An error message displays.
6. Attempt to drop the dbo.ConstraintTest table that you created earlier in this chapter:
DROP TABLE dbo.ConstraintTest
The attempted change is prevented with a message similar to the message shown in previous step.
7. Drop the trigger:
DROP TRIGGER PreventChanges
ON DATABASE

8. Retry creating the DDLTriggerTest table, which failed in Step 5:
CREATE TABLE DDLTriggerTest
(ID int PRIMARY KEY,
SomeColumn varchar(30))

Because you dropped the trigger in Step 7, you can now successfully create the DDLTriggerTest table.
Auditing changes
Another use of a DDL trigger is to log how and when changes are made in database or table structure.
In the following example, I show you how to create a DDL trigger for the ALTER TABLE statement.
Follow these steps:
1. Ensure you are using the NIR database:
    USE NIR
2. Create a table called AuditedTable. Later you monitor this table for
    changes in its structure made by using the ALTER TABLE statement.
    CREATE TABLE AuditedTable
    (MessageID int PRIMARY KEY,
    Message varchar(100))
3. Insert a sample row into the AuditedTable table.
    INSERT INTO AuditedTable
    VALUES (1, 'Hello World!')
4. Confirm that the row has been inserted.
    SELECT *
    FROM AuditedTable
5. Create a table DDLAudit to contain the information used for auditing.
    Using a TIMESTAMP column allows easy monitoring of the sequence of alterations made:
    CREATE TABLE DDLAudit
    (
    Changed TIMESTAMP,
    DateChanged DateTime,
    TableName char(30),
    UserName varchar(50)
    )
6. Confirm that the DDLAudit table has been created and is empty.
    SELECT *
    FROM DDLAudit
7. Insert a sample row manually into the DDLAudit table.
    INSERT INTO DDLAudit (DateChanged, UserName)
    VALUES (GetDate(), 'John Smith')
8. Confirm that the sample row has been inserted.
    SELECT *
    FROM DDLAudit
9. Create a trigger named AuditDDL, which responds to an ALTER TABLE statement.
Notice that in the FOR clause, you write ALTER_TABLE with an underscore.
Notice too that the GetDate() function is used to retrieve the date and time when the row is inserted
into the DDLAudit table. The suser_sname() function is used to retrieve the system name of the
user making the change in the table schema.

CREATE TRIGGER AuditDDL
ON DATABASE
FOR ALTER_TABLE
AS
INSERT INTO dbo.DDLAudit(DateChanged,
TableName, UserName)
SELECT GetDate(), 'AuditedTable', suser_sname()
-- End of Trigger
 
The trigger now responds to any attempt to use the ALTER TABLE statement to alter the structure of the
AuditedTable table.
10. Use the following code to attempt to add an additional column to the AuditedTable table.
ALTER TABLE AuditedTable
ADD Comment varchar(30)
11. Inspect the content of the DDLAudit table.
SELECT *
FROM DDLAudit

DML Triggers
A DML trigger is executed in response to an event associated with a Data Manipulation Language (DML)
statement. A DML trigger is associated with one of the following statements:
• INSERT
• UPDATE
• DELETE
You can use DML triggers either to replace a DML statement or to execute after a DML statement. A
trigger that replaces a DML statement is called an INSTEAD OF trigger. A trigger that executes after
a DML statement is called an AFTER trigger.
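The examples that follow use AFTER triggers. As a rough sketch of the INSTEAD OF variety (the table name ProtectedTable is invented for illustration, not part of the examples below), a trigger that silently replaces any DELETE might look like this:

CREATE TRIGGER NoDeletes
ON dbo.ProtectedTable
INSTEAD OF DELETE
AS
PRINT 'Deletes are not allowed on ProtectedTable.'
-- The original DELETE never runs, so the rows stay in place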
The inserted and deleted tables
SQL Server automatically manages the deleted and inserted tables. If you delete rows from a table, the
deleted table contains copies of the rows deleted from the base table. Similarly, if you update
a row, the deleted table contains a row with the old values and the inserted table contains a row with the
new values; in other words, when you execute an UPDATE, rows are written to both the inserted and deleted tables.
If you insert data into a table, a copy of that row or those rows is contained in the inserted table.
You can use the inserted and deleted tables to determine what kind of change has been made to the
data, as I show you in the next section.
Triggers for auditing DML
A common use for DML triggers is to record, for audit purposes, changes made to data. The following
steps show you how to create a DML trigger to store information about who changed data:
1. Open a new database engine query in SQL Server Management Studio.
2. Ensure you are working in the NIR database.
    USE NIR
3. Create a table to store messages called DMLAuditedTable:
   CREATE TABLE DMLAuditedTable
    (MessageID int PRIMARY KEY,
    Message varchar(100))

    This is the table you want to audit.
4. Enter a sample value in the DMLAuditedTable table:
    INSERT INTO DMLAuditedTable
    VALUES (1, 'Hello World!')
5. Confirm the successful INSERT operation:
    SELECT *
    FROM DMLAuditedTable
6. Create a table, DMLAudit, to store the audit information:
    CREATE TABLE DMLAudit
    (
    Changed TIMESTAMP,
    DateChanged DateTime,
    TableName char(30),
    UserName varchar(50),
    Operation char(6)
    )   

The Changed column is of type TIMESTAMP to store information about the sequence of changes made
to the DMLAuditedTable table. In the Operation column, you store information about whether the DML
change was an INSERT or an UPDATE operation.
7. Enter a sample row manually into the DMLAudit table:
INSERT INTO DMLAudit (DateChanged, UserName)
VALUES (GetDate(), 'John Smith')

8. Confirm the successful INSERT operation into the DMLAudit table:
    SELECT *
    FROM DMLAudit
9. Create a DML trigger called AuditDML:
    CREATE TRIGGER AuditDML
    ON dbo.DMLAuditedTable
    AFTER INSERT, UPDATE
-- NOT FOR REPLICATION
AS
DECLARE @Operation char(6)
IF EXISTS(SELECT * FROM deleted)
SET @Operation = 'Update'
ELSE
SET @Operation = 'Insert'
INSERT INTO dbo.DMLAudit(DateChanged,
TableName, UserName, Operation)
SELECT GetDate(), 'DMLAuditedTable', suser_sname(),
@Operation
-- End of Trigger

Notice the IF clause that uses information from the deleted table to determine whether the operation is
an UPDATE or an INSERT. That information is stored in the @Operation variable. The GetDate() function
retrieves the date and time of the operation and the suser_sname() function retrieves the username.
The Operation column stores the value in the @Operation variable.
10. Test whether the DML trigger responds to an INSERT operation on the
    DMLAuditedTable table by using the following code:
    INSERT INTO DMLAuditedTable
    VALUES (2, 'To be or not to be, that is the
    question.')

11. Execute a SELECT statement on the DMLAudit table to confirm that
    the INSERT operation has been executed:
    SELECT *
    FROM dbo.DMLAudit
12. Execute an UPDATE statement against the DMLAuditedTable table:
    UPDATE DMLAuditedTable
    SET Message = 'Goodbye World!'
    WHERE MessageID = 1
13. Test whether the AuditDML trigger has added a row to the DMLAudit table by using the following code:
    SELECT *
    FROM DMLAudit
The output window shows that the UPDATE operation also caused a row to be added to the DMLAudit table.
Notice that the value in the Operation column is Update.
The information you store in an audit table can be much more extensive than shown in this example.
The scope is limited only by your knowledge of T-SQL and your business setting.

Wednesday, March 16, 2011

Maintaining Data Integrity with Constraints and Triggers

Maintaining the integrity of the data in a SQL Server 2005 instance is crucially important to the reliable
operation of your business that uses SQL Server data. SQL Server uses several mechanisms, including
constraints and triggers, to help ensure data integrity. In this chapter, I tell you about constraints and
triggers that are tools to help maintain data integrity in SQL Server 2005.
A constraint is a rule that is enforced by SQL Server 2005. Microsoft suggests that, in SQL Server 2005,
constraints are the preferred way to enforce business rules.
A trigger is a special kind of stored procedure that executes in response to an event inside SQL Server.
A common use of triggers is to create an audit trail. For example, suppose you want to keep an audit
trail of who makes changes to prices in your online store. Each time someone modifies a row in the
relevant table, a trigger executes, which could store information such as the timeof the change, who
made the change, what the original price was, and what the new price is. Such information allows your
business to monitor trends in prices and also to find out who made any possibly wrong changes in price.

Constraints, Defaults, Rules, and Triggers
In this section, I describe constraints, defaults, rules, and triggers, which provide a range of ways to
enforce business rules inside SQL Server databases. In this article, I create a simple database by using
the following code.
Examples later in this article use the ConstriggerDB database.
CREATE DATABASE ConstriggerDB
Constraints
Constraints (rules enforced by SQL Server 2005) provide a key way to ensure several aspects of data
integrity. Microsoft recommends that you use constraints rather than triggers in SQL Server 2005 to
ensure data integrity. The constraints supported in SQL Server 2005 are
Primary key: Provides a way to uniquely identify each row in a table. A primary key constraint is a
    specialized form of a unique constraint.
Unique constraint: Specifies that each value in a column is unique. One difference between a
    unique constraint and a primary key constraint is that a column with a unique constraint can contain
    NULL values, which are not permitted in a column that has a primary key constraint. If a column is a
    primary key or part of a primary key, you cannot also set up a unique constraint for that column.
Check constraint: Specifies rules that values in a column must obey. A check constraint uses an
    expression to define the permitted values in a column. Later in this chapter, I show you how to define
    check constraints on a column; a short sketch combining all three constraint types follows this list.
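To make the three kinds of constraint concrete, here is a small sketch (the StudentScores table and its columns are invented for illustration, not part of the examples that follow):

CREATE TABLE StudentScores
(StudentID int PRIMARY KEY,                    -- primary key constraint
Email varchar(50) UNIQUE,                      -- unique constraint (NULLs are still allowed)
Score int CHECK (Score BETWEEN 0 AND 100))     -- check constraint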
Defaults
A default is a database object that you define and bind to a column. If during an insert operation, you
don't supply a value for a column to which the default is bound, then the default is inserted into that
column.
The following example creates a default of Unknown for values inserted into a table that records student
grades.
First, specify that you use the ConstriggerDB database:
USE ConstriggerDB
Then create the default called StudentGradeUnknown, specifying that it is the string Unknown:
CREATE DEFAULT StudentGradeUnknown AS 'Unknown'
Then create a simple table to store student grades:
CREATE TABLE StudentGrades
(StudentID int PRIMARY KEY,
Examination varchar(10),
Grade varchar(7))
At this stage, the StudentGradeUnknown default exists in the ConstriggerDB database. You need to bind
it to the Grade column in the StudentGrades table. Use this code, which makes use of the sp_bindefault
system stored procedure:

sp_bindefault 'StudentGradeUnknown', 'StudentGrades.Grade'
GO
To confirm that the default operates, use the following INSERT statement. Notice that no value is supplied
for the Grade column:
INSERT INTO StudentGrades(StudentID, Examination)
VALUES(1, 'XML101')
Also, insert a row where you supply a value in the Grade column:
INSERT INTO StudentGrades
VALUES(2, 'SVG101', 'A')
You can confirm the values in the StudentGrades table by using the following
code:
SELECT * FROM StudentGrades
In the first row shown in output, the value Unknown in the Grade column was supplied by the default bound
to that column. In the second row, you supplied a value for the Grade column so the default was not used.
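If you later decide to remove the default, a sketch like the following should work: sp_unbindefault detaches it from the column, after which the default object itself can be dropped.

sp_unbindefault 'StudentGrades.Grade'
GO
DROP DEFAULT StudentGradeUnknown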
Rules
Rules are included in SQL Server 2005 for backwards compatibility with SQL Server 2000. Check constraints
in SQL Server 2005 provide similar functionality.

Microsoft recommends that you use check constraints rather than rules in new code.
The following example that demonstrates how to create and use a rule uses a test table called TestTable in
the ConstriggerDB database, which I created with this code:
USE ConstriggerDB
CREATE TABLE TestTable
(ID int PRIMARY KEY,
Data char(1))

To create a rule in the ConstriggerDB database, use the following code:
CREATE RULE aThroughcOnly
AS
@ruleval >= 'a' AND @ruleval <= 'c'

The preceding rule specifies that the value must be lowercase, between lowercase a and lowercase c. You
also need to bind the rule you have created to a column. The following code binds it to the Data column in
the TestTable table:
sp_bindrule aThroughcOnly, 'TestTable.[Data]'
 
To insert a row with an allowed value, use the following code:
INSERT INTO dbo.TestTable
VALUES (1, 'b')
You should be prevented from entering a value that doesn't correspond to the rule you created. The following
attempts to insert a disallowed character in the Data column.
INSERT INTO dbo.TestTable
VALUES (2, 'd')

After you create a rule, you're likely to leave it in place unless you want to convert it to a constraint as
described in the preceding section. To unbind a rule from a specified column, use the following code:
sp_unbindrule 'dbo.TestTable.Data'
Triggers
Triggers are used to help maintain data integrity and enforce business rules. (Remember that a trigger is a
special kind of stored procedure that executes in response to an event inside SQL Server.) They complement
the protection of data integrity that constraints, defaults, and rules can provide. A trigger is a specialized
stored procedure. Unlike regular stored procedures, you cannot use an input parameter with a trigger, nor
can a trigger return a value. A trigger is associated with a particular table. When a specified event occurs,
the trigger executes.
Triggers are broadly divided into two groups:
DDL triggers: Data Definition Language triggers
DML triggers: Data Manipulation Language triggers

Triggers are classified as follows:
INSTEAD OF triggers: These execute instead of the statement to which
    they are related.
AFTER triggers: These execute after the statement to which they are
    related.
 
I describe and demonstrate several of these types of triggers in another article soon.
 
 


 

How to Maintain Integrity using Transactions

Many business activities depend on an action being accompanied by a corresponding action.
For example, if you are a customer and pay for goods but don't receive them, something is wrong.
Similarly, if you take goods and don't pay for them, again, something is wrong. The expected typical
scenario is a transaction where you pay for goods and you receive the goods. More than one action
is required to make up a transaction.
Another common example of more than one operation making up a transaction is when you transfer
money from one bank account to another. Suppose you're transferring a regular payment to a company.
Your bank takes the money out of your account and puts it into the account of the company or person
that you're paying. You would be annoyed if the money was taken out of your account and didn't reach
the account where it was supposed to go. If the money was never transferred to the company's account
you, as a customer of a bank, would not be happy whether the money was put in the wrong account or
just disappeared. The different parts of that transaction must be kept together. There are two possible
scenarios:
No money is taken from your account and nothing is transferred to the
    other account (possibly because of insufficient funds in your account or
    a network problem is preventing the transfer).
The right amount of money is taken from your account and is placed in
    the other account.
In a SQL Server transaction, either all the component parts of a transaction are carried out or none of
them are. This concept is called atomicity.
Transactions
In SQL Server 2005, there are several levels of transaction. In the preceding paragraphs, I mention
business level transactions. These are the subject of this chapter. Behind the scenes in SQL Server 2005,
other transactions take place routinely. For example, if you add data to a table that has an index, both
the table and the index need to be updated or neither is updated. If that coordination of operations
doesn't happen, then the index and table are inconsistent with each other, which is unacceptable.

ACID
ACID describes four essential characteristics of a transaction:
Atomicity: Atomicity means that the transaction cannot be divided and still make sense. With the
    transfer between bank accounts, either both parts of the transaction take place successfully or neither
     happens.
Consistency: Consistency means that the database is in a consistent state before the transaction
    takes place and remains in a consistent state after the transaction. For example, if you add a row to
    a table, then the index must also be updated.
Isolation: This is the idea that a transaction should be able to proceed as if it were completely
    isolated from any other transaction. For multi-user databases, it is increasingly important that the
    product supports this.
Durability: This is the concept that a transaction survives even if there is a hardware failure. It
    should be possible to re-create the data up to the last transaction that completed a split
    second before the hardware failure.
The transaction log
Each SQL Server 2005 database has an associated transaction log. The transaction log contains
information about recent changes that have been made in the database that have not yet been
committed to disk. When SQL Server is restarted, any transactions not yet committed to disk are
committed to disk during startup. This facility supports durability of the ACID acronym that I discuss
in the preceding section.
Coding Transactions
When a transaction involves, for example, removing money from one account and transferring it to
another account, then both accounts are updated. First, look at how SQL Server carries out a simple update.
A simple update
Imagine that you have a table called Departments whose columns include Department, DepartmentName,
and DepartmentManager. When the manager of the IT department is replaced, you need to update the
information in the DepartmentManager column. To do this, you use code like the following:
UPDATE Departments
SET DepartmentManager = 'John Smith'
WHERE Department = 'IT'
The WHERE clause works much as it does in a SELECT statement. It selects the rows where the
Department column contains the IT value, and the SET clause causes the value in the
DepartmentManager column to update to the John Smith value.
A simple transaction
To demonstrate a simple transaction, I create a database called TransactionDemo. In that database,
I create a couple of tables called PersonalAccount and CompanyAccount:
CREATE DATABASE TransactionDemo
USE TransactionDemo
CREATE TABLE PersonalAccount (AccountID INT PRIMARY KEY,
Name VARCHAR(30), BALANCE MONEY)

I create two accounts, one for John Smith and one for Jane Doe, with each individual having a balance
of $100.00:
INSERT INTO PersonalAccount
VALUES (1, 'John Smith', 100.00)
INSERT INTO PersonalAccount
VALUES (2, 'Jane Doe', 100.00)
Similarly, I create a CompanyAccount table:

CREATE TABLE CompanyAccount (AccountID INT PRIMARY KEY,
Name VARCHAR(30), BALANCE MONEY)
Then I create two rows in it, with each company having a balance of $10,000:
INSERT INTO CompanyAccount
VALUES (1, 'Acme Company', 10000.00)
INSERT INTO CompanyAccount
VALUES (2, 'XMML.com', 10000.00)

To confirm that the two tables have been created with appropriate values in
each column, use the following code:
SELECT * FROM PersonalAccount
SELECT * FROM CompanyAccount
The following code transfers $50.00 from John Smith's personal account to
XMML.com's company account by using a transaction:
BEGIN TRANSACTION
UPDATE PersonalAccount
SET BALANCE = 50.00
WHERE Name = 'John Smith'
UPDATE CompanyAccount
SET BALANCE = 10050.00
WHERE Name = 'XMML.com'
COMMIT TRANSACTION
GO
Confirm that the balance in John Smith's personal account and XMML.com's company account have
changed appropriately by using the following code:
SELECT * FROM PersonalAccount
SELECT * FROM CompanyAccount

Often, a transaction has some error checking included in the code. To include error checking when
making a transfer from Jane Doe to Acme Company, you can use @@ERROR:
BEGIN TRANSACTION
UPDATE PersonalAccount
SET BALANCE = 0.00
WHERE Name = 'Jane Doe'
IF @@ERROR <> 0
PRINT N'Could not set balance in PersonalAccount.'
UPDATE CompanyAccount
SET BALANCE = 10100.00
WHERE Name = 'Acme Company'
IF @@ERROR <> 0
PRINT N'Could not set balance in CompanyAccount.'
COMMIT TRANSACTION
GO

To confirm that you have changed the row for Jane Doe in the Personal Account table and the row for Acme
Company in the CompanyAccount table, use the following code:
SELECT * FROM PersonalAccount
SELECT * FROM CompanyAccount
The BEGIN TRANSACTION statement marks the beginning of the T-SQL code to be treated as a transaction.
If the T-SQL code in the transaction executes successfully, the COMMIT TRANSACTION statement is
reached and the transaction is committed.
If an error occurs during processing, the ROLLBACK TRANSACTION statement executes. After the COMMIT
TRANSACTION statement executes, you cannot use a ROLLBACK TRANSACTION statement to roll back the
transaction.
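The example above only prints a message when @@ERROR is non-zero; it still commits. A hedged sketch of the same transfer that actually rolls back on an error might look like this:

BEGIN TRANSACTION
UPDATE PersonalAccount
SET BALANCE = 0.00
WHERE Name = 'Jane Doe'
IF @@ERROR <> 0
BEGIN
PRINT N'Could not set balance in PersonalAccount. Rolling back.'
ROLLBACK TRANSACTION
RETURN
END
UPDATE CompanyAccount
SET BALANCE = 10100.00
WHERE Name = 'Acme Company'
IF @@ERROR <> 0
BEGIN
PRINT N'Could not set balance in CompanyAccount. Rolling back.'
ROLLBACK TRANSACTION
RETURN
END
COMMIT TRANSACTION
GO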
Implicit transactions
The preceding examples showed explicit transactions. T-SQL also supports implicit transactions.
To start implicit transactions, you use the SET IMPLICIT_TRANSACTIONS ON statement. Each statement after that, until a transaction is committed, is considered part of the current transaction. You must then explicitly commit the statements that make up each transaction by using the COMMIT TRANSACTION statement. The statements after the COMMIT TRANSACTION statement are considered to be the first statements of the next transaction. Again, that transaction must be explicitly committed.
To turn implicit transactions off, you use the SET IMPLICIT_TRANSACTIONS OFF statement. The default behavior, if you do not SET IMPLICIT_TRANSACTIONS ON, is that each individual T-SQL statement is treated as its own transaction rather than as part of a group of T-SQL statements.
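A minimal sketch, reusing the PersonalAccount table from earlier in this post (the balance values are arbitrary):

SET IMPLICIT_TRANSACTIONS ON

UPDATE PersonalAccount SET BALANCE = 75.00 WHERE Name = 'John Smith'
-- The UPDATE above silently started a transaction; nothing is permanent yet
COMMIT TRANSACTION

DELETE FROM PersonalAccount WHERE Name = 'Jane Doe'
ROLLBACK TRANSACTION   -- the DELETE is undone; Jane Doe's row is still there

SET IMPLICIT_TRANSACTIONS OFF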
Note that some Data Definition Language (DDL) statements, such as CREATE DATABASE and ALTER DATABASE, cannot be used inside a transaction.

Making Availability of Data and Preventing Data Loss in SQL Server

To keep your databases secure and reliable, here are some of the aspects that make SQL Server one of the most
powerful database management systems among Microsoft technologies:
  • Keeping your hardware secure
  • Taking advantage of database mirroring
  • Creating checkpoints
  • Keeping your database running with clustering
  • Producing database snapshots
  • Backing up and restoring data
In a connected world, your colleagues and customers need almost continuous
access to data. This means that you need to avoid users temporarily
losing access to data. Or, if you can't completely avoid such temporary problems,
make sure that you can recover from them quickly. More important,
you must take careful steps to ensure that the chance of users permanently
losing access to data is as close to zero as possible.
SQL Server 2005 supports many features that improve the chances of keeping
your SQL Server databases available to users. For example, database mirroring
is a new feature that allows almost instant switching to a backup SQL
Server if a primary server goes down.
As well as achieving high availability of data, it is crucially important that you
avoid the permanent loss of any business data that is needed for the running
of your business. Losing important business data can be fatal for your continued
employment and can, in some cases, also be fatal for the business.
Taking appropriate steps to back up data and ensure that you can restore it is
of enormous importance.
Reducing Downtime with Database Mirroring
Database mirroring is an option to improve the availability of a SQL Server
database or databases. Database mirroring is new in SQL Server 2005. You
choose to mirror the databases on a SQL Server instance on a database-by-database
basis.
Note:
Database mirroring was intended to be available in the November 2005
release of SQL Server 2005. Microsoft has delayed support of the database
mirroring feature in a production environment, although you can enable it in
the November 2005 release for evaluation purposes by using trace flag 1400.
Microsoft recommends that you do not use database mirroring in the original
release in a production environment.
Database mirroring overview
You have three server machines in a common setup for database mirroring.
One machine (the principal) has the copy of a database that applications read
and write to. Another machine (the mirror) has a copy of the principal database.
The mirror database is kept almost instantaneously in synch with the
principal database via a network connection. All transactions that are applied
to the principal database are also applied to the mirror database.
You might wonder how, with two copies of the data, applications know which
copy of the database to read and write to. The third machine is a witness and
has the "casting vote" as to which of the other two machines is running the
principal database.
Database mirroring gives very fast switching if the principal database
becomes unavailable. Typically, it takes less than three seconds to be up and
running again, using the mirror database. Many users don't notice an interruption
of response; at the most, perhaps just a slightly slower response than
normal.
Microsoft claims zero data loss for database mirroring. Transactions are sent
to the mirror database's log at the same time as they are written to the principal
database's log. The chances of any transaction being lost on the mirror
are extremely low.
Note: You cannot mirror the master, msdb, tempdb, or model databases.
You can switch control to the mirror database either manually or automatically.
Given that one of the advantages of database mirroring is the really
rapid switching that can occur automatically, I envisage automatic switching
being the typical scenario.
Another useful feature of database mirroring is that any changes that are
made on the new principal database (the former mirror database) are automatically
synchronized with the former principal database when the former
principal server is available again.
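For reference, the mirroring partnership itself is configured with T-SQL. The following is only a rough sketch with placeholder server names and a placeholder database name, and it omits the required endpoint creation and the initial full/log restore on the mirror:

-- On the mirror server, point back at the principal:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.example.com:5022'
-- On the principal server, point at the mirror and, optionally, the witness:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.example.com:5022'
ALTER DATABASE SalesDB SET WITNESS = 'TCP://witness.example.com:5022'
-- A manual failover, run on the principal:
ALTER DATABASE SalesDB SET PARTNER FAILOVER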

You can use database mirroring together with replication. For example, if
you're replicating the data from a headquarters SQL Server instance to
branch offices, all or any of the headquarters or branch office instances to
which replication takes place can be a database mirroring configuration.
While replication and database mirroring are separate processes, you can, in
appropriate circumstances, usefully combine them.

 
Transparent client redirect
Database mirroring depends on a companion new technology on the client
side that is called transparent client redirect. Essentially the client knows
about both the principal database and the mirror database. While the principal
database is working correctly, the client only connects to it. When the
principal database fails and the former mirror database becomes the new
principal database, the client automatically connects to the new principal
database.
Database views
You can use another new feature, called database views, with database mirroring.
Database views allow you to make read-only use of the mirror database.
The mirror database is only minimally out of synchronization with the
principal database, because the transaction log of the principal database is
immediately sent to and applied to the mirror database. For any data
retrieval that doesn't require absolutely up-to-date, real-time information, the
mirror database is satisfactory. Any database access that involves writing to
the database must use the principal database.
One important potential use of database views is as the data source for
Reporting Services. Because reporting requires only read access to the database,
you can retrieve any data you need while taking some load off the principal
database.

 
Differences from failover clustering
I list here some key differences between database mirroring and failover clustering:
 
• Database mirroring allows failover at the database level. Failover clustering
   allows failover at the server level or SQL Server instance level.
• Database mirroring works with standard computers, standard storage,
    and standard networks. Failover clustering requires specific, certified
    hardware.
• Database mirroring has no shared storage components. Failover clustering
    uses shared hard drives.
• Database mirroring allows Reporting Services to run against the
    mirror database. Reporting Services cannot be run against a currently
    inactive node in a failover cluster.
• Database mirroring has two (or more) copies of a database. Failover
    clustering works with a single copy of databases, which are stored on
    shared hard drives.
• Database mirroring is much faster than failover clustering. Typical figures
    might be 3 seconds versus 60 seconds, although exact figures
    depend on various factors specific to your setup.

 
Similarities to failover clustering
The following points apply to both database mirroring and failover clustering:
• Both support automatic detection and failover.
• Each has a manual failover option.
• Each supports transparent client connection to the backup database or
    server.
• Each achieves "zero" work loss.
• Database views minimize the effects of DBA or application errors.
 
 

Tuesday, March 15, 2011

How to get Browser in JavaScript

The Navigator object contains information about the visitor's browser.


Browser Detection

Almost everything in this tutorial works on all JavaScript-enabled browsers. However, there are some things that
just don't work on certain browsers - especially on older browsers.

Sometimes it can be useful to detect the visitor's browser, and then serve the appropriate information.

The Navigator object contains information about the visitor's browser name, version, and more.

Note: There is no public standard that applies to the navigator object, but all major browsers support it.


The Navigator Object

The Navigator object contains all information about the visitor's browser:

Example

<div id="example"></div>

<script type="text/javascript">

txt = "<p>Browser CodeName: " + navigator.appCodeName + "</p>";
txt+= "<p>Browser Name: " + navigator.appName + "</p>";
txt+= "<p>Browser Version: " + navigator.appVersion + "</p>";
txt+= "<p>Cookies Enabled: " + navigator.cookieEnabled + "</p>";
txt+= "<p>Platform: " + navigator.platform + "</p>";
txt+= "<p>User-agent header: " + navigator.userAgent + "</p>";

document.getElementById("example").innerHTML=txt;

</script>


The output will be

Browser CodeName: Mozilla

Browser Name: Netscape

Browser Version: 5.0 (Windows; en-US)

Cookies Enabled: true

Platform: Win32

User-agent header: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.13)
Gecko/20101203 Firefox/3.6.13 (.NET CLR 3.5.30729)

 

JavaScript RegExp for Validation

A regular expression is an object that describes a pattern of characters.
Regular expressions are used to perform pattern-matching and "search-and-replace" functions on text.

Syntax

var patt=new RegExp(pattern,modifiers);

or more simply:

var patt=/pattern/modifiers;
  • pattern specifies the pattern of an expression
  • modifiers specify if a search should be global, case-sensitive, etc.

Modifiers

Modifiers are used to perform case-insensitive and global searches:

Modifier     Description
i            Perform case-insensitive matching
g            Perform a global match (find all matches rather than stopping after the first match)
m            Perform multiline matching

Brackets

Brackets are used to find a range of characters:

Expression         Description
[abc]              Find any character between the brackets
[^abc]             Find any character not between the brackets
[0-9]              Find any digit from 0 to 9
[A-Z]              Find any character from uppercase A to uppercase Z
[a-z]              Find any character from lowercase a to lowercase z
[A-z]              Find any character from uppercase A to lowercase z
[adgk]             Find any character in the given set
[^adgk]            Find any character outside the given set
(red|blue|green)   Find any of the alternatives specified

Metacharacters

Metacharacters are characters with a special meaning:

Metacharacter   Description
.               Find a single character, except newline or line terminator
\w              Find a word character
\W              Find a non-word character
\d              Find a digit
\D              Find a non-digit character
\s              Find a whitespace character
\S              Find a non-whitespace character
\b              Find a match at the beginning/end of a word
\B              Find a match not at the beginning/end of a word
\0              Find a NUL character
\n              Find a new line character
\f              Find a form feed character
\r              Find a carriage return character
\t              Find a tab character
\v              Find a vertical tab character
\xxx            Find the character specified by an octal number xxx
\xdd            Find the character specified by a hexadecimal number dd
\uxxxx          Find the Unicode character specified by a hexadecimal number xxxx

Quantifiers

Quantifier   Description
n+           Matches any string that contains at least one n
n*           Matches any string that contains zero or more occurrences of n
n?           Matches any string that contains zero or one occurrences of n
n{X}         Matches any string that contains a sequence of X n's
n{X,Y}       Matches any string that contains a sequence of X to Y n's
n{X,}        Matches any string that contains a sequence of at least X n's
n$           Matches any string with n at the end of it
^n           Matches any string with n at the beginning of it
?=n          Matches any string that is followed by a specific string n
?!n          Matches any string that is not followed by a specific string n

RegExp Object Properties

Property     Description
global       Specifies if the "g" modifier is set
ignoreCase   Specifies if the "i" modifier is set
lastIndex    The index at which to start the next match
multiline    Specifies if the "m" modifier is set
source       The text of the RegExp pattern

RegExp Object Methods

Method       Description
compile()    Compiles a regular expression
exec()       Tests for a match in a string. Returns the first match
test()       Tests for a match in a string. Returns true or false

test()

The test() method searches a string for a specified value, and returns true or false, depending on the result.
The following example searches a string for the character "e":

Example

var patt1=new RegExp("e");
document.write(patt1.test("The best things in life are free"));

Since there is an "e" in the string, the output of the code above will be:
true

exec()

The exec() method searches a string for a specified value, and returns the text of the found value. If no match is found,
it returns null.

The following example searches a string for the character "e":

Example 1

var patt1=new RegExp("e");
document.write(patt1.exec("The best things in life are free"));

Since there is an "e" in the string, the output of the code above will be:
e

You can read this from www.w3schools.com.