New developer features in SQL Server 2012-2014

  • OFFSET and FETCH clause for ORDER BY
  • WITH RESULT SETS for EXECUTE
  • Improved error handling with THROW
  • New Functions
  • New Query hints
  • Column store index
  • SEQUENCE object
  • In Memory OLTP
  • Contained Database
  • Incremental Statistics
  • Lock priority for online operations 

1) OFFSET and FETCH clause for ORDER BY

Microsoft SQL Server 2012 introduces an interesting addition to the existing ORDER BY clause: OFFSET and FETCH. Using the OFFSET and FETCH clauses, developers can write T-SQL code that fetches only a window of rows from the complete result set. This helps developers achieve SQL Server pagination, improving performance when retrieving and displaying large numbers of records from a SQL Server database.

Example 1.1:

In the T-SQL below, "OFFSET 0 ROWS" and "FETCH NEXT 5 ROWS ONLY" are used as extensions to the ORDER BY clause.

Use AdventureWorks2008R2
GO

SELECT
    PP.FirstName + ' ' + PP.LastName AS 'Name',
    PA.City,
    PA.PostalCode
FROM Person.Address PA
INNER JOIN Person.BusinessEntityAddress PBEA
    ON PA.AddressID = PBEA.AddressID
INNER JOIN Person.Person PP
    ON PBEA.BusinessEntityID = PP.BusinessEntityID
ORDER BY PP.FirstName
OFFSET 0 ROWS
FETCH NEXT 5 ROWS ONLY

Using the OFFSET and FETCH NEXT extensions to ORDER BY, developers on SQL Server 2012 can write stored procedures that retrieve only a specific set of rows rather than the entire query result. This improves query performance by returning only the required result set.

Example 1.2:

Use AdventureWorks2008R2
GO

CREATE PROCEDURE PersonNameAndLocationUsingSQLServerPagingFeature
(
    @StartingRowNumber INT,
    @RowCountPerPage   INT
)
AS
BEGIN
    SELECT
        PP.FirstName + ' ' + PP.LastName AS 'Name',
        PA.City,
        PA.PostalCode
    FROM Person.Address PA
    INNER JOIN Person.BusinessEntityAddress PBEA
        ON PA.AddressID = PBEA.AddressID
    INNER JOIN Person.Person PP
        ON PBEA.BusinessEntityID = PP.BusinessEntityID
    ORDER BY PP.FirstName
    OFFSET (@StartingRowNumber - 1) * @RowCountPerPage ROWS
    FETCH NEXT @RowCountPerPage ROWS ONLY
END
GO

/* Leveraging SQL Server Pagination Using OFFSET and FETCH NEXT Feature in SQL Server 2012 */
Use AdventureWorks2008R2
GO

EXEC PersonNameAndLocationUsingSQLServerPagingFeature 1,10
GO 

2) WITH RESULT SETS for EXECUTE

Previous versions of SQL Server had a limitation: whenever you wanted to change a column name or a data type within the result set of a stored procedure, you ended up changing every reference inside the stored procedure. This limitation is overcome by the WITH RESULT SETS feature in SQL Server 2012.

Microsoft SQL Server 2012 extends the EXECUTE statement with the WITH RESULT SETS option, which can be used to change the column names and data types of the result set returned by executing a stored procedure.

Example 2.1:

CREATE PROCEDURE WithResultSets_SQLServer2012
AS
BEGIN
    SELECT TOP 5
        PP.FirstName + ' ' + PP.LastName AS Name,
        PA.City,
        PA.PostalCode
    FROM Person.Address PA
    INNER JOIN Person.BusinessEntityAddress PBEA
        ON PA.AddressID = PBEA.AddressID
    INNER JOIN Person.Person PP
        ON PBEA.BusinessEntityID = PP.BusinessEntityID
    ORDER BY PP.FirstName
END
GO

Once the stored procedure is created successfully, the next step is to execute it using the WITH RESULT SETS feature of SQL Server 2012.

/* Execute Stored Procedure */

EXEC WithResultSets_SQLServer2012

GO

/*
Execute Stored Procedure – Using WITH RESULT SETS Feature of SQL Server 2012
*/
EXEC WithResultSets_SQLServer2012
WITH RESULT SETS
(
    (
        [Employee Name]        NVARCHAR(100),
        [Employee City]        NVARCHAR(20),
        [Employee Postal Code] NVARCHAR(30)
    )
)
GO

The column names are changed from Name to Employee Name, City to Employee City, and PostalCode to Employee Postal Code. Similarly, the data type is changed from VARCHAR to NVARCHAR.

3) Improved error handling with THROW

To improve error handling in T-SQL scripts, Microsoft introduced the THROW statement in SQL Server 2012. This feature helps database developers handle errors more effectively.

The previously used RAISERROR had its shortcomings. The comparison below shows the differences between the two, along with examples:

In which version of SQL Server was it introduced?

RAISERROR: Introduced in SQL Server 7.0. Per Books Online, Microsoft suggests using the THROW statement instead of RAISERROR in new applications. RAISERROR cannot be used in SQL Server 2014's natively compiled stored procedures.

THROW: Introduced in SQL Server 2012. The THROW statement is simpler and easier to use than RAISERROR, and it can be used in SQL Server 2014's natively compiled stored procedures.

Syntax

RAISERROR:

RAISERROR ( { error_number | message | @local_variable }
    { ,severity ,state }
    [ ,argument [ ,...n ] ] )
    [ WITH option [ ,...n ] ]

THROW:

THROW [ { error_number | @local_variable },
    { message | @local_variable },
    { state | @local_variable } ]
[ ; ]

Can re-throw the original exception that invoked the CATCH block?

RAISERROR: NO. It always generates a new exception, and the details of the original exception are lost. The example below demonstrates this:

BEGIN TRY
    DECLARE @result INT
    -- Generate a divide-by-zero error
    SET @result = 55/0
END TRY
BEGIN CATCH
    -- Get the details of the error
    -- that invoked the CATCH block
    DECLARE
        @ErMessage  NVARCHAR(2048),
        @ErSeverity INT,
        @ErState    INT

    SELECT
        @ErMessage  = ERROR_MESSAGE(),
        @ErSeverity = ERROR_SEVERITY(),
        @ErState    = ERROR_STATE()

    RAISERROR (@ErMessage,
               @ErSeverity,
               @ErState)
END CATCH

 

RESULT:
Msg 50000, Level 16, State 1, Line 19
Divide by zero error encountered.

NOTE: The actual line number of the code that generated the divide-by-zero error here is 4, but the exception message returned by RAISERROR shows it as 19. Also, the error number corresponding to a divide-by-zero error in sys.messages is 8134, but the one returned by RAISERROR is 50000.

THROW: YES. To re-throw the original exception caught in the TRY block, we can just specify the THROW statement without any parameters in the CATCH block. The example below demonstrates this:

BEGIN TRY
    DECLARE @result INT
    -- Generate a divide-by-zero error
    SET @result = 55/0
END TRY
BEGIN CATCH
    THROW
END CATCH

RESULT:
Msg 8134, Level 16, State 1, Line 4
Divide by zero error encountered.

The above example makes it clear that the THROW statement is much simpler for re-throwing an exception, and that it returns the correct error number and line number.

Causes the statement batch to be ended?

RAISERROR: Example 1: In the below batch of statements, the PRINT statement after the RAISERROR statement will be executed.

BEGIN
    PRINT 'BEFORE RAISERROR'
    RAISERROR('RAISERROR TEST',16,1)
    PRINT 'AFTER RAISERROR'
END

RESULT:

BEFORE RAISERROR
Msg 50000, Level 16, State 1, Line 3
RAISERROR TEST
AFTER RAISERROR

Example 2: In the below example, all the statements after the RAISERROR statement are executed.

BEGIN TRY
    DECLARE @RESULT INT = 55/0
END TRY
BEGIN CATCH
    PRINT 'BEFORE RAISERROR';

    -- Get the details of the error
    -- that invoked the CATCH block
    DECLARE
        @ErMessage  NVARCHAR(2048),
        @ErSeverity INT,
        @ErState    INT

    SELECT
        @ErMessage  = ERROR_MESSAGE(),
        @ErSeverity = ERROR_SEVERITY(),
        @ErState    = ERROR_STATE()

    RAISERROR (@ErMessage,
               @ErSeverity,
               @ErState)

    PRINT 'AFTER RAISERROR'
END CATCH

PRINT 'AFTER CATCH'

RESULT:
BEFORE RAISERROR
Msg 50000, Level 16, State 1, Line 19
Divide by zero error encountered.
AFTER RAISERROR
AFTER CATCH

THROW: Example 1: In the below batch of statements, the PRINT statement after the THROW statement will not be executed.

BEGIN
    PRINT 'BEFORE THROW';
    THROW 50000,'THROW TEST',1
    PRINT 'AFTER THROW'
END

RESULT:

BEFORE THROW
Msg 50000, Level 16, State 1, Line 3
THROW TEST

Example 2: In the below example, no PRINT statements after the THROW statement are executed.

BEGIN TRY
    DECLARE @RESULT INT = 55/0
END TRY
BEGIN CATCH
    PRINT 'BEFORE THROW';
    THROW;
    PRINT 'AFTER THROW'
END CATCH

PRINT 'AFTER CATCH'

RESULT:
BEFORE THROW
Msg 8134, Level 16, State 1, Line 2
Divide by zero error encountered.

Can set the severity level?

RAISERROR: YES. The severity parameter specifies the severity of the exception.

THROW: NO. There is no severity parameter; the exception severity is always set to 16.

Requires the preceding statement to end with a semicolon (;) statement terminator?

RAISERROR: NO.

THROW: YES. The statement before the THROW statement must be terminated with a semicolon (;).
Can raise a system error message?

The sys.messages table holds both system-defined and user-defined messages; message IDs less than 50000 are system messages.

RAISERROR: YES. With RAISERROR we can raise a system exception.

Example:

RAISERROR (40655,16,1)

RESULT:
Msg 40655, Level 16, State 1, Line 1
Database 'master' cannot be restored.

THROW: NO. With THROW we cannot raise a system exception, but when used in a CATCH block it can re-throw the system exception.

Example: Trying to raise a system exception (i.e. an exception with an error number less than 50000).

THROW 40655, 'Database master cannot be restored.', 1

RESULT:
Msg 35100, Level 16, State 10, Line 1
Error number 40655 in the THROW statement is outside the valid range. Specify an error number in the valid range of 50000 to 2147483647

Can raise a user-defined message with a message_id greater than 50000 that is not defined in the sys.messages table?

RAISERROR: NO. If a msg_id is passed to RAISERROR, the ID must be defined in sys.messages.

Example:

RAISERROR (60000, 16, 1)

RESULT:
Msg 18054, Level 16, State 1, Line 1
Error 60000, severity 16, state 1 was raised, but no message with that error number was found in sys.messages. If error is larger than 50000, make sure the user-defined message is added using sp_addmessage.

Now add the Message to SYS.MESSAGES Table by using the below statement:

EXEC sys.sp_addmessage 60000, 16, 'Test User Defined Message'

Now try to Raise the Error:
RAISERROR (60000, 16, 1)

RESULT:
Msg 60000, Level 16, State 1, Line 1
Test User Defined Message

THROW: YES. The error_number parameter does not have to be defined in sys.messages.

Example:

THROW 60000, 'Test User Defined Message', 1

RESULT:
Msg 60000, Level 16, State 1, Line 1
Test User Defined Message
Allows substitution parameters in the message parameter?

First, add a sample test message with parameters to the sys.messages table using the statement below:

EXEC sp_addmessage 70000,16,'Message with Parameter 1: %d and Parameter 2:%s'

RAISERROR: YES. The msg_str parameter can contain printf formatting styles.

Example 1:

RAISERROR (70000, 16, 1, 505, 'Basavaraj')

RESULT:
Msg 70000, Level 16, State 1, Line 1
Message with Parameter 1: 505 and Parameter 2:Basavaraj

THROW: NO. The message parameter does not accept printf-style formatting.

Example: message manipulation is not allowed in the THROW statement, so the statement below will fail:

THROW 58000,'String1' + ' String2',1

RESULT:
Msg 102, Level 15, State 1, Line 1
Incorrect syntax near ‘+’.

To solve such problems, we can prepare the message prior to the THROW statement and then pass it to THROW as a variable. The example below illustrates this.

DECLARE @message NVARCHAR(2048)
SET @message = 'String1' + ' String2';
THROW 58000, @message, 1

RESULT:
Msg 58000, Level 16, State 1, Line 3
String1 String2

 

4) New Functions

SQL Server 2012 introduces 14 new built-in functions. These functions ease the path of migration for information workers by emulating functionality that is found in the expression languages of many desktop applications.

The new functions are:

 

Conversion functions

  • PARSE – Returns the result of an expression, translated to the requested data type

 

Example:

SELECT PARSE('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;

  • TRY_CONVERT – Returns a value cast to the specified data type if the cast succeeds; otherwise, returns null.

 

Example:

SELECT TRY_CONVERT(datetime2, '12/31/2010') AS Result;

GO

  • TRY_PARSE – Returns the result of an expression, translated to the requested data type, or null if the cast fails in SQL Server 2012. Use TRY_PARSE only for converting from string to date/time and number types.
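
Example (a small sketch mirroring the PARSE example above; the second statement is illustrative and returns NULL rather than raising an error):

SELECT TRY_PARSE('Monday, 13 December 2010' AS datetime2 USING 'en-US') AS Result;

SELECT TRY_PARSE('Not a date' AS datetime2 USING 'en-US') AS Result; -- returns NULL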

Date and time functions

 

  • DATEFROMPARTS – Returns a date value for the specified year, month, and day.

 

Example:

SELECT DATEFROMPARTS ( 2010, 12, 31 ) AS Result;

 

  • DATETIME2FROMPARTS – Returns a datetime2 value for the specified date and time and with the specified precision.

 

Example:

SELECT DATETIME2FROMPARTS ( 2010, 12, 31, 23, 59, 59, 0, 0 ) AS Result;

 

 

  • DATETIMEOFFSETFROMPARTS – Returns a datetimeoffset value for the specified date and time and with the specified offsets and precision.

 

Example:

SELECT DATETIMEOFFSETFROMPARTS ( 2010, 12, 31, 14, 23, 23, 0, 12, 0, 7 ) AS Result;

 

  • EOMONTH – Returns the last day of the month that contains the specified date, with an optional offset.

Example:

DECLARE @date VARCHAR(255) = '12/1/2011';
SELECT EOMONTH ( @date ) AS Result;
GO
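
The optional second argument adds an offset in months before the end of month is computed. A small sketch:

SELECT EOMONTH ( '12/1/2011', 1 ) AS Result; -- last day of the following month (2012-01-31)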


Logical functions

 

  • CHOOSE – Returns the item at the specified index from a list of values

 

Example:

USE AdventureWorks2012;
GO
SELECT ProductCategoryID, CHOOSE (ProductCategoryID, 'A','B','C','D','E') AS Expression1
FROM Production.ProductCategory;

ProductCategoryID Expression1
----------------- -----------
3                 C
1                 A
2                 B
4                 D

(4 row(s) affected)

  • IIF – Returns one of two values, depending on whether the Boolean expression evaluates to true or false

Example:

DECLARE @a int = 45, @b int = 40;
SELECT IIF ( @a > @b, 'TRUE', 'FALSE' ) AS Result;

Result
--------
TRUE

(1 row(s) affected)

 

String functions

 

  • CONCAT – Returns a string that is the result of concatenating two or more string values.

 

Example:

SELECT CONCAT ( 'Happy ', 'Birthday ', 11, '/', '25' ) AS Result;

 

  • FORMAT – Returns a value formatted with the specified format and optional culture in SQL Server 2012. Use the FORMAT function for locale-aware formatting of date/time and number values as strings.

 

Example:

DECLARE @d DATETIME = '10/01/2011';

SELECT FORMAT ( @d, 'd', 'en-US' ) AS 'US English Result'
      ,FORMAT ( @d, 'd', 'en-gb' ) AS 'Great Britain English Result'
      ,FORMAT ( @d, 'd', 'de-de' ) AS 'German Result'
      ,FORMAT ( @d, 'd', 'zh-cn' ) AS 'Simplified Chinese (PRC) Result';

SELECT FORMAT ( @d, 'D', 'en-US' ) AS 'US English Result'
      ,FORMAT ( @d, 'D', 'en-gb' ) AS 'Great Britain English Result'
      ,FORMAT ( @d, 'D', 'de-de' ) AS 'German Result'
      ,FORMAT ( @d, 'D', 'zh-cn' ) AS 'Chinese (Simplified PRC) Result';

5) New and Enhanced Query Optimizer Hints

The syntax for the FORCESEEK table hint has been modified. You can now specify an index and index columns to further control the access method on the index. The existing FORCESEEK syntax remains unmodified and works as before. No changes to applications are necessary if you do not plan to use the new functionality.
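
For illustration, a minimal sketch of the extended FORCESEEK syntax (assuming an index such as the IX_SOH_OrderDate index created in the example further below):

SELECT SalesOrderID, OrderDate
FROM Sales.SalesOrderHeader
    WITH (FORCESEEK (IX_SOH_OrderDate (OrderDate))) -- seek on the named index, using OrderDate as the seek column
WHERE OrderDate >= '20060101';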

The FORCESCAN table hint has been added. It complements the FORCESEEK hint by specifying that the query optimizer use only an index scan operation as the access path to the table or view referenced in the query. The FORCESCAN hint can be useful for queries in which the optimizer underestimates the number of affected rows and chooses a seek operation rather than a scan operation. FORCESCAN can be specified with or without an INDEX hint.

Example:

CREATE INDEX IX_SOH_OrderDate
ON Sales.SalesOrderHeader(OrderDate, CustomerID, TotalDue);

DECLARE @start_date DATETIME = '20060101', @end_date DATETIME = '20120102';

SELECT SalesOrderID, OrderDate, CustomerID, TotalDue, OnlineOrderFlag
FROM Sales.SalesOrderHeader WITH (FORCESCAN)
WHERE OrderDate >= @start_date
  AND OrderDate < @end_date;

SELECT SalesOrderID, OrderDate, CustomerID, TotalDue, OnlineOrderFlag
FROM Sales.SalesOrderHeader WITH (FORCESEEK)
WHERE OrderDate >= @start_date
  AND OrderDate < @end_date;

Here we see that the query that was forced to seek actually took longer, used more CPU, and had to perform a much larger number of reads.

Comparing the execution plans shows why: the scan produces a trivial plan, while the seek has to perform an expensive key lookup to fetch the single bit column that is not covered by the index.

Ideally, we would have added OnlineOrderFlag to the index, to eliminate that costly key lookup. But, as mentioned above, changing the index is not always possible (and may have other unknown side effects you might not have the luxury to explore). Feel free to play with this and adjust the start and end dates so that you have very low and very high row counts. You will see that sometimes the seek fares better, but sometimes the scan is much more efficient.

6) Column store Indexes

The SQL Server in-memory columnstore index stores and manages data by using column-based data storage and column-based query processing. Columnstore indexes work well for data warehousing workloads that primarily perform bulk loads and read-only queries. Use the columnstore index to achieve up to 10x query performance gains over traditional row-oriented storage, and up to 7x data compression over the uncompressed data size.

A columnstore index is a technology for storing, retrieving and managing data by using a columnar data format, called a columnstore. SQL Server supports both clustered and nonclustered columnstore indexes. Both use the same in-memory columnstore technology, but they do have differences in purpose and in features they support.

Benefits

Columnstore indexes work well for mostly read-only queries that perform analysis on large data sets. Often, these are queries for data warehousing workloads. Columnstore indexes give high performance gains for queries that use full table scans, and are not well-suited for queries that seek into the data, searching for a particular value.

Column store Index benefits:

  • Columns often have similar data, which results in high compression rates.
  • High compression rates improve query performance by using a smaller in-memory footprint. In turn, query performance can improve because SQL Server can perform more query and data operations in-memory.
  • A new query execution mechanism called batch-mode execution has been added to SQL Server that reduces CPU usage by a large amount. Batch-mode execution is closely integrated with, and optimized around, the columnstore storage format. Batch-mode execution is sometimes known as vector-based or vectorized execution.
  • Queries often select only a few columns from a table, which reduces total I/O from the physical media.

In SQL Server, a clustered columnstore index:

  • Is available in Enterprise, Developer, and Evaluation editions.
  • Is updateable.
  • Is the primary storage method for the entire table.
  • Has no key columns. All columns are included columns.
  • Is the only index on the table. It cannot be combined with any other indexes.
  • Can be configured to use columnstore or columnstore archival compression.
  • Does not physically store columns in a sorted order. Instead, it stores data to improve compression and performance.

CREATE TABLE SimpleTable(
    ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED COLUMNSTORE INDEX cci_Simple ON SimpleTable;
GO

In SQL Server, a nonclustered columnstore index:

  • Can index a subset of columns in the clustered index or heap. For example, it can index the frequently used columns.
  • Requires extra storage to store a copy of the columns in the index.
  • Is updated by rebuilding the index or switching partitions in and out. It is not updateable by using the DML operations such as insert, update, and delete.
  • Can be combined with other indexes on the table.
  • Can be configured to use columnstore or columnstore archival compression.
  • Does not physically store columns in a sorted order. Instead, it stores data to improve compression and performance. Pre-sorting the data before creating the columnstore index is not required, but can improve columnstore compression.

CREATE TABLE SimpleTable
    (ProductKey [int] NOT NULL,
    OrderDateKey [int] NOT NULL,
    DueDateKey [int] NOT NULL,
    ShipDateKey [int] NOT NULL);
GO
CREATE CLUSTERED INDEX cl_simple ON SimpleTable (ProductKey);
GO
CREATE NONCLUSTERED COLUMNSTORE INDEX csindx_simple
ON SimpleTable
(OrderDateKey, DueDateKey, ShipDateKey);
GO

https://msdn.microsoft.com/en-us/library/gg492088(v=sql.120).aspx

 

7) Sequence Numbers

A sequence is a user-defined schema-bound object that generates a sequence of numeric values according to the specification with which the sequence was created. The sequence of numeric values is generated in an ascending or descending order at a defined interval and may cycle (repeat) as requested.

Sequences, unlike identity columns, are not associated with tables. An application refers to a sequence object to receive its next value. User applications can reference a sequence object and coordinate the key values across multiple rows and tables.

A sequence is created independently of the tables by using the CREATE SEQUENCE statement. Options enable you to control the increment, maximum and minimum values, starting point, automatic restarting capability, and caching to improve performance.

Unlike identity column values, which are generated when rows are inserted, an application can obtain the next sequence number before inserting the row by calling the NEXT VALUE FOR function. The sequence number is allocated when NEXT VALUE FOR is called even if the number is never inserted into a table. The NEXT VALUE FOR function can be used as the default value for a column in a table definition.

Use sp_sequence_get_range to get a range of multiple sequence numbers at once. A sequence can be defined as any integer data type. If the data type is not specified, a sequence defaults to bigint.
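
A minimal sketch of sp_sequence_get_range (assuming the Test.CountBy1 sequence created in Example 7.2 below):

DECLARE @first_value sql_variant, @last_value sql_variant;

-- Reserve five sequential numbers in one call
EXEC sys.sp_sequence_get_range
    @sequence_name = N'Test.CountBy1',
    @range_size = 5,
    @range_first_value = @first_value OUTPUT,
    @range_last_value = @last_value OUTPUT;

SELECT @first_value AS FirstValue, @last_value AS LastValue;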

Example 7.1 :

Below is a simple code to create a sequence object.

CREATE SEQUENCE MySeq AS int
    START WITH 1    -- Start with value 1
    INCREMENT BY 1  -- Increment by 1
    MINVALUE 0      -- Minimum value is zero
    MAXVALUE 100    -- Maximum it can go to is 100
    NO CYCLE        -- Do not wrap around after reaching 100
    -- CYCLE        -- use CYCLE instead if you want the values to restart from the minimum
    CACHE 50        -- Preallocate 50 values in memory rather than hitting IO for each increment
    -- NO CACHE     -- use NO CACHE instead if you want IO on disk for every value

Using Sequences

Use sequences instead of identity columns in the following scenarios:

  • The application requires a number before the insert into the table is made.
  • The application requires sharing a single series of numbers between multiple tables or multiple columns within a table.
  • The application must restart the number series when a specified number is reached.
  • The application requires sequence values to be sorted by another field. The NEXT VALUE FOR function can apply the OVER clause to the function call. The OVER clause guarantees that the values returned are generated in the order of the OVER clause's ORDER BY clause (see the sketch after this list).
  • An application requires multiple numbers to be assigned at the same time. For example, an application needs to reserve five sequential numbers. Requesting identity values could result in gaps in the series if other processes were simultaneously issued numbers. Calling sp_sequence_get_range can retrieve several numbers in the sequence at once.
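
A short sketch of the OVER usage noted above (the table values are illustrative):

SELECT Name,
    NEXT VALUE FOR Test.CountBy1 OVER (ORDER BY Name) AS ListNumber -- values assigned in Name order
FROM (VALUES ('Tire'), ('Seat'), ('Brake')) AS v(Name);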

 

Limitations

Unlike identity columns, whose values cannot be changed, sequence values are not automatically protected after insertion into the table. To prevent sequence values from being changed, use an update trigger on the table to roll back changes.

 

Uniqueness is not automatically enforced for sequence values. The ability to reuse sequence values is by design. If sequence values in a table are required to be unique, create a unique index on the column. If sequence values in a table are required to be unique throughout a group of tables, create triggers to prevent duplicates caused by update statements or sequence number cycling.

The sequence object generates numbers according to its definition, but the sequence object does not control how the numbers are used. Sequence numbers inserted into a table can have gaps when a transaction is rolled back, when a sequence object is shared by multiple tables, or when sequence numbers are allocated without using them in tables. When created with the CACHE option, an unexpected shutdown, such as a power failure, can lose the sequence numbers in the cache.

If there are multiple instances of the NEXT VALUE FOR function specifying the same sequence generator within a single Transact-SQL statement, all those instances return the same value for a given row processed by that Transact-SQL statement. This behavior is consistent with the ANSI standard.

Example 7.2:

Using sequence in single table

-- Create the Test schema
CREATE SCHEMA Test ;
GO

-- Create a table
CREATE TABLE Test.Orders
    (OrderID int PRIMARY KEY,
    Name varchar(20) NOT NULL,
    Qty int NOT NULL);
GO

-- Create a sequence
CREATE SEQUENCE Test.CountBy1
    START WITH 1
    INCREMENT BY 1 ;
GO

-- Insert three records
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Tire', 2) ;
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Seat', 1) ;
INSERT Test.Orders (OrderID, Name, Qty)
    VALUES (NEXT VALUE FOR Test.CountBy1, 'Brake', 1) ;
GO

-- View the table
SELECT * FROM Test.Orders ;
GO

Example 7.3:

Using sequence across multiple tables

CREATE SCHEMA Audit ;
GO

CREATE SEQUENCE Audit.EventCounter
    AS int
    START WITH 1
    INCREMENT BY 1 ;
GO

CREATE TABLE Audit.ProcessEvents
(
    EventID int PRIMARY KEY CLUSTERED
        DEFAULT (NEXT VALUE FOR Audit.EventCounter),
    EventTime datetime NOT NULL DEFAULT (getdate()),
    EventCode nvarchar(5) NOT NULL,
    Description nvarchar(300) NULL
) ;
GO

CREATE TABLE Audit.ErrorEvents
(
    EventID int PRIMARY KEY CLUSTERED
        DEFAULT (NEXT VALUE FOR Audit.EventCounter),
    EventTime datetime NOT NULL DEFAULT (getdate()),
    EquipmentID int NULL,
    ErrorNumber int NOT NULL,
    EventDesc nvarchar(256) NULL
) ;
GO

CREATE TABLE Audit.StartStopEvents
(
    EventID int PRIMARY KEY CLUSTERED
        DEFAULT (NEXT VALUE FOR Audit.EventCounter),
    EventTime datetime NOT NULL DEFAULT (getdate()),
    EquipmentID int NOT NULL,
    StartOrStop bit NOT NULL
) ;
GO

INSERT Audit.StartStopEvents (EquipmentID, StartOrStop)
    VALUES (248, 0) ;
INSERT Audit.StartStopEvents (EquipmentID, StartOrStop)
    VALUES (72, 0) ;
INSERT Audit.ProcessEvents (EventCode, Description)
    VALUES (2735, 'Clean room temperature 18 degrees C.') ;
INSERT Audit.ProcessEvents (EventCode, Description)
    VALUES (18, 'Spin rate threshold exceeded.') ;
INSERT Audit.ErrorEvents (EquipmentID, ErrorNumber, EventDesc)
    VALUES (248, 82, 'Feeder jam') ;
INSERT Audit.StartStopEvents (EquipmentID, StartOrStop)
    VALUES (248, 1) ;
INSERT Audit.ProcessEvents (EventCode, Description)
    VALUES (1841, 'Central feed in bypass mode.') ;

-- The following statement combines all events, though not all fields.
SELECT EventID, EventTime, Description FROM Audit.ProcessEvents
UNION SELECT EventID, EventTime, EventDesc FROM Audit.ErrorEvents
UNION SELECT EventID, EventTime,
    CASE StartOrStop
        WHEN 0 THEN 'Start'
        ELSE 'Stop'
    END
FROM Audit.StartStopEvents
ORDER BY EventID ;
GO

OUTPUT:

EventID  EventTime                Description
1        2009-11-02 15:00:51.157  Start
2        2009-11-02 15:00:51.160  Start
3        2009-11-02 15:00:51.167  Clean room temperature 18 degrees C.
4        2009-11-02 15:00:51.167  Spin rate threshold exceeded.
5        2009-11-02 15:00:51.173  Feeder jam
6        2009-11-02 15:00:51.177  Stop
7        2009-11-02 15:00:51.180  Central feed in bypass mode.

 

8) In-Memory OLTP (In-Memory Optimization)

New in SQL Server 2014, In-Memory OLTP can significantly improve OLTP database application performance. In-Memory OLTP is a memory-optimized database engine integrated into the SQL Server engine, optimized for OLTP.

To use In-Memory OLTP, you define a heavily accessed table as memory optimized. Memory-optimized-tables are fully transactional, durable, and are accessed using Transact-SQL in the same way as disk-based tables. A query can reference both memory-optimized tables and disk-based tables. A transaction can update data in memory-optimized tables and disk-based tables. Stored procedures that only reference memory-optimized tables can be natively compiled into machine code for further performance improvements.

The In-Memory OLTP engine is designed for extremely high session concurrency for OLTP type of transactions driven from a highly scaled-out middle-tier. To achieve this, it uses latch-free data structures and optimistic, multi-version concurrency control. The result is predictable, sub-millisecond low latency and high throughput with linear scaling for database transactions. The actual performance gain depends on many factors, but 5-to-20 times performance improvements are common.

The following summarizes the workload patterns that may benefit most from using In-Memory OLTP:

  • High data insertion rate from multiple concurrent connections. Implementation scenario: a primarily append-only store that is unable to keep up with the insert workload. Benefits: eliminates contention and reduces logging.
  • Read performance and scale with periodic batch inserts and updates. Implementation scenario: high-performance read operations, especially when each server request has multiple read operations to perform, and scale-up requirements cannot be met. Benefits: eliminates contention when new data arrives, lowers the latency of data retrieval, and minimizes code execution time.
  • Intensive business logic processing in the database server. Implementation scenario: insert, update, and delete workloads with intensive computation inside stored procedures and read/write contention. Benefits: eliminates contention and minimizes code execution time for reduced latency and improved throughput.
  • Low latency. Implementation scenario: business transactions that require lower latency than typical database solutions can achieve. Benefits: eliminates contention, minimizes code execution time, and provides low-latency code execution and efficient data retrieval.
  • Session state management. Implementation scenario: frequent inserts, updates, and point lookups under high-scale load from numerous stateless web servers. Benefits: eliminates contention, retrieves data efficiently, and optionally reduces or removes IO when using non-durable tables.

  1. In-Memory OLTP will improve performance best in OLTP with short-running transactions.
  2. Programming patterns that In-Memory OLTP will improve include concurrency scenarios, point lookups, workloads where there are many inserts and updates, and business logic in stored procedures.
  3. Integration with SQL Server means you can have both memory-optimized tables and disk-based tables in the same database, and query across both types of tables.

In SQL Server 2014 there are limitations in the Transact-SQL surface area supported for In-Memory OLTP.

In-Memory OLTP achieves significant performance and scalability gains by using:

  • Algorithms that are optimized for accessing memory-resident data.
  • Optimistic concurrency control that eliminates logical locks.
  • Lock free objects that eliminate all physical locks and latches. Threads that perform transactional work don’t use locks or latches for concurrency control.
  • Natively compiled stored procedures, which have significantly better performance than interpreted stored procedures, when accessing a memory-optimized table.

Example:

CREATE DATABASE imoltp
GO

-- Add a memory-optimized filegroup and a container to the database.
ALTER DATABASE imoltp ADD FILEGROUP imoltp_mod CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE imoltp ADD FILE (name='imoltp_mod1', filename='d:\data\imoltp_mod1') TO FILEGROUP imoltp_mod
ALTER DATABASE imoltp SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT=ON
GO

USE imoltp
GO

-- Create a durable (data will be persisted) memory-optimized table;
-- two of the columns are indexed.
CREATE TABLE dbo.ShoppingCart (
    ShoppingCartId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED,
    UserId INT NOT NULL INDEX ix_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT=1000000),
    CreatedDate DATETIME2 NOT NULL,
    TotalPrice MONEY
) WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_AND_DATA)
GO

-- Create a non-durable table. Data will not be persisted;
-- data is lost if the server shuts down unexpectedly.
CREATE TABLE dbo.UserSession (
    SessionId INT IDENTITY(1,1) PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=400000),
    UserId int NOT NULL,
    CreatedDate DATETIME2 NOT NULL,
    ShoppingCartId INT,
    INDEX ix_UserId NONCLUSTERED HASH (UserId) WITH (BUCKET_COUNT=400000)
)
WITH (MEMORY_OPTIMIZED=ON, DURABILITY=SCHEMA_ONLY)
GO
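
Stored procedures that reference only memory-optimized tables can also be natively compiled, as mentioned above. A minimal sketch against the ShoppingCart table from this example (the procedure name usp_AddCart is illustrative):

-- Natively compiled stored procedure: compiled to machine code at CREATE time
CREATE PROCEDURE dbo.usp_AddCart @UserId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Insert a new cart row for the given user
    INSERT INTO dbo.ShoppingCart (UserId, CreatedDate)
    VALUES (@UserId, SYSDATETIME())
END
GO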

https://msdn.microsoft.com/en-in/library/dn133186.aspx

9) Contained Databases

A contained database is a database that is isolated from other databases and from the instance of SQL Server that hosts the database. SQL Server 2014 helps users isolate their database from the instance in four ways.

  • Much of the metadata that describes a database is maintained in the database (in addition to, or instead of, maintaining metadata in the master database).
  • All metadata are defined using the same collation.
  • User authentication can be performed by the database, reducing the database's dependency on the logins of the instance of SQL Server.
  • The SQL Server environment (DMVs, XEvents, etc.) reports and can act upon containment information.

Containment

User entities that reside entirely within the database are considered contained. Any entities that reside outside of the database, or rely on interaction with functions outside of the database, are considered uncontained.

In general, user entities fall into the following categories of containment:

  • Fully contained user entities (those that never cross the database boundary), for example sys.indexes. Any code that uses these features or any object that references only these entities is also fully contained.
  • Uncontained user entities (those that cross the database boundary), for example sys.server_principals or a server principal (login) itself. Any code that uses these entities or any functions that references these entities are uncontained.

Partially Contained Database

The contained database feature is currently available only in a partially contained state. A partially contained database is a contained database that allows the use of uncontained features.

Use the sys.dm_db_uncontained_entities and sys.sql_modules (Transact-SQL) views to return information about uncontained objects or features. By determining the containment status of the elements of your database, you can discover what objects or features must be replaced or altered to promote containment.
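
For instance, a quick sketch using the first of those views:

-- List uncontained entities in the current database
SELECT ue.class_desc,
    OBJECT_NAME(ue.major_id) AS object_name, -- NULL when the entity is not a schema object
    ue.feature_name
FROM sys.dm_db_uncontained_entities AS ue;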

The behavior of partially contained databases differs most distinctly from that of non-contained databases with regard to collation.

Benefits of using Partially Contained Databases

There are issues and complications associated with the non-contained databases that can be resolved by using a partially contained database.

Database Movement

One of the problems that occurs when moving databases is that some important information can be unavailable when a database is moved from one instance to another. For example, login information is stored within the instance instead of in the database. When you move a non-contained database from one instance to another instance of SQL Server, this information is left behind. You must identify the missing information and move it with your database to the new instance of SQL Server. This process can be difficult and time-consuming.

The partially contained database can store important information in the database so the database still has the information after it is moved.

Benefit of Contained Database Users with AlwaysOn

By reducing the ties to the instance of SQL Server, partially contained databases can be useful during failover when you use Always On Availability Groups.

Creating contained users enables the user to connect directly to the contained database. This is a very significant feature in high availability and disaster recovery scenarios such as in an AlwaysOn solution. If the users are contained users, in case of failover, people would be able to connect to the secondary without creating logins on the instance hosting the secondary. This provides an immediate benefit.

Initial Database Development

Because a developer may not know where a new database will be deployed, limiting the deployed environmental impacts on the database lessens the work and concern for the developer. In the non-contained model, the developer must consider possible environmental impacts on the new database and program accordingly. With partially contained databases, however, developers can detect instance-level impacts on the database early, which reduces this work and concern.

Database Administration

Maintaining database settings in the database, instead of in the master database, lets each database owner have more control over their database, without giving the database owner sysadmin permission.

 

Example:

Creating a contained database is a three-step process:

Step 1: Enable contained database authentication at the SQL Server instance level.

sp_configure 'show advanced options', 1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO

Step 2: Create the database with "containment" set to "partial".

CREATE DATABASE [MyDb]
CONTAINMENT = PARTIAL
ON PRIMARY
( NAME = N'My', FILENAME = N'C:\My.mdf')
LOG ON
( NAME = N'My_log', FILENAME = N'C:\My_log.ldf')

Step 3: To verify that the contained database fundamentals are working, we want the user credentials to be part of the database, so we create the user as a "SQL user with password".

CREATE USER MyUser
WITH PASSWORD = '[email protected]';

GO

Now if you try to log in with the user just created, you get a login error, which proves that the user is not available at the SQL Server instance level.

Now click Options in the login dialog and specify the database name in "Connect to database"; you should be able to log in, which proves that the user is part of the database and not of the SQL Server instance.

10) Incremental Statistics

Statistics are very important to the SQL Server database engine and are what helps the optimizer build good execution plans, so maintaining statistics is a key part of any DBA's job. In the past, updating statistics on large tables could be very resource-intensive, and often ended up with either an entire large table being scanned or, when using sp_updatestats, not enough data being scanned.

With the addition of the INCREMENTAL option for statistics on large partitioned indexes, statistics maintenance can be managed in a more deliberate way. The INCREMENTAL option allows statistics creation per partition by setting INCREMENTAL to ON in the CREATE STATISTICS statement. So, when we update stats, we can now tell SQL Server which partition to update, meaning we can maintain the partition that is changing most rapidly without having to scan data that does not change.
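
A minimal sketch of the syntax (assuming a partitioned table dbo.Sales with an OrderDate column; the names are illustrative):

-- Create per-partition statistics
CREATE STATISTICS Stat_Sales_OrderDate
ON dbo.Sales (OrderDate)
WITH INCREMENTAL = ON;

-- Later, refresh only partition 1 instead of rescanning the whole table
UPDATE STATISTICS dbo.Sales (Stat_Sales_OrderDate)
WITH RESAMPLE ON PARTITIONS (1);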

11) Lock Priority for Online Operations

SQL Server 2014 introduces enhancements to two common database maintenance operations namely Partition Switching and Online Index Rebuild which greatly increases enterprise application availability by reducing maintenance downtime impact.

  • Managed Lock Priority for Table Partition Switch & Online Index Rebuild – This feature allows customers to manage the priority of table locks acquired by partition SWITCH and Online Index Rebuild (OIR), thereby allowing customers to mitigate any negative performance impact of these locks on the primary workload on the server.
  • Single Partition Online Index Rebuild – This feature allows customers with large partitioned tables to rebuild the index online for individual partitions, thereby increasing application uptime.

Partition Switching and Online Index Rebuild are executed via ALTER TABLE and ALTER INDEX, respectively. Current functionality for partition switching (SWITCH) and online index rebuild (OIR) acquires an exclusive table Sch-M lock during the DDL operation, impacting the database workload (DML or query operations) running concurrently against the affected table. In the case of OIR, two locks are required: a table S-lock at the beginning of the DDL operation and a Sch-M lock at the end. In the case of SWITCH, two Sch-M locks are required: one for the source and one for the destination table.

To execute the DDL statement for SWITCH/OIR, all active blocking transactions running on the particular table must first complete. Conversely, when the DDL for SWITCH/OIR is executed first, it blocks all new transactions that require locks on the affected table. Although the duration of the lock for SWITCH/OIR is very short, waiting for all open transactions on a given table to complete while blocking new transactions from starting may significantly affect throughput, causing workload slowdowns or timeouts, and significantly limiting access to the underlying table(s).

This has an impact on 24x7 mission-critical workloads that focus on maximizing throughput and availability with short or non-existent maintenance windows.

Managed Lock Priority

This new feature allows a database administrator (DBA) to manage the Sch-M lock for SWITCH and the S-lock/Sch-M lock for OIR. The DBA now has the ability to specify one of several manageability options to handle the DDL locks of SWITCH/OIR:

  • Enforce the SWITCH or OIR (kill all blockers immediately, or after a specified wait time (MAX_DURATION = n [minutes]) expires)
  • Wait for blockers, and after the wait time (MAX_DURATION) expires, place the lock in the regular lock queue (as it does today)
  • Wait for blockers, and after the wait time (MAX_DURATION) expires, exit the DDL (SWITCH/OIR) without any action

It is important to note that in all three cases, if there is no blocking activity during the wait time (MAX_DURATION = n [minutes]), the SWITCH/OIR lock is acquired immediately without waiting and the DDL statement completes. The sketch below shows how these options surface in the syntax.
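
A minimal sketch of the WAIT_AT_LOW_PRIORITY clause that exposes these options (assuming the IX_SOH_OrderDate index from the earlier example):

ALTER INDEX IX_SOH_OrderDate ON Sales.SalesOrderHeader
REBUILD WITH
(
    ONLINE = ON
    (
        WAIT_AT_LOW_PRIORITY
        (
            MAX_DURATION = 5 MINUTES,   -- wait up to 5 minutes for blockers
            ABORT_AFTER_WAIT = BLOCKERS -- then kill the blocking sessions;
                                        -- SELF exits the DDL instead, NONE queues the lock normally
        )
    )
);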