SQL Server Triggers: A Mixed Blessing

Date: 2022-09-22

SQL Server triggers are a controversial subject. They provide convenience at a relatively low cost, but are often misused by developers and DBAs, resulting in performance bottlenecks or maintenance challenges.

This article provides a brief review of triggers, an in-depth discussion of how to use them effectively, and a look at when they can put a developer in an inescapable dilemma.

While all of the demonstrations in this article use SQL Server, the advice applies to most databases. The challenges posed by triggers also appear in MySQL, PostgreSQL, MongoDB, and many other platforms.

What Is a Trigger?

SQL Server triggers can be defined on a database or on a table, allowing code to execute automatically when specific actions occur. This article focuses on DML triggers on tables, as they tend to be overused. In contrast, DDL triggers on databases are generally more centralized and less harmful to performance.

A trigger is a set of code that runs when data in a table changes. Triggers can be defined to execute on inserts, updates, deletes, or any combination of these operations. A MERGE statement can fire triggers for each action it performs.

Triggers can be defined as INSTEAD OF or AFTER. An AFTER trigger executes after data is written to the table: its operations run in the same transaction as the write, but after the write occurs, and if the trigger fails, the original operation also fails. An INSTEAD OF trigger replaces the invoked write operation: the insert, update, or delete never happens, and the trigger's contents execute instead.
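For example, here is a minimal sketch of an INSTEAD OF trigger that converts physical deletes into soft deletes. The table and its IsDeleted column are hypothetical, purely for illustration:

CREATE TABLE dbo.DemoSoftDelete
( DemoID INT NOT NULL PRIMARY KEY CLUSTERED,
 DemoValue VARCHAR(50) NOT NULL,
 IsDeleted BIT NOT NULL DEFAULT 0);
GO
CREATE TRIGGER TR_DemoSoftDelete_Delete
 ON dbo.DemoSoftDelete
 INSTEAD OF DELETE
AS
BEGIN
 SET NOCOUNT ON;
 -- The DELETE statement never touches the table; this UPDATE runs instead.
 UPDATE DemoSoftDelete
  SET IsDeleted = 1
 FROM dbo.DemoSoftDelete
 INNER JOIN Deleted
 ON Deleted.DemoID = DemoSoftDelete.DemoID;
END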

Triggers allow TSQL to be executed when writes occur, regardless of the source of those writes. They are typically used for operations such as logging, validation, or other DML that must run whenever a write is performed. This is convenient: whether the write comes from an API, application code, a release script, or an internal process, the trigger fires regardless.

What a Trigger Looks Like

Using the Sales.Orders table in the WideWorldImporters sample database as an example, suppose you need to log all updates or deletes on that table, along with some details about what happened. This could be done by modifying application code, but that would require changing every location in code that writes to the table. To solve the problem with triggers instead, take the following steps:

1. Create a log table to accept written data. The following TSQL creates a simple log table, with some added data points:

CREATE TABLE Sales.Orders_log
( Orders_log_ID int NOT NULL IDENTITY(1,1) 
 CONSTRAINT PK_Sales_Orders_log PRIMARY KEY CLUSTERED,
 OrderID int NOT NULL,
 CustomerID_Old int NOT NULL,
 CustomerID_New int NOT NULL,
 SalespersonPersonID_Old int NOT NULL,
 SalespersonPersonID_New int NOT NULL,
 PickedByPersonID_Old int NULL,
 PickedByPersonID_New int NULL,
 ContactPersonID_Old int NOT NULL,
 ContactPersonID_New int NOT NULL,
 BackorderOrderID_Old int NULL,
 BackorderOrderID_New int NULL,
 OrderDate_Old date NOT NULL,
 OrderDate_New date NOT NULL,
 ExpectedDeliveryDate_Old date NOT NULL,
 ExpectedDeliveryDate_New date NOT NULL,
 CustomerPurchaseOrderNumber_Old nvarchar(20) NULL,
 CustomerPurchaseOrderNumber_New nvarchar(20) NULL,
 IsUndersupplyBackordered_Old bit NOT NULL,
 IsUndersupplyBackordered_New bit NOT NULL,
 Comments_Old nvarchar(max) NULL,
 Comments_New nvarchar(max) NULL,
 DeliveryInstructions_Old nvarchar(max) NULL,
 DeliveryInstructions_New nvarchar(max) NULL,
 InternalComments_Old nvarchar(max) NULL,
 InternalComments_New nvarchar(max) NULL,
 PickingCompletedWhen_Old datetime2(7) NULL,
 PickingCompletedWhen_New datetime2(7) NULL,
 LastEditedBy_Old int NOT NULL,
 LastEditedBy_New int NOT NULL,
 LastEditedWhen_Old datetime2(7) NOT NULL,
 LastEditedWhen_New datetime2(7) NOT NULL,
 ActionType VARCHAR(6) NOT NULL,
 ActionTime DATETIME2(3) NOT NULL,
UserName VARCHAR(128) NULL);

The table records the old and new values of every column. This is very comprehensive; we could instead simply log the old version of each row and reconstruct changes by joining the old and new versions together. The last three columns are metadata, recording the type of operation performed (insert, update, or delete), when it happened, and by whom.

2. Create a trigger to log table changes:

CREATE TRIGGER TR_Sales_Orders_Audit
 ON Sales.Orders
 AFTER INSERT, UPDATE, DELETE
AS
BEGIN
 SET NOCOUNT ON;
 INSERT INTO Sales.Orders_log
 (OrderID, CustomerID_Old, CustomerID_New, 
 SalespersonPersonID_Old, SalespersonPersonID_New, 
 PickedByPersonID_Old, PickedByPersonID_New,
 ContactPersonID_Old, ContactPersonID_New, 
 BackorderOrderID_Old, BackorderOrderID_New, 
 OrderDate_Old, OrderDate_New, ExpectedDeliveryDate_Old,
 ExpectedDeliveryDate_New, 
 CustomerPurchaseOrderNumber_Old, 
 CustomerPurchaseOrderNumber_New, 
 IsUndersupplyBackordered_Old, 
 IsUndersupplyBackordered_New,
 Comments_Old, Comments_New, 
 DeliveryInstructions_Old, DeliveryInstructions_New, 
 InternalComments_Old, InternalComments_New, 
 PickingCompletedWhen_Old,
 PickingCompletedWhen_New, LastEditedBy_Old, 
 LastEditedBy_New, LastEditedWhen_Old, 
 LastEditedWhen_New, ActionType, ActionTime, UserName)
 SELECT
 ISNULL(Inserted.OrderID, Deleted.OrderID) AS OrderID,
 Deleted.CustomerID AS CustomerID_Old,
 Inserted.CustomerID AS CustomerID_New,
 Deleted.SalespersonPersonID AS SalespersonPersonID_Old,
 Inserted.SalespersonPersonID AS SalespersonPersonID_New,
 Deleted.PickedByPersonID AS PickedByPersonID_Old,
 Inserted.PickedByPersonID AS PickedByPersonID_New,
 Deleted.ContactPersonID AS ContactPersonID_Old,
 Inserted.ContactPersonID AS ContactPersonID_New,
 Deleted.BackorderOrderID AS BackorderOrderID_Old,
 Inserted.BackorderOrderID AS BackorderOrderID_New,
 Deleted.OrderDate AS OrderDate_Old,
 Inserted.OrderDate AS OrderDate_New,
 Deleted.ExpectedDeliveryDate 
 AS ExpectedDeliveryDate_Old,
 Inserted.ExpectedDeliveryDate 
 AS ExpectedDeliveryDate_New,
 Deleted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_Old,
 Inserted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_New,
 Deleted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_Old,
 Inserted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_New,
 Deleted.Comments AS Comments_Old,
 Inserted.Comments AS Comments_New,
 Deleted.DeliveryInstructions 
 AS DeliveryInstructions_Old,
 Inserted.DeliveryInstructions 
 AS DeliveryInstructions_New,
 Deleted.InternalComments AS InternalComments_Old,
 Inserted.InternalComments AS InternalComments_New,
 Deleted.PickingCompletedWhen 
 AS PickingCompletedWhen_Old,
 Inserted.PickingCompletedWhen 
 AS PickingCompletedWhen_New,
 Deleted.LastEditedBy AS LastEditedBy_Old,
 Inserted.LastEditedBy AS LastEditedBy_New,
 Deleted.LastEditedWhen AS LastEditedWhen_Old,
 Inserted.LastEditedWhen AS LastEditedWhen_New,
 CASE
 WHEN Inserted.OrderID IS NULL THEN 'DELETE'
 WHEN Deleted.OrderID IS NULL THEN 'INSERT'
 ELSE 'UPDATE'
 END AS ActionType,
 SYSUTCDATETIME() ActionTime,
 SUSER_SNAME() AS UserName
 FROM Inserted
 FULL JOIN Deleted
 ON Inserted.OrderID = Deleted.OrderID;
END

The only function of this trigger is to insert data into the log table, one row per row affected by a given write operation. It is simple, easy to understand, and easy to maintain over time as the table changes. Additional columns can be added if more detail needs to be tracked, such as the database name, server name, number of rows affected, or the calling application.
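Most of that extra context is available from built-in functions. Here is a hedged sketch of additions that could go in the trigger's SELECT list (the functions are standard SQL Server; the column aliases are illustrative):

 DB_NAME() AS DatabaseName, -- current database
 @@SERVERNAME AS ServerName, -- current server/instance
 APP_NAME() AS ApplicationName, -- application name supplied by the connection
 (SELECT COUNT(*) FROM Inserted) AS RowsAffected -- rows written (valid inside a trigger)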

3. The final step is to test and verify that the log table is correct.

Here is a test to update the table after adding the trigger:

UPDATE Orders
 SET InternalComments = 'Item is no longer backordered',
 BackorderOrderID = NULL,
 IsUndersupplyBackordered = 0,
 LastEditedBy = 1,
 LastEditedWhen = SYSUTCDATETIME()
FROM sales.Orders
WHERE Orders.OrderID = 10;

The result is as follows:

(Screenshot: the logged row in Sales.Orders_log, showing old and new column values plus the metadata columns.)

Some columns are omitted above, but we can quickly confirm that the change was logged, including the metadata columns at the end of the log table.

The Inserted and Deleted Tables

In the previous example, the data written to the log table is read from the inserted and deleted virtual tables. These special tables are available within any trigger for the associated write operation: inserted is populated by insert operations, deleted is populated by delete operations, and update operations populate both.

For INSERT and UPDATE, inserted contains a snapshot of the new value of each column in the table. For DELETE and UPDATE, deleted contains a snapshot of the old value of each column as it was before the write operation.

When Are Triggers Most Useful?

The best uses of DML triggers are short, simple, easy-to-maintain write operations that are largely independent of application business logic.

Some important uses of triggers include:

  • Logging changes to a history table.
  • Auditing users and their actions on sensitive tables.
  • Adding values to a row that may not be available to the application (due to security or other restrictions), for example:

    • login username
    • time of the operation
    • server/database name
  • Simple validation (see the sketch after this list).
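
As an example of simple validation, here is a minimal sketch of a validation trigger; the business rule itself is illustrative, not from the original article:

CREATE TRIGGER TR_Sales_Orders_Validate
 ON Sales.Orders
 AFTER INSERT, UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 IF EXISTS (SELECT 1 FROM Inserted
  WHERE Inserted.ExpectedDeliveryDate < Inserted.OrderDate)
 BEGIN
  -- Fail the original write; the trigger shares its transaction.
  ROLLBACK TRANSACTION;
  THROW 50001, 'ExpectedDeliveryDate cannot precede OrderDate.', 1;
 END
END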

The key is to keep trigger code compact enough to be easily maintained. When triggers grow to tens of thousands of lines, they become black boxes that developers dare not touch. As a result, more code gets added, but older code is rarely reviewed. Even with documentation, this is hard to maintain.

For triggers to work effectively, they should be written to be set-based. If stored procedures must be used within triggers, make sure they use table-valued parameters where needed so that data can be passed in a set-based fashion. The following example shows a trigger that iterates over order IDs, executing a sample stored procedure once per ID:

CREATE TRIGGER TR_Sales_Orders_Process
 ON Sales.Orders
 AFTER INSERT
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @count INT;
 SELECT @count = COUNT(*) FROM inserted;
 DECLARE @min_id INT;
 SELECT @min_id = MIN(OrderID) FROM inserted;
 DECLARE @current_id INT = @min_id;
 WHILE @current_id < @min_id + @count
 BEGIN
 EXEC dbo.process_order_fulfillment 
 @OrderID = @current_id;
 SELECT @current_id = @current_id + 1;
 END
END

While relatively simple, the performance of INSERT operations against Sales.Orders will suffer when multiple rows are inserted at once, as SQL Server is forced to execute the process_order_fulfillment stored procedure once per row. A simple fix is to rewrite the stored procedure to accept a set of order IDs rather than processing them one at a time:

CREATE TYPE dbo.udt_OrderID_List AS TABLE(
 OrderID INT NOT NULL,
 PRIMARY KEY CLUSTERED 
( OrderID ASC));
GO
CREATE TRIGGER TR_Sales_Orders_Process
 ON Sales.Orders
 AFTER INSERT
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @OrderID_List dbo.udt_OrderID_List;
 INSERT INTO @OrderID_List (OrderID)
 SELECT OrderID FROM Inserted;
 EXEC dbo.process_order_fulfillment @OrderIDs = @OrderID_List;
END

The result of this change is that the complete set of IDs is passed from the trigger to the stored procedure and processed there. As long as the stored procedure handles the data in a set-based manner, row-by-row execution is avoided. That said, there is value in avoiding stored procedures within triggers altogether: they add an extra layer of encapsulation that further hides the TSQL executed when writing to the table. They should be considered a last resort, used only when the alternative is rewriting the same TSQL in many places in the application.
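The article does not show the rewritten procedure itself. The following is a minimal sketch of what a set-based dbo.process_order_fulfillment might look like; the fulfillment logic shown is a placeholder assumption:

CREATE PROCEDURE dbo.process_order_fulfillment
 @OrderIDs dbo.udt_OrderID_List READONLY -- table-valued parameters must be READONLY
AS
BEGIN
 SET NOCOUNT ON;
 -- Placeholder logic: process every order in one set-based statement,
 -- rather than once per OrderID.
 UPDATE Orders
  SET PickingCompletedWhen = SYSUTCDATETIME()
 FROM Sales.Orders
 INNER JOIN @OrderIDs OrderIDs
 ON OrderIDs.OrderID = Orders.OrderID;
END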

When Triggers Are Dangerous

One of the biggest challenges for architects and developers is ensuring that triggers are used only when needed, rather than allowed to become a one-size-fits-all solution. Adding TSQL to a trigger is often seen as faster and easier than adding code to the application, but the cost of doing so grows over time with each line of code added.

Triggers can become dangerous when:

  • There are too many of them. Keep triggers as few as possible to reduce complexity.
  • Trigger code gets complicated. If updating a row in a table causes thousands of lines of trigger code to execute, it becomes difficult for developers to fully understand what happens when data is written to the table. Worse, when something goes wrong, troubleshooting can be very challenging.
  • Triggers span servers. This introduces network operations into triggers, which can cause writes to slow down or fail when there are connectivity issues. Even cross-database triggers can be problematic if the target database needs maintenance.
  • Triggers invoke triggers. The most painful situation is when inserting a single row causes 100 triggers across 75 tables to execute. When writing trigger code, make sure a trigger can perform all the necessary logic without firing further triggers. The extra triggers are usually unnecessary.
  • RECURSIVE_TRIGGERS is set to ON. This database-level setting is off by default. When on, it allows the contents of a trigger to fire the same trigger again. Recursive triggers can greatly hurt performance and are very confusing to debug. Often the recursion is unintentional, occurring when DML inside one trigger fires other triggers as a side effect.
  • Functions, stored procedures, or views appear in triggers. Encapsulating more business logic in triggers makes them more complex while giving the false impression that the trigger code is short and simple when it is not. Avoid stored procedures and functions in triggers whenever possible.
  • Iteration happens. Loops and cursors are inherently row-by-row and can cause an operation on 1,000 rows to execute its logic 1,000 times, one row at a time, greatly hurting performance.

This is a long list, but it can be summarized simply: short and simple triggers perform better and avoid most of the pitfalls above. If triggers are used to maintain complex business logic, more and more logic will be added over time, inevitably violating the best practices above.

It is important to note that, to remain atomic and transactional, any operation that fires a trigger keeps its transaction open until the trigger completes. This means that long triggers not only make transactions last longer, but also hold locks for longer durations. Therefore, when creating triggers or adding logic to existing ones, test to understand their impact on locking, blocking, and waits.

How to Improve Triggers

There are many ways to make triggers more maintainable, easier to understand, and more performant. Here are some tips on how to effectively manage triggers and avoid falling into pitfalls.

The trigger itself should be well documented:

  • Why does this trigger exist?
  • What can it do?
  • How does it work?
  • Are there any exceptions or caveats to how triggers work?

Also, if the TSQL in the trigger is difficult to understand, inline comments can be added to help developers looking at it for the first time.

The following is a sample trigger document:

/* 12/29/2020 EHP
 This trigger logs all changes to the table to the Orders_log
 table that occur for non-internal customers.
 CustomerID = -1 signifies an internal/test customer and 
 these are not audited.
*/
CREATE TRIGGER TR_Sales_Orders_Audit
 ON Sales.Orders
 FOR INSERT, UPDATE, DELETE
AS
BEGIN
 SET NOCOUNT ON;
 INSERT INTO Sales.Orders_log
 (OrderID, CustomerID_Old, CustomerID_New, 
 SalespersonPersonID_Old, SalespersonPersonID_New,
 PickedByPersonID_Old, PickedByPersonID_New,
 ContactPersonID_Old, ContactPersonID_New, 
 BackorderOrderID_Old, BackorderOrderID_New, 
 OrderDate_Old, OrderDate_New, 
 ExpectedDeliveryDate_Old,
 ExpectedDeliveryDate_New, 
 CustomerPurchaseOrderNumber_Old, 
 CustomerPurchaseOrderNumber_New, 
 IsUndersupplyBackordered_Old, 
 IsUndersupplyBackordered_New,
 Comments_Old, Comments_New, 
 DeliveryInstructions_Old, DeliveryInstructions_New, 
 InternalComments_Old, InternalComments_New, 
 PickingCompletedWhen_Old, PickingCompletedWhen_New, 
 LastEditedBy_Old, LastEditedBy_New, 
 LastEditedWhen_Old, LastEditedWhen_New, 
 ActionType, ActionTime, UserName)
 SELECT
 ISNULL(Inserted.OrderID, Deleted.OrderID) AS OrderID, 
 -- The OrderID can never change. 
 --This ensures we get the ID correctly, 
 --regardless of operation type.
 Deleted.CustomerID AS CustomerID_Old,
 Inserted.CustomerID AS CustomerID_New,
 Deleted.SalespersonPersonID AS SalespersonPersonID_Old,
 Inserted.SalespersonPersonID AS SalespersonPersonID_New,
 Deleted.PickedByPersonID AS PickedByPersonID_Old,
 Inserted.PickedByPersonID AS PickedByPersonID_New,
 Deleted.ContactPersonID AS ContactPersonID_Old,
 Inserted.ContactPersonID AS ContactPersonID_New,
 Deleted.BackorderOrderID AS BackorderOrderID_Old,
 Inserted.BackorderOrderID AS BackorderOrderID_New,
 Deleted.OrderDate AS OrderDate_Old,
 Inserted.OrderDate AS OrderDate_New,
 Deleted.ExpectedDeliveryDate AS ExpectedDeliveryDate_Old,
 Inserted.ExpectedDeliveryDate AS ExpectedDeliveryDate_New,
 Deleted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_Old,
 Inserted.CustomerPurchaseOrderNumber 
 AS CustomerPurchaseOrderNumber_New,
 Deleted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_Old,
 Inserted.IsUndersupplyBackordered 
 AS IsUndersupplyBackordered_New,
 Deleted.Comments AS Comments_Old,
 Inserted.Comments AS Comments_New,
 Deleted.DeliveryInstructions 
 AS DeliveryInstructions_Old,
 Inserted.DeliveryInstructions 
 AS DeliveryInstructions_New,
 Deleted.InternalComments AS InternalComments_Old,
 Inserted.InternalComments AS InternalComments_New,
 Deleted.PickingCompletedWhen AS PickingCompletedWhen_Old,
 Inserted.PickingCompletedWhen 
 AS PickingCompletedWhen_New,
 Deleted.LastEditedBy AS LastEditedBy_Old,
 Inserted.LastEditedBy AS LastEditedBy_New,
 Deleted.LastEditedWhen AS LastEditedWhen_Old,
 Inserted.LastEditedWhen AS LastEditedWhen_New,
 CASE -- Determine the operation type based on whether 
 --Inserted exists, Deleted exists, or both exist.
 WHEN Inserted.OrderID IS NULL THEN 'DELETE'
 WHEN Deleted.OrderID IS NULL THEN 'INSERT'
 ELSE 'UPDATE'
 END AS ActionType,
 SYSUTCDATETIME() ActionTime,
 SUSER_SNAME() AS UserName
 FROM Inserted
 FULL JOIN Deleted
 ON Inserted.OrderID = Deleted.OrderID
 WHERE Inserted.CustomerID <> -1 
 -- -1 indicates an internal/non-production 
 --customer that should not be audited.
 OR Deleted.CustomerID <> -1; 
 -- -1 indicates an internal/non-production 
 --customer that should not be audited.
END

Note that the documentation is not comprehensive, but it includes a short header and explains some key parts of the TSQL inside the trigger:

  • The exclusion of rows where CustomerID = -1. This is not obvious to anyone unfamiliar with the data, so it is a helpful note.
  • What the ActionType CASE statement is for.
  • Why ISNULL is used on OrderID across the inserted and deleted tables.

Use IF UPDATE()

Within a trigger, the UPDATE() function indicates whether a given column was written to. This allows a trigger to check whether a column was targeted before performing work. Here is an example of the syntax:

CREATE TRIGGER TR_Sales_Orders_Log_BackorderID_Change
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 IF UPDATE(BackorderOrderID)
 BEGIN
 UPDATE OrderBackorderLog
 SET BackorderOrderID = Inserted.BackorderOrderID,
 PreviousBackorderOrderID = Deleted.BackorderOrderID
 FROM dbo.OrderBackorderLog
 INNER JOIN Inserted
 ON Inserted.OrderID = OrderBackorderLog.OrderID
 END
END

By first checking whether BackorderOrderID was updated, the trigger can bypass subsequent operations when they are not needed. This is a great way to improve performance, allowing a trigger to skip code entirely based on whether the relevant column was written to.

COLUMNS_UPDATED indicates which columns in the table were written to as part of the operation and can be used in a trigger to quickly determine whether specified columns were affected by an insert or update. While useful, it is complex to use and difficult to read. I generally don't recommend it, as it will almost certainly confuse developers unfamiliar with it.

Note that neither UPDATE() nor COLUMNS_UPDATED cares whether a value actually changed. A write to a column, even one that leaves the value the same, still causes UPDATE() to return true and COLUMNS_UPDATED to flag the column. They track only whether a column was the target of a write operation, not whether the value itself changed.
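For completeness, here is a hedged sketch of how COLUMNS_UPDATED can be tested. It returns a varbinary bitmask with one bit per column ordinal, which is exactly why it reads poorly; the trigger name and column ordinal are illustrative:

CREATE TRIGGER TR_Sales_Orders_ColumnsUpdated_Demo
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 -- Illustrative: was the table's 10th column (by ordinal position) written to?
 DECLARE @ordinal INT = 10;
 IF (SUBSTRING(COLUMNS_UPDATED(), (@ordinal - 1) / 8 + 1, 1)
  & POWER(2, (@ordinal - 1) % 8)) > 0
 BEGIN
  PRINT 'Column 10 was a target of this write.';
 END
END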

One Trigger per Action

Keep trigger code as simple as possible. Growth in the number of triggers on a table greatly increases the table's complexity, making it harder to understand how writes to it behave.

For example, consider the following way of defining a table trigger:

CREATE TRIGGER TR_Sales_Orders_I
 ON Sales.Orders
 AFTER INSERT
CREATE TRIGGER TR_Sales_Orders_IU
 ON Sales.Orders
 AFTER INSERT, UPDATE
CREATE TRIGGER TR_Sales_Orders_UD
 ON Sales.Orders
 AFTER UPDATE, DELETE
CREATE TRIGGER TR_Sales_Orders_UID
 ON Sales.Orders
 AFTER UPDATE, INSERT, DELETE
CREATE TRIGGER TR_Sales_Orders_ID
 ON Sales.Orders
 AFTER INSERT, DELETE

What happens when a row is inserted? In what order do the triggers fire? Answering these questions requires research. Maintaining fewer triggers is an easy solution and takes the guesswork out of how writes happen to a given table. For reference, trigger order can be adjusted using the system stored procedure sp_settriggerorder, but this applies only to AFTER triggers.
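For example, this call (using one of the trigger names above) forces a specific AFTER trigger to fire first for INSERT statements. Only 'First' and 'Last' can be assigned, so ordering more than two triggers per action is still impossible:

EXEC sp_settriggerorder
 @triggername = 'Sales.TR_Sales_Orders_I',
 @order = 'First',
 @stmttype = 'INSERT';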

Keep Triggers Simple

The best practice for triggers is that they are simple, execute quickly, and do not fire further triggers by executing. There is no hard rule for how complex a trigger may be, but a useful guideline is this: the ideal trigger is simple enough that, if the logic it contains ever has to move elsewhere, the migration would not be prohibitively expensive. If the business logic in a trigger is so complex that the cost of moving it is too high to consider, that trigger has likely become overly complex.

Using our previous example, consider the change-auditing trigger. It could be moved from a trigger into a stored procedure or application code without much work. The ease of logging in a trigger makes it worthwhile, but we should still know how many developer hours it would take to migrate the TSQL from the trigger to another location.

That time estimate can be seen as part of the maintainability cost of a trigger: the price that must be paid to get rid of it if necessary. It may sound abstract, but database migrations between platforms are common, and a set of triggers that performs well in SQL Server may not be efficient in Oracle or PostgreSQL.

Memory-Optimized Table Variables

Sometimes temporary tables are needed in a trigger to allow multiple updates against the data being processed. Temporary tables are stored in tempdb and are subject to tempdb's size, speed, and performance constraints.

For frequently accessed temporary data, memory-optimized table variables are a good way to keep it in memory rather than in tempdb.

The following TSQL configures a database for memory-optimized data (if required):

ALTER DATABASE WideWorldImporters 
SET MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT = ON;
ALTER DATABASE WideWorldImporters ADD FILEGROUP WWI_InMemory_Data 
 CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE WideWorldImporters ADD FILE 
 (NAME='WideWorldImporters_IMOLTP_File_1', 
 FILENAME='C:\SQLData\WideWorldImporters_IMOLTP_File_1.mem') 
 TO FILEGROUP WWI_InMemory_Data;

Once configured, a memory-optimized table type can be created:

CREATE TYPE dbo.SalesOrderMetadata
AS TABLE
( OrderID INT NOT NULL PRIMARY KEY NONCLUSTERED,
 CustomerID INT NOT NULL,
 SalespersonPersonID INT NOT NULL,
 ContactPersonID INT NOT NULL,
 INDEX IX_SalesOrderMetadata_CustomerID NONCLUSTERED HASH 
 (CustomerID) WITH (BUCKET_COUNT = 1000))
WITH (MEMORY_OPTIMIZED = ON);

This TSQL creates the tables needed for the demo trigger:

CREATE TABLE dbo.OrderAdjustmentLog
( OrderAdjustmentLog_ID int NOT NULL IDENTITY(1,1) 
 CONSTRAINT PK_OrderAdjustmentLog PRIMARY KEY CLUSTERED,
 OrderID INT NOT NULL,
 CustomerID INT NOT NULL,
 SalespersonPersonID INT NOT NULL,
 ContactPersonID INT NOT NULL,
CreateTimeUTC DATETIME2(3) NOT NULL); 

Here's a demo trigger using a memory-optimized table variable:

CREATE TRIGGER TR_Sales_Orders_Mem_Test
 ON Sales.Orders
 AFTER UPDATE
AS
BEGIN
 SET NOCOUNT ON;
 DECLARE @OrderData dbo.SalesOrderMetadata;
 INSERT INTO @OrderData
 (OrderID, CustomerID, SalespersonPersonID, 
 ContactPersonID)
 SELECT
 OrderID,
 CustomerID,
 SalespersonPersonID,
 ContactPersonID
 FROM Inserted;
 
 DELETE OrderData
 FROM @OrderData OrderData
 INNER JOIN sales.Customers
 ON Customers.CustomerID = OrderData.CustomerID
 WHERE Customers.IsOnCreditHold = 0;
 UPDATE OrderData
 SET ContactPersonID = 1
 FROM @OrderData OrderData
 WHERE OrderData.ContactPersonID IS NULL;
 
 INSERT INTO dbo.OrderAdjustmentLog
 (OrderID, CustomerID, SalespersonPersonID, 
 ContactPersonID, CreateTimeUTC)
 SELECT
 OrderData.OrderID,
 OrderData.CustomerID,
 OrderData.SalespersonPersonID,
 OrderData.ContactPersonID,
 SYSUTCDATETIME()
 FROM @OrderData OrderData;
END

The more work a trigger must do, the more time is saved, because memory-optimized table variables require no IO to read or write.

Once the initial data has been read from the inserted table, the rest of the trigger can avoid tempdb entirely, removing the overhead that standard table variables or temporary tables would incur.

The following code sets up some test data and runs an update to demonstrate the results of the above code:

UPDATE Customers
 SET IsOnCreditHold = 1
FROM Sales.Customers
WHERE Customers.CustomerID = 832;
UPDATE Orders
 SET SalespersonPersonID = 2
FROM sales.Orders
WHERE CustomerID = 832;

Once executed, the contents of the OrderAdjustmentLog table can be verified:

(Screenshot: the rows written to dbo.OrderAdjustmentLog.)

The results are as expected. By reducing reliance on standard storage and moving intermediate data into memory, memory-optimized table variables provide a way to greatly increase trigger speed. The benefit is limited to scenarios with many accesses to temporary objects, but the technique is also useful in stored procedures or other procedural TSQL.

Alternatives to Triggers

Like all tools, triggers can be abused, becoming a source of confusion, performance bottlenecks, and maintainability nightmares. Many preferable alternatives exist, and they should be considered before implementing a new trigger or adding to an existing one.

Temporal Tables

Introduced in SQL Server 2016, temporal tables provide an easy way to add versioning to a table without building your own data structures and ETL. The versioning is invisible to applications and provides full, ANSI-compliant version support, making it an easy way to solve the problem of retaining old versions of data.
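Here is a minimal sketch of a system-versioned temporal table; the table and history table names are illustrative:

CREATE TABLE dbo.OrdersTemporalDemo
( OrderID INT NOT NULL PRIMARY KEY CLUSTERED,
 CustomerID INT NOT NULL,
 ValidFrom DATETIME2(3) GENERATED ALWAYS AS ROW START NOT NULL,
 ValidTo DATETIME2(3) GENERATED ALWAYS AS ROW END NOT NULL,
 PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo))
WITH (SYSTEM_VERSIONING = ON 
 (HISTORY_TABLE = dbo.OrdersTemporalDemo_History));

-- SQL Server writes the prior version of each updated or deleted row to the
-- history table automatically; no trigger code is required.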

Check Constraints

For simple data validation, check constraints can provide what is needed without functions, stored procedures, or triggers. Define a check constraint on a column and the data is validated automatically whenever it is written.

Here is an example of a Check constraint:

ALTER TABLE Sales.Invoices WITH CHECK ADD CONSTRAINT
CK_Sales_Invoices_ReturnedDeliveryData_Must_Be_Valid_JSON
CHECK ([ReturnedDeliveryData] IS NULL OR 
ISJSON([ReturnedDeliveryData])<>(0))

This code checks whether the column contains valid JSON. If it does, execution proceeds normally; if not, SQL Server throws an error and the write operation fails. Check constraints can test any combination of columns and values, so they can manage simple or complex validation.

Check constraints are inexpensive to create and easy to maintain. They are also easier to document and understand, since their scope is limited to validating incoming data and ensuring data integrity, whereas a trigger can do practically anything imaginable!

Unique Constraints

If a column other than the table's primary key requires unique values, a unique constraint is an easy and efficient way to accomplish that. A unique constraint is a combination of an index and enforced uniqueness; the index is required to validate uniqueness efficiently.

Here is an example of a unique constraint:

ALTER TABLE Warehouse.Colors ADD CONSTRAINT 
UQ_Warehouse_Colors_ColorName UNIQUE NONCLUSTERED (ColorName ASC);

Every time a row is inserted into Warehouse.Colors, ColorName is checked for uniqueness. If the write would result in a duplicate color, the statement fails and the data is not changed. Unique constraints are built for this purpose; they are the easiest way to enforce uniqueness on a column.

A built-in solution is more efficient, easier to maintain, and easier to document. Any developer who sees a unique constraint will immediately understand what it does without digging into TSQL to figure out how things work. This simplicity makes it an ideal solution.

Foreign Key Constraints

Like check constraints and unique constraints, foreign key constraints are another way to verify data integrity before data is written. A foreign key links a column in one table to another table. When data is written to the target table, its value is checked against the referenced table: if the value exists, the write proceeds normally; if not, an error is thrown and the statement fails.

Here is a simple foreign key example:

ALTER TABLE Sales.Orders WITH CHECK ADD CONSTRAINT
FK_Sales_Orders_CustomerID_Sales_Customers FOREIGN KEY (CustomerID)
REFERENCES Sales.Customers (CustomerID);

When data is written to Sales.Orders, the CustomerID column will be checked against the CustomerID column in Sales.Customers.

Similar to unique constraints, foreign keys have a single purpose: to verify that data written to one table exists in another. They are easy to document, easy to understand, and efficient to implement.

Triggers are not the right place for these validation checks; they are a less efficient solution than foreign keys.

Stored Procedures

Logic implemented in triggers can often be moved easily into stored procedures. This removes the complexity that heavy trigger code can cause, while letting developers maintain the logic more easily. Stored procedures are free to structure operations to provide as much atomicity as needed.

One of the basic reasons for implementing triggers is to ensure that a set of operations occurs together with the write: everything succeeds or fails as part of one atomic transaction. Applications don't always need this level of atomicity. Where it is necessary, appropriate isolation levels or table locking can be used in the stored procedure to guarantee transactional integrity.

While SQL Server (and most RDBMSs) provides ACID guarantees that transactions will be atomic, consistent, isolated, and durable, transactions in our own code may or may not need to follow the same rules. Real-world applications have varying requirements for data integrity.

Stored procedures allow code to be customized to achieve the data integrity the application actually requires, ensuring that performance and computing resources are not wasted on integrity guarantees that aren't needed.

For example, a social media application that lets users post photos is unlikely to require fully atomic and consistent transactions. No one cares whether my picture appears a second before or after yours, and if you comment on my photo while I'm editing it, the timing hardly matters to anyone consuming the data. On the other hand, a banking application managing monetary transactions must ensure that transactions execute carefully so that no funds are lost and no reports are wrong. If I have a bank account containing $20 and I withdraw $20 at the same time someone else withdraws $20, we cannot both succeed. One of us gets the $20 first; the other should receive an appropriate error about a $0 balance.
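To make this concrete, here is a hedged sketch of audit logic moved out of a trigger and into a stored procedure. The log table and procedure are illustrative; SET XACT_ABORT ensures the write and the log entry succeed or fail together:

CREATE TABLE dbo.ContactChangeLog
( ContactChangeLog_ID INT NOT NULL IDENTITY(1,1)
 CONSTRAINT PK_ContactChangeLog PRIMARY KEY CLUSTERED,
 OrderID INT NOT NULL,
 ContactPersonID_New INT NOT NULL,
 ActionTime DATETIME2(3) NOT NULL);
GO
CREATE PROCEDURE dbo.Orders_SetContactPerson
 @OrderID INT,
 @NewContactPersonID INT
AS
BEGIN
 SET NOCOUNT ON;
 SET XACT_ABORT ON; -- any error rolls back the whole transaction
 BEGIN TRANSACTION;
  UPDATE Orders
   SET ContactPersonID = @NewContactPersonID
  FROM Sales.Orders
  WHERE Orders.OrderID = @OrderID;
  INSERT INTO dbo.ContactChangeLog
   (OrderID, ContactPersonID_New, ActionTime)
  VALUES (@OrderID, @NewContactPersonID, SYSUTCDATETIME());
 COMMIT TRANSACTION;
END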

Functions

Functions provide an easy way to encapsulate important logic in a single place. A single function reused by inserts into 50 tables is far easier to maintain than 50 triggers, one per table, all implementing the same logic.

Consider the following function:

CREATE FUNCTION Website.CalculateCustomerPrice
 (@CustomerID INT, @StockItemID INT, @PricingDate DATE)
RETURNS DECIMAL(18,2)
WITH EXECUTE AS OWNER
AS
BEGIN
 DECLARE @CalculatedPrice decimal(18,2);
 DECLARE @UnitPrice decimal(18,2);
 DECLARE @LowestUnitPrice decimal(18,2);
 DECLARE @HighestDiscountAmount decimal(18,2);
 DECLARE @HighestDiscountPercentage decimal(18,3);
 DECLARE @BuyingGroupID int;
 DECLARE @CustomerCategoryID int;
 DECLARE @DiscountedUnitPrice decimal(18,2);
 SELECT @BuyingGroupID = BuyingGroupID,
 @CustomerCategoryID = CustomerCategoryID
 FROM Sales.Customers
 WHERE CustomerID = @CustomerID;
 SELECT @UnitPrice = si.UnitPrice
 FROM Warehouse.StockItems AS si
 WHERE si.StockItemID = @StockItemID;
 SET @CalculatedPrice = @UnitPrice;
 SET @LowestUnitPrice = (
 SELECT MIN(sd.UnitPrice)
 FROM Sales.SpecialDeals AS sd
 WHERE ((sd.StockItemID = @StockItemID) 
 OR (sd.StockItemID IS NULL))
 AND ((sd.CustomerID = @CustomerID) 
 OR (sd.CustomerID IS NULL))
 AND ((sd.BuyingGroupID = @BuyingGroupID) 
 OR (sd.BuyingGroupID IS NULL))
 AND ((sd.CustomerCategoryID = @CustomerCategoryID) 
 OR (sd.CustomerCategoryID IS NULL))
 AND ((sd.StockGroupID IS NULL) OR EXISTS (SELECT 1 
 FROM Warehouse.StockItemStockGroups AS sisg
 WHERE sisg.StockItemID = @StockItemID
 AND sisg.StockGroupID = sd.StockGroupID))
 AND sd.UnitPrice IS NOT NULL
 AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);
 IF @LowestUnitPrice IS NOT NULL AND @LowestUnitPrice < @UnitPrice
 BEGIN
 SET @CalculatedPrice = @LowestUnitPrice;
 END;
 SET @HighestDiscountAmount = (
 SELECT MAX(sd.DiscountAmount)
 FROM Sales.SpecialDeals AS sd
 WHERE ((sd.StockItemID = @StockItemID) 
 OR (sd.StockItemID IS NULL))
 AND ((sd.CustomerID = @CustomerID) 
 OR (sd.CustomerID IS NULL))
 AND ((sd.BuyingGroupID = @BuyingGroupID) 
 OR (sd.BuyingGroupID IS NULL))
 AND ((sd.CustomerCategoryID = @CustomerCategoryID) 
 OR (sd.CustomerCategoryID IS NULL))
 AND ((sd.StockGroupID IS NULL) OR EXISTS 
 (SELECT 1 FROM Warehouse.StockItemStockGroups AS sisg 
 WHERE sisg.StockItemID = @StockItemID
 AND sisg.StockGroupID = sd.StockGroupID))
 AND sd.DiscountAmount IS NOT NULL
 AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);
 IF @HighestDiscountAmount IS NOT NULL AND (
 @UnitPrice - @HighestDiscountAmount) < @CalculatedPrice
 BEGIN
 SET @CalculatedPrice = @UnitPrice - @HighestDiscountAmount;
 END;
 SET @HighestDiscountPercentage = (
 SELECT MAX(sd.DiscountPercentage)
 FROM Sales.SpecialDeals AS sd
 WHERE ((sd.StockItemID = @StockItemID)
 OR (sd.StockItemID IS NULL))
 AND ((sd.CustomerID = @CustomerID) 
 OR (sd.CustomerID IS NULL))
 AND ((sd.BuyingGroupID = @BuyingGroupID) 
 OR (sd.BuyingGroupID IS NULL))
 AND ((sd.CustomerCategoryID = @CustomerCategoryID) 
 OR (sd.CustomerCategoryID IS NULL))
 AND ((sd.StockGroupID IS NULL) OR EXISTS 
 (SELECT 1 FROM Warehouse.StockItemStockGroups AS sisg
 WHERE sisg.StockItemID = @StockItemID
 AND sisg.StockGroupID = sd.StockGroupID))
 AND sd.DiscountPercentage IS NOT NULL
 AND @PricingDate BETWEEN sd.StartDate AND sd.EndDate);
 IF @HighestDiscountPercentage IS NOT NULL
 BEGIN
 SET @DiscountedUnitPrice = ROUND(@UnitPrice * 
 @HighestDiscountPercentage / 100.0, 2);
 IF @DiscountedUnitPrice < @CalculatedPrice 
 SET @CalculatedPrice = @DiscountedUnitPrice;
 END;
 RETURN @CalculatedPrice;
END;

In terms of complexity, this function is an absolute beast. While it accepts scalar parameters to determine a calculated price, the operations it performs are substantial, including additional reads of Warehouse.StockItemStockGroups, Warehouse.StockItems, and Sales.Customers. If this is a critical calculation frequently used against single rows of data, encapsulating it in a function is an easy way to get the desired result without adding complexity to a trigger. Use functions with care and be sure to test them against large data sets. Simple scalar functions usually scale well with larger data; more complex functions may perform poorly.

Application Code

When modifying table data from an application, additional data manipulation or validation can also be performed in code before the data is written. This is usually inexpensive, performs well, and helps reduce the harm that runaway triggers can do to a database.

A common reason for putting code in triggers is that doing so avoids modifying application code, pushing builds, and otherwise changing the application. That convenience must be weighed directly against the risks of making changes in the database instead. This often becomes a discussion between application developers and database developers about who will own the new code.

This is a rough guideline, but weighing those factors helps measure the maintainability and risk of adding code to the application versus a trigger.

Computed Columns

Computed columns can incorporate calculations using a variety of arithmetic operations and functions, producing results whenever the underlying columns change. They can be included in indexes, unique constraints, and even primary keys.

SQL Server maintains computed columns automatically whenever any of the underlying values change. Note that each computed column is ultimately determined by the values of other columns in the table.

This is a good alternative to using triggers to maintain derived column values. Computed columns are efficient, automatic, and require no maintenance; they simply work, and they allow even complex calculations to be integrated directly into a table with no additional code in the application or SQL Server.
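A minimal sketch (the table is hypothetical): ExtendedPrice is recalculated automatically whenever Quantity or UnitPrice changes, and PERSISTED stores the value so it can be indexed:

CREATE TABLE dbo.OrderLinesDemo
( OrderLineID INT NOT NULL IDENTITY(1,1)
 CONSTRAINT PK_OrderLinesDemo PRIMARY KEY CLUSTERED,
 Quantity INT NOT NULL,
 UnitPrice DECIMAL(18,2) NOT NULL,
 ExtendedPrice AS (Quantity * UnitPrice) PERSISTED); -- maintained by SQL Server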

Using SQL Server Triggers

Triggers are a useful feature of SQL Server, but like all tools, they can be misused or abused. When deciding whether to use a trigger, always consider the trigger's purpose.

If a trigger is used to write brief transactional data to a log table, it is probably a good trigger. If a trigger is used to enforce complex business rules, it is likely time to reconsider the best way to handle that operation.

Many tools exist as viable alternatives to triggers, such as check constraints and computed columns. There is no shortage of ways to solve a problem; success in database architecture lies in choosing the right tool for the job.

Original link: https://www.red-gate.com/simp…