SQL Security

SQL Azure Security Services


Last week, we released SQL Azure Security Services through SQL Azure Labs. In this initial version of our labs, you can

  • Scan your SQL Azure server or individual databases for security issues - We look for design issues, elevation-of-privilege issues, and more.
  • Get a report of your database security model - You can quickly see which users exist in a database, their role memberships, and their permissions on various objects, making it easy to reason over the presence of user accounts or permissions.
  • Scan your data for malware (currently we only check for mass SQL injection attacks) - We have been observing automated mass SQL injection attacks for over 4 years now, and we scan for the presence of malicious JavaScript in your data.

Please try the service here and let us know your feedback.

- Bala Neerumalla.

 


Azure Trust Services


Microsoft is working on a new Windows Azure service through SQL Azure Labs, called Trust Services. It is an application-level encryption framework that can be used to protect sensitive data stored on the Windows Azure Platform. By using Trust Services you can store keys, authorizations and encryption policies in the cloud, and use them to encrypt and decrypt sensitive data.

Trust Services provides an API that simplifies the development process and enables easy integration with data-driven applications.

Check it out at Microsoft Codename "Trust Services". We look forward to your feedback.

Security Best Practice and Label Security Whitepapers

SQL Server 2012 Best Practices Analyzer


Copied from an internal email from Jakub, a PM on the team:

I’m pleased to announce that SQL Server 2012 Best Practices Analyzer (BPA) has been released and is available for download at http://www.microsoft.com/download/en/details.aspx?id=29302.

Customer Value

The Microsoft SQL Server 2012 BPA is a diagnostic tool that
performs the following functions:


  • Gathers information about a server and a
    Microsoft SQL Server 2012 instance installed on that server.

  • Determines if the configurations are set
    according to the recommended best practices.

  • Reports on all configurations, indicating
    settings that differ from recommendations.

  • Indicates potential problems in the installed
    instance of SQL Server.

  • Recommends solutions to potential problems.

 

Filter SQL Server Audit on action_id / class_type predicate

$
0
0

In SQL Server 2012, a server audit can be created with a predicate expression (refer to MSDN). This predicate expression is evaluated before audit events are written to the audit target: if the evaluation returns TRUE, the event is written to the audit target; otherwise it is not. This lets you filter which audit records reach the audit target based on the predicate expression.

The predicate can refer to any of the audit fields described in sys.fn_get_audit_file (Transact-SQL), except file_name and audit_file_offset.

For example:

Consider a server principal 'foo' that already exists in SQL Server with a server_principal_id of 261. The following server audit will write all the audit events (configured in the audit specification) generated by this principal to the file target. It will not write audit events generated by other principals to the target.

CREATE SERVER AUDIT AuditDataAccessByPrincipal
    TO FILE (FILEPATH = 'C:\SQLAudit\')
    WHERE SERVER_PRINCIPAL_ID = 261
GO

Now, in order to use the action_id field in the predicate expression, one has to provide the integer value of action_id. Specifying a character code value for action_id results in the following error:

CREATE SERVER AUDIT AuditDataAccessByAction_Id
    TO FILE (FILEPATH = 'C:\SQLAudit\')
    WHERE ACTION_ID = 'SL'
GO

Error:

Msg 25713, Level 16, State 23, Line 1
The value specified for event attribute or predicate source, "ACTION_ID", event, "audit_event", is invalid.

This is because internally action_id is stored as an integer value. The sys.fn_get_audit_file function converts the integer value to a character code for two main reasons:

1) Readability: a character code is more readable than an integer value.

2) Consistency with our internal metadata layer, which defines the mapping between integer values and character codes.

The same explanation applies to the class_type field in sys.fn_get_audit_file.
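To see where these integers come from: the character code is padded to its maximum length with spaces (0x20) and the bytes are packed least-significant first, so action_id 'SL' ('S' = 0x53, 'L' = 0x4C) corresponds to the integer 0x20204C53, and class_type 'A' (0x41) padded to two characters corresponds to 0x2041. A quick sanity check you can run anywhere:

SELECT CONVERT(int, 0x20204C53) AS Int_Action_Id -- 538987603
SELECT CONVERT(int, 0x2041) AS Int_class_type    -- 8257
go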

The following functions help work around the problem described above for the action_id and class_type fields.

1) This function converts an action_id string value of varchar(4) to an integer value that can be used in the predicate expression.

create function dbo.GetInt_action_id (@action_id varchar(4)) returns int
as
begin
    declare @x int
    -- Pack up to four characters into an int, least significant byte first;
    -- positions beyond the string length are padded with spaces (0x20)
    SET @x = convert(int, convert(varbinary(1), upper(substring(@action_id, 1, 1))))
    if LEN(@action_id) >= 2
        SET @x = convert(int, convert(varbinary(1), upper(substring(@action_id, 2, 1)))) * power(2,8) + @x
    else
        SET @x = convert(int, convert(varbinary(1), ' ')) * power(2,8) + @x
    if LEN(@action_id) >= 3
        SET @x = convert(int, convert(varbinary(1), upper(substring(@action_id, 3, 1)))) * power(2,16) + @x
    else
        SET @x = convert(int, convert(varbinary(1), ' ')) * power(2,16) + @x
    if LEN(@action_id) >= 4
        SET @x = convert(int, convert(varbinary(1), upper(substring(@action_id, 4, 1)))) * power(2,24) + @x
    else
        SET @x = convert(int, convert(varbinary(1), ' ')) * power(2,24) + @x
    return @x
end
go

 

SELECT dbo.GetInt_action_id('SL') AS Int_Action_Id

Int_Action_Id
-------------
538987603

The following command will now succeed:

CREATE SERVER AUDIT AuditDataAccessByAction_Id
    TO FILE (FILEPATH = 'C:\SQLAudit\')
    WHERE ACTION_ID = 538987603
GO

2) This function converts a class_type string value of varchar(2) to an integer value that can be used in the predicate expression.

create function dbo.GetInt_class_type (@class_type varchar(2)) returns int
as
begin
    declare @x int
    -- Same byte-packing scheme as above, but class_type is at most two characters
    SET @x = convert(int, convert(varbinary(1), upper(substring(@class_type, 1, 1))))
    if LEN(@class_type) >= 2
        SET @x = convert(int, convert(varbinary(1), upper(substring(@class_type, 2, 1)))) * power(2,8) + @x
    else
        SET @x = convert(int, convert(varbinary(1), ' ')) * power(2,8) + @x
    return @x
end
go

SELECT dbo.GetInt_class_type('A') AS Int_class_type

Int_class_type
--------------
8257

The following command will now succeed:

CREATE SERVER AUDIT ClasstypeAuditDataAccess
    TO FILE (FILEPATH = 'C:\SQLAudit\')
    WHERE CLASS_TYPE = 8257
GO

The following statement will generate an audit record with the Server Audit ('A') class type:

ALTER SERVER AUDIT ClasstypeAuditDataAccess
WITH (STATE = ON)

...
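Once the audit is enabled, you can read records back from the file target with sys.fn_get_audit_file to confirm that only the filtered events were written. A sketch, assuming the same C:\SQLAudit\ path used above:

SELECT event_time, action_id, class_type, server_principal_name, statement
FROM sys.fn_get_audit_file('C:\SQLAudit\*', default, default)
go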

PVKConverter

Auditing in Azure SQL Database

SQL Application Column Encryption Sample (Codeplex) available


To meet many compliance requirements on Azure SQL Database, the application needs to encrypt its data. The intent of this article is to provide some guidelines and an example library for encrypting data at rest in relational databases.

We just published the source code for a library, "SQL Application Column Encryption Sample", on CodePlex (https://sqlcolumnencryption.codeplex.com/) that can help developers encrypt data (columns) at rest in an Azure SQL database. This library is intended as sample code and is published as open source, with the goal of allowing the community to improve it while we work on a better built-in solution for Azure SQL Database.

We would appreciate your comments and feedback on this library; it will help us improve it and shape future solutions.

Please use the Discussion section on the CodePlex project, or leave a comment on this forum.


Row-Level Security for Azure SQL Database


I'm excited to announce that we are deploying Row-Level Security, a programmability feature that eases the writing of business security logic in the database, to Azure SQL Database. Coming to a region near you as the deployment propagates around the world, it will be available on all V12 servers once deployment completes. See the main SQL Server team blog for more details. Technical details should start showing up on MSDN today as those sites are updated.
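For a flavor of the syntax, here is a minimal sketch (object names are illustrative): a schema-bound inline table-valued function decides, row by row, whether the current user can see a row, and a security policy binds it to a table as a filter.

CREATE FUNCTION dbo.fn_securitypredicate(@SalesRep sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result WHERE @SalesRep = USER_NAME()
GO

CREATE SECURITY POLICY dbo.SalesFilter
ADD FILTER PREDICATE dbo.fn_securitypredicate(SalesRep) ON dbo.Sales
WITH (STATE = ON)
GO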

Updated MSDN Documentation for Azure SQL Database Row-Level Security

Row-Level Security for Middle-Tier Apps – Using Disjunctions in the Predicate


In Building More Secure Middle-Tier Applications with Azure SQL Database using Row-Level Security, we discussed how CONTEXT_INFO could be used for middle-tier based RLS predicate definitions.

On many occasions it is necessary to introduce a disjunction into the predicate definition for scenarios that need to distinguish between filtered queries for some users and cases where a user must not be subject to filtering (e.g. an administrator), and such disjunctions can significantly affect performance.

The reason for this performance impact is that once an RLS security policy is in place, its predicate is applied to every query against the protected table, and because of the disjunction the query may result in a scan rather than a seek. For details on the difference between scans and seeks, I recommend Craig Freedman's "scans vs. seeks" article.

We are working to optimize some of these scenarios for RLS usage, but we also know we may not be able to address all possible scenarios right away. Because of that, we would like to share an example of how to improve performance under similar circumstances on your own.

The scenario we will analyze is a slight modification to the scenario from the previous RLS blog post, but with one addition: The application needs to allow a super-user/administrator to access all rows.

The way we will identify the super-user in our application is by not setting CONTEXT_INFO to any value (i.e. CONTEXT_INFO() returns null). So we decide to modify the SECURITY POLICY to add the new logic:

CREATE FUNCTION [rls].[fn_userAccessPredicate_with_superuser](@TenantId int) 
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_accessResult
WHERE DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID ('AppUser')
AND
( CONVERT(int, CONVERT( varbinary(4), CONTEXT_INFO())) = @TenantId
OR CONTEXT_INFO() is null )
GO

ALTER SECURITY POLICY [rls].[tenantAccessPolicy]
ALTER FILTER PREDICATE [rls].[fn_userAccessPredicate_with_superuser]([TenantId]) on [dbo].[Sales]
GO


Unfortunately, this seemingly simple change has triggered a regression in your application's performance, and you decide to investigate, comparing the plan for the new predicate against the old one.

 
Fig 1. Plan when using [rls].[fn_userAccessPredicate] as a predicate.


 
Fig 2. Plan when using [rls].[fn_userAccessPredicate_with_superuser] as a predicate.

And after the analysis, the reason seems obvious: the disjunction you just added has transformed the query from a seek to a scan.

You also notice that this disjunction has a peculiarity: one side can be satisfied with a seek (i.e. TenantId = value), while the other side (the administrator case) results in a scan. In a case like this, it may be possible to get better performance by transforming both sides of the disjunction into seeks.

How to address this problem? One possibility in a scenario like this one is to transform the disjunction into a range. How would we accomplish it? By transforming the notion of null into a range that encompasses all values.

First, we alter the security policy to use the older version; after all, we don't want to leave our table unprotected while we fix the new predicate:

ALTER SECURITY POLICY [rls].[tenantAccessPolicy]
ALTER FILTER PREDICATE [rls].[fn_userAccessPredicate]([TenantId]) on [dbo].[Sales]
GO

Then we create a couple of functions that will help us define the min and max for our range based on the current state of CONTEXT_INFO. Please notice that these functions will be data type-specific:

-- If context_info is not set, return MIN_INT, otherwise return context_info value as int
CREATE FUNCTION [rls].[int_lo]() RETURNS int
WITH SCHEMABINDING
AS BEGIN
RETURN CASE WHEN context_info() is null THEN -2147483648 ELSE convert(int, convert(varbinary(4), context_info())) END
END
GO

-- If context_info is not set, return MAX_INT, otherwise return context_info value as int
CREATE FUNCTION [rls].[int_hi]() RETURNS int
WITH SCHEMABINDING
AS BEGIN
RETURN CASE WHEN context_info() is null THEN 2147483647 ELSE convert(int, convert(varbinary(4), context_info())) END
END
GO

And then we proceed to redefine the predicate function and security policy using a range:

-- Now rewrite the predicate
ALTER FUNCTION [rls].[fn_userAccessPredicate_with_superuser](@TenantId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_accessResult
WHERE DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID ('AppUser') -- the shared application login
AND
-- tenant info within the range:
-- If context_info is set, the range will point only to one value
-- If context_info is not set, the range will include all values
@TenantId BETWEEN [rls].[int_lo]() AND [rls].[int_hi]()
GO

-- Replace the predicate with the newly written one
ALTER SECURITY POLICY [rls].[tenantAccessPolicy]
ALTER FILTER PREDICATE [rls].[fn_userAccessPredicate_with_superuser]([TenantId]) on [dbo].[Sales]
GO


To finalize, let's look at the new actual execution plan:

 
Fig 3. Plan when using [rls].[fn_userAccessPredicate_with_superuser] as a predicate.

This new function allows a range seek in both circumstances. When CONTEXT_INFO is set, the range collapses to a single value ("between @value and @value"), which allows the query optimizer to take advantage of the index on TenantId; when it is null, the range simply spans all possible TenantId values.

NOTE: When you test this functionality with a small table, you may see a scan instead of a seek, even though you have a nonclustered index on the TenantId column. The reason for this is that the query optimizer may calculate that for a particular table a scan is faster than a seek. If you hit this behavior, try using "WITH (FORCESEEK)" on your SELECT statement to give the optimizer a hint that a seek is preferred, as in the sketch below.
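For example, a sketch of the hint applied to the Sales table from this post (the RLS range predicate supplies the seekable condition):

SELECT * FROM dbo.Sales WITH (FORCESEEK)
go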

Obviously this is not the only scenario where you may need to rewrite a security predicate to improve performance, and this is certainly not the only workaround, but hopefully this example gives you a pattern to follow for similar scenarios and ideas for other ones.

To conclude, I would like to reiterate that we are currently investigating how to improve performance on predicates similar to the one I showed here with a disjunction being used to distinguish between filtered queries and cases where a user must not be subject to filtering. We will update you with news on the potential solution once it becomes available.

*** Update. Sample source code available at https://rlssamples.codeplex.com/SourceControl/latest#RLS-Middle-Tier-Apps-Demo-using_disjunctions.sql

Row-Level Security: Blocking unauthorized INSERTs


Row-Level Security (RLS) for Azure SQL Database enables you to transparently filter all “get” operations (SELECT, UPDATE, DELETE) on a table according to some user-defined criteria.

Today, however, there is no built-in support for blocking “set” operations (INSERT, UPDATE) according to the same criteria, so it is possible to insert or update rows such that they will subsequently be filtered away from you. In a multi-tenant middle-tier application, for instance, an RLS policy in your database can automatically filter results returned by “SELECT * FROM table,” but it cannot block the application from accidentally inserting rows for the wrong tenant. For additional protection against mistakes in application code, developers may want to implement constraints in their database so that an error is thrown if the application tries to insert rows that violate an RLS filter predicate. This post describes how to implement this blocking functionality using check and default constraints.

We’ll expand upon the example in a prior post, Building More Secure Middle-Tier Applications with Azure SQL Database using Row-Level Security. As a recap, we have a Sales table where each row has a TenantId, and upon opening a connection, our application sets the connection's CONTEXT_INFO to the TenantId of the current application user. After that, an RLS security policy automatically applies a predicate function to all queries on our Sales table to filter out results where the TenantId does not match the current value of CONTEXT_INFO.
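For reference, rls.sp_setContextInfoAsTenantId, used later in this post, encapsulates that step; a minimal sketch of what such a procedure does (the actual definition is in the earlier post):

CREATE PROCEDURE rls.sp_setContextInfoAsTenantId @TenantId int
AS
BEGIN
    -- CONTEXT_INFO is a session-scoped varbinary(128); store the 4-byte TenantId
    -- in it so the filter predicate can read it back with CONVERT
    DECLARE @encodedTenantId varbinary(128) = CONVERT(varbinary(4), @TenantId)
    SET CONTEXT_INFO @encodedTenantId
END
go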

Right now there is nothing preventing the application from errantly inserting a row with an incorrect TenantId or updating the TenantId of a visible row to a different value. For peace of mind, we’ll create a check constraint that prevents the application from accidentally inserting or updating rows to violate our filter predicate in this way:

-- Create scalar version of predicate function so it can be used in check constraints
CREATE FUNCTION rls.fn_tenantAccessPredicateScalar(@TenantId int)
RETURNS bit
AS
BEGIN
IF EXISTS(SELECT 1 FROM rls.fn_tenantAccessPredicate(@TenantId))
RETURN 1
RETURN 0
END
go

-- Add this function as a check constraint on our Sales table
ALTER TABLE Sales
WITH NOCHECK -- don't check data already in table
ADD CONSTRAINT chk_blocking_Sales -- needs a unique name
CHECK(rls.fn_tenantAccessPredicateScalar(TenantId) = 1)
go

Now if we grant our shared AppUser INSERT permissions on our Sales table and simulate inserting a row that violates the predicate function, the appropriate error will be raised:

GRANT INSERT ON Sales TO AppUser
go
EXECUTE AS USER = 'AppUser' -- simulate app user
go
EXECUTE rls.sp_setContextInfoAsTenantId 2 -- tenant 2 is logged in
go
INSERT INTO Sales (OrderId, SKU, Price, TenantId) VALUES (100, 'Movie000', 100, 1); -- fails: "The INSERT statement conflicted with CHECK constraint"
go
INSERT INTO Sales (OrderId, SKU, Price, TenantId) VALUES (101, 'Movie111', 5, 2); -- succeeds because correct TenantId
go
SELECT * FROM Sales -- now Movie001, Movie002, and Movie111
go
REVERT
go

Likewise for UPDATE, the app cannot inadvertently update the TenantId of a row to a new value:

GRANT UPDATE ON Sales TO AppUser
go
EXECUTE AS USER = 'AppUser'
go
UPDATE Sales SET TenantId = 99 WHERE OrderID = 2 -- fails: "The UPDATE statement conflicted with CHECK constraint"
go
REVERT
go

Note that while our application doesn’t need to specify the current TenantId for SELECT, UPDATE, and DELETE queries (this is handled automatically via CONTEXT_INFO), right now it does need to do so for INSERTs. To make tenant-scoped INSERT operations transparent for the application just like these other operations, we can use a default constraint to automatically populate the TenantId for new rows with the current value of CONTEXT_INFO.

To do this, we’ll need to slightly modify the schema of our Sales table:

ALTER TABLE Sales
ADD CONSTRAINT df_TenantId_Sales DEFAULT CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) FOR TenantId
go

And now our application no longer needs to specify the TenantId when inserting rows:

EXECUTE AS USER = 'AppUser'
go
EXECUTE rls.sp_setContextInfoAsTenantId 2
go
INSERT INTO Sales (OrderId, SKU, Price) VALUES (102, 'Movie222', 5); -- don't specify TenantId
go
SELECT * FROM Sales -- Movie222 has been inserted with the current TenantId
go
REVERT
go

At this point, our application code just needs to set CONTEXT_INFO to the current TenantId after opening a connection. After that, the application no longer needs to specify the TenantId; SELECTs, INSERTs, UPDATEs, and DELETEs will automatically apply only to the current tenant. Even if the application code does accidentally specify a bad TenantId on an INSERT or UPDATE, no rows will be inserted or updated and the database will return an error.

In sum, this post has shown how to complement existing RLS filtering functionality with check and default constraints to block unauthorized inserts and updates. Implementing these constraints provides additional safeguards to ensure that your application code doesn’t accidentally insert rows for the wrong users. We’re working to add built-in support for this blocking functionality in future iterations of RLS, so that you won’t need to maintain the check constraints yourself. We’ll be sure to post here when we have updates on that. In the meantime, if you have any questions, comments, or feedback, please let us know in the comments below.

Apply Row-Level Security to all tables -- helper script


Developing multi-tenant applications with Row-Level Security (RLS) just got a little easier. This post makes available a script that will automatically apply an RLS predicate to all tables in a database.

Applications with multi-tenant databases, including those using Elastic Scale for sharding, commonly have a “TenantId” column in every table to indicate which rows belong to each tenant. As described in Building More Secure Middle-Tier Applications with Azure SQL Database using Row-Level Security, the recommended approach for filtering out rows that don't belong to a tenant querying the database is to create an RLS security policy that filters out rows whose TenantId doesn't match the current value of CONTEXT_INFO. However, for large applications with perhaps hundreds of tables, it can be tedious to write out "ADD FILTER PREDICATE..." for every table when creating or altering the RLS security policy.

To streamline this common RLS use case, we’ve created a helper stored procedure to automatically generate a security policy that adds a filter predicate on all tables with a TenantId column. See below for syntax and three common usage examples.

Script available here: http://rlssamples.codeplex.com/SourceControl/latest#RLS-Auto-Enable.sql

 

Syntax:

CREATE PROCEDURE dbo.sp_enable_rls_auto (
/* The type for the tenant ID column. It could be short, int or bigint. */
@rlsColType sysname,

/* The name for the tenant ID column. All tables that match the column name & type will be affected. */
@rlsColName sysname,

/* The schema name where the policy will be applied.
If null (default), the policy will be applied to tables in all schemas in the database. */
@applyToSchema sysname = null,

/* Set to 1 to disable all existing policies that affect the identified target tables.
If set to 0 (default), this function will fail if there is an existing policy on any of these tables. */
@deactivateExistingPolicies bit = 0,

/* Schema name for new RLS objects. If it does not exist, it will be created. */
@rlsSchemaName sysname = N'rls',

/* The name of an existing function in the RLS schema that will be used as the predicate.
If null (default), a new function will be created with a simple CONTEXT_INFO = tenant ID filter. */
@rlsPredicateFunctionName sysname = null,

/* Set to 1 to allow CONTEXT_INFO = null to have access to all rows. Default is 0.
Not applicable if @rlsPredicateFunctionName is set with a custom predicate function. */
@isNullAdmin bit = 0,

/* If @isNullAdmin = 1, set to 1 to optimize the CONTEXT_INFO = null disjunction into a range query.
Not applicable if @rlsPredicateFunctionName is set with a custom predicate function. */
@isNullAdminOptimized bit = 1,

/* If set, the predicate function will allow only this user to access rows.
Use only for middle-tier scenarios, where this is the shared application user name.
Not applicable if @rlsPredicateFunctionName is set with a custom predicate function. */
@restrictedToAppUserName sysname = null,

/* Set to 1 to print the commands (on by default). */
@printCommands bit = 1,

/* Set to 1 to execute the commands (off by default). */
@runCommands bit = 0
)

 

Examples:

Example 1: Typical CONTEXT_INFO usage

Generate a security policy that adds a new filter predicate (using CONTEXT_INFO as described in Building More Secure Middle-Tier Applications with Azure SQL Database using Row-Level Security) on all tables with a "TenantId" column of type "int." Only allow access to "AppUser," the shared application user in our app's connection string. If CONTEXT_INFO is null, filter all rows by default.

EXEC sp_enable_rls_auto
@rlsColType = 'int',
@rlsColName = 'TenantId',
@applyToSchema = null,
@deactivateExistingPolicies = 1,
@rlsSchemaName = N'rls',
@rlsPredicateFunctionName = null,
@isNullAdmin = 0,
@isNullAdminOptimized = 0,
@restrictedToAppUserName = 'AppUser',
@printCommands = 1,
@runCommands = 0 -- set to 1 to execute output
go

Sample output:

CREATE FUNCTION [rls].[fn_predicate_TenantId_2015-03-30T17:36:51.010](@TenantId [int] )
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_accessResult
WHERE
DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID ('AppUser')
AND CONVERT([int], CONVERT(varbinary(4), CONTEXT_INFO())) = @TenantId
go

CREATE SECURITY POLICY [rls].[secpol_TenantId_2015-03-30T17:36:51.073]
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T17:36:51.010]([TenantId]) ON [dbo].[Sales],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T17:36:51.010]([TenantId]) ON [dbo].[Products],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T17:36:51.010]([TenantId]) ON [dbo].[PriceHistory],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T17:36:51.010]([TenantId]) ON [dbo].[OrderDetails]
go


Example 2: Custom predicate function

Generate a security policy that adds a custom predicate function as a filter predicate on all tables with a "TenantId" column of type "int." 

CREATE FUNCTION rls.customTenantAccessPredicate(@TenantId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessResult WHERE
(
DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('AppUser') -- shared app user
AND CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = @TenantId
)
OR
DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID('ReportUser') -- reporting user can see all rows
go

EXEC sp_enable_rls_auto
@rlsColType = 'int',
@rlsColName = 'TenantId',
@applyToSchema = null,
@deactivateExistingPolicies = 1,
@rlsSchemaName = N'rls',
@rlsPredicateFunctionName = N'customTenantAccessPredicate',
@isNullAdmin = 0, -- n/a
@isNullAdminOptimized = 0, -- n/a
@restrictedToAppUserName = null, -- n/a
@printCommands = 1,
@runCommands = 0 -- set to 1 to execute output
go

Sample output:

CREATE SECURITY POLICY [rls].[secpol_TenantId_2015-03-30T18:22:14.213]
ADD FILTER PREDICATE [rls].[customTenantAccessPredicate]([TenantId]) ON [dbo].[Sales],
ADD FILTER PREDICATE [rls].[customTenantAccessPredicate]([TenantId]) ON [dbo].[Products],
ADD FILTER PREDICATE [rls].[customTenantAccessPredicate]([TenantId]) ON [dbo].[PriceHistory],
ADD FILTER PREDICATE [rls].[customTenantAccessPredicate]([TenantId]) ON [dbo].[OrderDetails]
go


Example 3: Optimized "superuser" if CONTEXT_INFO is null

Same as Example 1, but if CONTEXT_INFO is null, make all rows visible to the application and utilize the performance optimization for disjunctions described in Row-Level Security for Middle-Tier Apps – Using Disjunctions in the Predicate.

EXEC sp_enable_rls_auto
@rlsColType = 'int',
@rlsColName = 'TenantId',
@applyToSchema = null,
@deactivateExistingPolicies = 1,
@rlsSchemaName = N'rls',
@rlsPredicateFunctionName = null,
@isNullAdmin = 1,
@isNullAdminOptimized = 1,
@restrictedToAppUserName = 'AppUser',
@printCommands = 1,
@runCommands = 0 -- set to 1 to execute output
go

Sample output:

CREATE FUNCTION [rls].[int_lo_2015-03-30T18:30:46.993]() RETURNS [int]
WITH SCHEMABINDING
AS BEGIN
RETURN CASE WHEN context_info() is null THEN
-2147483648 ELSE
convert([int], convert(varbinary(4), context_info())) END
END
go

CREATE FUNCTION [rls].[int_hi_2015-03-30T18:30:46.993]() RETURNS [int]
WITH SCHEMABINDING
AS BEGIN
RETURN CASE WHEN context_info() is null THEN
2147483647 ELSE
convert([int], convert(varbinary(4), context_info())) END
END
go

CREATE FUNCTION [rls].[fn_predicate_TenantId_2015-03-30T18:30:46.993](@TenantId [int] )
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_accessResult
WHERE
DATABASE_PRINCIPAL_ID() = DATABASE_PRINCIPAL_ID ('AppUser') AND
(@TenantId BETWEEN [rls].[int_lo_2015-03-30T18:30:46.993]() AND [rls].[int_hi_2015-03-30T18:30:46.993]())
go

CREATE SECURITY POLICY [rls].[secpol_TenantId_2015-03-30T18:30:47.047]
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T18:30:46.993]([TenantId]) ON [dbo].[Sales],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T18:30:46.993]([TenantId]) ON [dbo].[Products],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T18:30:46.993]([TenantId]) ON [dbo].[PriceHistory],
ADD FILTER PREDICATE [rls].[fn_predicate_TenantId_2015-03-30T18:30:46.993]([TenantId]) ON [dbo].[OrderDetails]
go

Row-Level Security: Performance and common patterns


This post demonstrates three common patterns for implementing Row-Level Security (RLS) predicates:

  1. Rows assigned directly to users
  2. Row assignments in a lookup table
  3. Row assignments from a JOIN

In addition, this post shows how RLS has performance comparable to what you’d get with view-based workarounds for row-level filtering. The benefits of using RLS instead of views include:

  • RLS reduces code complexity by centralizing access logic in a security policy and eliminating the need for an extra view on top of every base table
  • RLS avoids common runtime errors by requiring schemabinding and performing all permission checks when the policy is created, rather than when users query
  • RLS simplifies application maintenance by allowing users and applications to query base tables directly

To demonstrate the three common patterns, we’ll use RLS to filter rows in a Sales table based on increasingly complex criteria. To enable reasonable performance comparisons, we've populated this table with 50,000 rows of random data. 

Full demo script: https://rlssamples.codeplex.com/SourceControl/latest#RLS-Performance-Common-Patterns.sql 

 

Pattern 1: Rows assigned directly to users

The simplest way to use RLS is to assign each row directly to a user ID. A security policy can then ensure that rows can only be accessed by the assigned user. As described in Building More Secure Middle-Tier Applications with Azure SQL Database using Row-Level Security, it is common to use CONTEXT_INFO to store the user ID connecting to the database, and use RLS to filter out rows whose assigned user ID does not match.

In this example, we create a security policy that filters out rows whose SalesRepId does not match CONTEXT_INFO (using the appropriate type conversions):

CREATE FUNCTION rls.staffAccessPredicateA(@SalesRepId int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessResult
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = @SalesRepId -- @SalesRepId (int) is 4 bytes
go

CREATE SECURITY POLICY rls.staffPolicyA
ADD FILTER PREDICATE rls.staffAccessPredicateA(SalesRepId) ON dbo.Sales
go

We could have achieved equivalent functionality using a view:

CREATE VIEW vw_SalesA
AS
SELECT * FROM Sales
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = SalesRepId
go

Now SELECT * FROM Sales with RLS enabled returns the same results as SELECT * FROM vw_SalesA without RLS enabled. Moreover, if we examine the Actual Execution Plans for both queries using SSMS, we see that the query optimizer has chosen a very similar plan for both. Sometimes the RLS plan will be slightly better, other times it will be slightly worse. The specific plan can depend on a multitude of exogenous factors, but in general the plans for RLS and for views will be very similar. (Note: We’ll ignore the missing index recommendations here, since these queries are artificially simple and the recommendations would have us place an index on every column.)
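As a quick sanity check, a sketch (assuming the demo data and a hypothetical sales rep with ID 5) comparing both paths under the same CONTEXT_INFO:

DECLARE @rep varbinary(128) = CONVERT(varbinary(4), 5) -- hypothetical SalesRepId
SET CONTEXT_INFO @rep
SELECT COUNT(*) FROM dbo.Sales     -- filtered by the RLS policy
SELECT COUNT(*) FROM dbo.vw_SalesA -- filtered by the view's WHERE clause
go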

 

 

Pattern 2: Row assignments in a lookup table

A slightly more complex way to use RLS is to filter rows by looking up assignments in a helper table. For instance, we might have a helper table (“RegionAssignments”) mapping users to Regions. In our filtering logic, we can look up whether the current user should have access to each row based on the assignments stored in RegionAssignments. If one or more rows in RegionAssignments match the criteria, the corresponding row in the base table will be visible:

CREATE FUNCTION rls.staffAccessPredicateB(@Region nvarchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessResult FROM dbo.RegionAssignments
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = SalesRepId
AND Region = @Region
go

CREATE SECURITY POLICY rls.staffPolicyB
ADD FILTER PREDICATE rls.staffAccessPredicateB(Region) ON dbo.Sales
go

Or equivalently, with a view:

CREATE VIEW vw_SalesB
AS
SELECT Sales.* FROM Sales, RegionAssignments
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = RegionAssignments.SalesRepId
AND Sales.Region = RegionAssignments.Region
go

Again, selecting from Sales with RLS enabled yields the same rowset as selecting from vw_SalesB without RLS enabled. In this particular case, the query plans are identical:

 

 

Pattern 3: Row assignments from a JOIN 

A more complicated RLS pattern is to look up row assignments by joining multiple helper tables in the filtering logic. For instance, we might have one helper table (“RegionAssignments”) mapping users to Regions, and another (“DateAssignments”) mapping users to a StartDate and an EndDate. To filter so that users can only see rows in their assigned region and date interval, we could create the following predicate function and policy:

CREATE FUNCTION rls.staffAccessPredicateC(@Region nvarchar(50), @Date date)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS accessResult FROM dbo.RegionAssignments ra
INNER JOIN dbo.DateAssignments da ON ra.SalesRepId = da.SalesRepId
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = ra.SalesRepId
AND @Region = ra.Region
AND @Date >= da.StartDate
AND @Date <= da.EndDate
go

CREATE SECURITY POLICY rls.staffPolicyC
ADD FILTER PREDICATE rls.staffAccessPredicateC(Region, Date) ON dbo.Sales
go

Or equivalently, with a view:

CREATE VIEW vw_SalesC
AS
SELECT Sales.* FROM Sales, RegionAssignments
INNER JOIN DateAssignments on RegionAssignments.SalesRepId = DateAssignments.SalesRepId
WHERE CONVERT(int, CONVERT(varbinary(4), CONTEXT_INFO())) = RegionAssignments.SalesRepId
AND Sales.Region = RegionAssignments.Region
AND Sales.Date >= DateAssignments.StartDate
AND Sales.Date <= DateAssignments.EndDate
go

Once again, selecting from the base table with RLS enabled yields the same rowset as selecting from the view without RLS enabled. And once again, in this case the query plans are identical:

 

Summary

RLS allows you to implement filtering logic of arbitrary complexity; however, there are a handful of particularly common patterns as shown in this post. In general, RLS will have performance comparable to what you’d get if using views, while affording a number of benefits around security, maintenance, and convenience.

We’ll have more guidance around performance in future blog posts, but for now we recommend the following best practices:

  • Avoid joining too many helper tables in your predicate function: the more joins you have, the worse the performance.
  • If your predicate does a lookup in a helper table, try to put an index on the lookup column (see the sketch after this list).
  • Avoid using disjunctions (logical OR) in your predicate where possible, as there is a known performance issue described in Row-Level Security for Middle-Tier Apps – Using Disjunctions in the Predicate.
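For instance, a sketch of the indexing recommendation using the RegionAssignments helper table from Pattern 2 (the index name is illustrative):

CREATE NONCLUSTERED INDEX IX_RegionAssignments_SalesRepId
ON dbo.RegionAssignments (SalesRepId)
INCLUDE (Region) -- covers the predicate's Region comparison without a key lookup
go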

 

 

Announcing Transparent Data Encryption for Azure SQL Database


Available today, SQL Database Transparent Data Encryption (preview) protects your data and helps you meet compliance requirements by encrypting your database, associated backups, and transaction log files at rest without requiring changes to your application.

SQL Database TDE is based on SQL Server's TDE technology, which encrypts the storage of an entire database by using an industry-standard AES-256 symmetric key called the database encryption key. SQL Database protects this database encryption key with a service-managed certificate. All key management for database copying, Geo-Replication, and database restores anywhere in SQL Database is handled by the service - just enable it on your database with 2 clicks in the Azure Preview Portal: click ON, click Save, done.
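If you prefer scripting over the portal, the same setting can be flipped with T-SQL; a sketch, with a hypothetical database name, run against your server as an administrator:

ALTER DATABASE [MyDatabase] SET ENCRYPTION ON
GO
-- Check progress/state (encryption_state 3 = encrypted)
SELECT db_name(database_id) AS database_name, encryption_state
FROM sys.dm_database_encryption_keys
GO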

 

Transparent Data Encryption for Azure SQL Database is built on top of the same Transparent Data Encryption feature that has been running reliably on SQL Server since 2008. We have made updates to this core technology that will be available cloud-first in Azure SQL Database, including support for Intel AES-NI hardware acceleration of encryption, which reduces the CPU/DTU overhead of turning on Transparent Data Encryption.

Additionally, Transparent Data Encryption is built on top of the secret management service we use to isolate Azure SQL Database tenants. Azure SQL Database has for years securely managed the hardware and virtual machines that the service runs on top of, as evidenced by the many certifications and audits we have passed - see the Microsoft Azure Trust Center for more details. We use this infrastructure to manage unique certificates per Azure SQL Database server to protect your database encryption keys. We also use it to distribute these certificates as needed when you restore your database to a new server or set up Geo-Replication, and to update them when the certificate is rotated every 90 days for you. This enables a seamless experience where you just turn on Transparent Data Encryption and use the service without having to think about certificates or keys.

We hope this meets many of your needs for Encryption at Rest in a manner that lets you focus on the work that is important to you. For more information, see MSDN.


How to: Scale out multi-tenant apps using RLS and Elastic Database Tools


In response to a common customer ask, we've published guidance for developing multi-tenant applications on Azure SQL Database using row-level security (RLS) for tenant isolation and elastic database tools (formerly "Elastic Scale") for sharding. These technologies can be used together to flexibly and efficiently scale the data tier of a multi-tenant application, with support for both single- and multi-tenant shards. 

Walkthrough and demo here: Multi-tenant applications with elastic database tools and row-level security

 

 

Recommendations for using Cell Level Encryption in Azure SQL Database


When we introduced Transparent Data Encryption (TDE) to Azure SQL Database, we also introduced Cell-Level Encryption (CLE, also known as SQL Server key hierarchy).

For more details on TDE in Azure SQL Database, I recommend the Channel9 show for an excellent introduction: https://channel9.msdn.com/Shows/Data-Exposed/TDE-in-Azure-SQL-Database, and BOL: https://msdn.microsoft.com/en-US/library/dn948096.aspx.

The main reason to introduce CLE into Azure SQL Database is to increase compatibility between on-premises SQL Server functionality and Azure SQL Database. While the functionality is basically the same (with a few exceptions), there is also potential for misuse that we want to describe, along with recommendations on how to better use this functionality in the cloud or in mixed environments.

The first thing to notice is that in Azure, the key hierarchy is no longer rooted in an instance-specific Service Master Key (SMK); instead, the root is a certificate controlled and managed by the Azure SQL Database service, which means key management is simplified to the database-scoped key hierarchy.

The Master Key (MK) no longer needs password protection. This simplifies applications that are designed to be cloud-only, where disaster recovery strategies do not necessarily need to recover the MK using the password.
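For example, in Azure SQL Database the password clause is now optional (a sketch; on-premises SQL Server still requires it):

CREATE MASTER KEY
GO
-- On-premises, or when you want password-based recovery, keep the password clause:
-- CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<<Use a strong password>>'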

Another change is that syntax referencing files or executable files is not supported in Azure SQL Database. This includes creating backups of the MK or certificates, restoring backups, and importing certificates or asymmetric keys from files.

Additionally, it is worth emphasizing that because symmetric key and asymmetric key objects cannot be exported, data they encrypt or sign can be lost when copied to a different database using the Import/Export (I/E) functionality in SQL Database, which is based on logical data movement using bacpac files and the DACFx API. This limitation does not affect physical data replication scenarios such as backup files, database copy, or Geo-Replication.

The reason for this potential data loss is that for CLE there is no actual metadata linking a key to the data it protects. Such a mapping is typically part of the application's logic and remains transparent to a service such as Import/Export. When exporting data, services or applications that rely on logical data movement such as I/E will successfully copy all the ciphertext (typically stored in varbinary columns), but without the actual key material this ciphertext will no longer be usable.

To avoid this situation, we strongly recommend using certificates, which can be extracted and recreated from a binary representation, and creating symmetric keys with CREATE SYMMETRIC KEY using the KEY_SOURCE and IDENTITY_VALUE options, in such a way that the exact same key can be re-created elsewhere, avoiding potential data loss.

Example:

Connecting to a V12 Azure SQL Database:

declare @cer varbinary(MAX)
declare @pvk varbinary(MAX)
declare @cmd nvarchar(MAX)
declare @certName sysname = 'my_cert'
declare @pwd sysname = '<<Use a strong password, keep it safe!>>'
-- Extract the public certificate
select @cer = CERTENCODED(cert_id(@certName))
-- Extract the private key, encrypted by the password
select @pvk = CERTPRIVATEKEY(cert_id(@certName), @pwd)
-- Construct the T-SQL statement to recreate the certificate
SET @cmd = 'CREATE CERTIFICATE ' + quotename(@certName) + ' FROM BINARY = ' + sys.fn_varbintohexstr(@cer) +
' WITH PRIVATE KEY ( BINARY = ' + sys.fn_varbintohexstr(@pvk) + ', DECRYPTION BY PASSWORD = ''' + replace(@pwd, '''', '''''') + ''');'
print @cmd
go

declare @signature varbinary(max)
declare @certName sysname = 'my_cert'
declare @cmd nvarchar(MAX)
declare @data nvarchar(256) = 'Some value to sign'
-- Try signing some data & verify the signature on a database where you recreate the certificate
SELECT @signature = SIGNBYCERT(cert_id(@certName), HASHBYTES('SHA2_256', @data))
-- Construct the statement to verify the signature on a database where the cert was recreated
SET @cmd = 'SELECT VERIFYSIGNEDBYCERT(cert_id(''' + replace(@certName, '''', '''''') + '''), HASHBYTES(''SHA2_256'', N''' +
replace (@data, '''', '''''') +'''), '+ sys.fn_varbintohexstr(@signature) +')';
print @cmd
go

-- Create a symmetric key with a key_source & Identity_value arguments in order to be able to recreate the key in a different database
CREATE SYMMETRIC KEY [my_key] WITH ALGORITHM = AES_256,
KEY_SOURCE = '<<This secret will be used to derive the key, keep it secret, keep it safe>>',
IDENTITY_VALUE = '<<This value will be used to derive the key GUID, does not need to be a secret>>'
ENCRYPTION BY CERTIFICATE [my_cert];
GO

-- Encrypt some data and generate a statement that can be decrypted on a different database where the key was recreated
OPEN SYMMETRIC KEY [my_key] DECRYPTION BY CERTIFICATE [my_cert]
go
declare @ciphertext varbinary(max)
declare @certName sysname = 'my_cert'
declare @symkeyName sysname = 'my_key'
declare @symkeyGuid uniqueidentifier
declare @cmd nvarchar(MAX)
declare @data nvarchar(256) = 'Some value to encrypt'
SELECT @symkeyGuid = key_guid(@symkeyName)
SELECT @ciphertext = encryptbykey(@symkeyGuid, @data)
SET @cmd = 'SELECT CAST( decryptbykeyautocert(cert_id(''' + replace(@certName, '''', '''''') + '''), null, ' +
sys.fn_varbintohexstr(@ciphertext) + ') AS nvarchar(256));'
print @cmd
go
CLOSE SYMMETRIC KEY [my_key]
go

Connecting to a different V12 Azure SQL Database or an on-premises SQL Server:

-- 2nd database (on-premises or another Azure SQL DB)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<<Some password for DR>>'
go

-- recreate the certificate from the code generated on the original DB
-- The sample code here should also work, but feel free to replace the values
CREATE CERTIFICATE [my_cert] FROM BINARY = 0x308202aa30820192a003020102021061fcc48bf3dd0c8e427387e4263f3ab0300d06092a864886f70d01010505003011310f300d060355040313066d7963657274301e170d3135303530373230353435325a170d3136303530373230353435325a3011310f300d060355040313066d796365727430820122300d06092a864886f70d01010105000382010f003082010a028201010095df77a32b92c3acad76aa54998f6d24f8460ef0b071f72dfd56bcea0c5e1060026a794e4c08ad8ba3066142bd4f967ff9ed84456d7ea67eadfcbbe6a4becd60ad9bd8887af87eda064c6678399c3ea58627c2549ba3619e8490d902539109c762cbf47fcf66b85bcf3dc504cb437f4e837ee769d86b9c53fa4464f7b38e4066ab0882631a0bc83f36310a49ebb1ebd39799d0669e09c9697c08d38210501a8ebd64bea06f5dc869d5396a7c8b100c4dd96df6857f79c08abf275da207715a4d2e0b3442a206346ed4b18cd54e7bad8575d9f2dc53499ef5cdb683e97856bea4debecdccf2b7f481fe43a7782bea9bf66c92c6417b6e786aa174355091caeb670203010001300d06092a864886f70d0101050500038201010082668b3866e36c95c7ed6debc28f9fd2c5a0e1f48c934c28c32ee6f2e78be45a59529aeafc3e3b0033aecdc53fd62420fa575e18b6e19251231e20df64ba90c3b62c383c5b892f8ebda74be3b3532e388dad079be6d76c8929df8bf02a023f191215cb18d02aa94a4dbb626c6bbeb0bf965d3d433b4ced8cef2dfd457cf367b32945c7c253685d488dccd3030e0329c4ff268abf131e92ac5c3dd406ab20a6a648378f8cf5c98cb64043b85f5d1312ff9bb999f3a7c48bbdf59a591bbe0de4296ec3b27d04d723d8ea8d0bcbe30d7a297513cf7943b764dd7a1ddd0233522aea03b002495e203a2fee28959238ef58991a6236584ec8c3a54709364bf1629620 WITH PRIVATE KEY ( BINARY = 0x1ef1b5b00000000001000000010000001000000094040000c2e2e0609fb9683e64eeef0faee225cf0702000000a40000fe53960114c71cefa5c841ee24aca3bcedf01d4c1f3a2cd16700bedb83831995354b3b2fd9409ffa394c47bbd095bffa4eb4df0af145b2c53c2caf76c731b6a432e92d884d0645af634d32beafb7033b812a8dc8cbd355fe6e542104a61d2a84545ef1126bca1ab7356431794767b19575be77b8488732c350abb4d962e4ef8b298dbc79d634a0e441ef095835bbfce095a2a7c16b76fa86170f05564b0d0fb13c146c168f16be708dbf8396f1f7fe199edb5cdc134f547ed9cfe791598481664e10fbf687528b2e7864b625a15b12a04682ba6a0c4f1087ece53e52c0ef4568fd5200f3a0b289a367fc407549f15f93505dcf69603381f69a0b175c584470e7db8e301249718ee291c43fa8c94f742090ad06fccf538fd673a84b21e23bdd57edebf45717cec9209660e7d977c66c38a945ad0461cc870a1eb50eb94711f96604d4b76666bd7da7fe2ffeb8da7b814278a6c875a95c39ffe4e402b14d4d2b512d0b2f73f1e01d7e51e98b55a77f3afd33ef5d946a138a93b14bcedf6f86ef854dfe19bb88d7eae0366d9c9b0e57bbae50344a7e40acbcc6aeb9c9c44933efd31ac5c4a4e935b6c0e3ad362279868170959deae06f44a1af0d2e8c1e6ae8e7066c71bbdad05951129f6caf13471613e6ce78c5e20e7a792bdbc578c850eaf22c72d18233234178d1f285bff70960029a913280aa9727c485426fc1126e61570bb30b5d552eadf525b6e11d5d89477441c72d160c51e4a782e1b4dfe36fba1e414b13bf3d4b6b1511921546cda12ab8a3a222551fa2caeba0ea5a29ef2df53dd07624a6185b2eef081a123af21d3c0434960c4048609600623910885e64451bd2caa20e13a58016bdf250401935af7ee2e52d02c2af2c44af6c5ef06dca7e1a612528f7545a7d4e36e20d8591bc522611713fa1c1e623a11cd460cdc5cb663ce674a72b3799fcec76a4a63fe0df4f8d64dfc4b304aa27e251f5b7813e1ffe50c3201354231b6cf8ab2237fc45c1eeae7a57364a615f7ef00720ac7589cef8ea31e53f418883a059421f0945f50109e70963acde5b534c49d38cd13c34b6c1271f84dd4ecae04f1eed6a3855b3ff879d10c907d2946a1e62118d9feec28fcd6d7b00c893f565919d82ba54be3734d08a38f8ed2f1e73b5c6b0393ac015f0edf2dcfc8378d078537f7697be07ac0417b3597c1741629b33503779adc3a35c6152fb83b7c20eb3bea780afa4472d993a9ebc858fcb28a63687fe1dfc52449b01a3a19c6885d390635880c32581dbe2cf591e28a20051901da05ba12daecdc5630e41855e43c5991e337f0c81a07e37f21d44622dc50ff5c9c7231e0f77ad957c769411239cf5f245d469ffeb380572c616fed2606dc6ead65fa7268c614cdeb9cb5e164ce9afa944a9b22d5d6fb2840855aa583c294a504cd0d03fa98ebb05d20b37e4cece534dcc3516908b48894cf4048b7e9334eca24a3df527289df41e16462249005cdc3c9e707207f66d52a5ff1318570c0af9a163bbc791dcf239a8aeec3d994c36d2dbb07c0b6dfc77336ea190f367f2361235e8bdfcff7fd928dbfbe6062df8edd6262447605e496fffef30a47c21ba6a365ed0e1ea78cbb0d02d25dccf1a7f99de7c790d0d8785c197, DECRYPTION BY PASSWORD = '<<Use a strong password, keep it safe!>>');
go
SELECT VERIFYSIGNEDBYCERT(cert_id('my_cert'), HASHBYTES('SHA2_256', N'Some value to sign'), 0x8439b62f9e2e10f6f9a5868034b773a020951661169434029ce1ad51f2fb60f6df7872f73d13510713f4584fef8e4dc73eb29c481cbcbd234eb118c8ba8003ae12b512f6be40af54d8bb9ae201a1d480da99532c0fc5f46d537107fd37f3e47d2124612ef1d9a5b7d8a448f1d847f85672235cd05e1a7e8ec1c23ebb4e5d288bfb53cc8639abbfa8ed3f32f2508bcb5b2417297553c567dc410a3c20ad5d471509aaf55a15306fe436c0dbf44c52cb1b7977bd59c22bceb68094879dd61da97eeed47a8b9be997c0b52f0ef57db638c7910c69ac8922f255444a0ed4088b191ed285c5a9cd507a724664714708d5b30552b74e858f8fa99cb50cabd7348dc191)
go

-- recreate the key using the same values as on the original DB
-- The sample code here should also work, but feel free to replace the values
CREATE SYMMETRIC KEY [my_key] WITH ALGORITHM = AES_256,
KEY_SOURCE = '<<This secret will be used to derive the key, keep it secret, keep it safe>>',
IDENTITY_VALUE = '<<This value will be used to derive the key GUID, does not need to be a secret>>'
ENCRYPTION BY CERTIFICATE [my_cert];
GO

-- Decrypt some data using the code generated on the original DB
SELECT CAST( decryptbykeyautocert(cert_id('my_cert'), null, 0x00ba1df9d003f502a8986a10da279efe0100000056e0ec1e2d6b2537a8258c1c990d750a8451aba962dc830dc5a3801e1681b7459e6abeb084854a75eadf0b5adec4fb9950bedddef14214825c82d9fb2f2fa70d27ed1311c58cbe1c0105164780475d1a) AS nvarchar(256));
go

As you can see in the samples above, using CERTIFICATE objects with CLE gives you the opportunity to export and recreate the certificate on any other system for scenarios where logical data migration is needed. The same is true when using the KEY_SOURCE and IDENTITY_VALUE options with CREATE SYMMETRIC KEY. Using these techniques will allow you to recreate the same CLE objects in a different database and move data between them without data loss.

If your scenario does not involve any logical data movement (i.e. all data is and will always be in the same database), you may not need these techniques to transport the CLE objects (CERTIFICATE and SYMMETRIC KEY) from one database to another; but if you know logical data movement will be necessary, or you are simply not sure, we strongly recommend using CLE with caution and applying the techniques shown here to protect against accidental data loss.

Using CLR to replace xp_cmdshell for specific tasks


As we have discussed before, xp_cmdshell is a mechanism to execute arbitrary calls into the system, and because of its flexibility it is typically abused and leads to serious security problems.

In most cases, what the sysadmin really wants is to enable only a handful of specific tasks on the system, without the full flexibility that comes from running xp_cmdshell directly.

One approach to constrain access to specific tasks is to enable xp_cmdshell through a signed module. For detailed information on this approach, we have some articles on the SQL Server Security blog, but Erland Sommarskog also has a very nice article on the subject that I would recommend: http://www.sommarskog.se/grantperm.html.

An alternative that I personally prefer is to create SQL CLR modules that accomplish the specific tasks. Using SQL CLR modules, it is possible to create a finely targeted escalation path that enables users to do exactly what they need, while making it easy to write parameter verification and a clear parameterization that helps you avoid command injection.

For example, let's create a library that helps copy and delete files in the OS; you can easily add checks for specific paths and make sure that the behavior of the module is exactly what you expect:

using System;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.IO;

public class SqlClrUserDefinedModules
{
    // Only files under this directory may be touched by these procedures
    readonly private static string _approvedDirectory = @"C:\temp";

    [SqlProcedure()]
    public static void DeleteFile(SqlString filename)
    {
        FileInfo fi = new FileInfo(filename.Value);
        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"File is not located in an approved directory");
        }
        File.Delete(filename.Value);
    }

    [SqlProcedure()]
    public static void CopyFile(SqlString filename, SqlString destinationFilename)
    {
        FileInfo fi = new FileInfo(filename.Value);
        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"Source file is not located in an approved directory");
        }

        fi = new FileInfo(destinationFilename.Value);
        if (!fi.DirectoryName.Equals(_approvedDirectory, StringComparison.InvariantCultureIgnoreCase))
        {
            throw new Exception(@"Destination file is not located in an approved directory");
        }

        File.Copy(filename.Value, destinationFilename.Value);
    }
}

Because it accesses the OS file system, in order to use this assembly in your database you will need the EXTERNAL ACCESS permission set. This gives you two choices:

1) Trust the DB where you are storing it

2) Trust the module via a strong name or Authenticode

In our example, I decided to use a strong name, so I will create an ASYMMETRIC KEY in the master DB by extracting it from the DLL file itself, grant the right permission (EXTERNAL ACCESS ASSEMBLY) to a login mapped to that key, and enable CLR in case it was not already enabled:

USE [master]
GO

CREATE ASYMMETRIC KEY [snk_external_access_clr] FROM EXECUTABLE FILE = 'E:\temp\SqlUserDefinedModules.dll'
go

CREATE LOGIN [snk_external_access_clr] FROM ASYMMETRIC KEY [snk_external_access_clr]
GO

GRANT EXTERNAL ACCESS ASSEMBLY TO [snk_external_access_clr]
go

EXEC sp_configure 'clr enabled', 1; RECONFIGURE;
go

The next step is to connect to the DB where we are going to host the assembly and create it.

NOTE: because we are trusting the module via a digital signature (Strong Name), we are trusting the module as a whole in the system, regardless of the database where it is hosted. Make sure to take this into account when using EXTERNAL ACCESS or UNSAFE assemblies.

CREATE ASSEMBLY [SqlUserDefinedModules] FROM 'E:\temp\SqlUserDefinedModules.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
go

CREATE SCHEMA [SqlClrUserDefinedModules]
go

CREATE PROCEDURE [SqlClrUserDefinedModules].[DeleteFile]
(@filename nvarchar(2048))
AS EXTERNAL NAME [SqlUserDefinedModules].[SqlClrUserDefinedModules].[DeleteFile];
go

CREATE PROCEDURE [SqlClrUserDefinedModules].[CopyFile]
(@filename nvarchar(2048), @destinationFilename nvarchar(2048))
AS EXTERNAL NAME [SqlUserDefinedModules].[SqlClrUserDefinedModules].[CopyFile];
go

At this point, you can simply grant normal users in the database permission to execute the modules. For example:

CREATE USER [clr_test_user] WITHOUT LOGIN
go

GRANT EXECUTE ON SCHEMA::[SqlClrUserDefinedModules] TO [clr_test_user]
go
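A quick test of the new escalation path (the file names are hypothetical; both paths must be under the approved C:\temp directory or the module throws):

EXECUTE AS USER = 'clr_test_user'
go
EXEC [SqlClrUserDefinedModules].[CopyFile] N'C:\temp\input.txt', N'C:\temp\input_copy.txt'
go
EXEC [SqlClrUserDefinedModules].[DeleteFile] N'C:\temp\input_copy.txt'
go
REVERT
go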

Hopefully this example will be useful for building custom CLR modules that can replace any xp_cmdshell usage you may have, in such a way that the CLR modules are more secure and targeted.

Apply Row-Level Security automatically to newly created tables


We have discussed before that applications with multi-tenant databases, including those using Elastic Scale for sharding, commonly have a “TenantId” column in every table to indicate which rows belong to each tenant.

In that previous post, we shared an SP that helps streamline the creation of a security policy that adds a filter on the TenantId column for all existing tables. A common request we have heard since then is for a mechanism to apply the same policy automatically to newly created tables.

To streamline the create-table scenario, it is possible to apply a DDL trigger to CREATE TABLE events and automatically alter the policy to add the predicate we are using for all other TenantId-enabled tables.

Because triggers by default execute under the same context as the principal who ran the operation that invoked them, and the ALTER SECURITY POLICY DDL has very high permission requirements, we will need a few tricks involving impersonation and module signing to make sure all the trigger's operations work as expected.

To make sure we can potentially reuse the code that adds a table to a policy (in case we need to reuse it on ALTER TABLE DDL, or in the future for other objects), I will separate the ALTER SECURITY POLICY DDL logic into a separate SP. We made the SP as generic as possible so it can be reused in any database. Notice that the SP assumes the existence of a SECURITY POLICY.

-- First create a sproc that our trigger will call to add a table to an existing security policy
-- This SP will perform the ALTER SECURITY POLICY DDL
--
CREATE PROCEDURE dbo.sp_add_table_to_policy (
    @rlsPolicySchema sysname,
    @rlsPolicy sysname,
    @rlsPredicateSchema sysname,
    @rlsPredicateName sysname,
    @targetSchema sysname,
    @targetTable sysname,
    @targetColName sysname,
    @forcePolicy bit = 0 -- Use this parameter to control whether the @targetColName is mandatory or not
)
AS
BEGIN
    IF( @forcePolicy = 0 )
    BEGIN
        IF( NOT EXISTS (SELECT * FROM sys.columns WHERE object_id = object_id(quotename(@targetSchema) + N'.' + quotename(@targetTable)) AND name = @targetColName))
        BEGIN
            print 'Skipping policy creation since the table does not include the target column'
            return;
        END
    END

    DECLARE @cmd nvarchar(max);
    SET @cmd = N'ALTER SECURITY POLICY ' + quotename(@rlsPolicySchema) + N'.' + quotename(@rlsPolicy) + N'
    ADD FILTER PREDICATE ' + quotename(@rlsPredicateSchema) + N'.'+ quotename(@rlsPredicateName) + N'(' + quotename(@targetColName) + N')
    ON ' + quotename(@targetSchema) + N'.' + quotename(@targetTable) + N';'
    EXECUTE( @cmd )
END
go

-- Create certificate for special user, and use it to sign the sproc and make sure we will have the right permissions when executing it
--
CREATE CERTIFICATE cert_rls ENCRYPTION BY PASSWORD = '<<ThrowAway password124@>>' WITH SUBJECT = 'RLS policy trigger'
go
CREATE USER cert_rls FOR CERTIFICATE cert_rls
go
GRANT REFERENCES TO [cert_rls]
GRANT ALTER ANY SECURITY POLICY TO [cert_rls]
GRANT SELECT ON [rls].[fn_predicate_TenantId] TO [cert_rls]
GRANT ALTER ON [rls].[fn_predicate_TenantId] TO [cert_rls]
GRANT ALTER ON SCHEMA::[rls] TO [cert_rls]
go
ADD SIGNATURE TO [dbo].[sp_add_table_to_policy] BY CERTIFICATE [cert_rls] WITH PASSWORD = '<<ThrowAway password124@>>'
go
ALTER CERTIFICATE [cert_rls] REMOVE PRIVATE KEY
go
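
If you want to confirm that the signature was applied, a quick sketch using the catalog views is to join sys.crypt_properties with sys.certificates:

SELECT o.name AS module_name, cp.crypt_type_desc, c.name AS certificate_name
FROM sys.crypt_properties cp
JOIN sys.objects o ON o.object_id = cp.major_id
JOIN sys.certificates c ON c.thumbprint = cp.thumbprint
WHERE o.name = N'sp_add_table_to_policy';
go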

Then we create the actual DDL trigger, where we specify the values to pass to the previously created SP as well as the desired behavior (e.g., whether the existence of the “TenantId” column is mandatory or optional). For example:

-- Create a special user with elevated permissions that the trigger can use to execute the sproc to apply the policy (least privilege)
CREATE USER [user_rls_trigger] WITHOUT LOGIN
go

GRANT EXECUTE ON [dbo].[sp_add_table_to_policy] TO [user_rls_trigger]
go

-- Create a trigger on CREATE TABLE DDL to add a filter predicate whenever a table is created
CREATE TRIGGER trig_apply_policy ON DATABASE
WITH EXECUTE AS 'user_rls_trigger'
AFTER CREATE_TABLE
AS
    -- Change these params depending on your scenario
    DECLARE @forcePolicy bit = 1 -- if 1, prevents you from creating a new table without the target column (e.g. tenantId)
    DECLARE @targetColumnName sysname = 'tenantId'; -- target column for the filter predicate
    DECLARE @rlsPolicySchema sysname = 'rls';
    DECLARE @rlsPolicyName sysname = 'secpol_TenantId';
    DECLARE @rlsPredicateSchema sysname = 'rls';
    DECLARE @rlsPredicateName sysname = 'fn_predicate_TenantId';

    DECLARE @schema sysname
    DECLARE @tableName sysname
    DECLARE @data xml
    SET @data = EVENTDATA()
    SET @schema = @data.value('(/EVENT_INSTANCE/SchemaName)[1]', 'nvarchar(256)')
    SET @tableName = @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)')
    BEGIN TRY
        EXEC [dbo].[sp_add_table_to_policy] @rlsPolicySchema, @rlsPolicyName, @rlsPredicateSchema, @rlsPredicateName, @schema, @tableName, @targetColumnName, @forcePolicy;
    END TRY
    BEGIN CATCH
        DECLARE @err int = error_number()
        DECLARE @msg nvarchar(256) = error_message()
        raiserror( N'Table cannot be added to policy; it requires a column named %s in order to apply the filter predicate. Inner error %d: %s',
            12, 1, @targetColumnName, @err, @msg )
    END CATCH
go
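
To see the trigger in action, you can create a new table that includes the target column and then inspect the policy's predicates. The Orders table below is hypothetical and used only for this verification sketch:

-- Creating this table fires the DDL trigger, which adds it to the policy
CREATE TABLE [dbo].[Orders] (OrderId int, tenantId int);
go

-- The new table should now appear among the policy's filter predicates
SELECT sp.name AS policy_name,
       object_name(pr.target_object_id) AS table_name,
       pr.predicate_definition
FROM sys.security_policies sp
JOIN sys.security_predicates pr ON pr.object_id = sp.object_id;
go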

For the full sample code, please visit: http://rlssamples.codeplex.com/SourceControl/latest#RlsTrigger.sql

I would also recommend reading up on module signing and dynamic SQL with module signatures, both of which are directly relevant to the technique shown here.

Getting Started With Always Encrypted


The recently released SQL Server 2016 Community Technology Preview 2 introduced Always Encrypted, a new security feature that ensures sensitive data is never seen in plaintext in a SQL Server instance. Always Encrypted works by transparently encrypting the data in the application, so that SQL Server only ever handles the encrypted data, never plaintext values. Even if the SQL instance or the host machine is compromised, all an attacker can get is the ciphertext of the sensitive data.

We begin this series of articles on Always Encrypted with a simple example to help everyone get started. We will show how to develop a simple console application that uses Always Encrypted to protect patient information stored in a database.

For this example, you will need to install the following:

  1. Database Engine from CTP2 of SQL Server 2016 (on a SQL Server machine).
  2. SQL Server Management Studio from CTP2 of SQL Server 2016 (on your development machine).
  3. Visual Studio, preferably 2015 RC (on your development machine).

Create a Database Schema using Always Encrypted

For this simple example, we will perform the following steps using SSMS (SQL Server Management Studio) on the development machine:

  1. Create a local, self-signed certificate on the development machine, which will act as a column master key (CMK). The CMK will be used to protect column encryption keys (CEKs), which encrypt the sensitive data. We will then create a column master key definition object in the database, which stores information about the location of the CMK. Note that the certificate will never be copied to the database or to the SQL Server machine.
  2. Create a column encryption key on the development machine, encrypt it using the CMK, and then create a column encryption key object in the database, uploading the encrypted value of the key.
  3. Create a simple table with encrypted columns.

Step 1 - Configure a Column Master Key

a) Create a new database named Clinic.

b) Using Object Explorer, locate and open the Always Encrypted Keys folder under Security for your database. Right-click on Column Master Key Definitions and select New Column Master Key Definition…. This will open a dialog which you will use to define a column master key for your database. The easiest option for developing new apps using Always Encrypted is to use a certificate, stored in your personal Certificate Store, as a column master key.

c) Simply enter CMK1 as the name of your column master key, click Generate Self-Signed Certificate, and click OK. This will generate a self-signed certificate, put it in your personal store (Certificate Store: Current User), and create a definition of the column master key in the database.

Step 2 - Configure a Column Encryption Key

To generate a column encryption key that will be used to encrypt sensitive data in the database, right-click on the Column Encryption Keys folder, select New Column Encryption Key, enter CEK1 as a key name and select CMK1 as an encrypting column master key for your new column encryption key. Once you click OK, a new column encryption key gets created, encrypted with the certificate you configured in step 1, and the encrypted value is uploaded to the database.
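
For reference, the SSMS dialogs in steps 1 and 2 issue key-metadata DDL on your behalf. A sketch of the equivalent statements is shown below; note that the exact syntax has varied across pre-release builds, and both the certificate thumbprint and the encrypted key value are placeholders that the tooling generates for you:

-- Column master key metadata: points at the certificate in the Current User store.
-- <thumbprint> is a placeholder for the actual certificate thumbprint.
CREATE COLUMN MASTER KEY [CMK1]
WITH (
    KEY_STORE_PROVIDER_NAME = N'MSSQL_CERTIFICATE_STORE',
    KEY_PATH = N'CurrentUser/My/<thumbprint>'
);
GO

-- Column encryption key metadata: stores the CEK value encrypted by the CMK.
-- The ENCRYPTED_VALUE blob is elided here; SSMS generates the real value.
CREATE COLUMN ENCRYPTION KEY [CEK1]
WITH VALUES (
    COLUMN_MASTER_KEY = [CMK1],
    ALGORITHM = 'RSA_OAEP',
    ENCRYPTED_VALUE = 0x016E000001 /* full blob elided */
);
GO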

 

Step 3 – Create a Table using Always Encrypted:

Using a New Query window in SSMS, issue the following statement:

CREATE TABLE [dbo].[Patients](
[PatientId] [int] IDENTITY(1,1),
[SSN] [nvarchar](11) COLLATE Latin1_General_BIN2
ENCRYPTED WITH (ENCRYPTION_TYPE = DETERMINISTIC,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
COLUMN_ENCRYPTION_KEY = CEK1)
NOT NULL,
[FirstName] [nvarchar](50) NULL,
[LastName] [nvarchar](50) NULL,
[MiddleName] [nvarchar](50) NULL,
[StreetAddress] [nvarchar](50) NULL,
[City] [nvarchar](50) NULL,
[ZipCode] [int] NULL,
[State] [nvarchar](50) NULL,
[BirthDate] [datetime2]
ENCRYPTED WITH (ENCRYPTION_TYPE = RANDOMIZED,
ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256',
COLUMN_ENCRYPTION_KEY = CEK1)
NOT NULL,
PRIMARY KEY CLUSTERED ([PatientId] ASC) ON [PRIMARY] )
GO

The above T-SQL creates the Patients table with two encrypted columns: SSN and BirthDate. SSN is configured for deterministic encryption, which supports equality lookups, joins, and GROUP BY. BirthDate uses randomized encryption, which does not support any operations; that is fine here, as the app will not perform any computations on the BirthDate column.

 

Create an Application using Always Encrypted

Now that we have the Always Encrypted keys and the schema configured, we can create a small application that will be able to insert data into the Patients table & query it.

In Visual Studio, we will create a new console application using C#. Since the SqlClient enhancements that support Always Encrypted were introduced in .NET Framework 4.6, we need to ensure the application uses the right version of the framework. Right-click the project, select Properties, go to the Application tab, and make sure the Target Framework option is set to “.NET Framework 4.6”.

Next we will add very simple code that connects to the database, inserts and selects data using SqlClient. You will notice that the only change required to use Always Encrypted is including “Column Encryption Setting=Enabled;” in the connection string. The complete code is included as a file attachment.

For this example, we enable the Column Encryption Setting in the connection string by using a SqlConnectionStringBuilder object and setting SqlConnectionStringBuilder.ColumnEncryptionSetting to Enabled… and that’s pretty much it.

 strbldr.ColumnEncryptionSetting = SqlConnectionColumnEncryptionSetting.Enabled;

Something to note is that, in order to send values targeting encrypted columns, you need to use the SqlParameter class; it is not possible to pass such values as literals.

 cmd.CommandText = @"INSERT INTO [dbo].[Patients] ([SSN], [FirstName], [LastName], [BirthDate]) VALUES (@SSN, @FirstName, @LastName, @BirthDate);";

SqlParameter paramSSN = cmd.CreateParameter();
paramSSN.ParameterName = @"@SSN";
paramSSN.DbType = DbType.String;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Value = ssn;
paramSSN.Size = 11;
cmd.Parameters.Add(paramSSN);

SqlParameter paramBirthdate = cmd.CreateParameter();
paramBirthdate.ParameterName = @"@BirthDate";
paramBirthdate.DbType = DbType.DateTime2;
paramBirthdate.Direction = ParameterDirection.Input;
paramBirthdate.Value = birthdate;
cmd.Parameters.Add(paramBirthdate);

cmd.ExecuteNonQuery();

cmd.CommandText = @"SELECT [SSN], [FirstName], [LastName], [BirthDate] FROM [dbo].[Patients] WHERE [SSN] = @SSN;";

SqlParameter paramSSN = cmd.CreateParameter();
paramSSN.ParameterName = @"@SSN";
paramSSN.DbType = DbType.String;
paramSSN.Direction = ParameterDirection.Input;
paramSSN.Value = ssn;
paramSSN.Size = 11;
cmd.Parameters.Add(paramSSN);

SqlDataReader reader = cmd.ExecuteReader();

At this point, many readers may doubt that we really encrypted anything; after all, the application seems to handle plaintext as naturally as before. So, how can we verify that the data was properly encrypted?

We can use a plain query in SSMS to check. Because an ordinary SSMS connection does not have the column encryption setting enabled, selecting from the table shows the SSN and BirthDate columns as raw binary ciphertext.
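
A minimal sketch of such a check:

SELECT [PatientId], [SSN], [FirstName], [LastName], [BirthDate]
FROM [dbo].[Patients];
-- SSN and BirthDate come back as varbinary ciphertext rather than readable values
GO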

Conclusion

As we have seen, Always Encrypted transparently encrypts/decrypts sensitive data in the application as long as the application has access to the certificate acting as a CMK. Users and applications without access to the CMK, including SQL Server itself, will not be able to decrypt the sensitive data.

If you connect SQL Profiler to the database while running this application, you will notice that SQL Server will receive data corresponding to the encrypted values only as ciphertext, never as plaintext.

Special thanks to the Always Encrypted team for their help writing this article.
