SQL Server

Auditing login attempts with SQL Server

Today’s quest is similar to the blog post that I wrote two months ago about auditing security changes: we’re admins of our non-production SQL Server environments and we’d like to put some auditing in place. In this case, we’d like to store all the login attempts on our database/server.

The core issue is that we have many users and logins on our databases and we have serious doubts about their respective needs. The root cause is identified: sometimes, for a short period of time, we make exceptions to our own rules and let a few friendly projects access our DEV database. In other cases, we connect our own solution in the DEV environment to the QA environment of another solution. Why? Planning, data quality issues, … we have valid reasons to do it … but these exceptions should be removed as soon as possible. And you know what? People forget. Nowadays, on our largest solution, we have 20 users but only 7 of them are expected and documented … the others should be removed. But before executing this cleanup, we’d like to be sure that these users are not effectively used by other solutions. If they are, we’ll first need to update the configuration of the corresponding solution.

To know whether a user is effectively in use on a database, you need to capture the connection attempts (successes and failures). Technically, you have several ways to implement this in SQL Server, but I decided to use the audit feature to accomplish this task.

[Illustration: who-is-attempting-to-log-to-your-database]

The first step is always to create an audit on the server or database (here, I’d like to track the whole server).

create server audit
    [Audit Login Informations]
to file
(   FILEPATH = N'N:\Temp\Trace\'
    , MAXSIZE = 1024 MB
    , MAX_FILES = 10
    , RESERVE_DISK_SPACE = OFF
)
with
(   QUEUE_DELAY = 1000      -- buffer events for up to one second before writing
    , ON_FAILURE = CONTINUE -- don't halt the instance if the audit can't write
    , AUDIT_GUID = '33e055a9-1924-47c3-9777-e8875a5bcd37'
);

This audit must then be activated:

alter server audit
    [Audit Login Informations]
with (state=on);
go

Then the specification of this audit must be described. In our case, we’ll intercept five groups of events related to login attempts (successful or failed), covering both classic logins and contained database authentication, plus database logouts.

create server audit specification
    [Audit Login Informations specification]
for server audit [Audit Login Informations]
    add (FAILED_DATABASE_AUTHENTICATION_GROUP)
    , add (FAILED_LOGIN_GROUP)
    , add (SUCCESSFUL_DATABASE_AUTHENTICATION_GROUP)
    , add (SUCCESSFUL_LOGIN_GROUP)
    , add (DATABASE_LOGOUT_GROUP)
with (state=on);
go
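
At this point, both the audit and its specification should be active. If you want to double-check, here is a quick sanity check against the system views (a minimal sketch, using the names created above):

-- Is the audit running, and where is it writing?
select
    name
    , status_desc
    , audit_file_path
from
    sys.dm_server_audit_status
where
    name = 'Audit Login Informations';

-- Is the specification enabled?
select
    name
    , is_state_enabled
from
    sys.server_audit_specifications
where
    name = 'Audit Login Informations specification';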

In my experience, this kind of script is more useful and less fragile if you add a cleanup phase before creating your audit and its specification.

Let’s start by testing whether the audit already exists. The view sys.dm_server_audit_status will let you check whether the audit exists and whether it’s running. If it does, you need to stop it and drop it (and its specification).

if exists(
    select
         *
    from
         sys.dm_server_audit_status
    where
         name='Audit Login Informations'
)
begin
    alter server audit specification
        [Audit Login Informations specification]
    with (state=off);

    drop server audit specification
        [Audit Login Informations specification];

    alter server audit
        [Audit Login Informations]
    with (state=off);

    drop server audit
        [Audit Login Informations];
end

That’s it! You can now query your audit to get all the connection attempts on your server by means of this simple query:

declare @LogFilePath varchar(255);
set @LogFilePath = N'N:\TEMP\Trace\';

declare @AuditName varchar(255);
set @AuditName = N'Audit Login Informations';

-- Retrieve the actual path configured for the audit.
select
    @LogFilePath=log_file_path
from
    sys.server_file_audits
where
    name=@AuditName;

-- Build the file name pattern; the spaces of the audit name
-- are encoded in the generated file names.
set @LogFilePath=@LogFilePath
    + REPLACE(@AuditName, ' ', '%5')
    + '*.sqlaudit';

select
    session_server_principal_name
    ,database_name
    ,object_name
    ,m.class_type_desc
    ,statement
    -- Convert the UTC event time to the server's local time zone.
    ,convert(datetime,
        switchoffset(
            convert(datetimeoffset, event_time)
            , datename(TzOffset, sysdatetimeoffset())
        )
    ) as local_event_time
from
    sys.fn_get_audit_file (@LogFilePath,default,default) f
inner join
    sys.dm_audit_class_type_map m
    on m.class_type=f.class_type
order by
    event_time desc;
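
Since the initial goal was to find out which of our 20 logins are effectively used, a small variation aggregating per login can help. This is a sketch reusing the @LogFilePath variable built above (run it in the same batch):

-- Count the attempts per login to spot which accounts are actually in use.
select
    session_server_principal_name
    , min(event_time) as first_seen_utc
    , max(event_time) as last_seen_utc
    , count(*) as attempts
from
    sys.fn_get_audit_file(@LogFilePath, default, default)
group by
    session_server_principal_name
order by
    attempts desc;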

Illustration created with the help of icons from The Noun Project: Database by Dmitry Mirolyubov, Sightview by Andy Ivandikov, Lock by Maxim Kulikov. Creative Commons licence.

Managing firewall rules for Azure databases with SSMS

When you create a new Azure database, you usually need to open the firewall to remotely administer or query this database with SSMS. One option is to create rules from the Azure Portal. It’s surely a convenient way to do it when you create a database, but I prefer to keep a minimum of tools, and when the Azure Portal is not open, I’d rather not open it just to define a few firewall rules.

Opening the firewall with SSMS is a kind of chicken-and-egg problem: to connect to your database/server, you need to open the firewall. Fortunately, SSMS has a great suite of screens to call the underlying API of the Azure Portal and open the firewall for the computer running SSMS.

Attempt to connect to your Azure database.

[Screenshot: ssms-azure-01]

If the firewall is not already open for this IP, you’ll receive the following screen:

[Screenshot: ssms-azure-02]

The first step is to identify yourself with your Azure subscription.

[Screenshot: ssms-azure-03]

If you created your account a long time ago, you could receive this screen … If your account is linked to professional resources (granted by your company), the right choice is probably the first; if you’re developing from home with resources that you pay for yourself, it’s probably the second.

[Screenshot: ssms-azure-05]

When you’re successfully identified, you can create a new rule, opening the firewall for your own IP or for a subnet.

[Screenshot: ssms-azure-06]
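
If you prefer staying in T-SQL rather than going through the dialog, the same kind of server-level rule can be created with the stored procedure sp_set_firewall_rule. A minimal sketch (the rule name and IP range are just examples):

-- Run while connected to the virtual master database.
exec sp_set_firewall_rule
    @name = N'Home office'
    , @start_ip_address = '203.0.113.10'
    , @end_ip_address = '203.0.113.10';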

Now, you can connect and query your database or the virtual master database (open the folder System Databases).

[Screenshot: ssms-azure-07]

Firewall rules applicable to the whole server are effectively stored in the master database; you can list them by querying the view sys.firewall_rules.
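
For example, a minimal query against that view:

-- List the server-level firewall rules (run in the virtual master database).
select
    name
    , start_ip_address
    , end_ip_address
    , create_date
from
    sys.firewall_rules
order by
    name;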

[Screenshot: ssms-azure-08]

At this moment, you can see that I was a bit lazy and hadn’t cleaned up these rules for a long time. If you want to remove some rules, you can use the stored procedure sp_delete_firewall_rule.
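
A minimal sketch (the rule name is hypothetical; take the real one from the name column of sys.firewall_rules):

-- Run while connected to the virtual master database.
exec sp_delete_firewall_rule @name = N'obsolete rule';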

[Screenshot: ssms-azure-10]

It’ll clean up your list of firewall rules.

[Screenshot: ssms-azure-11]

If you want, you can also create firewall rules at the database level by connecting to the database and using the stored procedure sp_set_database_firewall_rule (the view sys.database_firewall_rules lists them). Firewall rules at the database level are evaluated before those at the server level. To improve performance, server-level firewall rules are temporarily cached at the database level; to refresh the cache, use DBCC FLUSHAUTHCACHE.
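
A minimal sketch, to be run while connected to the user database itself (the rule name and IP range are just examples):

-- Create (or update) a database-level rule.
exec sp_set_database_firewall_rule
    @name = N'Example db rule'
    , @start_ip_address = '203.0.113.10'
    , @end_ip_address = '203.0.113.20';

-- List the database-level rules.
select
    name
    , start_ip_address
    , end_ip_address
from
    sys.database_firewall_rules;

-- Remove it.
exec sp_delete_database_firewall_rule @name = N'Example db rule';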

Tracking security changes on your SQL Server development environment

A few months ago, I was fed up with receiving complaints from my teammates about security settings changed on the development (or test) server. Every week, someone was at my desk complaining with a variation of “My development was working last week and now it’s not working anymore. Someone (not me) must have changed something about security but I don’t know who and why!”.

To give you a correct picture of our SQL Server environment: we have one instance for the dev server and another for the test server, but many people are developing on a given project. Consider also that we’re developing different projects at any given moment. All team members are (full) administrators of the dev and test servers. All team members know that the rule about security is “reduce privileges to the minimum required” (definitely, db_owner is not a solution) and this rule is checked during code reviews.

Some will consider a classical approach, supported by most DBAs: we should restrict the administrator rights to fewer people, a kind of single-point-of-security. Well, it was probably my opinion seven years ago, but my position has dramatically changed since. This point of view is really based on my own environment … your mileage may vary, but keep in mind that I’m speaking at the dev and test level, not production. The main point for this position is that I’d not recommend a “single-point-of” anything for any agile development. The true names are not “single-point-of” but “bottleneck”, “not-shared-decisions”, “I’m-the-only-one-who-knows-don’t-fire-me” and “not-my-business-ask-this-guy” … four things that I’m trying to avoid as much as possible.

I implemented a solution to track any change to “security” on the SQL Server. Not a word to my team members … I was the only one to know that something had been scripted and deployed. After a few weeks, my conclusion was clear: nobody in the team was trying to sabotage the work of others. After investigation, the issues could be grouped as follows:

  • You changed the code and it now requires new privileges: script them and stop blaming others.
  • During development, you manually added some privileges to the database, but the deployment script doesn’t include all of them. Someone has redeployed the solution from scratch, discarding all the unscripted privileges. It’s your mistake.
  • Less frequent: someone working on another solution is also developing on this shared component (SSISDB) and has a different definition than yours for this shared role. Discuss together and align your needs.

So clearly it was not a question of responsibility or trust but a problem of practices:

  • All privileges must be scripted (with the reason for requesting them documented) and committed in the solution. They must be part of the deployment artefact (at least in dev/test).
  • All the work performed on SSISDB (or any shared component) must be executed in a shared project.

That’s not enough: practices not supported by tools are pointless and will never be applied for more than a month. We must provide a tool that lets people know who has changed “security”. The following audit does the job:

use [master];
go

create server audit
    [Audit Login Changes]
to file
(   FILEPATH = N'N:\Trace\'
    , MAXSIZE = 1024 MB
    , MAX_FILES = 10
    , RESERVE_DISK_SPACE = OFF
)
with
(   QUEUE_DELAY = 1000
    , ON_FAILURE = CONTINUE
    , AUDIT_GUID = '33e055a9-1924-47c3-9798-e8875a5bcd42'
);

alter server audit
    [Audit Login Changes]
with (state=on);
go

create server audit specification
    [Audit Login]
for server audit [Audit Login Changes]
    add (DATABASE_ROLE_MEMBER_CHANGE_GROUP)
    , add (SERVER_ROLE_MEMBER_CHANGE_GROUP)
    , add (DATABASE_PERMISSION_CHANGE_GROUP)
    , add (SERVER_OBJECT_PERMISSION_CHANGE_GROUP)
    , add (SERVER_PERMISSION_CHANGE_GROUP)
    , add (DATABASE_PRINCIPAL_CHANGE_GROUP)
    , add (SERVER_PRINCIPAL_CHANGE_GROUP)
    , add (SCHEMA_OBJECT_PERMISSION_CHANGE_GROUP)
with (state=on);
go

and the following query will help you investigate the traces. Since then, I’ve received just a few complaints, and they were about Windows issues (disk access for reading/writing files with SSIS).

declare @LogFilePath varchar(255);

-- Retrieve the actual path configured for the audit.
select
	@LogFilePath=log_file_path
from
	sys.server_file_audits
where
	name='Audit Login Changes';

set @LogFilePath=@LogFilePath +'*.sqlaudit';

select
	session_server_principal_name
	,database_name
	,object_name
	,m.class_type_desc
	,statement
	-- Convert the UTC event time to the server's local time zone.
	,convert(datetime,
		switchoffset(
			convert(datetimeoffset, event_time)
			, datename(TzOffset, sysdatetimeoffset())
		)
	) as local_event_time
from
	sys.fn_get_audit_file (@LogFilePath,default,default) f
inner join
	sys.dm_audit_class_type_map m
	on m.class_type=f.class_type
order by
	event_time;
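
If you already suspect whose privileges were touched, you can narrow the result set on the target principal columns that sys.fn_get_audit_file also returns. A sketch (the principal name is hypothetical):

declare @LogFilePath varchar(255);

select
	@LogFilePath = log_file_path + '*.sqlaudit'
from
	sys.server_file_audits
where
	name = 'Audit Login Changes';

select
	event_time
	, session_server_principal_name   -- who made the change
	, target_database_principal_name  -- who was affected by it
	, statement
from
	sys.fn_get_audit_file(@LogFilePath, default, default)
where
	target_database_principal_name = 'MyAppUser'  -- hypothetical name
order by
	event_time;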

I know that if someone wanted to hijack this audit, it would be easy to stop it (and restart it afterwards), but keep in mind that I’m not trying to fix a security issue but a process issue during development.