Managing SQL Server ERRORLOG File Growth to Prevent Space Issues

 Problem Overview:

In one of our smaller production environments, the SQL Server ERRORLOG file size unexpectedly grew to almost 60 GB. This caused a critical space crunch on the C:\ drive, leading to application timeout errors. The ERRORLOG file was located at:

C:\Program Files\Microsoft SQL Server\MSSQL14.INST1\MSSQL\Log

Challenges:

Active ERRORLOG File: The current ERRORLOG file could not be deleted directly because it was actively being used by SQL Server.

Time Constraints: Restarting the SQL Server instance to create a new ERRORLOG file required approval from the client and the change management team, which could be time-consuming.

Resolution Steps:

Step 1: Cycle the ERRORLOG File Without Restarting SQL Server

To address the issue without a service restart, we used the following command:

EXEC sp_cycle_errorlog;
GO

This command immediately started a new ERRORLOG file. The previously active log was closed and renamed in the same directory with a numbered suffix (e.g., ERRORLOG.1).
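
If the log keeps growing, cycling can also be scheduled (for example, from a SQL Server Agent job) and the number of retained archive files increased. The sketch below is only an example: the retention value of 12 is arbitrary, and xp_instance_regwrite is the undocumented procedure behind SSMS's "Configure SQL Server Error Logs" dialog.

-- Cycle the current ERRORLOG; the old file becomes ERRORLOG.1, ERRORLOG.2, ...
EXEC sp_cycle_errorlog;
GO

-- Retain up to 12 archived ERRORLOG files (the same setting SSMS exposes)
EXEC xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'Software\Microsoft\MSSQLServer\MSSQLServer',
    N'NumErrorLogs',
    REG_DWORD,
    12;
GO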

Step 2: Relocate and Manage Old ERRORLOG Files

The old ERRORLOG file, which was consuming significant space, was manually moved to a different drive with sufficient free space. This provided temporary relief for the space issue on the C:\ drive. After a few days, we deleted the old log files to reclaim space permanently.

Step 3: Identify and Fix the Root Cause

Upon investigation, we discovered that one of the SQL Server Agent jobs was generating excessive logs in the ERRORLOG file. The problematic statement in the job's code was identified and removed to prevent further excessive logging.

Key Takeaways:

Proactive Monitoring: Regular monitoring of SQL Server ERRORLOG file size and disk space utilization is crucial to avoid unexpected space issues.

Efficient Log Management: Use the sp_cycle_errorlog procedure periodically to cycle logs and prevent single ERRORLOG files from growing too large.

Root Cause Analysis: Always investigate the underlying cause of excessive logging to implement a permanent fix.

How to Save and Restore Permissions After Refreshing a Database Using T-SQL

    This post provides a detailed guide on how to save and restore permissions after refreshing a SQL Server database. It introduces stored procedures for capturing and reapplying user and role permissions efficiently, ensuring minimal disruption during a database refresh. This method is particularly helpful when automating database refresh processes.

  • GenerateUserRoleScripts: This procedure generates the SQL scripts to create users and assign roles for the specified database and stores them in the UserRoleScripts table.

  • ExecuteUserRoleScripts: This procedure retrieves the scripts stored in UserRoleScripts and executes them on the specified database.

    Stored Procedure 1: GenerateUserRoleScripts

    This procedure will generate and store the user-role scripts in the DBA..UserRoleScripts table for the specified database.

    USE DBA;  -- Change the database name as per your requirement
    GO

    -- Step 1: Create the procedure to generate and store user-role scripts
    CREATE PROCEDURE dbo.GenerateUserRoleScripts
        @DatabaseName NVARCHAR(128)  -- Input parameter for database name
    AS
    BEGIN
        -- Dynamic SQL to target the specified database
        DECLARE @SQL NVARCHAR(MAX);

        -- Create the UserRoleScripts table if it doesn't exist
        IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[UserRoleScripts]') AND type in (N'U'))
        BEGIN
            CREATE TABLE dbo.UserRoleScripts (
                ID INT IDENTITY(1,1) PRIMARY KEY,
                Script NVARCHAR(MAX),
                GeneratedDate DATETIME DEFAULT GETDATE()
            );
        END;

        -- Generate the scripts for the specified database
        SET @SQL = N'
        INSERT INTO dbo.UserRoleScripts (Script)
        SELECT
            ''IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = '''''' + mp.name + '''''')'' + CHAR(13) +
            ''BEGIN'' + CHAR(13) +
            ''    CREATE USER ['' + mp.name + ''] FOR LOGIN ['' + mp.name + ''];'' + CHAR(13) +
            ''END;'' + CHAR(13) +
            ''ALTER ROLE ['' + dp.name + ''] ADD MEMBER ['' + mp.name + ''];'' AS Script
        FROM
            [' + @DatabaseName + '].sys.database_role_members drm
        JOIN
            [' + @DatabaseName + '].sys.database_principals dp ON drm.role_principal_id = dp.principal_id
        JOIN
            [' + @DatabaseName + '].sys.database_principals mp ON drm.member_principal_id = mp.principal_id
        WHERE
            dp.name <> ''dbo''  -- Exclude roles where the role is dbo
            AND mp.name <> ''dbo''  -- Exclude users where the user is dbo
        ORDER BY dp.name, mp.name;
        ';

        -- Execute the dynamic SQL
        EXEC sp_executesql @SQL;
    END;
    GO
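
    Stored Procedure 2: ExecuteUserRoleScripts

    A minimal sketch of this procedure (not the exact original code): it assumes the scripts were saved to DBA.dbo.UserRoleScripts by GenerateUserRoleScripts and simply replays them, in generation order, against the specified database.

    USE DBA;
    GO

    -- Step 2: Create the procedure that replays the stored scripts
    CREATE PROCEDURE dbo.ExecuteUserRoleScripts
        @DatabaseName NVARCHAR(128)  -- Target database to apply the scripts to
    AS
    BEGIN
        DECLARE @Script NVARCHAR(MAX), @SQL NVARCHAR(MAX);

        -- Walk through the saved scripts in the order they were generated
        DECLARE script_cursor CURSOR FAST_FORWARD FOR
            SELECT Script FROM dbo.UserRoleScripts ORDER BY ID;

        OPEN script_cursor;
        FETCH NEXT FROM script_cursor INTO @Script;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Switch to the target database for the duration of this batch
            SET @SQL = N'USE ' + QUOTENAME(@DatabaseName) + N'; ' + @Script;
            EXEC sp_executesql @SQL;

            FETCH NEXT FROM script_cursor INTO @Script;
        END;

        CLOSE script_cursor;
        DEALLOCATE script_cursor;
    END;
    GO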

    Automating SQL Server Stored Procedure Execution Across Multiple Databases with PowerShell

    In many enterprise environments, database administrators (DBAs) often need to execute scripts across multiple databases on several SQL Server instances. Doing this manually can be time-consuming and error-prone, especially when managing a large number of servers. Automating this task using PowerShell can significantly streamline the process, ensuring consistency and saving valuable time.

    In this post, we'll walk through a PowerShell script that automates the execution of a stored procedure (sp_read) across all databases on multiple SQL Server instances. The script also captures the execution output and logs the status (success or failure) for each database in a detailed log file.
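
    The same per-database loop can also be sketched in plain T-SQL for a single instance using the undocumented sp_MSforeachdb procedure; this is only an illustration of what the automation does on each instance (the post itself uses PowerShell to cover multiple servers), and it assumes sp_read exists in every user database:

    -- Run sp_read in every user database; sp_MSforeachdb substitutes '?' with each database name
    EXEC sp_MSforeachdb
        N'IF DB_ID(N''?'') > 4  -- skip the four system databases
          BEGIN
              PRINT N''Running sp_read in database: ?'';
              EXEC [?].dbo.sp_read;
          END';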

    SQL Joins and Order of Execution: An In-Depth Guide

    SQL Joins:

    1. INNER JOIN:

      • Definition: Retrieves records that have matching values in both tables.
      • Use Case: When you only want the records where there is a match in both tables.
      • Example:

        SELECT a.column1, b.column2 FROM table1 a INNER JOIN table2 b ON a.common_column = b.common_column;
    2. LEFT JOIN (LEFT OUTER JOIN):

      • Definition: Returns all records from the left table and the matched records from the right table. For unmatched rows from the right table, NULL values are returned.
      • Use Case: When you need all records from the left table regardless of whether they have a match in the right table.
      • Example:
        SELECT a.column1, b.column2 FROM table1 a LEFT JOIN table2 b ON a.common_column = b.common_column;
    3. RIGHT JOIN (RIGHT OUTER JOIN):

      • Definition: Similar to LEFT JOIN, but returns all records from the right table and the matched records from the left table.
      • Use Case: When you need all records from the right table regardless of whether they have a match in the left table.
      • Example:
        SELECT a.column1, b.column2 FROM table1 a RIGHT JOIN table2 b ON a.common_column = b.common_column;
    4. FULL JOIN (FULL OUTER JOIN):

      • Definition: Combines the results of both LEFT JOIN and RIGHT JOIN. Returns all records when there is a match in either table.
      • Use Case: When you need all records from both tables, with NULLs in places where there is no match.
      • Example:
        SELECT a.column1, b.column2 FROM table1 a FULL OUTER JOIN table2 b ON a.common_column = b.common_column;
    5. CROSS JOIN:

      • Definition: Returns the Cartesian product of both tables, pairing each row from the first table with every row from the second table.
      • Use Case: When you need all possible combinations of rows from the two tables.
      • Example:
        SELECT a.column1, b.column2 FROM table1 a CROSS JOIN table2 b;
    6. SELF JOIN:

      • Definition: A join in which a table is joined with itself to compare rows within the same table.
      • Use Case: When you need to compare rows within the same table.
      • Example:
        SELECT a.column1, b.column2 FROM table1 a INNER JOIN table1 b ON a.common_column = b.common_column;

    SQL Order of Execution:

    1. FROM:

      • Purpose: Specifies the tables involved in the query.
      • Details: This is the first step where the SQL engine identifies the source tables and builds a Cartesian product if multiple tables are specified.
    2. WHERE:

      • Purpose: Filters records based on specified conditions.
      • Details: Applies conditions to filter out rows that do not meet the criteria.
    3. GROUP BY:

      • Purpose: Groups records that have identical data in specified columns.
      • Details: Aggregates data to prepare for summary functions (e.g., COUNT, SUM).
    4. HAVING:

      • Purpose: Filters groups based on specified conditions.
      • Details: Similar to WHERE but operates on groups created by GROUP BY.
    5. SELECT:

      • Purpose: Specifies the columns to be returned.
      • Details: Determines the final columns to be included in the result set.
    6. ORDER BY:

      • Purpose: Sorts the result set based on specified columns.
      • Details: Orders the rows in the result set according to one or more columns.
    7. LIMIT:

      • Purpose: Restricts the number of rows returned.
      • Details: Used to limit the number of rows in the result set, useful for pagination. (LIMIT is MySQL/PostgreSQL syntax; SQL Server uses TOP or OFFSET ... FETCH for the same purpose.)

    Example Query with Detailed Execution:

    Let's consider a complex query to see the order of execution in action:

    SELECT department, AVG(salary) AS avg_salary FROM employees WHERE hire_date > '2020-01-01' GROUP BY department HAVING AVG(salary) > 60000 ORDER BY avg_salary DESC LIMIT 5;

    Order of Execution:

    1. FROM: Identify the employees table.
    2. WHERE: Filter rows where hire_date is after '2020-01-01'.
    3. GROUP BY: Group the remaining rows by department.
    4. HAVING: Filter groups where the average salary is greater than 60,000.
    5. SELECT: Choose the department and calculate the average salary as avg_salary.
    6. ORDER BY: Sort the results by avg_salary in descending order.
    7. LIMIT: Return only the top 5 rows.
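
    For SQL Server specifically, the same query can be written with OFFSET ... FETCH (or TOP) in place of LIMIT, using the same hypothetical employees table:

    SELECT department, AVG(salary) AS avg_salary
    FROM employees
    WHERE hire_date > '2020-01-01'
    GROUP BY department
    HAVING AVG(salary) > 60000
    ORDER BY avg_salary DESC
    OFFSET 0 ROWS FETCH NEXT 5 ROWS ONLY;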

    Understanding ACID Properties in DBMS with Everyday Examples

    1. Atomicity

    Consider transferring money from your bank account to a friend's account. Atomicity ensures that the entire transaction, deducting the amount from your account and crediting your friend's account, either happens fully or not at all. In practice, if the second step fails (crediting your friend's account), the first step (debiting your account) is automatically rolled back. This way, your account still shows the original balance, and no partial transaction occurs.
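
    A minimal T-SQL sketch of this behavior (the Accounts table, account IDs, and amount are illustrative):

    BEGIN TRY
        BEGIN TRANSACTION;

        -- Debit the sender
        UPDATE Accounts SET Balance = Balance - 10000 WHERE AccountId = 1;

        -- Credit the receiver; if this fails, the whole transfer is undone
        UPDATE Accounts SET Balance = Balance + 10000 WHERE AccountId = 2;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        -- Any error rolls back both updates, so no partial transfer is persisted
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;
    END CATCH;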

    2. Consistency

    Consistency maintains the integrity of the database. When you attempt to transfer ₹25,000, the system checks your balance against the minimum requirement (₹5,000). If this rule would be broken by the transaction, the system blocks it, ensuring that the rules governing account balances are respected. The database remains valid before and after the transaction.

    3. Isolation

    Isolation ensures that concurrent transactions don't interfere with each other. While you are transferring ₹10,000, another user looking at your account at an intermediate stage will not see a partially updated balance. This prevents inconsistencies during the process and ensures that only complete transactions are visible to others.

    4. Durability

    Durability means that once a transaction is completed, the changes are permanent, even if there's a power outage or system crash right after the transfer. So, after your transaction is confirmed, both your account and your friend's account will reflect the updated balances, regardless of any subsequent failures.

    These properties ensure that financial transactions are secure, reliable, and accurate, reflecting the real-world requirement for a robust system in handling sensitive operations like money transfers.

    How to Shrink All Database Log Files Using T-SQL Script

     As a DBA, managing log file sizes is crucial to ensure your databases run smoothly. Below is a T-SQL script to shrink all database log files at once, excluding the system databases (master, tempdb, model, msdb, rdsadmin). This script uses cursors to iterate through each database and its corresponding log files.

    Script to Shrink All Database Log Files
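
    A minimal sketch of such a cursor-based script (the 1 MB shrink target is only illustrative; adjust it and the exclusion list for your environment):

    DECLARE @DbName SYSNAME, @LogName SYSNAME, @Sql NVARCHAR(MAX);

    -- Outer cursor: every online database except the excluded system databases
    DECLARE db_cursor CURSOR FAST_FORWARD FOR
        SELECT name
        FROM sys.databases
        WHERE name NOT IN (N'master', N'tempdb', N'model', N'msdb', N'rdsadmin')
          AND state_desc = N'ONLINE';

    OPEN db_cursor;
    FETCH NEXT FROM db_cursor INTO @DbName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Inner cursor: the log files of the current database
        DECLARE log_cursor CURSOR FAST_FORWARD FOR
            SELECT name
            FROM sys.master_files
            WHERE database_id = DB_ID(@DbName)
              AND type_desc = N'LOG';

        OPEN log_cursor;
        FETCH NEXT FROM log_cursor INTO @LogName;

        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Shrink this log file to a 1 MB target
            SET @Sql = N'USE ' + QUOTENAME(@DbName) + N'; DBCC SHRINKFILE ('
                     + QUOTENAME(@LogName, '''') + N', 1);';
            EXEC sp_executesql @Sql;

            FETCH NEXT FROM log_cursor INTO @LogName;
        END;

        CLOSE log_cursor;
        DEALLOCATE log_cursor;

        FETCH NEXT FROM db_cursor INTO @DbName;
    END;

    CLOSE db_cursor;
    DEALLOCATE db_cursor;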

    Top 10 SQL Server Performance Tuning Tips

     Introduction

    SQL Server performance tuning is essential for maintaining a high-performing database system. Whether you're a DBA, developer, or just starting out with SQL Server, understanding the key areas to focus on can make a huge difference. In this post, we'll cover the top 10 performance tuning tips to help you get the most out of your SQL Server environment.

    1. Index Optimization

    Indexes are crucial for speeding up query performance. Regularly review and optimize indexes:

    Identify missing indexes using dynamic management views (DMVs); a sample query follows this list.

    Remove unused or duplicate indexes.

    Rebuild or reorganize fragmented indexes.
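
    A minimal sketch of the missing-index check mentioned above, using the missing-index DMVs (the ranking formula is only a rough impact estimate):

    -- List missing-index suggestions, highest estimated impact first
    SELECT TOP (20)
        DB_NAME(mid.database_id) AS database_name,
        mid.statement AS table_name,
        mid.equality_columns,
        mid.inequality_columns,
        mid.included_columns,
        migs.user_seeks + migs.user_scans AS potential_uses,
        migs.avg_total_user_cost * migs.avg_user_impact
            * (migs.user_seeks + migs.user_scans) AS estimated_impact
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mid.index_handle = mig.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON mig.index_group_handle = migs.group_handle
    ORDER BY estimated_impact DESC;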

    2. Query Optimization

    Poorly written queries can significantly impact performance. Consider the following:

    Use execution plans to identify bottlenecks.

    Avoid SELECT *; specify only the columns needed.

    Use appropriate JOINs and avoid unnecessary subqueries.

    3. Database Maintenance

    Regular maintenance tasks can keep your database healthy:

    Implement regular index maintenance (rebuild/reorganize).

    Update statistics to ensure the query optimizer has accurate data.

    Perform regular database integrity checks (DBCC CHECKDB); example commands for these tasks follow this list.
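
    A minimal sketch of these maintenance commands; the database name, table, and index below are hypothetical, and real maintenance is normally driven by fragmentation thresholds on a schedule:

    USE YourDatabase;  -- hypothetical database name
    GO

    -- Reorganize a lightly fragmented index (use REBUILD for heavy fragmentation)
    ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;

    -- Refresh statistics so the optimizer has accurate row estimates
    EXEC sp_updatestats;

    -- Check the logical and physical integrity of the database
    DBCC CHECKDB (N'YourDatabase') WITH NO_INFOMSGS;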

    4. Monitor and Troubleshoot

    Monitoring helps identify performance issues before they become critical:

    Use SQL Server Profiler or Extended Events to trace slow queries.

    Monitor wait statistics to identify resource bottlenecks; a sample query follows this list.

    Implement performance alerts to catch issues early.
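
    A minimal sketch of a wait-statistics query (the exclusion list of benign waits is intentionally short and incomplete):

    -- Top waits since the last restart (or since wait stats were cleared)
    SELECT TOP (10)
        wait_type,
        wait_time_ms / 1000.0 AS wait_time_sec,
        signal_wait_time_ms / 1000.0 AS signal_wait_sec,
        waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TO_FLUSH',
                            N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH')
    ORDER BY wait_time_ms DESC;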

    5. Optimize TempDB

    TempDB is a critical system database; optimizing it can enhance overall performance:

    Place TempDB on fast storage.

    Configure multiple TempDB files to reduce contention, as in the sketch after this list.

    Regularly monitor and clean up TempDB usage.
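
    A minimal sketch of adding a second, equally sized tempdb data file (the logical name, path, and sizes are placeholders; common guidance is one file per core, up to eight):

    -- Add an extra tempdb data file to reduce allocation contention
    ALTER DATABASE tempdb
    ADD FILE (
        NAME = N'tempdev2',                    -- placeholder logical name
        FILENAME = N'T:\TempDB\tempdev2.ndf',  -- placeholder path on fast storage
        SIZE = 4096MB,
        FILEGROWTH = 512MB
    );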

    6. Memory Management

    Proper memory configuration is vital for SQL Server performance:

    Set the max server memory option so SQL Server does not consume all available memory; a sample configuration follows this list.

    Monitor memory usage to ensure there are no leaks.

    Use the buffer pool extension for additional memory management.
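
    A minimal sketch of capping max server memory with sp_configure (the 16 GB value is only an example; size it to leave headroom for the OS and other processes):

    -- Expose advanced options so 'max server memory' can be changed
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap SQL Server memory at 16 GB (value is in MB)
    EXEC sp_configure 'max server memory (MB)', 16384;
    RECONFIGURE;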

    7. Disk I/O Optimization

    Disk I/O can be a common performance bottleneck:

    Use fast storage solutions like SSDs for critical data files.

    Separate data files and log files onto different disks.

    Monitor disk I/O performance and address hotspots.

    8. CPU Optimization

    Efficient CPU usage is critical for performance:

    Monitor CPU usage to identify high-consumption queries.

    Optimize CPU-heavy queries by reducing complexity.

    Use the appropriate server hardware for your workload.

    9. Network Optimization

    Network latency can affect SQL Server performance:

    Ensure a fast and reliable network connection.

    Use proper network configurations and protocols.

    Monitor network latency and throughput.

    10. Regular Audits and Reviews

    Regularly auditing and reviewing your SQL Server environment can help maintain performance:

    Perform regular health checks.

    Review and update your maintenance plans.

    Stay updated with the latest SQL Server patches and updates.

    Migrating an SQL Server database to AWS RDS Aurora PostgreSQL

    Step 1: Planning

    1. Assess the Migration: Evaluate the source SQL Server database and identify any potential issues. Consider schema differences, data types, and compatibility issues.
    2. Backup Strategy: Plan for a backup strategy to ensure you have a point-in-time restore option.
    3. Tools and Resources: Familiarize yourself with AWS Database Migration Service (DMS) and AWS Schema Conversion Tool (SCT).

    Step 2: Set Up AWS Environment

    1. Create an AWS Account: If you don’t already have one, create an AWS account.
    2. Set Up IAM Roles and Policies: Ensure you have the necessary IAM roles and policies to manage AWS services securely.
    3. Launch Aurora PostgreSQL Instance:
      • Go to the RDS console.
      • Select "Create Database".
      • Choose "Amazon Aurora".
      • Select "PostgreSQL-compatible".
      • Configure the instance size, storage, and other settings.
      • Launch the instance.

    Step 3: Schema Conversion

    1. Install AWS SCT:
      • Download and install the AWS Schema Conversion Tool from the AWS website.
    2. Connect to Source SQL Server:
      • Open AWS SCT.
      • Connect to your SQL Server database by providing the connection details.
    3. Connect to Target Aurora PostgreSQL:
      • Connect to your Aurora PostgreSQL instance.
    4. Convert the Schema:
      • Use AWS SCT to convert the SQL Server schema to PostgreSQL-compatible schema.
      • Review and apply any necessary modifications manually.
      • Apply the converted schema to the Aurora PostgreSQL instance.

    Step 4: Data Migration

    1. Set Up AWS DMS:
      • Go to the AWS DMS console.
      • Create a replication instance.
      • Ensure the replication instance can connect to both the source SQL Server and target Aurora PostgreSQL.
    2. Create Endpoints:
      • Create a source endpoint for the SQL Server database.
      • Create a target endpoint for the Aurora PostgreSQL instance.
    3. Create a Migration Task:
      • Define a migration task in AWS DMS.
      • Choose the type of migration (full load, full load + CDC, or CDC only).
    4. Run the Migration Task:
      • Start the migration task.
      • Monitor the migration process using the DMS console.
      • Validate data after the migration task completes.

    Step 5: Post-Migration

    1. Data Validation:
      • Compare the data in the source SQL Server and target Aurora PostgreSQL to ensure completeness and accuracy.
    2. Application Testing:
      • Test your applications with the new Aurora PostgreSQL database to ensure they work as expected.
    3. Performance Tuning:
      • Optimize your PostgreSQL database settings for better performance.
      • Apply necessary indexing and query optimizations.

    Step 6: Cutover

    1. Plan for Downtime:
      • Schedule a maintenance window for the cutover to minimize impact.
    2. Final Data Sync:
      • Perform a final data sync if using CDC (Change Data Capture) to ensure no data is missed.
    3. Switch Applications:
      • Update your application configurations to point to the new Aurora PostgreSQL database.
    4. Monitor:
      • Monitor the applications and database closely after cutover to quickly address any issues.

    Step 7: Decommission

    1. Decommission Old SQL Server:
      • Once you have confirmed that the new system is working correctly, decommission the old SQL Server database.
    2. Cleanup:
      • Remove any unused resources in AWS to avoid unnecessary costs.