
A table contains more than 10 lakh (1 million) records

In our project's database, one table alone has more than 10 lakh (1 million) records.

This particular table makes our web application's performance very slow.


Can you add more details, please? The info you shared so far is NOT enough to give you good responses. Many different things can lead to performance issues, so you need to narrow down and isolate the problem clearly before looking for a solution.

  1. SELECT @@VERSION to get the complete edition
  2. How much memory is on this box?
  3. Share the output of the query below (index usage stats for the table)

    SELECT 
        object_name(s.object_id) AS 'Table Name'
        , i.name AS 'Index Name'
        , i.index_id
        , user_updates AS 'Total Writes'
        , user_seeks
        , user_scans 
        , user_lookups
    FROM sys.dm_db_index_usage_stats AS s WITH (NOLOCK)
    INNER JOIN sys.indexes AS i WITH (NOLOCK)
    ON s.object_id = i.object_id
    AND i.index_id = s.index_id
    WHERE objectproperty(s.object_id,'IsUserTable') = 1
    AND s.database_id = db_id()
    AND i.object_id = object_id('YourTableName')
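
For items 1 and 2 above, the version and physical memory can be pulled in one pass; this is a sketch assuming SQL Server 2008 or later, where `sys.dm_os_sys_memory` is available:

```sql
-- Server edition/version (item 1) and physical memory on the box (item 2)
SELECT @@VERSION AS 'SQL Server Version';

SELECT
    total_physical_memory_kb / 1024 AS 'Total Physical Memory (MB)',
    available_physical_memory_kb / 1024 AS 'Available Physical Memory (MB)',
    system_memory_state_desc AS 'Memory State'
FROM sys.dm_os_sys_memory WITH (NOLOCK);
```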

  4. Share the output of the missing-index DMV query below and see if any of the flagged queries involve the table you mentioned above.

    
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
    WITH XMLNAMESPACES
    (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
    SELECT TOP 25
        --ecp.plan_handle, DENSE_RANK() OVER (ORDER BY ecp.plan_handle) AS ArbitraryPlanNumber,
        n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)')
            * ISNULL(n.value('(./QueryPlan/MissingIndexes/MissingIndexGroup/@Impact)[1]', 'float'), 0)
            * ecp.usecounts AS Improvement,
        query_plan AS CompleteQueryPlan,
        n.value('(@StatementId)[1]', 'float') AS StatementID,
        n.value('(@StatementText)[1]', 'VARCHAR(4000)') AS StatementText,
        n.value('(@StatementSubTreeCost)[1]', 'VARCHAR(128)') AS StatementSubTreeCost,
        n.query('./QueryPlan/MissingIndexes') AS MissingIndex,
        n.value('(./QueryPlan/MissingIndexes/MissingIndexGroup/@Impact)[1]', 'float') AS IndexImpact,
        ecp.usecounts
    FROM sys.dm_exec_cached_plans AS ecp
    CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS eqp
    CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS qn ( n )
    WHERE n.value('(./QueryPlan/MissingIndexes/MissingIndexGroup/@Impact)[1]', 'float') IS NOT NULL
    --AND ecp.usecounts > 100
    AND eqp.query_plan.exist('declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan"; /ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple/QueryPlan/MissingIndexes') = 1
    ORDER BY Improvement DESC;

  5. Have you checked the query stats for high CPU usage, logical reads, and high execution counts?

    
    -- Cached SP's By Execution Count
    SELECT TOP (100) OBJECT_NAME(qt.objectid) AS 'SP Name',
        qs.execution_count AS 'Execution Count',
        qs.execution_count / DATEDIFF(SECOND, qs.creation_time, GETDATE()) AS 'Calls/Second',
        qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
        qs.total_worker_time AS 'TotalWorkerTime',
        qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
        qs.max_logical_reads,
        qs.max_logical_writes,
        qs.total_physical_reads,
        DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache'
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE qt.dbid = DB_ID() -- filter by current database
    ORDER BY qs.execution_count DESC;

    -- Cached SP's By Worker Time
    SELECT TOP (50) OBJECT_NAME(qt.objectid) AS 'SP Name',
        qs.total_worker_time AS 'TotalWorkerTime',
        qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
        qs.execution_count AS 'Execution Count',
        ISNULL(qs.execution_count / DATEDIFF(SECOND, qs.creation_time, GETDATE()), 0) AS 'Calls/Second',
        ISNULL(qs.total_elapsed_time / qs.execution_count, 0) AS 'AvgElapsedTime',
        qs.max_logical_reads,
        qs.max_logical_writes,
        DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache in Minutes'
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE qt.dbid = DB_ID() -- filter by current database
    ORDER BY qs.total_worker_time DESC;

    -- Cached SP's By Logical Reads
    SELECT TOP (50) OBJECT_NAME(qt.objectid) AS 'SP Name',
        total_logical_reads,
        qs.execution_count AS 'Execution Count',
        total_logical_reads / qs.execution_count AS 'AvgLogicalReads',
        qs.execution_count / DATEDIFF(SECOND, qs.creation_time, GETDATE()) AS 'Calls/Second',
        qs.total_worker_time / qs.execution_count AS 'AvgWorkerTime',
        qs.total_worker_time AS 'TotalWorkerTime',
        qs.total_elapsed_time / qs.execution_count AS 'AvgElapsedTime',
        qs.total_logical_writes,
        qs.max_logical_reads,
        qs.max_logical_writes,
        qs.total_physical_reads,
        DATEDIFF(MINUTE, qs.creation_time, GETDATE()) AS 'Age in Cache'
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE qt.dbid = DB_ID() -- filter by current database
    ORDER BY total_logical_reads DESC;

  6. Have you looked at the wait stats?

    
    WITH Waits AS
    (SELECT wait_type, wait_time_ms / 1000. AS wait_time_s,
        100. * wait_time_ms / SUM(wait_time_ms) OVER() AS pct,
        ROW_NUMBER() OVER(ORDER BY wait_time_ms DESC) AS rn
     FROM sys.dm_os_wait_stats
     WHERE wait_type NOT IN( 'BROKER_TASK_STOP', 
      'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'CLR_MANUAL_EVENT', 'BROKER_TRANSMITTER',
      'LAZYWRITER_SLEEP', 'SQLTRACE_WAIT_ENTRIES')) -- filter out additional irrelevant waits
    SELECT W1.wait_type, 
      CAST(W1.wait_time_s AS DECIMAL(12, 2)) AS wait_time_s,
      CAST(W1.pct AS DECIMAL(12, 2)) AS pct,
      CAST(SUM(W2.pct) AS DECIMAL(12, 2)) AS running_pct
    FROM Waits AS W1
    INNER JOIN Waits AS W2
    ON W2.rn <= W1.rn
    GROUP BY W1.rn, W1.wait_type, W1.wait_time_s, W1.pct
    HAVING SUM(W2.pct) - W1.pct < 99; -- percentage threshold

  7. Do you see blocking in your system?
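
One way to check this is a point-in-time snapshot of blocked sessions; this is a sketch using `sys.dm_exec_requests`, so run it while the application is actually slow:

```sql
-- Sessions that are currently blocked, who is blocking them, and what they are running
SELECT
    r.session_id,
    r.blocking_session_id,
    r.wait_type,
    r.wait_time AS 'Wait Time (ms)',
    r.command,
    t.text AS 'Query Text'
FROM sys.dm_exec_requests AS r WITH (NOLOCK)
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

If this returns rows repeatedly, chase the `blocking_session_id` at the head of the chain rather than the blocked sessions themselves.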
