Wednesday, April 13, 2011

Slow MS SQL 2000, lots of timeouts. How can I improve performance?

I found this script on SQL Authority:

USE MyDatabase
GO
EXEC sp_MSforeachtable @command1="print '?' DBCC DBREINDEX ('?', ' ', 80)"
GO
EXEC sp_updatestats
GO

It has reduced my INSERT failure rate from 100% down to 50%. It is far more effective than the reindexing Maintenance Plan.

Should I also be reindexing master and tempdb? How often? (I write to this database 24 hrs a day) Any cautions that I should be aware of?

From stackoverflow
  • This could be anything at all. Is the system CPU bound? IO bound? Is the disk too fragmented? Is the system paging too much? Is the network overloaded?

    Have you tuned the indexes? I don't recall if there was an index tuning wizard in 2000, but at the least, you could run the profiler to create a workload that could be used by the SQL Server 2005 index tuning wizard.

    GateKiller : I would seriously think about upping the ram to 4GB at a minimum. Also, I would change the fillfactor to something like 95-98%. Remember, whenever you do an update, you do a read too.
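
    As a sketch of the fill factor suggestion (the table name `MyTable` is hypothetical), DBCC DBREINDEX takes the fill factor as its third argument, and an empty index name rebuilds every index on the table:

    ```sql
    -- Rebuild all indexes on a hypothetical table with a 95% fill factor,
    -- leaving 5% free space per page to absorb inserts without page splits.
    USE MyDatabase
    GO
    DBCC DBREINDEX ('MyTable', '', 95)
    GO
    ```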
  • Check out your query plans also. Some indexes might not be getting used or the SQL could be wholly inefficient.

    What table maintenance do you have?

    Is all the data in the tables relevant to today's processing?

    Can you warehouse off some data?

    What is your locking like? Are you locking the whole table?

    EDIT: The SQL Profiler shows all interactions with the SQL Server. It should be a DBA's lifeblood.
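
    To start looking at both plans and locking on SQL Server 2000, `SET SHOWPLAN_TEXT` shows the estimated plan without running the query, and `sp_lock` lists current locks (the table and column names below are placeholders):

    ```sql
    -- Show the estimated plan for a query without executing it
    SET SHOWPLAN_TEXT ON
    GO
    SELECT * FROM MyTable WHERE SomeColumn = 42
    GO
    SET SHOWPLAN_TEXT OFF
    GO

    -- List current locks; look for TAB (table-level) locks on hot tables
    EXEC sp_lock
    GO
    ```

    A table scan in the plan output, or persistent TAB locks during inserts, would point at exactly the problems described above.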

  • Thanks for all of the help. I'm not there yet, but getting better.

    I can't do much about hardware constraints. All available RAM is already allocated to SQL Server, and fill factor is set at 95. Using Profiler, an hour's trace fed the index tuning wizard, which suggested changes for an estimated 27% efficiency gain.

    As a result, I doubled the number of successful INSERTs. Now only 1 in 4 is failing. Tracing again now and will tune afterwards to see if it gets better.

    Don't understand locking yet.

    For those who maintain SQL Server as a profession, am I on the right track?

  • RAID 5 on your NAS? That's your killer.

    An INSERT is logged: it writes to the .LDF (log) file. This file is 100% write (well, close enough).

    This heavily write-biased workload generates a lot of extra writes per disk in RAID 5.

    I have an article in the works (link to add later): in 100% write situations, RAID 5 writes 4 times as much per disk as RAID 10.

    Solutions

    You need to split your data and log files for your database at least.

    Edit: Clarified this line: the log files need to go to RAID 1 or RAID 10 drives. It's less important for data (.MDF) files. Log files are 100% write, so they benefit most from RAID 1 or RAID 10.
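
    On SQL Server 2000, the usual way to relocate a user database's log file is detach/attach (the paths and drive letters below are hypothetical; move the file at the OS level between the two steps):

    ```sql
    -- Detach, copy the .LDF to the RAID 1/10 volume, then reattach
    EXEC sp_detach_db 'MyDatabase'
    GO
    -- (move MyDatabase_log.ldf to L:\Logs\ here)
    EXEC sp_attach_db 'MyDatabase',
        'D:\Data\MyDatabase.mdf',
        'L:\Logs\MyDatabase_log.ldf'
    GO
    ```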

    There are other potential issues too, such as a fragmented file system or too many virtual log file (VLF) segments (depending on how your database has grown), but I'd say your main issue is RAID 5.

    For a 3TB DB, I'd also stuff as much RAM as possible in (32GB if Windows Advanced/Enterprise) and set PAE/AWE etc. This will mitigate some disk issues but only for data caching.
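
    Enabling AWE on SQL Server 2000 looks like the sketch below (it assumes Enterprise Edition, /PAE in boot.ini, and the Lock Pages in Memory right granted to the service account; the memory cap value is illustrative):

    ```sql
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Takes effect after an instance restart
    EXEC sp_configure 'awe enabled', 1
    RECONFIGURE
    GO
    -- With AWE, cap SQL Server's memory so the OS keeps some for itself (value in MB)
    EXEC sp_configure 'max server memory', 28672
    RECONFIGURE
    GO
    ```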

    Fill factor 85 or 90 is the usual rule of thumb. If your inserts are wide and not strictly monotonic (e.g. an int IDENTITY column), then you'll have lots of page splits with anything higher.
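
    To check whether page splits are actually hurting, DBCC SHOWCONTIG reports fragmentation and average page density (the table name is hypothetical):

    ```sql
    -- Scan density near 100% and low logical fragmentation are healthy;
    -- low avg. page density after heavy inserts points at page splits.
    DBCC SHOWCONTIG ('MyTable') WITH ALL_INDEXES
    GO
    ```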

    I'm not the only one who does not like RAID 5: BAARF

    Edit again:

    Look for "Write-Ahead Logging (WAL) Protocol" in this SQL Server 2000 article. It's still relevant: it explains why the log file is important.

    I can't find my article on how RAID 5 suffers compared to RAID 10 under 100% write loads.

    Finally, SQL Server does I/O in 64k chunks, so format NTFS with 64k clusters.

    gbn : Clarified + I forgot to get links from work.. will do later!
