Vault SQL Server at 100% Processor



shenderson
Posts: 14
Joined: Wed Jun 23, 2004 9:36 am


Post by shenderson » Fri Jul 16, 2004 10:10 am

We are seeing 100% processor usage spikes on our Vault Server for long periods of time. We are on Vault 2.0.3 but I just upgraded to 2.0.4 about 10 minutes ago so I hope the problem goes away. We see this happen at least once a week.

Our Vault server is a dual-processor 1.2 GHz machine with 2 GB of RAM and dual 36 GB RAID 0+1 drives. It's a Compaq DL360 G2.

We ran a trace and captured the following on the stored procedure that is the culprit:

788,620 reads, 360,733 ms:
exec dbo.spgetlockedfilechangeswithsecurity @txid = 56827, @userid = 11, @repid = 1, @sessionid = N'zn5yp3r5x1l1n3jifsg4a145', @lastsecuritychange = 'Jul 1 2004 9:44:58:380PM', @refreshlist = @P1 output

1,022,078 reads, 360,870 ms:
exec dbo.spgetlockedfilechangeswithsecurity @txid = 56827, @userid = 3, @repid = 1, @sessionid = N'kf1zfv55q5hrhs45ujoars55', @lastsecuritychange = 'Jul 1 2004 9:51:44:253PM', @refreshlist = @P1 output

Any help would be appreciated!

jclausius
Posts: 3706
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear

Post by jclausius » Fri Jul 16, 2004 11:57 am

Vault 2.0.4 will not solve the problem. When folder security is enabled, the user's rights must be applied across the tree every time that user does a refresh.

Using 100% of the processor for short periods of time (2-4 seconds) could be normal. Do the spikes last longer than that?

A couple of changes that could eliminate the spikes (though they affect how you use Vault):
1) Reduce the number of checkouts. Do your users check out only what they need, or do people check out entire branches? Users should check out only the files they intend to modify.

2) Turn off folder security. If you do not require folder security within the repository, turn it off.
Jeff Clausius
SourceGear

shenderson
Posts: 14
Joined: Wed Jun 23, 2004 9:36 am

Post by shenderson » Fri Jul 16, 2004 12:14 pm

Hi Jeff,

Thanks for your quick reply!

It looks like this is happening for much longer than 2-4 seconds, more like 5-10 minutes. That's long enough for my fellow developers to start complaining that they are getting Vault access errors, and check-ins/check-outs take around five minutes because the server is too busy. Most developers who hit the problem kill Visual Studio because it appears locked up, then start over.

We do have folder security with groups enabled; that is required by our internal processes and SOX compliance, so I can't get away from it.

Our developers typically check out only what they need, so it shouldn't be more than a few files at a time. I'll check with them to be sure, but I'm fairly confident that's not the problem.

jclausius
Posts: 3706
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear

Post by jclausius » Fri Jul 16, 2004 12:38 pm

Hmmm. There was a general problem in that stored procedure, but it was fixed in Vault 2.0.2 or 2.0.3. Have you upgraded to Vault 2.0.4 yet? If so, does the problem persist?


One other thing comes to mind: index rebuilding and statistics updates. An optimized tbltreerevisionfolders table is extremely important for this stored procedure.

We had the first user who reported performance problems with this stored procedure manually rebuild all indexes on the table (and then, for safety's sake, update the statistics). If memory serves, his timings on the stored procedure went from something like 7 minutes to 11 seconds. After the 2.0.x fix, the 11 seconds dropped to around 2 seconds.
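For reference, a rebuild along those lines might look like this in Query Analyzer. This is only a sketch against SQL Server 2000 syntax; the table name tbltreerevisionfolders is taken from the post above, and sgvault is the default Vault database name, so verify both against your installation before running anything:

```sql
USE sgvault
GO

-- Rebuild all indexes on the table (uses the existing fill factor).
DBCC DBREINDEX ('tbltreerevisionfolders')
GO

-- For safety's sake, refresh the optimizer statistics as well.
UPDATE STATISTICS tbltreerevisionfolders WITH FULLSCAN
GO
```

DBCC DBREINDEX takes locks on the table while it runs, so this is best done during off-hours.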
Jeff Clausius
SourceGear

shenderson
Posts: 14
Joined: Wed Jun 23, 2004 9:36 am
Contact:

Post by shenderson » Fri Jul 16, 2004 12:40 pm

OK, thanks! We'll try this and get back to you.

lbauer
Posts: 9736
Joined: Tue Dec 16, 2003 1:25 pm
Location: SourceGear

Post by lbauer » Fri Jul 16, 2004 3:08 pm

One of our users also found that running defrag on the server hard drive improved performance significantly.

These are the steps he took (I suggest backing up the Vault database to another machine first):

1) Turned off the SQL Server and the ASP.NET service so the files aren’t locked.
2) Ran defrag on the drive.
3) Turned on the SQL Server and ASP.NET service.
4) Rebuilt all indexes in the sgvault database.
5) Ran sp_updatestats on the sgvault database.
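Steps 4 and 5 can be scripted. A minimal sketch, assuming SQL Server 2000 and the default sgvault database name (sp_MSforeachtable is an undocumented but long-standing helper that ships with SQL Server; test on a backup first):

```sql
USE sgvault
GO

-- Step 4: rebuild all indexes on every user table in the database.
-- The '?' placeholder is replaced with each table name in turn.
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')'
GO

-- Step 5: refresh optimizer statistics for the whole database.
EXEC sp_updatestats
GO
```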
Linda Bauer
SourceGear
Technical Support Manager

TheJet
Posts: 4
Joined: Mon Jul 12, 2004 10:15 am

Post by TheJet » Fri Jul 16, 2004 3:54 pm

From a SQL Server standpoint, you may also want to ensure that the individual tables haven't become badly fragmented [i.e. spread across too many non-contiguous pages]. To view that information:

DBCC SHOWCONTIG ('<table>')

DBCC INDEXDEFRAG (<database>, '<table>', '<index>') [run against the clustered index, this effectively defrags the table data]

DBCC DBREINDEX ('<table>') [rebuilds all indexes on the table]

If the tables have high fragmentation levels, that alone can cause severe performance problems, and it can't be fixed by defragmenting the file system. Index defragmentation is also much less painful than defragmenting the actual filesystem, so it makes a reasonable first step.
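As a concrete example, checking and then defragmenting the table named earlier in this thread might look like the following. This is a sketch for SQL Server 2000; the index name is hypothetical (look up the real clustered index in sysindexes before running):

```sql
-- Report fragmentation. In the output, a Scan Density well below 100%
-- or a high Logical Scan Fragmentation suggests the table's pages
-- are scattered rather than contiguous.
DBCC SHOWCONTIG ('tbltreerevisionfolders')

-- Defragment in place. INDEXDEFRAG is an online operation, so it can
-- run while users are connected. 'PK_tbltreerevisionfolders' is a
-- hypothetical index name; substitute the table's actual clustered index.
DBCC INDEXDEFRAG (sgvault, 'tbltreerevisionfolders', 'PK_tbltreerevisionfolders')
```

If SHOWCONTIG still reports heavy fragmentation afterward, a full DBCC DBREINDEX (which rebuilds the indexes from scratch, but takes locks) is the heavier-weight follow-up.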

TheJet

jclausius
Posts: 3706
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear

Post by jclausius » Fri Jul 16, 2004 4:06 pm

Thanks, TheJet.

Just to clarify Linda's post: her step 4 is the same operation TheJet describes (INDEXDEFRAG).

Note: past history suggests users with standard (non-RAID) drives will have better success with INDEXDEFRAG after the drives storing the Vault database have been defragmented.
Jeff Clausius
SourceGear
