Slow performance during check-in. Ending Transaction...

This forum is now locked, since Gold Support is no longer offered.

Moderator: SourceGear

Locked
dking513
Posts: 48
Joined: Tue Apr 20, 2004 3:23 pm

Slow performance during check-in. Ending Transaction...

Post by dking513 » Thu Jul 07, 2005 6:54 pm

Hello,
I am experiencing slow performance (~20 seconds) when checking in a file.
Most of that time is spent while the status message reads "Ending Transaction...".

I have looked at similar posts on this forum and have already tried rebuilding the indexes and updating the statistics.

Using SQL Profiler, I observed that the bulk of the time is spent in the stored procedure spsavecurrenttreefolders, which takes 17 seconds to execute.

I have a 2 GB Vault database with about 4,000 files, and the nature of my application means there are about as many folders as well. Maybe this is why spsavecurrenttreefolders is so slow. This is running on a dual-Xeon Compaq DL380 with 2 GB of RAM.
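One way to check whether index fragmentation is part of the problem (a sketch assuming the SQL Server 2000-era tooling in use here; a low "Scan Density" or high "Logical Scan Fragmentation" in the output would point to a rebuild being overdue):

```sql
-- Report fragmentation for every table and index in the Vault database.
-- With no table argument, DBCC SHOWCONTIG scans all tables in the
-- current database; TABLERESULTS returns the figures as a result set.
USE sgvault
GO

DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES
GO
```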

Any ideas how I can best debug/improve this issue?

Thanks,
Dave

jclausius
Posts: 3702
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear
Contact:

Post by jclausius » Thu Jul 07, 2005 8:22 pm

- What is the version of the Vault Server?

- Also, have you updated the table statistics in the last month or so?
Jeff Clausius
SourceGear

dking513
Posts: 48
Joined: Tue Apr 20, 2004 3:23 pm

Post by dking513 » Fri Jul 08, 2005 5:59 am

Vault version is 3.0.1.
Updating the statistics on the tables does not seem to have any effect.

jclausius
Posts: 3702
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear
Contact:

Post by jclausius » Fri Jul 08, 2005 6:21 am

Dave:

4,000 folders is not necessarily a big tree. Give me some time to look into this.

Thanks.
Jeff Clausius
SourceGear

dking513
Posts: 48
Joined: Tue Apr 20, 2004 3:23 pm

My mistake...

Post by dking513 » Fri Jul 08, 2005 6:35 am

Jeff,
My mistake!
I was using your SQL script to update the table/index information:

USE sgvault
GO

DECLARE c CURSOR FOR
    SELECT name FROM sysobjects WHERE type = N'U' ORDER BY name
OPEN c

DECLARE @@name [nvarchar](64),
        @@stmt [nvarchar](256)

SET @@stmt = N'--'
FETCH FROM c INTO @@name
WHILE (@@FETCH_STATUS = 0)
BEGIN
    -- statement to completely rebuild the index
    SET @@stmt = N'DBCC DBREINDEX(''sgvault.dbo.' + @@name + ''', '''', 0)'

    -- statement to update table's statistics
    SET @@stmt = N'UPDATE STATISTICS ' + @@name

    -- just print the name for now.
    PRINT N'Table Name: ' + @@name

    -- execute the command assigned to @@stmt
    EXEC sp_executesql @@stmt
    FETCH NEXT FROM c INTO @@name
END

CLOSE c
DEALLOCATE c

But I didn't look closely enough at the code to realize that, by uncommenting both the index and statistics lines, I was really just overwriting @@stmt and ONLY updating the stats.

I have now properly rebuilt the indexes, and performance is much better - the stored procedure that was taking 17 seconds is now running in a second or less.
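For anyone hitting the same thing later, a sketch of the corrected loop body (same variables as the script above, with only the reindex statement left active):

```sql
-- Corrected loop body: only one SET is live, so @@stmt actually
-- holds the DBCC DBREINDEX command when it is executed.
SET @@stmt = N'DBCC DBREINDEX(''sgvault.dbo.' + @@name + ''', '''', 0)'

-- SET @@stmt = N'UPDATE STATISTICS ' + @@name   -- keep commented out on a reindex pass

PRINT N'Table Name: ' + @@name
EXEC sp_executesql @@stmt
```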

Sorry to cry wolf.

Thanks,
Dave

jclausius
Posts: 3702
Joined: Tue Dec 16, 2003 1:17 pm
Location: SourceGear
Contact:

Post by jclausius » Fri Jul 08, 2005 6:49 am

Dave:

Thanks for the update. The original script is really meant to run one statement or the other, not both; I left both in as a reference. Sorry for any confusion.

One thing I was going to ask about was fragmentation on the disk drive hosting the Vault database. As the database grows with each transaction, SQL Server periodically has to grow the database and log files, and on a badly fragmented disk this extension of the .mdf/.ldf files can slow operations down.
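One way to reduce how often those growth pauses happen (a sketch only; the logical file name and sizes below are examples, not the real values) is to pre-size the data file and use a fixed growth increment:

```sql
USE master
GO

-- Pre-grow the Vault data file so SQL Server extends it less often.
-- 'sgvault_data' and the sizes are illustrative; query sgvault..sysfiles
-- for the actual logical file names before running anything like this.
ALTER DATABASE sgvault
MODIFY FILE (NAME = sgvault_data, SIZE = 4096MB, FILEGROWTH = 256MB)
GO
```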

In any case, I'm glad you're "up to speed".
Jeff Clausius
SourceGear

Locked