Folder Security enabled = Very slow
Moderator: SourceGear
Vault Server 3.0.7
Our users report that Vault Server operations have become very slow, even on the weekend.
Probably because of this slowness, corruption has occurred and the users may have a wrong file state in the local _sgvault folder. One user experienced a wrong file status (all files appeared as checked in, but he himself had already checked them out). This time, disabling folder security didn't solve the issue like it did 2 weeks ago.
We then deleted the C:\Documents and Settings\(UserLogin)\Application Data\SourceGear\Vault_1\Client\(Repository GUID)\(UserLogin) folder, and this solved the issue (folder security is still off).
While folder security was turned off, all users noticed that Vault Server responsiveness was much faster.
Once the check-out/check-in issue was solved for the user above, I set folder security back to ON. Within minutes, users reported that Vault operations were slow again.
Question: the performance hit is significant when folder security is on. Is there a technical reason? Hardware is not an issue: our Vault server runs on an up-to-date Windows 2003 server and has plenty of hardware resources.
A couple of issues could be at play:
A) Database fragmentation / statistics. We recently fixed a database where the users' practice included checking out entire sub-folders and then checking the sub-folders back in.
In that case, it turned out their tblcheckoutlists and tblcheckoutlistitems tables were severely fragmented. I asked them to run some SQL queries which rebuilt those indices, and that resolved their problem.
If the checkout lists change heavily, you may need to take a look at them on a weekly basis.
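If you want to confirm fragmentation before doing anything, here is a minimal sketch, assuming SQL Server 2000/2005 where DBCC SHOWCONTIG is available (the table names are the ones mentioned above):

```sql
USE sgvault
GO
-- Report fragmentation for the checkout-list tables.
-- A Scan Density well below 100%, or a high Logical Scan
-- Fragmentation percentage, suggests the indices are due
-- for a statistics update or a rebuild.
DBCC SHOWCONTIG ('tblcheckoutlists')
GO
DBCC SHOWCONTIG ('tblcheckoutlistitems')
GO
```

Running this before and after the maintenance steps below also gives you a rough measure of how much the rebuild helped.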
One simple thing to try - update the statistics:
UPDATE STATISTICS sgvault.dbo.tblcheckoutlists
GO
UPDATE STATISTICS sgvault.dbo.tblcheckoutlistitems
GO
UPDATE STATISTICS sgvault.dbo.tbltreerevisionfolders
GO
UPDATE STATISTICS sgvault.dbo.tbltreerevisionfolderdeltas
GO
And see if that improves things.
In a more severe case, you may need to rebuild the indices while no one is accessing the Vault server:
USE sgvault
GO
DBCC DBREINDEX('sgvault.dbo.tblcheckoutlists', '', 0)
GO
DBCC DBREINDEX('sgvault.dbo.tblcheckoutlistitems', '', 0)
GO
DBCC DBREINDEX('sgvault.dbo.tbltreerevisionfolders', '', 0)
GO
DBCC DBREINDEX('sgvault.dbo.tbltreerevisionfolderdeltas', '', 0)
GO
B) Recompile issue - there was a problem with some stored procedures that recompiled so often it caused concurrency issues when multiple people accessed the same stored procedure. While some of these were fixed in Vault 3.0.7, not all were addressed. Vault 3.1 Beta 2 addresses these issues.
To find out whether this is the cause, you could run SQL Profiler against the SQL Server. The trace template should capture the SP:Starting and SP:Recompile events. If recompiles are present, at least these are addressed in Vault 3.1.
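If you'd rather not run the Profiler GUI against a production box, a server-side trace can capture the same events. This is only a sketch, assuming the SQL Server 2000/2005 sp_trace_* API; the output file path is a placeholder (SQL Server appends the .trc extension itself):

```sql
-- Event 42 = SP:Starting, event 37 = SP:Recompile.
-- Column 34 = ObjectName, column 12 = SPID.
DECLARE @traceid int
EXEC sp_trace_create @traceid OUTPUT, 0, N'C:\vault_recompiles'
EXEC sp_trace_setevent @traceid, 42, 34, 1
EXEC sp_trace_setevent @traceid, 42, 12, 1
EXEC sp_trace_setevent @traceid, 37, 34, 1
EXEC sp_trace_setevent @traceid, 37, 12, 1
EXEC sp_trace_setstatus @traceid, 1  -- start the trace
```

Stop the trace with sp_trace_setstatus @traceid, 0 when done, then open the .trc file in Profiler. If SP:Recompile rows appear alongside the Vault stored procedures, recompilation is likely contributing to the slowness.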
Tri wrote: "Probably because of this slowness, corruption has occurred and the users may have a wrong file state in the local _sgvault folder."
Not in the server. With transaction handling in place, a client cannot corrupt the server's tree.
Jeff Clausius
SourceGear
I'll definitely review all the table fragmentation and index statistics tonight. Currently, it takes 90 seconds just to refresh a folder containing 10 files.
Do you think that DB optimization is the main cause of the server slowness? (And not the Folder Security enabled as I initially thought)
jclausius wrote: "Not in the server. With transaction handling in place, a client cannot corrupt the server's tree."
But could this occur on the client side, e.g. storing a corrupted file state in the _sgvault folder? It sounds possible, because deleting _sgvault solved an issue yesterday.
Tri wrote: "Do you think that DB optimization is the main cause of the server slowness? (And not the Folder Security enabled as I initially thought)"
Possibly. When a repository is configured with folder security, the server must generate a "folder" view of the tree to decide how to filter / allow actions based on a user's security rights. If the tables I mentioned change drastically over time, the statistics may become stale. These table statistics are important because they help SQL Server determine lookup heuristics when building the "folder" view.
Tri wrote: "But could this occur on the client side? Like storing a corrupted file state in the _sgvault folder? It sounds possible, because deleting _sgvault solved an issue yesterday."
I'm not certain, but I'll log a request to look into this possibility.
Jeff Clausius
SourceGear
Do the deleted files have any influence?
For example, if the total size of the deleted files represents about 10% of the repository size, would obliterating those files help?
Also, does the number of files checked out increase the slowness?
Last edited by Tri on Tue Jul 05, 2005 9:41 am, edited 1 time in total.
I'm not sure it is linear, but a correlation does exist with the total number of items locked (checked out). Note that this relationship also exists when folder security is disabled.
Now, you've mentioned that when security is disabled, checkouts proceed normally. So I don't think checkout lists are necessarily the problem.
About the only difference in the checkout routines when security is enabled is that the resulting checkout list is "filtered", so the items sent to the client do not include any repository info where the user was denied folder access.
Jeff Clausius
SourceGear
Hi Jeff,
We have completed the DB maintenance you suggested. In the process, we also archived some obsolete folders and deleted them (about 13,000 files, 300 MB in size).
It seems to have solved our slowness issue. In the rush, I didn't have time to precisely measure the speed improvement from each factor. Roughly, I'd say the DB optimization reduced the checkout time (1 file) from 120 seconds to 30 seconds, and the later cleanup of obsolete files further reduced this to 5 seconds.
Do you think that deleting 13,000 files could contribute significantly to accelerating Vault operations?