Removing 90% of Repository: performance issues
Moderator: SourceGear
We are going to remove 90% of the objects, and maybe 95% of the disk used, in one repository. After that is done and the objects have been obliterated, is there anything else required to restore the performance of the database to a state as if those deleted objects had never been added?
This includes performance of this particular repository and also the other repositories in the same Vault instance.
Is it cleaner to export what is left, delete and re-create the repository and import the remainder back again? If so what steps may be required to tidy up cache and SQL Server?
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
How large was your database prior to the delete and obliterates?
Have we discussed the performance issue with you previously? If so, can you point me to the thread where that discussion occurred, or the ticket number if you contacted us directly about it?
Exporting/Importing does not do anything to improve performance. Also, if you've performed a large amount of obliterating, then you probably won't be able to make an export work. Export/Import relies on history being intact, and Obliterate removes history.
If you've already obliterated, then you can reclaim your database space by using DBCC ShrinkDatabase. Information on that SQL command can be found here: http://msdn.microsoft.com/en-us/library/ms190488.aspx.
Are you performing regularly scheduled database maintenance using the recommendations we posted in this KB article: http://support.sourcegear.com/viewtopic.php?t=2924?
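As a rough sketch of the shrink step mentioned above (assuming the Vault database is named sgvault, as it appears in the profiler trace later in this thread; adjust the name and target free-space percentage to your environment). Note that shrinking a database fragments its indexes, so an index rebuild afterwards is advisable:

```sql
-- Reclaim free space after the obliterate, leaving 10% free space in the files.
DBCC SHRINKDATABASE (sgvault, 10);

-- Shrinking fragments indexes; rebuilding them restores scan performance.
-- sp_MSforeachtable is an undocumented convenience procedure and could be
-- replaced by an explicit cursor over sys.tables if preferred.
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD';
```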
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Beth wrote: How large was your database prior to the delete and obliterates?
18 GB. We have not yet deleted objects. It will happen next week. Expect about 3 GB to be freed. I'm planning for it now.
Beth wrote: Have we discussed the performance issue with you previously? If so, can you point me to the thread where that discussion occurred, or the ticket number if you contacted us directly about it?
http://support.sourcegear.com/viewtopic ... 292#p62292
Beth wrote: Are you performing regularly scheduled database maintenance using the recommendations we posted in this KB article: http://support.sourcegear.com/viewtopic.php?t=2924?
Yes we are.
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
On what exact actions are you getting the performance issues? Is it possibly just when clients start up, or is it during Gets or check ins?
Which version of Vault are you using?
Are you using Folder Security?
If your repository tree is very large (many, many files and folders), then usually just a delete without an obliterate is enough to speed that up.
It's not often that I see just database size affect performance. I normally suggest exhausting other avenues first.
If you think you might want to export some parts out, you will want to do that before obliterating. Obliterating usually makes the Export/Import unusable.
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Beth wrote: On what exact actions are you getting the performance issues? Is it possibly just when clients start up, or is it during Gets or check ins?
Start up, Gets and Check Ins. The latter has gone from a few seconds to 30 seconds.
Beth wrote: Which version of Vault are you using?
As per signature: 5.0.3.
Beth wrote: Are you using Folder Security?
No, we are not.
Beth wrote: If your repository tree is very large (many, many files and folders), then usually just a delete without an obliterate is enough to speed that up.
OK. I'll not obliterate initially and monitor the performance, though the objects being removed are mostly large binary objects, so a delete will probably not make a difference in performance.
Beth wrote: It's not often that I see just database size affect performance. I normally suggest exhausting other avenues first.
We are pursuing any ideas. But note that the performance difference occurred coincident with the addition of 3 GB of mostly binary data. It's not appropriate IMO, so we will remove them. How many examples do you have of a large quantity of binary objects?
Beth wrote: If you think you might want to export some parts out, you will want to do that before obliterating. Obliterating usually makes the Export/Import unusable.
Given your assertion that database size should not be important, we would prefer to keep all the repositories in the one database. It simplifies backup and upgrades. So we would try obliterate before moving to another database. I wonder why Export/Import has such a limitation? It seems to me it should be more reliable than that.
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
1) To help with starting up the client, go to Vault Tools - Options - Network Settings and turn off the option "Request Database Delta on Repository Cache Miss."
2) Have the users try changing the setting on 'Use Expect: 100-Continue headers'. Let me know if there is any difference with that setting on or off. That setting is also in Vault Tools - Options - Network Settings.
3) Have the users go to Vault Tools - Options - Local Files. Is the option 'Detect modified files using CRCs' checked? If so, have users try unchecking it and checking their performance.
4) On your database maintenance, are you performing all of the items listed in the maintenance article or just some of them?
5) Do you defrag the hard drive on the SQL Server?
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Beth wrote: 1) To help with starting up the client, go to Vault Tools - Options - Network Settings and turn off the option "Request Database Delta on Repository Cache Miss."
1) Already in progress. It's taking time for everyone to have it set, as it's holiday time in Oz. Without this set, ALL users are affected: the whole server uses 100% of one of the CPUs and no further work can be done until it finishes.
2) Is this setting going to affect all users, like (1) does? Please be certain. If it's only the client, I can run the test in parallel; otherwise I have to wait for (1) to fully take effect.
3) Ditto (2).
4) All database maintenance done.
5) Already defragged, but the wrong drive. Defrag of the correct drive is scheduled for the earliest opportunity: Saturday.
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
Items 1-3 should only affect each individual user's client. Are they all logging in at exactly the same time? Is it just during that first login that the server CPU goes to 100%?
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Beth wrote: Items 1-3 should only affect each individual user's client. Are they all logging in at exactly the same time? Is it just during that first login that the server CPU goes to 100%?
Well, (1) clearly affects everyone, because the one user takes up all the CPU and so anything else requested grinds to a halt. It's not about concurrent logins.
So what may (2) and (3) affect? (2) sounds like it's about server communication? Exactly what is it doing?
(3) sounds like it's local only. Is that true? That is, no server comms involved?
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
Also Beth, I was looking through the SQL Server profiling that I currently have constantly running (a small drain on resources; I profile only stored procedure calls, logins and logouts), and this set of statements is repeated so many times that it may be the majority of the profile over 24 hours:
Can you explain what the client would have been doing to cause this?
Code:
Audit Login -- network protocol: LPC
set quoted_identifier on
set arithabort off
set numeric_roundabort off
set ansi_warnings on
set ansi_padding on
set ansi_nulls on
set concat_null_yields_null on
set cursor_close_on_commit off
set implicit_transactions off
set language us_english
set dateformat mdy
set datefirst 7
set transaction isolation level read committed
6 sgvault DEVSRV 0X2000002838F4010000000000 58 2011-05-03 15:55:42.733
RPC:Starting exec dbo.spgetdeltachainforfullfile @objverid=1980806 6 sgvault spgetdeltachainforfullfile DEVSRV 0X00000000010000003C00640062006F002E0073007000670065007400640065006C007400610063006800610069006E0066006F007200660075006C006C00660069006C0065003200000014000C007F1062006900670069006E007400120040006F0062006A007600650072006900640086391E0000000000 58 2011-05-03 15:55:42.733
SP:Starting exec dbo.spgetdeltachainforfullfile @objverid=1980806 6 sgvault 517628937 spgetdeltachainforfullfile DEVSRV 58 2011-05-03 15:55:42.733
RPC:Starting exec sp_reset_connection 6 sgvault sp_reset_connection DEVSRV 0X00000000000000002600730070005F00720065007300650074005F0063006F006E006E0065006300740069006F006E00 58 2011-05-03 15:55:42.737
Audit Logout 6 sgvault DEVSRV 58 2011-05-03 15:55:42.733
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
Details about each of the network settings can be found here: http://download.sourcegear.com/misc/vau ... tings.html.
I'll look into the second post you made and post back.
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Do you mean the set statements or the spgetdeltachainforfullfile? The spgetdeltachainforfullfile means someone is getting a file. If you have a build server that is continually checking and getting files, that could be the cause of all the Gets, or a user was actively Getting at the time you checked the server.
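If you want to quantify how much of the load comes from that procedure without keeping Profiler running, a query against the plan-cache statistics can give a rough count. This is a sketch only: sys.dm_exec_query_stats exists from SQL Server 2005 on, and its counters reset whenever a plan is evicted from cache, so treat the numbers as approximate:

```sql
-- Approximate execution count and cumulative elapsed time for
-- spgetdeltachainforfullfile, read from the plan cache instead of a trace.
SELECT st.text,
       qs.execution_count,
       qs.total_elapsed_time / 1000 AS total_elapsed_ms
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
WHERE st.text LIKE '%spgetdeltachainforfullfile%'
ORDER BY qs.total_elapsed_time DESC;
```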
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
So a GetLatestVersion would have one set of these statements for each file retrieved?
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3
Re: Removing 90% of Repository: performance issues
I think that's correct for the part that calls spgetdeltachainforfullfile. A get from history would also call that. I'm not sure on the set commands.
Beth Kieler
SourceGear Technical Support
Re: Removing 90% of Repository: performance issues
Slow login seems to be handled now.
We haven't yet removed the binary objects from the database.
I'm focusing on Check Ins now.
I've been investigating the Vault Plugin, as noted in another post. It does not seem to be taking an inordinate amount of time. In fact, most of the Check In time occurs before the Plugin is called, which happens when the Vault Client "Ending Transaction" message is removed. The Plugin takes a further 1 or 2 seconds, mostly on the .refresh that occurs as part of .SetActiveRepositoryID.
I've set up a Test server on a different machine using a backup from 28/1/11. Its performance is the same as the live system. But interestingly, there is different performance for different repositories within each database, while the SAME repository on different databases has the SAME performance.
So firstly, this says that the database size is irrelevant! The test server is 4 GB smaller.
Secondly, the one repository on both servers, "lansa", takes 8 secs to check in a single file on both databases. On the "lansatest" repository, it takes less than 2 seconds. (Both databases and both repositories use the Vault Plugin, which again confirms it's not the Vault Plugin.) On all other repositories it takes less than 2 seconds.
"lansa" has 17422 versions, 86832 files and 3920 folders.
"lansatest" has 26794 versions, 95695 files and 3280 folders.
So they are a similar size.
I would like to know what is happening during those 8 seconds. Given it's occurring on the Test machine too, we can do whatever you want. What tracing or profiling do you need to find out what is happening?
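One way to see where the 8 seconds go, if the SQL Server is 2008 or later (sys.dm_exec_procedure_stats is not available in 2005), is a before/after snapshot of cumulative procedure timings around a single slow check in. A sketch, with the database name sgvault taken from the trace earlier in this thread:

```sql
-- Snapshot cumulative procedure stats before the check in.
SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
       ps.execution_count,
       ps.total_elapsed_time / 1000 AS total_elapsed_ms
INTO #before
FROM sys.dm_exec_procedure_stats ps
WHERE DB_NAME(ps.database_id) = 'sgvault';

-- ... perform the single-file check in from the client here ...

-- Re-query and diff: procedures whose elapsed time grew account for the delay.
SELECT a.proc_name,
       a.execution_count - b.execution_count AS calls,
       a.total_elapsed_ms - b.total_elapsed_ms AS elapsed_ms
FROM (SELECT OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
             ps.execution_count,
             ps.total_elapsed_time / 1000 AS total_elapsed_ms
      FROM sys.dm_exec_procedure_stats ps
      WHERE DB_NAME(ps.database_id) = 'sgvault') a
JOIN #before b ON b.proc_name = a.proc_name
WHERE a.total_elapsed_ms > b.total_elapsed_ms
ORDER BY elapsed_ms DESC;
```

Run it on an otherwise idle server (such as the Test machine) so the diff reflects only the one check in.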
regards
Rob Goodridge
LANSA Pty Ltd
Software Made Simple
Vault 5.0.3