We’re running into some problems with blocking transactions on our repository during publishing. The culprit seems to be a query that is hitting rxsiteitems, and when I took a look at the table, I realized that it has 51,000+ rows in it.
Is there a recommended process for pruning this table to a manageable size? If I remove old entries, I may trigger some unneeded incremental publishing, but letting it grow indefinitely is clearly a bad idea.
The site item table is designed to record the last published items. Its size is bounded by the number of sites and the number of items in each site, so it should not grow indefinitely.
My experience with rxsiteitems came from problems with the publishing process hanging. Apparently, on each publish run an attempt is made to republish previously failed items. We now housekeep failed items, and it has had no detrimental effect.
If you run this query against the db:
select count(*) as num_items, pubstatus
from rxsiteitems
group by pubstatus
Do you have a significant number of failed items? Obviously, if you do go down the road of deleting failed items, make a backup of rxsiteitems first.
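For what it's worth, a rough sketch of that backup-then-delete housekeeping might look like the following. The 'failure' literal is an assumption, as is the backup table name: substitute whatever pubstatus value the grouping query above actually reports for failed items in your repository, and check your database's syntax for copying a table (the SELECT INTO form below is SQL Server style, for example).

-- Take a backup copy of the table before touching it
-- (rxsiteitems_backup is an illustrative name)
select * into rxsiteitems_backup from rxsiteitems;

-- Remove the failed entries. 'failure' is a placeholder:
-- use the pubstatus value your own data shows for failed items.
delete from rxsiteitems where pubstatus = 'failure';

Run the grouping query again afterwards to confirm only the failed rows were removed.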