5.7 publishing error

hi

Over the last month our full publish has gone from taking 1 hour to 4 hours, or simply not working at all; it just crashes out with no error logs.

I have tried a local publish and get the error below; this has occurred multiple times. The error also says: An exception occurred while processing the “PSXdTextToTree” extension: java.net.BindException: Address already in use: connect.

I have searched the forum for the “address already in use” error but can’t find anything. Has anybody experienced this, or does anyone have any idea what it could mean?

thanks
michelle

com.percussion.publisher.client.PSContentFetchException: Address already in use: connect
at com.percussion.publisher.PSUtils.fetchContent(Unknown Source)
at com.percussion.publisher.client.PSContentItem.publish(Unknown Source)
at com.percussion.publisher.client.PSContentItem.process(Unknown Source)
at com.percussion.publisher.client.PSContentPublisher.process(Unknown Source)
at com.percussion.publisher.client.PSContentPublisher.execute(Unknown Source)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at org.apache.soap.server.RPCRouter.invoke(RPCRouter.java:146)
at org.apache.soap.providers.RPCJavaProvider.invoke(RPCJavaProvider.java:129)
at org.apache.soap.server.http.RPCRouterServlet.doPost(RPCRouterServlet.java:354)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:709)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:802)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:252)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:173)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:214)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:178)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:126)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:105)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:107)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:148)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:825)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.processConnection(Http11Protocol.java:738)
at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(PoolTcpEndpoint.java:526)
at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(LeaderFollowerWorkerThread.java:80)
at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:684)
at java.lang.Thread.run(Unknown Source)

Michelle, have you tried Tech Support?

com.percussion.publisher.PSUtils.fetchContent is attempting to retrieve external content, but I don’t know where that gets you in resolving your problem.

Hi Riley

No, we haven’t tried tech support yet, as we have major, major problems and we are just investigating to see if we can identify the cause of any of them. Our system is extremely unstable and has all sorts of problems in the UI for users, as well as the publish failing completely or taking over 4 hours (it used to take 1 hour).

thanks
Michelle

Wow, you may want to reach out to Technical Support and get some immediate attention if you’re having these types of issues.

With all the problems are you able to trace the errors from one problem to the next?

No, there seems to be no logic to our problems and they are just getting worse and worse. Publishing issues seem to be completely random.

For instance, I did a full publish on Friday which took 2.5 hours to publish 24000 files and resulted in 2 publishing errors. I fixed the errors and deleted any content files that were over 5MB, so I tried to make things better.

Today I published again (the ONLY changes were as stated above, so exactly the same content, environment, etc.) and the publish took 3.5 hours.

So I made things better, and the publish got worse!!!

go figure :O(

Our publishing issues usually arose when Percussion / Rhythmyx encountered a problem connecting/publishing to our web servers. Our full publish editions do take a while to complete; however, we only do these on rare occasions. Sounds like a Percussion / Rhythmyx 911 call is needed.

In the end it looks very frustrating, and I pray for patience. We don’t want any developers/administrators tossing the application server out the window.


retrieving my baseball bat from the cupboard as we speak!!! ;O)

Are you still on with this?

Many moons ago we had issues with publishing. Tech support did intervene, which was necessary because eventually RX packed up, and we didn’t encounter any further issues with publishing.

From memory we had to…

  1. Fix a data issue, i.e. there was a dodgy navon content item which we had to remove. This was identified by switching on SQL logging (see the snippet after this list)

Edit: /Rhythmyx/rxconfig/Server/rxlogger.properties

Uncomment the line that says:
#log4j.logger.com.percussion.util.PSSQLStatement=DEBUG

  2. Crank up the Java heap through the RhythmyxServer.ja file located in the root directory of Rhythmyx
    This simply consists of one config for memory…where you can give it up to 70% of total memory (ours is nowhere near 70%, but this allocation was enough)
    -Xmx1536m
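
For reference (see step 1 above), once the # is removed, the rxlogger.properties line should simply read:

log4j.logger.com.percussion.util.PSSQLStatement=DEBUG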

Going back to your original post, you mentioned the “PSXdTextToTree” error, which is an exit we use on assemblers for the shared body field / eWebEditPro / EditLive field.

Also, if you look at the rxpubdocs table

select *
from rxpubdocs t
where t.pubstatus = 'failure'

do you have a large number of failed content items (i.e. 1000s)? I’m sure Tech Support once told me that the publisher will try to republish failed items. If that’s the case then this obviously adds overhead on the publisher.
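
If you just want a quick count of the failures rather than pulling back every row, something along these lines should do it (same table and column as the query above):

select count(*)
from rxpubdocs t
where t.pubstatus = 'failure'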

Anyhow good luck!

Thanks for this - it is really useful and we are now trying out the things you suggest.

With regard to the SQL Logging - how did you find the dodgy navon - what should we be looking for in the logs?

Also, running the SQL on the rxpubdocs table produces 24433 rows. Our last full publish shows 23 errors, but no error log was produced and there is no info in this table to tell us what they are.

do you have any more clues/advice?

thanks
michelle

Hi,

Firstly, it may be obvious and I should have mentioned it, but the SQL logging will slow your system down…so if you are having problems with the system hanging on publish then this won’t help, and you’ll have to remove it as soon as possible!

In terms of the exact error for the dodgy navon…I cannot remember to be honest. At the point we were having problems, Tech support helped out as our system wouldn’t even start up. It may have even been Tech Support who identified the suspect navon, but the SQL logging may highlight any errors which are causing problems with the publish.

What seemed to happen with our system was that as the content in the repository grew over the years, the incremental publish would crash more frequently…similar to what you describe. The Java heap was the fix.

With regard to the failed items, 24000 is a lot. As I said, the guy at Tech Support seemed to think that the publisher would attempt to republish failed items…so I decided to remove them. Do the necessary backing up if you decide to go down this route.

A query along these lines may help identify repeatedly failing content…

select contentid, revisionid, count(*)
from rxpubdocs t
where t.pubstatus = 'failure'
and trunc(add_months(sysdate,-1)) <= trunc(pubdate)
group by contentid, revisionid
order by 3 desc

Can you tell from the publogs whether, when things start grinding to a halt, it is around a particular type of content or contentid?
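
If it helps, here is a rough sketch of a query to see whether the failures cluster around particular days (again just an idea, using the same rxpubdocs columns as above):

select trunc(pubdate), count(*)
from rxpubdocs t
where t.pubstatus = 'failure'
group by trunc(pubdate)
order by 1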

thanks again Jason

With regard to deleting the failed items, was it just a case of replacing the select statement above with a delete statement?

thanks
michelle

…found this the other day regarding the Java Heap which may be of use…particularly the max setting.

With regard to RXPUBDOCS…it should be just a one-liner (delete from rxpubdocs where pubstatus='failure') if you want to get rid of them all. At the least you should back up RXPUBDOCS so you can reinsert the deleted rows if necessary…and of course I’m not advocating you do this straight onto a live environment unless you’re confident you can fully restore it from a system backup or have tested this first on a cloned environment.
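
If you do go down that route, a rough sketch of the backup-then-delete would be something like the following (the backup table name is just an example, so adapt it to whatever suits your environment):

create table rxpubdocs_failure_bak as
select * from rxpubdocs where pubstatus = 'failure';

delete from rxpubdocs where pubstatus = 'failure';

commit;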

J

thank you again