Question

The BIX process is running slowly for one extract.

We are running multiple extracts from different classes, using the Get All Properties option with XML output.

One of the extracts, which should be smaller than the work object data, is taking longer to run than the Work class extract.

For that extract, we are using the -g and -G parameters in the shell script to pass a pxCommitDateTime filter.
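
For illustration, the invocation looks roughly like the sketch below. Everything other than the -g/-G window is an assumption here: the BIX main class, classpath, extract rule name, remaining arguments, and the exact date-time format all depend on the BIX/Pega version in use.

```sh
# Rough sketch only: the main class, classpath, -i flag, and date-time format
# are assumptions and vary by BIX version.
# -g / -G supply the pxCommitDateTime window described above.
java -Xmx2048m -cp "$BIX_CLASSPATH" \
  com.pega.pegarules.pub.extract.ExtractImpl \
  -i MyCoBIXExtract \
  -g "20170601T000000.000 GMT" \
  -G "20170608T000000.000 GMT"
```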

That particular extract consistently takes a long time to complete.

We are looking into improving the performance of that extract.


Comments


Pega
June 8, 2017 - 7:40am

Hi Aravind,

From BIX 6.3 onwards, the batch size is the number of work objects extracted per batch. Extracting a single work object may result in multiple table writes, depending on the nested structures chosen in the extract rule.

There is always a trade-off between memory use and performance when increasing the batch size. With a large batch size, the JDBC driver keeps that many statements (in this case 39 * batch size * number of records per table) in memory, which is a significant memory cost. It also means that if a single record in a batch fails, the whole batch fails, and reconciliation can be a big effort because databases differ in how they report which record caused the failure.
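
As an illustrative calculation (the numbers are assumed, not taken from this case): with 39 target tables, a batch size of 500, and an average of 10 rows written per table per work object, the driver would hold roughly 39 * 500 * 10 = 195,000 statements in memory until the batch commits.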

<env name="compatibility/BIXUseOptimizedClipboardXML" value="true" />

The above config setting is currently overloaded; it is used:

1) When the XML is incorrectly generated

2) To avoid declarative processing during extraction

If the extract rules include properties on which Declare Expressions have been defined, the above config parameter can significantly reduce memory consumption, especially with large batches.
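
For reference, a minimal sketch of how that setting sits in prconfig.xml (only this one entry is shown; the rest of the file depends on your environment):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal prconfig.xml sketch; only the BIX setting discussed above is shown. -->
<pegarules>
  <env name="compatibility/BIXUseOptimizedClipboardXML" value="true" />
</pegarules>
```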

 

Hope this helps.

 

Thanks,

Venkat

June 8, 2017 - 12:55pm
Response to janav

Hi Venkat

Thanks for your reply. We are already using the above configuration in our extracts.

We are using a batch size of 1 and committing each record to the XML file, but we still see the issue.

We have also verified things on the DB side, and the correct indexes are in place on the table.

The work object extract, which has more data and a more deeply nested page structure, completes quickly compared to the extract we are having the issue with.

 

Regards

Aravind

June 8, 2017 - 1:32pm

It may be beneficial to enable debug on com.pega.pegarules.data.internal.access.ExtractImpl and determine where the longest delays are.
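
For example, with log4j2-style Pega logging this could be done with a logger entry along the following lines; the exact file and syntax are version-dependent (newer releases use prlog4j2.xml, older ones use prlogging.xml with log4j 1.x syntax):

```xml
<!-- Sketch: add inside the existing <Loggers> section of prlog4j2.xml.
     Older Pega releases use prlogging.xml (log4j 1.x) instead. -->
<Logger name="com.pega.pegarules.data.internal.access.ExtractImpl" level="debug"/>
```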