Ask the Expert - Background Processing with Prithanka Chatterjee

Join Prithanka (@chatp) in this Ask the Expert session (26-30 August) on Background Processing.

About Prithanka: Prithanka is a Senior Product Manager with Pega’s Core Platform team, with specific focus on Core-Engine. Prithanka has over 16 years of experience in delivering best-in-class Platforms and Frameworks for distributed enterprise applications. His work in Pega includes ensuring the availability of highly resilient and scalable background processing capabilities.

Message from Prithanka: Hello all, I am passionate about enabling customers to get the best out of Pega platform, and that includes getting the best out of the background processing capabilities built into the platform. In this segment of our interaction, I would love to answer your questions regarding Background Processing in general and more specifically about Standard Agents, Advanced Agents, Queue Processors, and Job Schedulers.

Ask the Expert Rules

  • Follow the Product Support Community's Community Rules of Engagement
  • This is not a Live Chat - Prithanka will reply to your questions over the course of the week (26-30 August)
  • Questions should be clearly and succinctly expressed
  • Questions should be of interest to many others in the audience
  • Have fun!


August 27, 2019 - 2:36am

Hi Prithanka,

We understand from some articles on the Pega Community that Queue Processors use the Kafka service, which is configured by default. Could you please give more details on the configuration and benefits of Kafka?

August 27, 2019 - 10:23am
Response to ShivaRamBhupathi

Hi Shiva,


Firstly, thank you for the question.

Kafka is a stream-processing software platform used for building real-time data pipelines and streaming applications. It is horizontally scalable, fault-tolerant, very fast, and runs in production at thousands of companies. Among a host of other benefits, I would primarily like to draw your attention to the following. Kafka is highly recommended as a streaming solution because:

  1. It provides high throughput
  2. It has very low latency
  3. It is fault tolerant
  4. It supports both vertical and horizontal scalability
  5. It is distributed in nature
  6. It supports both streaming and messaging capabilities
  7. It provides high concurrency
  8. It provides high durability

Kafka is highly configurable and has been configured to best serve the Pega deployment. I would encourage you to work with the default settings as currently shipped, which have been tested to perform best. If you need support for an advanced setting, I would request you to reach out to Pega Support.


A few examples of the current defaults are as follows:

  1. The default number of partitions is set to 20
  2. The maximum retention period is set to 7 days
  3. The maximum message size is set to 5 MB
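For readers familiar with Kafka itself, the defaults listed above map onto standard Kafka broker settings. The sketch below is illustrative only: Pega manages its embedded Kafka configuration internally, and these property names are the standard Kafka broker keys, not anything you would edit in a Pega deployment.

```python
# Illustrative only: the standard Kafka broker settings that correspond to
# the Pega defaults mentioned above. Pega manages its embedded Kafka
# configuration internally, so this is shown purely for reference.
pega_kafka_defaults = {
    "num.partitions": 20,                   # default partitions per topic
    "log.retention.hours": 7 * 24,          # max retention period: 7 days
    "message.max.bytes": 5 * 1024 * 1024,   # max message size: 5 MB
}

def as_property_lines(settings):
    """Render the settings in Kafka's server.properties key=value format."""
    return [f"{key}={value}" for key, value in sorted(settings.items())]

for line in as_property_lines(pega_kafka_defaults):
    print(line)
```

Running this prints the three settings in `server.properties` form, e.g. `log.retention.hours=168`.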

Hope that answers your questions.


Best regards,



August 28, 2019 - 2:02am
Response to chatp

Thanks for this insight on Kafka 

August 27, 2019 - 7:25pm

We want to build a REST service that can handle hundreds, if not thousands, of requests every few seconds. From the PDN, it is my understanding that this can be done using asynchronous processing (and a Service Request Processor). Could you please shed some light on this and on any downsides to this approach? Can we associate the Queue ID with the work object using OOTB rules?

August 28, 2019 - 11:02am
Response to AbhinayC

Hi Abhinay,

Firstly thank you for the question.

Background processing using Advanced/Standard Agents, or Queue Processors and Job Schedulers, does not support the kind of functionality you need out of the box. However, you can choose to build it if required. Since you are already looking to use a Service Request Processor for building the REST service, let me help you by tagging the expert for that area.
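The general idea behind the asynchronous processing Abhinay describes is to accept a request quickly, hand the work to a background worker, and return an identifier the caller can track. The sketch below is a generic queue-and-acknowledge pattern, not Pega code; every name in it is hypothetical.

```python
import queue
import threading
import uuid

# Generic queue-and-acknowledge sketch (not Pega code): the service layer
# enqueues the request and returns a queue ID immediately, while a
# background worker thread performs the actual processing.
work_queue = queue.Queue()
results = {}

def accept_request(payload):
    """Called by the service layer: enqueue and acknowledge immediately."""
    queue_id = str(uuid.uuid4())
    work_queue.put((queue_id, payload))
    return queue_id  # the caller can use this ID to look up the result later

def worker():
    while True:
        queue_id, payload = work_queue.get()
        results[queue_id] = payload.upper()  # stand-in for real processing
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

qid = accept_request("hello")
work_queue.join()  # wait for the background worker (demo purposes only)
print(results[qid])  # prints HELLO
```

The caller never waits for the processing itself, only for the enqueue, which is what makes this pattern suitable for a high-throughput service front end.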

@nvkap : Request you to kindly answer this question.

Best regards,






August 28, 2019 - 6:20am

Please watch this TechTalk on Background Processing. Feel free to discuss any of the points from this video with our expert!

TechTalk Episode 30: Background Processing 

Lochana | Community Moderator | Pegasystems Inc.

August 28, 2019 - 7:00am

Hi Prithanka,

Is there a configuration we can set, or a rule we can use, to make sure the Queue Processors are automatically restarted every time Pega crashes or is restarted? Specifically, we're interested in automatically restarting the following queue processors:

* cyFetchRoutingDestination

* cyReRouteConversationRequest

The reason is that after the last restart we had (one of the nodes went down), we noticed the queue processors did not start automatically and had to be started manually.

Many thanks.

August 28, 2019 - 11:15am
Response to RicardoH3054

Hi Ricardo,

Thank you for the question.

Queue Processors are built to be resilient and to survive crashes and restarts. As long as a Queue Processor is not disabled on the Queue Processor rule form and is not explicitly stopped from the Queue Processors landing page in Admin Studio, it is supposed to start automatically after node crashes or ordinary restarts.

I request you to check these two configurations, one on the rule form and the other on the landing page, to confirm that everything is set up correctly. If the problem persists, I would request you to log a support ticket.

I hope that answers your question.

Best regards,


August 28, 2019 - 11:23am

Hi Prithanka,

I was trying to replace an existing advanced agent with Job Scheduler. I was able to successfully create the job scheduler rule and followed the configuration guidelines as per the rule help guide. I'm uploading the configuration snapshot.

  • Enable job scheduler = Yes
  • Associated node types = RunOnAllNodes
  • Schedule = Daily, every 1 day
  • Context = Specify access group; access group: 'App:Administrators'
  • Activity = activity name; class: class context
  • Application server is JBoss, and the Pega version is 8.2.2

I was trying to monitor the job from Admin Studio to see its status, but I couldn't locate it under the Jobs landing page in Admin Studio.

In your TechTalk episode you mentioned that we can check the status of a job in Admin Studio after we successfully create one.

Can you please help me understand whether I need to make any specific configuration changes for the job to be displayed under the Jobs landing page in Admin Studio?


Vyas Raman Loka.

August 28, 2019 - 11:54am
Response to Vyas Raman Loka

Hi Vyas,

Firstly, thanks for your question.

Every Job Scheduler and Queue Processor in 8.2.x gets rule-resolved against the context specified in the ASYNCPROCESSOR requestor type. Unless a Job Scheduler is resolvable against the specified context, it will not show up on the Admin Studio landing page. Please see the snippet below from Pega Help for more details:

AsyncProcessor requestor type

You use the AsyncProcessor requestor type to resolve Job Scheduler and Queue Processor rules.

Each Pega Platform operator logs in using a unique ID that is associated with a specific access group. This access group provides a context to resolve rules. Because Job Scheduler and Queue Processor rules run in the background, no user-based context is available to resolve these rules. The AsyncProcessor requestor type provides the context to resolve Job Scheduler and Queue Processor rules.

Requestor type definition

The AsyncProcessor requestor type defines a list of rulesets that create a context for Job Scheduler and Queue Processor rules resolution.

At system startup, unique queue processors and job schedulers are found by using the context defined in the AsyncProcessor requestor type. While the system is running, if a new queue processor is added, or an existing one is overridden in a different ruleset, the context is updated to include the new ruleset so that the right rule is resolved.

The default access group is an application-based access group. This definition includes your custom access group that corresponds to your custom rulesets. When you use the application-based access group, only job schedulers and queue processors that belong to this access group run. The default access group is PRPC:AsyncProcessor.

I request you to review the access group that is currently specified in the ASYNCPROCESSOR requestor type and ensure that your Job Scheduler is resolvable against it.
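The resolution behavior described in the help text above can be illustrated with a toy model. Nothing below is a real Pega API; the ruleset and scheduler names are hypothetical. The point is simply that a Job Scheduler only becomes visible when its ruleset falls within the context provided by the AsyncProcessor requestor type's access group.

```python
# Toy model (not a real Pega API) of the resolution rule described above:
# a Job Scheduler only appears on the Admin Studio landing page if its
# ruleset is part of the context provided by the AsyncProcessor requestor
# type's access group. All names here are hypothetical.
async_processor_context = {"Pega-Engine", "MyApp-Rules"}

job_schedulers = [
    {"name": "PurgeOldCases", "ruleset": "MyApp-Rules"},
    {"name": "NightlyExtract", "ruleset": "OtherApp-Rules"},
]

def resolvable(scheduler, context):
    """A scheduler resolves only if its ruleset is in the requestor context."""
    return scheduler["ruleset"] in context

visible = [s["name"] for s in job_schedulers
           if resolvable(s, async_processor_context)]
print(visible)  # NightlyExtract is missing: its ruleset is outside the context
```

In this toy model, only `PurgeOldCases` is visible, which mirrors Vyas's symptom: a correctly saved Job Scheduler that simply does not appear because its application context is not covered.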

Best regards,


August 28, 2019 - 1:05pm
Response to chatp

Just to clarify: the access group context on the Job Scheduler or Queue Processor rule form only determines the context for the activity referenced in the rule, but it has no significance for whether the rule shows up in Admin Studio.

For it to show up in Admin Studio, we need to check that the AsyncProcessor requestor type instance includes the application context for the job schedulers or queue processors created in the corresponding application context, right?

August 29, 2019 - 9:12am
Response to Vyas Raman Loka

That is correct, Vyas.


Best regards,


August 29, 2019 - 1:19am

Does BIX not come under Background Processing? If it does, please clear up these doubts:

How does the Extract rule identify our own FTP server instance and the location path where the extracted files should be stored?
Can we skip generating the manifest file and the summary file?

Please help

August 29, 2019 - 1:53am
Response to VidyaSagarM8878

Hi Sagar,

BIX is related, but does not specifically come under Background Processing. Note that I branched your earlier reply into a new post for better visibility: BIX question - Best approach for Extracting smoother with low impact on performance

If your present question is related, you could update the post, else create a new one.

Thank you,

Lochana | Community Moderator | Pegasystems Inc.

August 30, 2019 - 4:36am

Hi Prithanka,

I was replacing one of our advanced agents with a Job Scheduler.

With advanced agents we had the provision to update the "Agent Interval" or "Pattern" using the Data-Agent-Queue instance.

However, I could not find such a facility with Job Schedulers. All I can do is enable or disable it.

So if there is a need to update the schedule of Job Schedulers in Production, how can we do that?

August 30, 2019 - 9:22am
Response to MAHI302446

Hi Mahi,

Firstly, thanks for the question; this is a really interesting one.

Agents have always had a rule (Rule-Agent-Queue) and a data (Data-Agent-Queue) instance working together to determine the current configuration. Though this was useful in some cases, it was also a cause of substantial pain for many of our users. For example:

  1. It was difficult to ascertain the state of a given Agent just by looking at the rule.
  2. Sometimes, during migrations or upgrades, the data instances would be lost, causing a change in behavior.
  3. It also made Agents behave differently from other Pega rules.

So, in an effort to standardize the behavior, Job Schedulers were created to contain all configuration within the rule itself. Any change in the behavior of the rule has to be made, and is contained, within the rule itself. Though this takes away a little flexibility, it offers better resilience and management capabilities. As for your particular case, Mahi, I would advise you to resave the Job Scheduler rule after you make changes (possibly in a production ruleset). Hopefully you are not required to make such changes frequently in production systems (which, again, is not advisable).
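The design shift described above, from split rule-plus-data configuration to a single self-contained rule, can be sketched abstractly. Everything below uses hypothetical names and is not a model of Pega internals; it only illustrates why a single source of truth is easier to reason about.

```python
from dataclasses import dataclass

# Hypothetical illustration of the design change described above.
# Old model: an agent's effective schedule came from a rule *and* a data
# instance, so the rule alone did not tell you how the agent behaved.
legacy_rule = {"name": "MyAgent", "interval_seconds": 300}
legacy_data_override = {"MyAgent": {"interval_seconds": 60}}  # could be lost on migration

def legacy_effective_interval(rule, overrides):
    """The effective value depends on whether the data instance survived."""
    return overrides.get(rule["name"], rule)["interval_seconds"]

# New model: a Job Scheduler keeps its whole configuration inside the rule,
# so the rule is the single, complete source of truth.
@dataclass(frozen=True)
class JobScheduler:
    name: str
    schedule: str  # e.g. a daily pattern
    enabled: bool = True

nightly = JobScheduler(name="MyScheduler", schedule="daily every 1 day")

print(legacy_effective_interval(legacy_rule, legacy_data_override))  # 60, not 300
print(legacy_effective_interval(legacy_rule, {}))  # 300 once the override is lost
```

In the legacy model the same rule yields two different behaviors depending on whether the data instance is present, which is exactly the migration pain point Prithanka lists; in the new model, reading the rule is enough.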

If this does not solve your use case, please feel free to get in touch with me by sending me a private message here on Pega Community and we can discuss further, and explore your specific use case.

Best regards,


September 1, 2019 - 5:57am

Thank you everyone for the great discussions and special thanks to @chatp for being a wonderful expert!

Lochana | Community Moderator | Pegasystems Inc.