Question

Getting exception while trying to save data into Decision Data Store (DDS) using Data Flow in Pega Marketing 7.3.1

As part of our Decisioning application, we are storing customer information in the Decision Data Store using a Data Flow. This was working fine.

Suddenly, we started having problems saving the data: each save waits for about 30 seconds and then throws the following exception.

Environment: PROD

Pega Marketing 7.3.1

Nodes: 2

DNode is up and running on both nodes.

Any help is very much appreciated.

com.pega.dsm.dnode.api.dataflow.StageException: Exception in stage: StagingCustomerData

at com.pega.dsm.dnode.api.dataflow.StageException.create(StageException.java:37) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageInputSubscriber.onNext(DataFlowStage.java:334) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageInputSubscriber.onNext(DataFlowStage.java:251) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowExecutor$SynchronousQueueDataFlowExecutor$2.process(DataFlowExecutor.java:571) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowExecutor$SynchronousQueueDataFlowExecutor.runEventLoop(DataFlowExecutor.java:545) ~[dnode-7.3.1.jar:?].............

..........

Caused by: com.pega.dsm.dnode.api.ExceptionWithInputRecord: java.lang.RuntimeException: com.pega.dsm.dnode.api.BatchRecordException

at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageInputSubscriber.onNext(DataFlowStage.java:333) ~[dnode-7.3.1.jar:?]

... 82 more

Caused by: java.lang.RuntimeException: com.pega.dsm.dnode.api.BatchRecordException

at com.google.common.base.Throwables.propagate(Throwables.java:160) ~[closure-compiler-v20160911.jar:?]

at com.pega.dsm.dnode.impl.stream.DataObservableImpl.await(DataObservableImpl.java:124) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.impl.stream.DataObservableImpl.await(DataObservableImpl.java:88) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.impl.dataflow.SaveStageProcessor.onNext(SaveStageProcessor.java:107) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStageBatchProcessor.commitBatchInternal(DataFlowStageBatchProcessor.java:120) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStageBatchProcessor.onNext(DataFlowStageBatchProcessor.java:70) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStageBatchProcessor.onNext(DataFlowStageBatchProcessor.java:18) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.api.dataflow.DataFlowStage$StageInputSubscriber.onNext(DataFlowStage.java:331) ~[dnode-7.3.1.jar:?]

... 82 more

Caused by: com.pega.dsm.dnode.api.BatchRecordException

at com.pega.dsm.dnode.api.BatchRecordException$Builder.build(BatchRecordException.java:66) ~[dnode-7.3.1.jar:?]

at com.pega.dsm.dnode.impl.dataset.cassandra.CassandraSaveWithTTLOperation$4.emit(CassandraSaveWithTTLOperation.java:281) ~[dnode-7.3.1.jar:?]


Comments


May 29, 2018 - 8:32am

Hi Saravanan,

Can you verify whether you are running out of disk space on the DDS nodes?
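If it helps, here is a minimal sketch (plain Java, no Pega APIs) for checking free space on the directory that backs the DDS/Cassandra data on each node. The path below is only an assumption; replace it with the data directory configured for your environment.

import java.io.File;

public class DdsDiskSpaceCheck {
    public static void main(String[] args) {
        // Assumption: adjust this to the Cassandra data directory used by your DDS nodes.
        File dataDir = new File("/opt/pega/cassandra/data");
        long freeGb  = dataDir.getUsableSpace() / (1024L * 1024 * 1024);
        long totalGb = dataDir.getTotalSpace()  / (1024L * 1024 * 1024);
        System.out.println("DDS data directory: " + dataDir.getAbsolutePath());
        System.out.println("Free space: " + freeGb + " GB of " + totalGb + " GB");
    }
}

If the directory is nearly full, Cassandra writes can start failing or stalling, which could explain the BatchRecordException coming from the save stage.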

I would also suggest raising a support ticket for further investigation.

Thanks & Regards,

Santhosh