DEVOPS - Schema not privileged


With Pega Deployment Manager 02.01.03 we are delivering a package of two products:

a Rule jar (a list of all rulesets) and a Data Instance jar (all data instances for Database, DB Table, Class, etc.).

1 - When we import the Rule jar, if there are any aged updates then the pipeline fails with pyStatusMessage: "There are aged updates in the archive. Exiting the import process. List of aged updates".

Our workaround is to save the list of aged rules in the development environment and then re-import using the same utility tool; the import then succeeds. Is there an alternative way to handle this?

2 - When we import the Data Instance jar, we get a screen with checkboxes for Automatic and Manual. We import via Manual, skipping the automatic update of the schema, but Deployment Manager takes Automatic by default and reports the status as failed. Is there an alternative way to handle this?

The DSS value for "AutoDBSchemaChanges" is enabled.

The returned pySchemaSQL value is:

--execute on the database specified by Data-Admin-DB-Name PegaRULES:
ALTER TABLE MCOM_SCHEMA.pc_BGC_BE_IDSync_MCOM_Work ADD "MYFIRSTNAME" VARCHAR2 (50);




August 20, 2019 - 12:31pm
  1. Handling aged imports is supported in the 4.x series of Deployment Manager, not in the 2.x series. If you are getting aged updates, I would recommend trying to avoid them: avoid packaging operators, data instances, or configuration that is updated per environment. This is often a sign that too much environment-specific information is being packaged with the application.
  2. Regarding schema management, please follow the instructions on how to configure manual vs. automated schema application in the pipeline.


August 22, 2019 - 10:44am
Response to Linoy Alexander

Dear Linoy,

This approach is feasible only if we handle a single release; when multiple releases are handled/deployed from the same development environment, it becomes a challenge.

AutoDBSchemaChange was set to false on our orchestration server, but that didn't help either; Pega still says: "Schema changes need to be applied before this build on the Production stage. Once you are done, please complete the 'Schema Changes' task on the Production stage to progress this build further."

Some of the SQL changes arrive as blockers. Even when we don't wish to deploy them, due to the design of Pega Deployment Manager the pipeline will never succeed until we apply them manually.

Please suggest if you think there is an alternative.

August 22, 2019 - 10:27pm
Response to ParmeetSinghH

I am a little unclear about what the exact issue is. Are you trying to avoid having the schema applied automatically? If so, by following the instructions in Deployment Manager to disable automatic schema application and turning off AutoDBSchemaChange, the pipeline should simply pause and wait for someone to apply the schema manually; you can then resume the step. It behaves like a manual step.

If this is not what you are looking for it would be helpful to clarify your use case.
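As a sketch of that manual step: once the pipeline pauses, a DBA would run the DDL that Deployment Manager surfaced in pySchemaSQL directly against the database referenced by Data-Admin-DB-Name PegaRULES. Using the statement from the original post (Oracle, given the VARCHAR2 type; the schema and table names come from that output, not from a general recipe):

```sql
-- Apply the pending schema change manually, outside the pipeline.
-- Check first that the column does not already exist before running the ALTER.
ALTER TABLE MCOM_SCHEMA.pc_BGC_BE_IDSync_MCOM_Work
  ADD "MYFIRSTNAME" VARCHAR2(50);
```

After the DDL has been applied (or consciously skipped), the paused "Schema Changes" task can be completed in Deployment Manager to let the build progress.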

August 23, 2019 - 9:47am
Response to Linoy Alexander

Indeed, I want to avoid the schema changes being applied automatically. But then the pipeline will pause for me to apply the schema changes manually so that it can resume.

My question is: is it possible for the pipeline to proceed past the manual step while also ignoring the schema changes entirely?

August 23, 2019 - 11:10am
Response to ParmeetSinghH

You can choose to ignore applying the schema change and just approve the manual step. Just make sure that your application won't fail if the schema isn't applied, or that it is being taken care of in some other way.

One other thing to double-check: you should also verify the status of the AutoDBSchemaChange DSS on the candidate system, and make sure that it is set to false on the production environment as well.