Overview of the Scenario Testing features on the Pega platform

This post covers the high-level capabilities and known limitations of the Scenario Testing feature of the Pega platform. Refer to the product documentation to learn more about the feature.

Scenario testing allows Pega application authors to create UI-based, end-to-end scenarios to test a Pega application. It has the following capabilities:

  1. Record tests by simply running the application - Creating a test case is as simple as running the end-user application. User input is captured as test data for subsequent test runs.
  2. Support for all out-of-the-box UI controls - All platform-supplied UI controls are supported directly, without any additional configuration.
  3. Easy-to-read steps - The test case consists of simple, easy-to-read steps that all stakeholders can understand, because they plainly describe each step in the scenario.
  4. Simple validations - Validations are direct comparisons of actual versus expected values.
  5. Support for dynamic test data - Dynamic data can be used for the user input value or for the expected output value, and is supplied through a predefined data page, D_pyScenarioTestData.
  6. Case type and end-user portal tests - Test cases can target either the end-user portal after login or an individual case type.

Creating Scenario tests

Scenario tests can only be captured in the context of the application portal; they cannot be recorded from Dev Studio, App Studio, or any other development portal. However, there are a few additional things to keep in mind:

  • Tests can only be captured through the Test recorder, which is accessed through the run-time toolbar.
  • The Create test case button allows you to create either a Portal or a Case type test.
    • Portal-level tests allow for capturing user actions from the header and footer, the left-side menu, and other navigation elements.
    • Case-level tests capture all user interactions starting from the New screen for the case type. Interactions outside the case are ignored.
  • After selecting the type of test, an orange selector appears when you hover over any supported UI control.
  • A test step is captured on every user interaction with an element that shows the orange selector.
  • Add explicit validations by clicking the "+" within the orange selector. An overlay is displayed with options for additional assertions.
  • Once recording is done, click Stop and Save to create the test case.

Running Scenario Tests from Deployment Manager

Running scenario tests from Deployment Manager requires a Selenium runner. You can use a hosted testing service such as CrossBrowserTesting, BrowserStack, or Sauce Labs, or a standalone runner such as Selenium Server or Selenium Grid.
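Before a pipeline dispatches a scenario test run, it can help to confirm that the Selenium runner is reachable. The sketch below polls a Selenium Grid hub's standard /status endpoint; the hub address shown is a hypothetical placeholder, not part of any Pega configuration:

```python
import json
import urllib.request


def grid_status_url(hub_base):
    """Build the Selenium Grid /status endpoint URL from a hub base URL."""
    return hub_base.rstrip("/") + "/status"


def grid_is_ready(hub_base, timeout=5):
    """Return True if the Grid reports itself ready to accept sessions."""
    try:
        with urllib.request.urlopen(grid_status_url(hub_base), timeout=timeout) as resp:
            payload = json.load(resp)
        # Selenium Grid 4 reports readiness under value.ready in its status payload.
        return bool(payload.get("value", {}).get("ready", False))
    except OSError:
        # Connection refused, DNS failure, or timeout: the Grid is not reachable.
        return False


if __name__ == "__main__":
    # Hypothetical in-house Grid address; substitute your own hub URL.
    hub = "http://selenium-grid.example.internal:4444"
    print("Grid ready:", grid_is_ready(hub))
```

A Deployment Manager or Jenkins pipeline could run a check like this as a pre-flight step and fail fast instead of waiting for the test stage to time out.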

Please refer to the Deployment Manager help on running scenario tests.

You can also run scenario tests from other pipeline tools, such as Jenkins, using the associated Pega API. For more information on how to use this API, see Pega RESTful API for remote execution of scenario tests.

Recommended best practices


  • Wait until each step appears in the right panel before continuing.
  • If something goes wrong while recording, cancel and re-record the steps.
  • After you finish recording, close the work item tab in the Interaction Portal, or record closing the tab as part of the test. Reset the portal by closing any interactions or case items that were created.
  • Log in and create cases manually before executing tests, so that pages are cached and render quickly. This is needed whenever the server cache has been cleared and the server restarted.
  • Collapse the right panel while recording if an element you need to record is hidden behind it. Expand the panel again after recording so that you can review the captured steps.
  • If hovering over an element shows that there is no unique selector, either the element has no data-test-id or it is not supported by the recording infrastructure. If it is a supported element, generating a Test ID should make it recordable.
  • Do not rush through elements that involve AJAX calls, such as cascading dropdowns or refresh actions; wait for the UI to update and for the right panel to refresh with the step.
  • Do not use browser autofill to enter data in forms.

Known limitations

  • CSS styles applied on hover cannot currently be captured, for example for on-hover assertions. The hover action itself is not supported while recording a test case.
  • Scenario tests can only run in the requestor context; that is, a user must log in to Pega to run them.
  • Unlike Selenium, scenario tests cannot be run using different personas or logins. Because the test is tightly coupled to the requestor, the test ends when the user logs out, so the same test cannot continue with a different operator.
  • A scenario test cannot be included in another scenario test, unlike other test frameworks that allow atomic test cases to be composed into a larger test case.
  • Scenario tests are portal dependent; once recorded on a portal, they cannot be run from a different portal. They must run in the portal on which they were recorded.
  • To run scenario tests from Deployment Manager, you need an account with a third-party platform such as CrossBrowserTesting, or an in-house Selenium Grid, which takes care of launching the Pega instance and logging the user into the portal.
