What happened
A routine Pega update broke every single automated test
The bank runs Pega for central fraud detection. When Pega released a new version, the underlying technology changed while the user interface stayed identical. Same screens, same buttons, same workflows. Nothing looked different to the people using it.
But their test automation noticed. Or rather, it stopped working entirely. Every automated test failed because the hidden identifiers the tools depended on had changed. The UI was the same. The code underneath was not.
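To illustrate the failure mode, here is a minimal sketch (the screens, labels, and IDs are invented, not the bank's actual test code): a locator bound to an auto-generated internal ID breaks the moment the framework regenerates its IDs, while a lookup keyed on the visible label survives the same update.

```python
# Hypothetical sketch of why ID-based locators break across framework updates.
# Screens are modeled as {internal_id: visible_label}; all values are invented.

# Version 1 of a screen: auto-generated internal IDs.
screen_v1 = {"pzx-1a2b": "Account number", "pzx-3c4d": "Submit"}

# After the update: same visible labels, freshly generated IDs.
screen_v2 = {"qwe-9f8e": "Account number", "qwe-7d6c": "Submit"}

def find_by_id(screen, element_id):
    """Locator tied to a hidden identifier -- brittle across updates."""
    return element_id if element_id in screen else None

def find_by_label(screen, label):
    """Locator keyed on what a person sees -- unaffected by the update."""
    for element_id, visible_label in screen.items():
        if visible_label == label:
            return element_id
    return None

# An ID recorded against v1 fails on v2; the visible label still resolves.
assert find_by_id(screen_v1, "pzx-1a2b") is not None
assert find_by_id(screen_v2, "pzx-1a2b") is None
assert find_by_label(screen_v2, "Submit") is not None
```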
The team had two options: rebuild all tests from scratch, or find a tool that doesn't break when someone else ships an update. They chose the second option.
How they fixed it
Automated testing that works like a human tester: looks at the screen, not at the code
TestResults doesn't read hidden identifiers or code structures. It looks at the screen the way a person would: recognizing fields, buttons, and labels by what they are. When an application changes the technology behind the UI, nothing breaks. The screen still looks the same, so the tests still pass.
The team was up and running faster than expected. Their first two test cases, each covering around 50 steps across 20 different screens, were automated in three days. Nobody on the team had used TestResults before. They even brought in apprentices to help build tests, something that would have been impossible with their previous tool.
Because TestResults is built for regulated environments, audit compliance came out of the box. The bank met its financial market authority requirements without additional work.
99% less test flakiness in test execution
3 days to first automated test cases. No experience needed.
50% faster test execution without artificial wait times
What changed
99% less flakiness. 50% faster execution.
The old setup ran background processes to keep things stable, and 40% of test runs still failed for reasons that had nothing to do with actual bugs. That's not automation. That's babysitting.
With TestResults, there are no background workarounds. The screen monitoring waits until the application is actually ready before interacting. Flakiness dropped by over 99%.
Test execution got 50% faster too, because there are no artificial wait times or stability steps baked into each run. Tests start, they run, they finish.
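The difference between the two approaches can be sketched in a few lines (a hypothetical illustration with an invented app simulation, not TestResults' actual implementation): polling for readiness proceeds the instant the application responds, whereas a fixed sleep charges the worst-case wait on every single step.

```python
import time

# Hypothetical sketch: readiness polling vs. a fixed artificial wait.
# The app simulation and timings are invented for illustration.

def wait_until_ready(is_ready, timeout=5.0, interval=0.01):
    """Poll until the application reports ready, instead of sleeping
    for a fixed worst-case duration on every step."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if is_ready():
            return True
        time.sleep(interval)
    return False

# Simulated app that becomes ready on the third check.
checks = {"count": 0}
def app_is_ready():
    checks["count"] += 1
    return checks["count"] >= 3

assert wait_until_ready(app_is_ready)
# A fixed `time.sleep(2)` here would cost 2 s on every step, ready or not.
```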


