Managing your automated Playwright test runs
Background
I was approached by Pratik Patel, CEO of TestDino, to take a look at his pride and joy 😃. I had initially taken a quick look at the tool and their YouTube channel, and was so impressed with the polish that I made a LinkedIn post encouraging folks to take a look. However, I felt that I should at the very least do a proper review of the tool.
In this article, I perform a very quick review. I base my feedback and assessments solely on viewing the videos on their channel and exploring their sandbox. I plan to do a more hands-on integration later, building a CI/CD pipeline, and I’m likely to use this tool as part of it. The target audience of this article is engineering/test leaders who are looking for a tool to manage their Playwright test results, especially for traceability, and who also want some help with isolating test failures and flaky UI tests.
What is TestDino?
TestDino is a tool that helps you manage your test pipelines by managing your Playwright test run results (I believe they plan to gradually broaden the set of supported test frameworks). The key problem it tries to solve is the mess resulting from the sheer cacophony of test result data generated in an environment where (a) your entire system consists of multiple microservices exposing their functionality via their own microsites, and (b) there are multiple deploys per day or week, producing a lot of data that is usually thrown away but has value when aggregated, e.g. a module could consistently run into similar problems during testing, which could point to something systemic rather than ad hoc.
This really resonated with me. In the past, I’ve worked with teams where either: there was Allure reporting (for Java-based systems), which is great, but you have to build a system to persist the test run reports in a cloud bucket somewhere and adopt a good naming scheme so that you can align each report with its PR or CI run; or the test run results were captured in a test case management system like TestRail, but without the other useful framework-specific information; or no artifacts were persisted at all, and everyone essentially groped in the dark.
With TestDino (and similar tools), your PRs are linked to the test run reports, and the runs are analysed to give teams and managers the ability to identify systemic issues with the team or the development process.
Navigating the Sandbox
Firstly you do need to obtain access to the sandbox:
- Go to http://testdino.com
- Sign up for the free (i.e. community) account. There are limits for free accounts — see https://testdino.com/pricing/
- There is a link to their Discord server on the main app page. Just sign on to it and request access to the sandbox
Let’s get rocking!
Ok once you’ve signed on and have access to the sandbox, just head over there. You should see something like this:
You can select the role you play (see “I am working as”) and also select which environment’s data you want to view. As I haven’t integrated this into a real pipeline, I can’t really say how these are configured (scope for another review article).
Next you can see the usual statistics, like the total number of tests executed. What I didn’t like was that you can’t zoom in by clicking on the metrics. I faced a similar issue as I scrolled down:
All the test failure metrics are great, but I can’t zoom in to find out which ones are, say, unstable tests versus actual bugs.
This frustration is echoed in the Analysis and AI Insights pages. The links to those are on the side bar menu.
Here again I have useful information but I can’t zoom in to the test run(s) itself.
Similarly with the AI Insights page. I can see that some of the tests are having issues, but I can’t click on them to follow through. Even a link to the code on GitHub would be really useful.
The Test Runs page
To access this page, just click on the “Test Runs” link on the side bar menu.
On the Test Runs page, you see (well, duh!) all the test runs. Here, if you click on the count of passed, failed, flaky, skipped, or unstable tests, it brings you to a Test Run details page where you see the matching test case runs.
I picked one unstable test case to zoom in on:
The page includes a screenshot and a video. This is great! It helps the dev/QE figure out what the problem is.
If you click on the “AI Insights” button, you get:
And if you scroll down you get more details on how to fix the issue:
The Pull Requests page
To access this page, just click on the “Pull Requests” link on the side bar menu.
If you click on the PR, you will be brought to the PR page on GitHub. If you click on the Merge link, you will be brought to the merge page on GitHub. But if you click on a test result, e.g. passed, failed, flaky, or skipped, it doesn’t bring you anywhere. I suspect these are not linked to the Test Runs page. It would be great if both could be linked so that a manager or dev could check why her PR didn’t deploy (if you have full continuous deployment).
Author’s note: the issues I raised in this article have been flagged to the TestDino team, and they are actively working to address them
How easy is it to set up?
Full disclosure: I have not actually set up GitHub Actions for this, but from the video on their channel it does look fairly straightforward.
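For orientation, a GitHub Actions job that runs Playwright and hands the results to a reporting tool would look roughly like the sketch below. This is an assumption-laden outline, not TestDino’s documented setup: the secret name and the upload step are placeholders you would replace with whatever their docs specify.

```yaml
name: e2e
on: [pull_request]

jobs:
  playwright:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npx playwright install --with-deps
      # Keep going on failure so the report still gets uploaded.
      - run: npx playwright test
        continue-on-error: true
      # Placeholder: replace with the upload command or action from
      # TestDino's docs; TESTDINO_API_KEY is an assumed secret name.
      - run: echo "upload ./playwright-report with the TestDino uploader"
        env:
          TESTDINO_API_KEY: ${{ secrets.TESTDINO_API_KEY }}
```

The one detail worth noting is `continue-on-error: true` on the test step: failed runs are precisely the ones you want in the dashboard, so the upload step must run even when tests fail.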
Getting Support
As described in the previous section, you can sign up to the TestDino Discord server via the link on the UI and get help. The folks are pretty helpful but be mindful of the timezone the team is in.
They also have a YouTube channel and they are constantly adding more content to help you get started.
Final words
I plan to integrate TestDino into the pipeline I’m vibe coding (see this, this, this and this articles), and I’m keen to see how well TestDino holds up in a vibe coding (or AI-assisted coding) workflow. However, in its current state, I like:
- That it looks very polished
- The team has good support infrastructure set up, i.e. Discord
- The team has taken the trouble to provide videos to get you started quickly
- The UI is pretty easy to navigate — I didn’t have to ask what anything was for or how to do this or that
I’d give some kudos to the TestDino product/engineering team for pulling this off.
What I didn’t like:
- That I couldn’t zoom in to do more analysis
- That I could not relate PRs to test runs
- That the analyses and AI insights, while great, didn’t link to the code, the test run, or the PR behind them
I’ll use the chocolate cake analogy (somehow chocolate cake comes up often in my writings): you tell me the recipe you are using for the chocolate cake, which Valrhona (high-quality) chocolate you use, which exquisite cocoa beans were used, how fluffy the mousse on the cake is, and how the cake just melts in your mouth, and then when I ask if you have any for sale, you give me the phone numbers of the shops that sell them.
As it stands, the tool is ready to use, and for some teams it could be very useful. Since there’s a community version, and setting it up via GitHub Actions looks straightforward, it may be worthwhile to give it a go and see if your team can get value from it (I do see a lot of value, don’t get me wrong, especially given the chocolate cake analogy).
Author’s note: the TestDino team is aware of the current shortcomings and is working to address them