Testing – Automated Tests on Dynamic Web Content

functional-testing, integration-tests, selenium, testing, web-applications

I'm doing SQA work for several Kendo-based sites that have many tables (some hand-made by our devs). These tables have a lot of rows, columns, pages, and data filled in them, so I'm basically doing SQA on very dynamic content.

I'm trying to write automated scripts to make sure features like adding a row or editing a row work, but the process seems terribly tedious and prone to failing (not because the actual table code is bad, but because the content is dynamic, so the Selenium scripts grab the wrong row, column, etc.).

For example, if I want to make a Selenium script for adding a row to a table, I have to do something like the following (a rough sketch of the flow follows the list):

  • figure out the XPath to that specific table
  • store the XPath
  • store the XPath count (the current number of rows)
  • add a row
  • fill in the details
  • get the new XPath count
  • make sure the new row count is one more than the original
  • if all is good so far, get the specific path to the new row and hope it's where you think it is
  • assert that every field of the new row matches what was entered when the row was created
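For concreteness, here is roughly what that flow looks like with the Python Selenium WebDriver bindings. The URL, XPaths, field names, and button IDs below are invented for illustration; the real locators depend on how the Kendo grid is rendered.

    # Rough sketch of the add-a-row flow above (all locators are hypothetical).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.test/grid")                  # hypothetical page

    table_xpath = "//div[@id='ordersGrid']//table"           # XPath to the specific table
    row_xpath = table_xpath + "/tbody/tr"

    original_count = len(driver.find_elements(By.XPATH, row_xpath))

    driver.find_element(By.ID, "create-row").click()         # open the 'Create Row' page
    driver.find_element(By.NAME, "name").send_keys("Bob")    # fill in details
    driver.find_element(By.ID, "submit").click()

    new_count = len(driver.find_elements(By.XPATH, row_xpath))
    assert new_count == original_count + 1                   # one more row than before

    # The brittle part: guessing which row 'Bob' landed in.
    guessed_row = driver.find_element(By.XPATH, f"{row_xpath}[last()]")
    assert "Bob" in guessed_row.text   # fails whenever the table sorts 'Bob' somewhere else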

Let's say your table stores things alphabetically and you can't control all of the other tests the devs are running, so it's populated with 54 items: some made by you, some made by others. You run your Selenium script to click 'Create Row', and then on the 'Create Row' page it automatically fills in the details for a row with the main attribute name of 'Bob'. Selenium then clicks 'Submit'.

The table/webpage inserts the row 'Bob' between 'BAMF' and 'Karl', but the Selenium test ultimately fails because the content is dynamic and the script has no idea which row to look in for 'Bob'. If I have to look at the table each time I run a test to see where 'Bob' would go, just so I can update the script to know where the row will be, I might as well not automate.

Are tests like these not supposed to be automated? Are test scripts like these supposed to only run on empty tables that you populate yourself?

Best Answer

You seem to be on the right path for live, automated testing, but you have some significant complicating factors:

  1. Dynamic content ups the difficulty of specifying a test. You often have to state it in at least somewhat more generic terms.

  2. But far worse: miscellaneous other, possibly competing tests happening in parallel. This really ups the ante. If other tests might add "Alice" or even "Bob" while you're adding "Bob," you can't assume items will be at fixed positions in the dataset, or that there will be a fixed count of them. Your navigation and test conditions end up having to be stated much more flexibly and generically. And you'll need to be flexible about failures. In a parallel environment, some tests will invariably fail, even if correctly specified. That is inescapable. If someone else tests "delete a random row" or "purge all records that start with the letters A-F" while you're testing "add a Bob row," your row is at risk. Over time, given enough tests, the competing test will occasionally win. It will also be a transient failure: rerun your test, and it's unlikely the tests will collide again in quite the same way. (Transient issues can make things better, because they are rarer, but also worse: they're much, much harder to track down, understand, and fix, because they're time-dependent.)

Some recommendations:

  1. Do not give up on automation. The more complex and dynamic the data and the application behavior, the more important it is that tests be automated. In complex systems, you are not going to have the energy and focus to repeatedly test every behavior, not even if you have a large team doing the work. Automation is the only reliable way forward. Be patient as you learn to specify those tests in more flexible and inclusive ways.
  2. Increase the isolation of your tests. These days cloud instances, virtual machines, and virtual environments are amazingly cheap. Maybe you can spawn off your own app instance to test. That would remove the parallelism bugaboos entirely, simplifying your job. It would also allow you to start with pristine, pre-defined datasets that you load for each test run, or whenever convenient for your testing needs. However, I know that might be difficult. I have been in situations where management insisted we couldn't afford separate test instances. I've even worked on live, commercial products where it was mandated that we test some things on production systems, with production data (shiver).

  3. Increase your data isolation. Even if you cannot carve out your own test instance(s), you can still increase the isolation of your tests. I often use obviously unique data to do this. Don't add "Alice" and "Bob", for example; add, say, "Alice_21387" and "Bob_21387", where the suffix is a randomly generated index per test run. Can't add numbers to those fields? Add random letters. There is very little chance "Alicezcfh" and "Bobzcfh" will be used by any competing test. (A small sketch of this follows the list.)

  4. Allow your tests to fail. Most testing frameworks have the ability to wait and retry tests. Use it. Let your tests fail up to N times, with a D-second delay between runs. For many webapps, N=3, D=15 will do the trick. If it's not a built-in option in your test runner, it's easy enough to write this kind of retry logic yourself (also sketched below).

  5. Make your test scripts more robust in what they expect as outcomes. For example, if after an insert the record for "Bob" must be somewhere in a table, do not write a test that expects it to appear at exactly row number 42. Write the test instead to expect "Bob" anywhere in the table (hat tip to @Doc Brown). A sketch of that check also follows below.
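To make point 3 concrete, here is a minimal sketch in Python of generating one suffix per test run; the names and suffix length are arbitrary:

    import random
    import string

    def run_suffix(length: int = 4) -> str:
        """Generate one random lowercase suffix, e.g. 'zcfh'."""
        return "".join(random.choices(string.ascii_lowercase, k=length))

    SUFFIX = run_suffix()          # generated once, shared by every test in this run
    alice = f"Alice_{SUFFIX}"      # e.g. 'Alice_zcfh'
    bob = f"Bob_{SUFFIX}"          # e.g. 'Bob_zcfh'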
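For point 4, if your runner has no built-in retry (pytest users might reach for a plugin such as pytest-rerunfailures instead), a rough sketch of an N-tries, D-second-delay wrapper:

    import functools
    import time

    def retry(times: int = 3, delay_seconds: float = 15.0):
        """Re-run a flaky test up to `times` times, sleeping between attempts."""
        def decorator(test_func):
            @functools.wraps(test_func)
            def wrapper(*args, **kwargs):
                last_error = None
                for attempt in range(times):
                    try:
                        return test_func(*args, **kwargs)
                    except AssertionError as error:
                        last_error = error
                        if attempt < times - 1:
                            time.sleep(delay_seconds)
                raise last_error
            return wrapper
        return decorator

    @retry(times=3, delay_seconds=15)
    def test_add_bob_row():
        ...   # the add-row test from the question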
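And for point 5, a sketch that only asserts the new row exists somewhere in the table rather than at a fixed position. The table XPath is again hypothetical, and it combines naturally with the unique-suffix idea from point 3:

    from selenium.webdriver.common.by import By

    def table_contains(driver, table_xpath: str, cell_text: str) -> bool:
        """True if any row of the table has a cell with exactly this text."""
        matches = driver.find_elements(
            By.XPATH, f"{table_xpath}//tr[td[normalize-space()='{cell_text}']]"
        )
        return len(matches) > 0

    # 'Bob_zcfh' may land wherever the table's sorting puts it; we don't care where.
    assert table_contains(driver, "//div[@id='ordersGrid']//table", f"Bob_{SUFFIX}")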

In summary, don't give up on the automated approach. There are ways to make it work. It's imperfect, especially if you're testing a system that is being changed as you test. You will sometimes have to rejigger, harden, and improve tests as you go along. But it can be practical and effective. It's also your only real hope. The logical alternative, repeated manual testing, will never work with anything resembling ease, speed, and reliability.
