A little while back I went to the launch of the latest version of Testpro’s TAF product suite.
Testpro are an Australian company that specialise in testing solutions built on the automated testing tools in the marketplace, principally IBM’s Rational software.
Unlike other testing companies, they are not simply a supplier of services but have also developed software solutions that add value to the entire approach, lessening the investment needed to establish an automated testing regime. Most notable is their Testpro Automation Framework, or TAF. Apparently it just won an IBM international product award, which is quite prestigious, though I don’t really play in that space so I’m not entirely sure what that means. In any case, we have partnered with Testpro using TAF on a few projects for law firms now, hence the invite to the function.
What TAF provides is an environment to easily build tests that use the power of the Rational product as the automation engine. If you have seen any of these tools, you will know that there is a fair investment in developing the test cases and linking them to the application under test.
Most importantly, TAF is data driven: you feed the same script a number of different data sets to ‘execute’ different scenarios. If you think about it, when someone is testing an application – say creation of a bill – the human tester will start the application and execute the same or similar steps every time, but enter different data to test different scenarios. So you might have to select a range of different matter types, or add a discount, for example. But ultimately, the nature of practice management system transactions means you are visiting the same screens over and over to enter different data to test different scenarios.
This is where the power of the Testpro solution comes to the fore – because it is data driven. So by ‘feeding’ the testing scripts a different set of ‘data scenarios’ you are testing different parts of the system.
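To make the idea concrete, here is a minimal sketch of what data-driven testing looks like in principle: one fixed set of scripted steps, driven by many data rows. The `BillingScreen` class and its method are hypothetical stand-ins, not TAF or Rational APIs.

```python
# Minimal sketch of data-driven testing: the script (steps) never changes,
# only the data rows fed into it do. All names here are hypothetical.

class BillingScreen:
    """Hypothetical wrapper around the screens the automation tool drives."""

    def create_bill(self, matter_type, amount, discount_pct):
        # The same steps run for every scenario; only the data differs.
        total = amount * (1 - discount_pct / 100)
        return round(total, 2)

# Each row in the datasheet is one scenario.
scenarios = [
    {"matter_type": "litigation",   "amount": 1000.0, "discount_pct": 0},
    {"matter_type": "conveyancing", "amount": 500.0,  "discount_pct": 10},
    {"matter_type": "probate",      "amount": 250.0,  "discount_pct": 5},
]

screen = BillingScreen()
results = [screen.create_bill(**row) for row in scenarios]
print(results)  # one billed total per data scenario
```

Adding a new test scenario is then just adding a row of data, which is what makes the approach so cheap to extend.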
The data driven nature also allows you to use the testing scripts to create the data scenarios you need to test. So, for example, say you wanted to test a billing situation where there were certain types of time items for a given set of timekeepers in a given department. Additionally, you want to test billing of certain types of disbursements. With this solution, you can populate the datasheets for the time and disbursement entry script to create those items (which in itself is a testing stream), and then, as the next step, create a bill which includes those items. So you never have to worry about ‘finding the right data’ to execute a specific test, nor laboriously enter the individual data items to ‘create’ the test scenarios – the testing framework saves you that manual labour.
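The chaining described above can be sketched as two simulated script runs: the entry script’s datasheet creates the time and disbursement items, and the billing script then consumes exactly those items. Both functions and all field names are illustrative assumptions, not real TAF scripts.

```python
# Sketch of using data-driven scripts to *create* test data, then test against
# it. run_entry_script and run_billing_script are hypothetical simulations.

def run_entry_script(datasheet):
    """Simulate the time/disbursement entry script: each row becomes an item."""
    return [
        {"timekeeper": row["timekeeper"], "type": row["type"], "value": row["value"]}
        for row in datasheet
    ]

def run_billing_script(items):
    """Simulate the billing script: bill all items the entry step created."""
    return sum(item["value"] for item in items)

# The entry datasheet doubles as the setup for the billing test:
# no hunting for 'the right data' in the system.
entry_datasheet = [
    {"timekeeper": "partner_a", "type": "time",         "value": 800.0},
    {"timekeeper": "clerk_b",   "type": "time",         "value": 200.0},
    {"timekeeper": "clerk_b",   "type": "disbursement", "value": 55.0},
]

items = run_entry_script(entry_datasheet)  # step 1: create the scenario data
bill_total = run_billing_script(items)     # step 2: bill those exact items
print(bill_total)
```

Because step 1 is itself a test run, data creation and testing come from the same datasheets, which is the point the paragraph above makes.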
The last project we were involved with was with one of Australia’s largest law firms, working on a version upgrade of the Aderant product. The main difficulty with this client is that the system has been highly customised, and the processes have multiple and varied information flows that deliver varied outputs as a result. Of course this is an issue with all firms, but the larger the firm, the more customisation and tailoring occurs.
The traditional approach to verifying all of this has been to engage internal testing resources to manually test that the system is functioning correctly. The perennial challenge with manual testing is that it is slow, error prone, rarely thorough enough and constrained by the availability of business resources. The business impact of sub-optimum testing ranges from loss of time and productivity from minor system errors, through to disruption to clients and ultimately financial losses from major system issues.
Additionally, as you get closer to the ‘go live’ date, i.e. when you have less and less time, the testing window starts to shorten, so you have to make value judgements on which parts of the system you will and won’t test when you “just put in that one minor, last minute change”! The result is that at the time when you want the most thorough testing coverage – just before you pull the trigger to go live – you start to narrow the coverage instead, because you just haven’t got enough time to assemble all the testers to retest all of the scenarios.
The beauty of automated data driven testing is that you can kick off the process and let it run all the test scenarios that are available at the time, as it’s infinitely faster to run than manual testing. So your coverage actually increases as the project gets closer to go live – because you can add new data scenarios to the test suite to cover the changes.
That’s not to diminish the role of the test management team. Analysis and thought need to go into the definition of the data scenarios that will drive the testing suite. Additionally, there will be some areas of the system that require manual testing because they don’t lend themselves to automation. But being able to provide extensive coverage of the repetitive and mundane testing areas not only saves time, but also the sanity of the testing team – they will definitely thank you for it!
But it’s not just in new implementations and upgrades of systems that automated testing provides value; consider when you need to apply service packs to Microsoft server and desktop environments. Sure, you would hope that the vendors involved have performed their compatibility testing, but they can’t hope to cover the myriad combinations of software that may reside within a large organisation. Being able to kick off several thousand test scenarios, without having to gather a testing team and use up valuable resources, beats the alternative: trusting the vendors implicitly and doing only rudimentary testing, which is like playing Russian roulette.