Automation strategy can be seen as a counterpart to the traditional test strategy. Increasingly, programs and projects now treat test automation as a de facto part of the quality assurance process, which raises the need to modify our existing test strategy documents with automation in mind.
A good automation strategy identifies the various scenarios that arise in the development life cycle, most of which come from experience, and defines a standard operating procedure for each identified scenario.
Since test automation cannot be a one-size-fits-all solution, I have tried my best to include all the key points needed to build a significantly better automation test strategy for your programme or project.
A good automation strategy:
Cannot change frequently when in flight
Explains the bigger picture of quality assurance
Is aligned to your organization’s quality assurance goals
Is agreed with program leadership
Can be used for multiple projects with minimal modification
Major sections of a good automation strategy
Clearly define the high-level organizational inputs with regard to quality assurance and automation. This section reflects the organization's attitude towards test automation and how seriously it takes it.
Include any reference materials used to prepare the document: links to Confluence pages, email communications, white papers and books.
General definitions and acronyms
Project Name(s)
Ad hoc testing
Scenario
Test Cases
Automated Test Cases
Coverage
Design
Test Plan
Use Cases
This section states what is covered by the QA process. Project names and the list of applications under test can be listed here. For example, all development done in the previous sprint needs to be automated by the end of the current sprint. This section can also mention the level of automation needed at the unit testing, API/SOA testing and GUI testing levels.
Mention any limitations and out-of-scope areas for the QA and automation process here. For example, at any given time, the current sprint's code need not be automated but should be manually tested.
This section defines the documentation conventions followed within the organization
Document name
Prepared By
Audience
Approvals
Frequency
The testing approach section is where we define the types of testing performed on the system under test.
Unit Testing
API and SOA testing
System testing
Usability Testing
Load testing
Performance Testing
Regression Testing
Recovery Testing
Conversion Testing
Security and Pen Testing
Installation and Configuration
Documentation testing
Test cases coverage
Automated tests coverage
Defect lifecycle model
Test cases
Execution - Releases, Cycles
Exit Criteria
Go – No Go guideline
Program leadership can choose either of these approaches. Both approaches have their advantages and disadvantages, so this can be further elaborated in the project-specific test plan.
The program should decide on the automation triangle (test pyramid) approach and clearly set goals for the percentage of unit, API and UI tests to be covered for the system under test.
Discusses how to involve manual testers in the requirement analysis phase and automation testers in the development and unit testing phase.
Automation tools available in the organization can be defined and listed here. Automation tools are generally identified early in the development life cycle through a wide-and-shallow approach: testers run a smoke test across all the available systems under test using a variety of tools to come up with a feasibility matrix, which is a great input for the program when choosing which automation tools to invest in.
How the overall automation framework is created and maintained in a centralized repository can be mentioned here. Boilerplate code for most of the types of testing involved, data models for the most common data, and methods for generating test data are maintained in the repository.
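As an illustration, here is a minimal sketch of the kind of shared test data helper such a repository might hold; the class and method names are hypothetical and not taken from any particular framework:

```java
import java.util.UUID;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical shared helper kept in the central automation repository;
// projects reuse it instead of hand-rolling their own test data.
public final class TestDataFactory {

    private TestDataFactory() {
        // utility class, no instances
    }

    // Generates a unique, clearly identifiable test user name
    public static String testUserName() {
        return "qa_user_" + UUID.randomUUID().toString().substring(0, 8);
    }

    // Generates a random order amount within a realistic range
    public static double orderAmount() {
        return ThreadLocalRandom.current().nextDouble(1.0, 10_000.0);
    }
}
```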
Data driven framework (a minimal sketch follows this list)
Keyword driven framework
Scenario driven framework
Hybrid Framework
BDD, ATDD framework
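To make the data-driven style above concrete, here is a minimal sketch using JUnit 5 parameterized tests; the discount rule, its class and the numbers are invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Data-driven style: the same test logic runs once per row of test data.
class DiscountRuleTest {

    @ParameterizedTest
    @CsvSource({
            "100.0, 0.0",   // below threshold, no discount
            "500.0, 5.0",   // mid tier
            "1000.0, 10.0"  // top tier
    })
    void discountMatchesExpectedPercentage(double orderTotal, double expectedPercent) {
        assertEquals(expectedPercent, DiscountRules.discountFor(orderTotal), 0.001);
    }
}

// Minimal stand-in for the class under test, kept here only so the sketch is self-contained
class DiscountRules {
    static double discountFor(double orderTotal) {
        if (orderTotal >= 1000.0) return 10.0;
        if (orderTotal >= 500.0) return 5.0;
        return 0.0;
    }
}
```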
Automation Framework Technology Stack
Frequency of scheduled automation runs in the dedicated environment and CI pipeline, the location of and links to such pipelines, and the target audience for the automated report distribution list
Standard procedures for how to test the various kinds of shipment and the level of tests required for each (hot fixes, features, releases)
Most importantly, this section should define when to stop the automation effort and start thinking about revisiting the automation plan, the strategy, or both. This is important because sometimes the test cases and the application under test demand a change in automation approach, and it might be easier to drop the current effort and go back to the whiteboard to brainstorm a more efficient alternative for automating such tests (for example, would it be easier to use Postman rather than bare-bones Java to test my REST calls?).
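To illustrate that trade-off, a bare-bones Java REST check using only the JDK's built-in HTTP client might look like the sketch below; the endpoint URL and the expected body text are placeholders. If most of a suite looks like this, a dedicated API tool such as Postman may well be the easier route:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestSmokeCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Placeholder endpoint; substitute the real service under test
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.org/api/health"))
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // A simple assertion-style check on the status code and body
        if (response.statusCode() != 200 || !response.body().contains("UP")) {
            throw new AssertionError("Health check failed: " + response.statusCode());
        }
        System.out.println("Health check passed");
    }
}
```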
The environment setup strategy agreed with the IT support team should be maintained here.
Environment details
Credential management for testing team, test (mock) users and accounts
Software shipping strategy (Dev -> SIT -> UAT -> Live)
Level, type and amount of testing conducted in each environment (e.g. Dev – unit and 30% API; SIT – regression and 70% API; etc.)
Data creation, storage, anonymization and tear-down process (a minimal sketch follows this list)
Separate automation environment and its own CI pipeline
Automation environment requirements (RAM, Hard Drive capacity)
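As an illustration of the data creation and tear-down item above, a minimal JUnit 5 sketch might look like this; the helper methods are hypothetical stand-ins for whatever data layer or API the project actually uses:

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

import java.util.ArrayList;
import java.util.List;

// Create-and-tear-down discipline: every record a test creates is remembered
// and removed afterwards, so shared environments stay clean.
class OrderFlowTest {

    private final List<String> createdRecordIds = new ArrayList<>();

    @BeforeEach
    void createTestData() {
        // Hypothetical helper that creates an anonymized test customer and returns its id
        createdRecordIds.add(createAnonymizedCustomer("qa_order_flow"));
    }

    @Test
    void orderCanBePlaced() {
        // ... exercise the system under test using the created data ...
    }

    @AfterEach
    void tearDownTestData() {
        // Remove everything this test created, regardless of pass or fail
        createdRecordIds.forEach(OrderFlowTest::deleteRecord);
        createdRecordIds.clear();
    }

    // Stand-ins so the sketch is self-contained; real projects would call their data layer or API
    private static String createAnonymizedCustomer(String prefix) {
        return prefix + "-" + System.nanoTime();
    }

    private static void deleteRecord(String id) {
        // no-op in this sketch
    }
}
```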
Although this is closely related to which programming language is chosen for automation, general coding standards should be explicitly defined here to avoid conflicts and confusion.
This can be a simple list of industry standards, such as class names starting with a capital letter, method naming conventions and how to name data models.
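For example, a few of those conventions can be captured as short code samples next to the written rules; the names below are purely illustrative:

```java
// Class names in UpperCamelCase, describing what the class represents
public class CustomerAccountPage {

    // Method names in lowerCamelCase, starting with a verb
    public void openAccountSummary() {
        // ...
    }
}

// Data models named after the business entity they carry, suffixed consistently
class CustomerDto {
    String customerId;
    String fullName;
}
```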
This section should explicitly indicate what level of tagging of UI elements should be maintained by the development team.
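For instance, if the development team agrees to tag key elements with a stable attribute such as data-testid (the attribute name and the page below are assumptions, not a mandated standard), automation code can locate elements without brittle XPath, as in this Selenium sketch:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Page object that relies on the agreed tagging convention rather than layout-dependent locators
public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Assumes the page contains: <button data-testid="login-submit">Sign in</button>
    public WebElement submitButton() {
        return driver.findElement(By.cssSelector("[data-testid='login-submit']"));
    }
}
```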
Central repository location
Repository branching strategy
Peer review process
Overall branching model (develop, feature branches, hot fixes, releases, master)
Agreed reporting templates that are churned out of automated runs
A simple, elegant and standardized report makes it easy for the audience to digest.
Level of reports and respective target audience
Unit test reports for all developers and test leads
Regression test reports for all development leads and all testers
Overall application RAG status for program leadership