Automation Guild 2021 Online Conference — Review

Anirudh Prayaga
14 min read · Feb 18, 2021

Automation Guild 2021 is a five-day online conference where test automation experts with diverse backgrounds and expertise share their knowledge and experience across a variety of topics. The event is run annually by Joe Colantonio, and this was the fifth edition and the largest by registrations (~800 people) to date. This year it was held from Feb 8–12, with more than 30 speakers from different parts of the world. The conference was split into two sections: functional testing (Days 1–3) and non-functional testing (Days 4–5). The talks covered topics ranging from API automation testing, test data management, mobile app automation testing, and automated accessibility testing to performance testing, to name a few.

source : https://guildconferences.com/wp-content/uploads/2019/08/Round-Guild-Logos-AG-With-text.png

I always look forward to this time of the year, as it is a great opportunity to learn awesome things in the world of automation from the experts, to connect and chat with other test automation pros via the Slack channel (https://automationguildconf.slack.com), and to stay updated with the latest trends in the test automation space.

Since I attended the conference during the work week, with meetings and other work happening in parallel, I obviously could not attend all of the sessions. Below I have summarized the talks I was able to attend and found interesting.

DAY 1:

Rick Martin: Test Automation is Not (just) Automating Tests

Photo by Christina @ wocintechchat.com on Unsplash

Rick Martin discusses how test automation influences, and is influenced by, every aspect of the Software Development Life Cycle (SDLC). He makes the important point that we should automate the behavior, not the manual test cases directly: manual tests serve as a guide, not as a precise recipe. Test automation should benefit the whole team and the business, and shifting left is of huge help here. Shifting left enables the whole team to understand the requirements in detail: devs are roped in for unit testing, the DevOps engineer for CI (how often to run the tests and what reporting should look like), and product owners benefit from getting important feedback from other team members on use cases and requirements in general.

Test automation is software development+, and test code should be written with simple elegance: understandable and maintainable. One should not automate everything; apply test automation ROI and leave more time for exploratory testing. Test automation is not test case automation. The two are related but different. Test automation should not be treated merely as a way to reduce the cost and overhead of testing; framing it that way ignores the larger value it brings to the table. Using record and playback is test case automation, not test automation. Test automation requires you to look at the entire SDLC (waterfall or agile), identify the business value to be achieved, and allocate specific tests to achieve those goals. Test automation is a habit!

Requirements: We need to be sure the requirements give us appropriate information to do the job. We should be able to verify that they are complete, clear, correct, consistent, and testable. The definition of done should include both manual and automated testing.

Architecture: During this phase, we need to assess how early we can address non-functional requirements, if at all. Factors such as reliability/safety, precision, and scalability must be taken into consideration.

Design: SOLID is a robust set of design principles: Single Responsibility, Open-Closed (open for extension, closed for modification), Liskov Substitution, Interface Segregation (e.g., 2D and 3D methods should be separated using interfaces), and Dependency Inversion (high-level modules should not depend on low-level modules). Our test automation framework should comply with these principles.

Implementation: Several factors should be taken into account: whether to use static analysis, Test Driven Development (TDD), Behavior Driven Development (BDD), or Acceptance Test Driven Development (ATDD); what the definition of done (DoD) is for both manual and automated tests; and testing both positive and negative scenarios. Test automation should not be a post-release activity.
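
To make the Dependency Inversion idea concrete in a test automation context, here is a minimal TypeScript sketch (my own illustration, with hypothetical names, not code from the talk): the page object depends on a driver abstraction rather than on a concrete low-level tool, so the tool can be swapped without touching the tests.

```typescript
// A sketch of Dependency Inversion in a test framework. All names here are
// hypothetical: the page object depends on the BrowserDriver abstraction,
// not on a concrete tool, so the low-level driver can be swapped freely.
interface BrowserDriver {
  open(url: string): void;
  click(selector: string): void;
}

// One possible low-level implementation; a mock or a different tool could
// implement the same interface without changing any test code.
class ConsoleDriver implements BrowserDriver {
  open(url: string): void {
    console.log(`open ${url}`);
  }
  click(selector: string): void {
    console.log(`click ${selector}`);
  }
}

// High-level module: knows business intent, not browser mechanics.
class LoginPage {
  constructor(private driver: BrowserDriver) {}

  visit(): void {
    this.driver.open('https://example.com/login');
  }
  submit(): void {
    this.driver.click('#login-button');
  }
}

const page = new LoginPage(new ConsoleDriver());
page.visit();
page.submit();
```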

During & Post Deployment: We should be testing in the production environment as a part of deployment. Synthetic transactions (health checks), monitors and alarms (to adequately protect the systems), and chaos engineering (resiliency and recoverability from failures, or from lack of service from a 3rd party) should all be part of the overall test automation strategy.
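
As a minimal sketch of what a synthetic transaction can look like, here is a hypothetical health check in TypeScript; the /health endpoint and URL are assumptions, and it relies on the global fetch available in Node 18+.

```typescript
// Hypothetical health-check script: poll an endpoint and fail loudly so a
// monitor or alarm can pick it up. The URL is an example, not a real service.
async function checkHealth(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/health`);
  if (!res.ok) {
    throw new Error(`Health check failed with status ${res.status}`);
  }
  console.log(`Health check passed: ${res.status}`);
}

checkHealth('https://app.example.com').catch((err) => {
  console.error(err); // wire this into your alerting/monitoring system
  process.exit(1);
});
```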

Julia Pottinger: Next Level API Automation: Testing the Boundaries

In this talk, Julia discusses how to test APIs efficiently with automation, implementing both positive and negative test scenarios to exercise the APIs thoroughly.

Photo by Michael Dziedzic on Unsplash

We normally cover positive scenarios in automated API tests, but we need to test the negative scenarios as well: ensure the correct error message shows up, and verify that access is denied if the proper token or access code is not provided. We also need to verify the schema of the response being returned. Among the many available tools: XHR can be used for API testing, as can Frisby; SuperTest (HTTP assertions made easy with SuperAgent); chai-http (integration testing with Chai assertions and regular API testing); Cypress (to make HTTP requests); and Chakram (REST/GraphQL APIs), which works along with Mocha (a BDD framework).

She demonstrated three scenarios using Chakram: 1. verifying API responses against a schema, 2. verifying errors, and 3. verifying that multi-step API workflows are working properly.

Using jsonschema.net, we can generate schemas that can be used to validate the APIs; this ensures the responses are structured correctly and that the values in the response are of the expected types. She used a Chakram assertion to verify that the response conforms to the schema. Additionally, we can check the status of the response and the header type using Chakram.
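
For illustration, here is a minimal sketch of what such a schema assertion looks like with Chakram and Mocha; the endpoint and schema are hypothetical, not the ones from her demo.

```typescript
// A hypothetical Chakram + Mocha test validating a response against a JSON
// schema; the endpoint and schema are illustrative, not from the demo.
const chakram = require('chakram');
const expect = chakram.expect;

describe('GET /users/1', () => {
  it('returns a response matching the expected schema', () => {
    const userSchema = {
      type: 'object',
      required: ['id', 'name'],
      properties: {
        id: { type: 'integer' },
        name: { type: 'string' },
      },
    };
    const response = chakram.get('https://api.example.com/users/1');
    expect(response).to.have.status(200); // status check
    expect(response).to.have.header('content-type', /application\/json/);
    expect(response).to.have.schema(userSchema); // structure and value types
    return chakram.wait(); // resolve all queued Chakram expectations
  });
});
```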

Good advice for API automation is to always have negative tests (an invalid API key, an invalid POST request, input data-type mismatches, sending requests with invalid names, etc., and verifying the corresponding status codes/error messages such as 400, 401, 403, and 404). She also demoed how to test a multi-step API workflow with Chakram using POST and GET methods; this amounts to using one request's data in another request for verification. For example, use a POST request to create a record, take the ID from the response, verify the details of that record with a GET request (aka chaining the requests), and then update the record.
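
A sketch of that chaining pattern, again with hypothetical endpoints and payloads: the POST response feeds the GET, and a deliberately unauthenticated request verifies the error path.

```typescript
// Hypothetical request-chaining and negative tests with Chakram: a POST
// creates a record, its ID feeds a GET, and an unauthenticated call is
// expected to be rejected. Endpoints and payloads are placeholders.
const chakram = require('chakram');
const expect = chakram.expect;

describe('record workflow', () => {
  it('creates a record and reads it back', () => {
    return chakram
      .post('https://api.example.com/records', { name: 'Widget' })
      .then((createResponse: any) => {
        expect(createResponse).to.have.status(201);
        const id = createResponse.body.id; // data from the first request
        return chakram.get(`https://api.example.com/records/${id}`);
      })
      .then((getResponse: any) => {
        expect(getResponse).to.have.status(200);
        expect(getResponse).to.have.json('name', 'Widget');
      });
  });

  it('rejects a request without a valid token', () => {
    const response = chakram.get('https://api.example.com/records');
    expect(response).to.have.status(401); // negative path
    return chakram.wait();
  });
});
```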

Titus Fortner: Managing Your Data with Abstractions

In this talk, Titus focuses on the use of test data in an automated test suite and how to manage it effectively without running into maintenance issues. He suggests avoiding data-driven testing and any implementation that uses data in a way that does not provide semantically useful information to our tests.

Photo by Alexander Sinn on Unsplash

There are 8 major components of a good test automation framework, and one that often gets overlooked is managing the test data. Abstracting implementation details away from your tests is just as important for data as it is for the display. He then went over the basics of test data management and demoed an example using Java code (reflection) for managing the abstraction strategy effectively.

The 8 components of a test framework are: assertions on actions, initialization and cleanup, configuration data, data modeling, site modeling abstractions, synchronization strategy, wrappers and helpers, and API usage. He focused on data modeling. He suggests writing our tests declaratively (focusing on the 'what', business logic, the big picture, and contextual data) instead of imperatively (the 'how', implementation logic, specific details, and all the data), and demoed this with an example.
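
As a rough illustration of the contrast (my own sketch, not Titus's Java demo), a hypothetical createUser helper lets a test state only the data that actually matters to it.

```typescript
// A sketch contrasting imperative and declarative test data, using a
// hypothetical createUser helper that fills unspecified fields with defaults.
interface User {
  name: string;
  email: string;
  role: string;
}

function createUser(overrides: Partial<User> = {}): User {
  const defaults: User = {
    name: 'Default User',
    email: 'default@example.com',
    role: 'member',
  };
  return { ...defaults, ...overrides };
}

// Imperative: every detail is spelled out, even ones the test ignores.
const imperativeAdmin = createUser({
  name: 'Jane Doe',
  email: 'jane@example.com',
  role: 'admin',
});

// Declarative: only the contextually relevant attribute appears.
const declarativeAdmin = createUser({ role: 'admin' });
console.log(imperativeAdmin.role, declarativeAdmin.role); // 'admin admin'
```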

He discussed the 4 major approaches to Test Data Management.

  • Grab and Hope: use the first available option on the site.
  • Static Reference: a traditional QA/staging environment, with a list correlating existing data to specific tests.
  • Dynamic Fixtures: inject data into the database at the beginning of the test run.
  • Just in Time: each test is responsible for all of its own data.

The underlying idea: the only way to scale your tests is to run them in parallel, and to achieve that the tests must run autonomously. The way to get there is the 'Just in Time' approach, where every single test is responsible for all of its own data. While demonstrating this with an example, he pointed out how useful the 'Faker' library is: it provides semantically relevant methods with a lot of control over exactly what goes where, and pretty much anything that can be put in a form can be pulled from a Faker library.
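
A minimal sketch of the 'Just in Time' approach with the JavaScript faker library (shown here with the modern @faker-js/faker package and its current API; the form fields are hypothetical):

```typescript
// Just-in-time test data with faker: every test builds fresh, semantically
// relevant data for itself. The form fields here are hypothetical.
import { faker } from '@faker-js/faker';

function buildRegistrationForm() {
  return {
    fullName: faker.person.fullName(),
    email: faker.internet.email(),
    phone: faker.phone.number(),
    streetAddress: faker.location.streetAddress(),
  };
}

// Each call yields new, realistic-looking values, so parallel tests never
// compete for the same records.
console.log(buildRegistrationForm());
```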

DAY 2:

Eran Kinsbruner: Key Mobile App Trends & What They Mean for Dev & Testing

Photo by Daniel Romero on Unsplash

Eran's talk focuses on key mobile app trends, what they mean for development and testing, and what to focus on in 2021 and beyond.

He talks about Apple App Clips, which are part of iOS 14: a subset of a native mobile app that gives users a sneak peek of the app's functionality without going to the App Store and downloading it. Foldables: many big players in the market are currently working on foldable mobile devices, which demand a totally different approach and set of considerations compared to traditional smartphones. He also suggests that PWAs (Progressive Web Apps) will be trending in the mobile space in 2021.

He discusses Android App Bundles: essentially a zip of smaller APKs, each unique to a specific set of devices and geography with a dedicated set of functionality. Big data and test impact analysis will be critical as we move further and further toward continuous testing in the DevOps world. Mobile test automation frameworks such as Appium, Espresso, XCUITest, and Flutter are becoming standardized.

For an efficient mobile testing strategy, a tester needs to focus on these generic mobile app testing considerations: app functionality (business flows, UI testing, cross-platform coverage), real-environment condition testing (network conditions, interruptions such as calls/texts/alerts, background/gestures), and non-functional testing (security, accessibility, performance, availability, API testing). All of these must be shifted left as much as possible and integrated into the CI/CD pipeline.

He then discusses platform-specific mobile test automation frameworks for iOS and Android (Espresso/XCUITest), cross-platform frameworks for iOS and Android (Appium/Flutter), and how they are implemented.

  • Espresso: Java test creation within Android Studio for Android apps.
  • XCUITest: Objective-C/Swift test creation within Xcode for iOS apps.
  • Appium: WebDriver language bindings.
  • Flutter: Dart scripting & Appium/Flutter.

Although Appium is the leading framework for cross-platform mobile automated testing, the most efficient way to test native apps is to use the platform-specific frameworks.

App Bundles are essentially zip files consisting of individual APK files; based on rules, specific devices, and user requests, the right pieces get installed and enabled on devices in production. Google is mandating that apps be published as App Bundles from the second half of 2021.

Benefits and deployment options of App Bundles:

  • Smaller application binary sizes.
  • Lower overall resource consumption (CPU, battery, etc.).
  • The ability to gradually expose, as well as enable/disable, specific modules and features based on customers' countries, requirements, and other considerations.

He also covers important considerations for testing App Clips across their various distribution methods: NFC tags, QR codes, the Safari app banner, links in Messages, place cards in Maps, and recently used App Clips. Some of the key areas to test on foldable devices are app continuity/extension, multi-window capabilities, main and cover displays, app resizability, split view, UI layout, multi-OS compatibility, resource consumption, and accessibility.

Lauren Clayberg & Anton Hristov: Testing in a Fast-Paced Delivery Organization

As the title suggests, the speakers give us a deeper insight into what testing entails in a fast-paced delivery environment and how they use mabl to achieve it.

source : https://s27389.pcdn.co/wp-content/uploads/2020/04/automating-six-cs-devops.jpg

DevOps is a fast-paced environment, and testing is increasingly becoming a bottleneck in it; at the same time, competitive pressure makes testing more important than ever, so it should be included earlier in the DevOps pipeline. Testing in DevOps spans the code, pull request, deploy, and run phases. Unit tests are great; integration tests verify different pieces of code working together; and E2E tests are essential and considered best practice for validating user experiences. Exploratory testing is also critical, as it can find the defects automated testing cannot.

The speakers then discussed the testing strategies at mabl. mabl offers a Chrome extension, a CLI tool, and a desktop app, and the team uses mabl itself to test any web app. They use mabl for unit, integration, and E2E testing, and they also do team-based exploratory testing, where they get together in a room and test the application to find any uncaught bugs. The speakers demoed the mabl workspace setup for testing, including the mabl DevTestOps pipeline, monitoring their production app, and using insights to understand product quality.

Key takeaways for any team: test at every step (code, build, deploy, run) of your deployment process to catch issues early; look for ways to integrate your testing tools with your workflow and analytics tools so that everyone is more informed and productive; determine waste and coverage gaps in the E2E tests and take measures to rectify them; and test in production environments to proactively monitor for regressions, especially in 3rd-party services.

DAY 3:

Shriram Krishnan & Guljeet Nagpaul: Full Stack Automation: How to Get It Right!

source : https://cdn.helpsystems.com/styles/crop_general/storage-api-public/am-bpa-tools-1200x628_0_0.jpg?itok=cuHqDxXz

The speakers kicked off the session with a brief history of test automation. They emphasized how web automation rose to prominence in the early part of the last decade, how the importance of unit and API automation grew in the latter part of the decade, and the steep rise of mobile and continuous testing from 2018 onward.

Full stack automation has now almost become a necessity, the main reason being the increasing focus on customer experience. App architecture is more componentized and integrated, and CI/CD and agile development bring a faster rate of change, so we need full stack automation to have confidence that our app's customer experience is top notch.

As the different types of automated tests have evolved, the validations for API, UI, and backend are all done independently, and that is a challenge for full stack automation: it has led to the development of disconnected frameworks that do not integrate with one another.

The speakers then discuss the key considerations for success. Integrated frameworks are the first thing to take advantage of, as they are cross-invocable and pluggable anywhere in the E2E chain. The framework has to be adaptable and extensible. Being able to share data across tech stacks is a key component. The framework also needs to be manageable and traceable, with the whole life cycle linked together. The speakers then demoed a scenario where full stack automation is done right using AccelQ.

DAY 4:

Pragati Sharma: Accessibility Automation Testing

source : https://cdn3.vectorstock.com/i/1000x1000/53/22/accessibility-icon-vector-25025322.jpg

The speaker starts the session by introducing what accessibility testing is all about and why we need it. She makes a good point about the potential of accessibility testing: around 20% of the world's population has a disability, and the apps we develop need to cater to their needs as well. She uses the example of subway stairs vs. ramps to stress that accessibility is not targeted only at a specific group; holistically, it enhances the user experience for everyone.

She highlights the financial stakes of accessibility testing with an example in which an online content streaming company paid $750,000 in a lawsuit filed because it did not have closed captions (CC) enabled.

She demonstrated an example of automated accessibility tests using Cypress and axe. axe can be used with various languages, and there is a Chrome plugin for axe that a user can employ to detect accessibility violations manually. She then explained the Cypress project structure she used for the accessibility automation, followed by how to run accessibility tests against a specific standard such as 'wcag2a', 'wcag2aa', etc.
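
As a minimal sketch of that pattern, here is a hypothetical Cypress spec using the cypress-axe plugin; it assumes the plugin is installed and imported in the Cypress support file, and the URL is a placeholder.

```typescript
// A hypothetical Cypress spec using the cypress-axe plugin. It assumes
// `import 'cypress-axe'` in the Cypress support file and a placeholder URL.
describe('home page accessibility', () => {
  it('has no detectable WCAG 2.0 A/AA violations', () => {
    cy.visit('https://example.com');
    cy.injectAxe(); // inject the axe-core runtime into the page under test
    cy.checkA11y(null, {
      runOnly: {
        type: 'tag',
        values: ['wcag2a', 'wcag2aa'], // restrict the run to these standards
      },
    });
  });
});
```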

Before we address accessibility needs, we should analyse what kind of application we have and how our users use it, and survey which specific accessibility needs (e.g., vision vs. audio) we must address so that our users' experience is enhanced.

DAY 5:

Mohamed Labouardy: Pipeline as Code

source : https://images.techhive.com/images/article/2017/02/pressure-water-line-100707995-large.jpg

In this session, the speaker shows how we can run automated tests using the ‘pipeline as code’ approach with an example.

In this example, he uses Jenkins in distributed mode: a Jenkins master schedules jobs on the available Jenkins workers, and the workers run the CI/CD pipeline based on the stages defined in the Jenkinsfile. He uses SonarQube to continuously inspect the quality of the source code and Anchore to run security tests on Docker images before pushing code to the production servers; at the end, he deploys to AWS servers.

He uses Terraform to configure the Jenkins master and workers and to define when an additional worker should be scaled up. His Jenkinsfile has multiple stages, the first being the checkout stage, which fetches all the changes from the GitHub repo and then builds a Docker image based on the dockerfile.test in his project. In that Docker image, he installs the native dependencies, copies the source code into the container, and installs Chromium to run tests in a headless fashion. He then runs the lint step, which checks the source code for programmatic and syntactic errors and keeps the app in a uniform format.

The next stage runs the unit tests using headless Chromium, which in turn generates a coverage report. After that comes SonarQube for static code analysis, with the folders to analyse pre-defined in the sonar-project.properties file. If there are no issues, we move to the build stage, where a Docker image build is triggered to build the application, and the image is then pushed to the Docker registry. The next stage analyzes the Docker image for security vulnerabilities. If all these stages pass, we deploy. The speaker then showed how it all works with a real example, walking through the stages and the results of each stage in Jenkins.
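
To give a feel for the 'pipeline as code' approach, here is a condensed declarative Jenkinsfile sketch of a similar pipeline; the stage contents, image names, and tool configuration are illustrative assumptions, not the speaker's actual file.

```groovy
// A condensed, illustrative declarative Jenkinsfile for a similar pipeline.
// Stage contents, image names, and tool configuration are assumptions.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm } // fetch changes from the GitHub repo
        }
        stage('Lint & Unit Tests') {
            steps {
                sh 'docker build -t myapp:test -f dockerfile.test .'
                sh 'docker run --rm myapp:test npm run lint'
                sh 'docker run --rm myapp:test npm test' // headless Chromium
            }
        }
        stage('Static Analysis') {
            steps {
                // assumes a SonarQube server configured in Jenkins
                withSonarQubeEnv('sonarqube') { sh 'sonar-scanner' }
            }
        }
        stage('Build & Push') {
            steps {
                sh 'docker build -t registry.example.com/myapp:${GIT_COMMIT} .'
                sh 'docker push registry.example.com/myapp:${GIT_COMMIT}'
            }
        }
        stage('Security Scan') {
            steps {
                // scan the image with Anchore before it can be deployed
                sh 'anchore-cli image add registry.example.com/myapp:${GIT_COMMIT}'
            }
        }
        stage('Deploy') {
            when { branch 'master' }
            steps { sh './deploy.sh' } // e.g. roll out to AWS instances
        }
    }
}
```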

EPILOGUE

This was one of the best conferences I've attended, with a lot of new learning from the speakers and the Slack community. Thanks to Cory Schmidt for encouraging me to attend these sessions, and kudos to Joe & team for making this happen and taking it to another level. It is intriguing to see how rapidly things are changing in the test automation space year after year, especially with AI entering the picture. As always, I'm excited about next year's event and look forward to it.
