How We QA at Chirpify

The moment you implement a process that reduces mistakes, or improves how you respond to them, is the moment your QA process is born. If you’re lucky, this started long before you hired testers or QA engineers, but you almost certainly started thinking more seriously about it when your team or product grew to a certain size. Thankfully, it’s been a priority here since early on.

It’s been an iterative and sometimes transformative process as we’ve grown to serve hundreds of clients and hundreds of thousands of users. Here is a little bit of info on how we do it.

The User

Who are our Users? Mostly they’re mobile (60%+ and counting) and increasingly they’re transacting with us across multiple channels. They’re coming across a Chirpify campaign in the midst of doing something else, so we need to be not just easy to use, but as invisible and lightweight as possible.

My new favorite comment when discussing features is “That’s great, but how does it look on mobile?”. Testing natively on actual Android and iOS devices in the office pairs with tools like Chrome’s mobile viewport emulator to help quickly surface bugs.

Before we even reach the testing stage, a focus on faster load times and asset-size optimization during development pairs well with the responsive frameworks, like Bootstrap, that we already have in place. That said, we’re continually on the lookout for tools to help us better serve our ever-broadening user base.

The Engineer

Testing starts with the engineer, who reviews their own code and tests the “Happy Paths” of the requirements and scope. This all happens locally and on a staging environment that mimics production.

Of course, nothing moves on for further review if the tests are broken. All our repos plug into CircleCI, and tests run automatically in the background every time a pull request is made, allowing an easy at-a-glance pass/fail check before moving ahead. Once a feature passes, it shuffles off to QA.
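For a sense of how little it takes to wire a repo into CircleCI, a minimal circle.yml looks something like the sketch below. The commands here are placeholders, not our actual build steps:

```yaml
## Hypothetical circle.yml sketch -- substitute your own dependency
## and test commands; this is not Chirpify's actual configuration.
dependencies:
  override:
    - bundle install        # install project dependencies
test:
  override:
    - bundle exec rspec     # run the suite; a non-zero exit fails the build
```

With that in place, every pull request gets a pass/fail badge without anyone having to remember to run the suite.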

The QA Engineer

We treat QA as a fast-moving part of the tail end of the development process. Once a ticket is in my hands, we turn fixes around quickly as issues are filed. This is good not just for turnaround time, but also because a feature often gets iterative improvements immediately, while it’s fresh in the developer’s mind.

As QA gets closer to completion and approval, we’re able to integrate feedback from product stakeholders in the same rapid-turnaround process.

Once complete, we work to deploy as soon as possible. On to the next one.

The Stakeholder

Product stakeholders are never far from the product and usually provide feedback before a feature leaves QA. This allows us to make quick decisions about UX, UI, and deeper functionality no matter where we are in the development process.

The Product

One of the practical difficulties of testing our platform lies in our “invisible” software. After registering, a user may not see our app UI for months at a time, even while our software powers their participation through location check-ins, messaging, and image posts.

Users may not notice the absence of notifications, so we need to be smart about how we gauge the health of our messaging. Analytics become an important part of this process: we build a funnel of user actions, our messaging, and finally a transaction and its receipt. Monitoring the volumes and ratios of responses lets us quickly diagnose discrepancies at scale, instead of fruitlessly trying to watch user interactions and responses one by one.
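As a toy illustration of that ratio monitoring (this is a sketch, not our production pipeline; the stage names and thresholds are made up), the idea is simply: given counts at each funnel stage, flag any stage whose conversion ratio drops below an expected floor.

```javascript
// Toy funnel-health sketch -- hypothetical stages and thresholds,
// not Chirpify's real metrics. Flags stages whose stage-to-stage
// conversion ratio falls below an expected floor.
function checkFunnel(counts, floors) {
  const alerts = [];
  for (let i = 1; i < counts.length; i++) {
    const ratio = counts[i].count / counts[i - 1].count;
    if (ratio < floors[i - 1]) {
      alerts.push({ stage: counts[i].stage, ratio: ratio });
    }
  }
  return alerts;
}

// Example funnel: user action -> message -> transaction -> receipt
const counts = [
  { stage: 'user action', count: 1000 },
  { stage: 'message sent', count: 950 },
  { stage: 'transaction', count: 400 },
  { stage: 'receipt', count: 150 }, // unusually low relative to its floor
];
const floors = [0.9, 0.3, 0.8]; // hypothetical minimum acceptable ratios
const alerts = checkFunnel(counts, floors);
```

A dip at any stage surfaces as an alert, which is a far faster diagnostic than eyeballing individual interactions.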

When we’re not being invisible, however, we’re interacting with the User through our app UI, e-mails, or messages on social, all logged meticulously and often cross-referenced. What powers our end-to-end testing of messaging and user interaction is also paving the way for deeper customer insights as we learn more about just how users prefer to interact on social.

The Automation

I’m in love with any kind of automation, and we’re working hard to bring it to every facet of our product development.

Testing lives on CircleCI, giving us quick feedback and easy reviews of pull requests, with inline pass/fail information in GitHub. No passing tests? Instant veto.

CasperJS powers my UI testing and lets me know about the more glaring issues in a matter of seconds. It’s been a lifesaver for integration testing our UI, which can in theory support thousands of permutations. What sold me on it? I was writing specs within two minutes of deciding to use and deploy it. Sold.
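For a flavor of what those specs look like, here is a minimal CasperJS test along the lines of what we write. The URL and selectors below are illustrative, not from our real campaign pages:

```javascript
// Hypothetical CasperJS spec -- URL and selectors are made up.
// Run under the CasperJS runtime: `casperjs test campaign-spec.js`.
casper.test.begin('campaign page renders its key elements', 2, function (test) {
  casper.start('http://staging.example.com/campaign/demo', function () {
    // Assert the pieces a user must see in order to transact
    test.assertExists('.campaign-title', 'campaign title is present');
    test.assertExists('form.payment', 'payment form is present');
  });
  casper.run(function () {
    test.done();
  });
});
```

A handful of specs like this, one per UI state, catches the glaring breakages long before a human clicks through permutations by hand.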

Puppet powers our deployment and removes a lot of uncertainty around maintaining a vast array of often-changing AWS instances. We’ve only just scratched the surface here, but there is a lot to love already.
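The appeal is declaring what an instance should look like rather than scripting the steps to get there. A sketch of the kind of resource declarations involved (the nginx resource here is an example, not a description of our actual stack):

```puppet
# Illustrative Puppet manifest -- nginx is just an example resource,
# not Chirpify's actual stack. Puppet converges each instance to this
# declared state on every run.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # start only after the package is in place
}
```

Because the manifest is idempotent, a new AWS instance and a long-lived one both end up in the same state, which is exactly the uncertainty we wanted gone.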

The Checklist(s)

Any checklist is just a placeholder for potential automation, and happily we’ve managed to convert many of our checklists into hands-off processes that are much more resistant to failure and human error. Deployment, UI testing, code review, and social messaging are all in some stage of automation, and we’ll continue to improve them.

The Deploy

If we’re diligent, we’re deploying daily across multiple stacks with the help of Puppet and AWS, with small iterative changes sometimes released multiple times a day as they come ready.

If you liked what you read and are interested in becoming a part of this process and helping us improve it, drop us a line. We’re hiring.

Written by Johann Hannesson | QA, Chirpify