Continuous Development

Continuing on with the TestOps posts by the, well, awesome Awesome Testing blog is Continuous Development. This is actually very interesting to me, as it was a large part of what was taught in my Software Process Management course last year, so it was an enjoyable surprise to see it as the next covered topic.

Generally speaking, Continuous Development is, according to Wikipedia, “the process of executing automated tests as part of the software delivery pipeline to obtain immediate feedback on the business risks associated with software development.”

The first step is Continuous Integration and unit tests. After every single commit by a developer, the app on the main branch should be compiled and built, and then unit tests should be executed to give the quickest feedback possible. The post suggests using mutation testing, which adds small random faults to your code to see whether your tests catch them, to check how good the unit tests themselves are. After that, the developer should be made aware of how their commit changed overall code coverage statistics.
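To make the mutation testing idea concrete, here's a rough, hand-rolled sketch. Real mutation testing tools (PIT for Java, mutmut for Python, for example) generate the mutants automatically; the function, mutant, and test below are made up purely for illustration.

```python
# A minimal sketch of the mutation-testing idea: inject one small fault
# (a "mutant") and see whether the unit test catches it. The function
# and the mutant here are invented for illustration only.

def is_adult(age: int) -> bool:
    """The real implementation."""
    return age >= 18

def is_adult_mutant(age: int) -> bool:
    """The same code with one injected fault: '>=' flipped to '>'."""
    return age > 18

def test_is_adult(impl) -> bool:
    """A unit test; a good test 'kills' the mutant by failing on it."""
    return impl(18) is True and impl(17) is False

if __name__ == "__main__":
    print("real code passes the test:", test_is_adult(is_adult))            # True
    print("mutant killed by the test:", not test_is_adult(is_adult_mutant))  # True
```

If the test still passed with the mutant in place, that would be a hint the test suite isn't actually checking the boundary it claims to.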

The next step is Continuous Delivery, or Automated Deployment. One should do numerous test environment deployments to test the deployment process of the application as well. After this comes testing of higher-level things, such as functionality at the integration or API level. End-to-end testing is very expensive, resource-wise, and should be done sparingly.
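As a rough idea of what an automated check right after a test-environment deployment might look like, here's a small sketch. The environment URL and the /health endpoint are assumptions for the example, not anything from the original post.

```python
# A minimal post-deployment smoke/API check against a hypothetical test
# environment, using only the standard library.
import json
import urllib.request

TEST_ENV_URL = "https://test-env.example.com"  # hypothetical environment URL

def check_health() -> bool:
    """Hit the service's health endpoint right after a test deployment."""
    with urllib.request.urlopen(f"{TEST_ENV_URL}/health", timeout=5) as resp:
        body = json.loads(resp.read())
        return resp.status == 200 and body.get("status") == "ok"

if __name__ == "__main__":
    print("deployment looks healthy" if check_health() else "deployment failed")
```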

After that is performance testing, using a testing environment as close to the production environment as possible. You want to see how the application handles heavy loads. Then comes security testing, to make sure the application is as safe from being hacked as you can manage, and finally the hardest step, exploratory testing. This is a manual exploration of the application that takes a lot of time and resources, so it should be done sparingly as well.
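Just to sketch the performance testing part: dedicated tools like JMeter or Gatling are the usual route, but the basic idea is simply firing a lot of concurrent requests and watching the latency. The target URL, request count, and thread count below are all made-up example values.

```python
# A tiny load-test sketch: fire a batch of concurrent requests at a
# hypothetical endpoint and report rough latency percentiles.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://test-env.example.com/api/items"  # hypothetical endpoint

def timed_request(_: int) -> float:
    """Time a single request to the target."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10):
        pass
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=20) as pool:
        latencies = sorted(pool.map(timed_request, range(200)))
    print(f"median: {latencies[len(latencies) // 2]:.3f}s, "
          f"p95: {latencies[int(len(latencies) * 0.95)]:.3f}s")
```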

Overall, this was another nice intersection between software development and testing. It was also a good reminder of concepts I learned in the very recent past and found very interesting at the time. Streamlining the process for a developer and giving them feedback as quickly as possible is incredibly important, and it readily fosters greater productivity. To make that happen, there are many useful tools out there for testers and developers alike. It's a very straightforward example of testing directly helping developers, which is nice to see.

Original Post: http://www.awesome-testing.com/2016/10/testops-3-continuous-testing.html

Future of Testing Continued

A while ago (a long while), I talked about an interesting post about something called Test Ops in this post: https://fusfaultyfunctions.wordpress.com/2017/09/20/the-future-of-testing-taking-an-interesting-turn/.

Now I’d like to talk about a post by Awesome Testing describing an important topic in Test Ops, Testing in Production. Essentially, it’s a set of ways of testing that utilizes real users and the different ideas and implementations that arise in a production environment. So how do you test a new feature or update produced for a service?

Obviously, one metric is that it works without errors for the users. But the next most important metric is the number of users it retains. The number of people using the service and continuing to use the service is the most important thing for these applications. And this needs to be tested.

Now what do you do when you produce a new feature and need to test it? You could just throw it out into the wild and then see how the statistics work out. If it works, keep it; otherwise throw it away. But that can annoy users and make you lose people.

There is no one best way, but there are several different methods in use, and there are risks that need to be mitigated. The first method outlined is Blue-Green Deployment, or Canary Deployment. You deploy the new feature or software on a separate set of servers, the blue pool. Preliminary tests are done with internal users, and then, if it looks good, 5% of users are redirected to it from the original servers, the green pool. Then you can see how well the new software is working. If it doesn't look good, move everyone back to the green pool.
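In practice the traffic shifting is handled by a load balancer or router, but here's a rough sketch of the idea of sending a small slice of users to the new pool. The pool names, URLs, and the 5% figure are just illustrative values taken from the description above.

```python
# A minimal sketch of canary-style traffic splitting between a "green"
# pool (current version) and a "blue" pool (new version). Backend URLs
# are hypothetical; a real setup would do this at the load balancer.
import random

GREEN_POOL = ["https://green-1.example.com", "https://green-2.example.com"]
BLUE_POOL = ["https://blue-1.example.com"]
CANARY_FRACTION = 0.05  # start by sending 5% of users to the new version

def pick_backend() -> str:
    """Route a request: most users stay on green, a small slice tries blue."""
    pool = BLUE_POOL if random.random() < CANARY_FRACTION else GREEN_POOL
    return random.choice(pool)

if __name__ == "__main__":
    sample = [pick_backend() for _ in range(10_000)]
    blue_share = sum(b in BLUE_POOL for b in sample) / len(sample)
    print(f"share of traffic on the blue pool: {blue_share:.1%}")
```

If the new version misbehaves, dropping CANARY_FRACTION back to zero is the "move everyone back to the green pool" step.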

Test Flights are similar. You hide a new feature behind one code path, with another code path that doesn't have the feature. By changing a config file, you show the new feature to users in the same manner as in Canary Deployment: first internal users, then, let's say, 5%. The feature can always be reverted with a change of a config file. A/B testing is a bit more extreme: essentially you have, say, two variations of an application. Fifty percent of users see one and fifty percent see the other, and the one that retains the most users becomes the finalized version.
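Here's a rough sketch of that "two code paths behind a config file" idea. The flag name, rollout percentage, and user-bucketing rule are all invented for the example, not taken from the post.

```python
# A minimal sketch of a test flight / feature flag: two code paths guarded
# by a config value, so the new feature can be shown to a subset of users
# and reverted just by editing the config.
import hashlib
import json

CONFIG = json.loads('{"new_checkout_enabled": true, "rollout_percent": 5}')

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket a user into the rollout group."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def checkout(user_id: str) -> str:
    if CONFIG["new_checkout_enabled"] and in_rollout(user_id, CONFIG["rollout_percent"]):
        return "new checkout flow"   # the hidden code path with the feature
    return "old checkout flow"       # the existing path, kept as a fallback

if __name__ == "__main__":
    users = [f"user-{i}" for i in range(1000)]
    new_count = sum(checkout(u) == "new checkout flow" for u in users)
    print(f"{new_count} of {len(users)} users see the new feature")
```

An A/B test looks much the same structurally, except the split is closer to 50/50 and the decision of which path wins comes from comparing retention between the two groups.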

There’s also a technique where faults are intentionally injected into the software; this pushes toward a design that stays secure and keeps working even when things break. And then there’s one popularized by Microsoft: developers are forced to use the applications that are being developed locally, to ensure the program offers a reasonably good user experience.
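As a rough illustration of the fault injection idea (Netflix's Chaos Monkey is the well-known example of doing this for real), here's a sketch where a dependency call is made to fail at random so you can check that the caller degrades gracefully instead of crashing. The function names and the failure rate are made up for the example.

```python
# A minimal fault-injection sketch: randomly fail calls to a dependency
# during testing and check that the caller falls back gracefully.
import random

FAULT_RATE = 0.3  # illustrative: fail roughly 30% of calls during the test

def fetch_recommendations(user_id: str):
    """Pretend dependency; in a real system this would be a service call."""
    return [f"item-{user_id}-{i}" for i in range(3)]

def with_fault_injection(func):
    """Wrap a dependency so some calls raise an injected error."""
    def wrapper(*args, **kwargs):
        if random.random() < FAULT_RATE:
            raise ConnectionError("injected fault")  # simulated outage
        return func(*args, **kwargs)
    return wrapper

def homepage(user_id: str) -> str:
    """Caller under test: it should fall back instead of crashing."""
    try:
        recs = with_fault_injection(fetch_recommendations)(user_id)
    except ConnectionError:
        recs = []  # graceful degradation: render the page without recommendations
    return f"homepage for {user_id} with {len(recs)} recommendations"

if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(homepage(uid))
```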

Overall, it’s really interesting seeing the considerations required when dealing with testing new software. I never considered that testing would check not only for a working product, but for one that works well too. It makes testing a much more complicated, yet exciting, field. It also makes the job of a tester much more integral to the success of an application.

Original Post: http://www.awesome-testing.com/2016/09/testops-2-testing-in-production.html