It’s interesting to look at how much we learn from nature, how often we duplicate and implement others’ work, and how we sometimes overlook or ignore fantastic findings. While producing software, we come across the classic dilemma of how much, and for how long, to design, build and test functionality.
To quote Wikipedia:
“The Pareto principle states that, for many events, roughly 80% of the effects come from 20% of the causes.”
This principle in the business world shows up in several different aspects:
80% of your profits come from 20% of your customers
80% of your complaints come from 20% of your customers
80% of your profits come from 20% of the time you spend
80% of your sales come from 20% of your products
80% of your sales are made by 20% of your sales staff
Therefore, many businesses have a ready route to dramatic improvements in profitability: focus on the most effective areas and eliminate, ignore, automate, delegate or re-train the rest, as appropriate.
Whether knowingly or not, industries apply some variation of the Pareto principle to their optimisation efforts. Related measures of concentration used in industry include the Gini coefficient, the Hoover index and the Theil index.
Microsoft noted that by fixing the top 20% of the most-reported bugs, 80% of the errors and crashes would be eliminated. Any developer’s dream would be to write bug-free code, but in reality this can’t happen.
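The 80/20 shape of bug reports is easy to see in practice. Below is a minimal sketch using entirely hypothetical crash counts, showing how a handful of top bugs can account for most of the crashes:

```python
from collections import Counter

# Hypothetical crash reports, each tagged with the bug that caused it.
# The bug IDs and counts are made up for illustration.
crash_counts = Counter({
    "BUG-1": 500, "BUG-2": 300, "BUG-3": 60, "BUG-4": 40, "BUG-5": 30,
    "BUG-6": 25, "BUG-7": 20, "BUG-8": 15, "BUG-9": 6, "BUG-10": 4,
})

total = sum(crash_counts.values())  # 1000 crashes in total

# Take the top 20% of bugs by crash count (2 of the 10).
top_20_percent = crash_counts.most_common(max(1, len(crash_counts) // 5))
covered = sum(n for _, n in top_20_percent)

print(f"Fixing {len(top_20_percent)} of {len(crash_counts)} bugs "
      f"removes {100 * covered / total:.0f}% of crashes")
# -> Fixing 2 of 10 bugs removes 80% of crashes
```

With this (contrived) distribution, fixing just two bugs removes 80% of crashes; real bug trackers often show a similarly skewed long tail.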
To work out how much effort should be invested to achieve the desired quality, we can refer to Pareto efficiency.
The production-possibility frontier shows how productive efficiency is a precondition for Pareto efficiency. Point A is not efficient in production because it is possible to produce more of one or both goods (butter and guns) without producing less of the other. Thus, moving from A to B, C or D makes someone better off without making anyone else worse off (a Pareto improvement).
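The comparison between points can be sketched as a simple dominance check. The (butter, guns) outputs below are hypothetical, chosen to mirror the frontier example:

```python
# A minimal sketch of Pareto dominance between production points.

def dominates(p, q):
    """p Pareto-dominates q if p is at least as good in every
    dimension and strictly better in at least one."""
    return (all(a >= b for a, b in zip(p, q))
            and any(a > b for a, b in zip(p, q)))

A = (40, 40)   # inside the frontier: inefficient
B = (55, 45)   # on the frontier
C = (45, 60)   # on the frontier

print(dominates(B, A))  # True  -> moving from A to B is a Pareto improvement
print(dominates(B, C))  # False -> B and C are both efficient; a trade-off
```

Points on the frontier never dominate each other: improving one output means sacrificing the other, which is exactly the trade-off we face when balancing testing effort against delivery.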
This concept can be applied to several aspects of software engineering, mainly:
- How much tolerance is allowed without affecting the quality or functionality of the product
- How much effort should be invested to achieve the desired quality
- The cost required to achieve the desired results
- The value of the risk involved in a specific approach
Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective and independent view of the software, to allow the business to appreciate and understand the risks of software implementation.
Test techniques include, but are not limited to, the process of executing a program or application with the intent of finding software bugs (errors or other defects).
There are various tools, techniques and methods available, but not all of them are applicable or capable of achieving 100% of the desired quality, code coverage, and so on. So most of the time everyone ends up mixing and matching different tools and techniques. Over the last decade or so, a tool set around web testing has been built and is constantly growing.
Nowadays, it is very hard to find a developer who has not written any unit tests. To keep the article simple, I will describe the approach, but technology variants exist for all of the tools discussed.
- Unit testing : JUnit, NUnit or MSTest in Visual Studio. A code snippet that tests a piece of code or a discrete piece of functionality.
- Selenium : A Firefox-based browser plugin and server for testing user actions and data flow. It records the user’s clicks and key entries to replay the test. Limited to testing UI behaviour in web applications.
- WatiN : An automated HTML element testing tool for web applications. Handles popups, Ajax and JavaScript calls. Works with various browsers. Again, limited to web applications.
- Rhino Mocks : A mocking framework for the .NET platform, used to test functionality a little under the skin. This technique replicates the interactions of the objects involved in a use case.
- Writing your own : You can write your own similar tests using HttpSimulator.
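To show the unit-testing and mocking ideas together, here is a minimal sketch in Python (the same pattern applies with NUnit and Rhino Mocks in .NET). The `discounted_price` function and its repository are hypothetical:

```python
import unittest
from unittest.mock import Mock

# Hypothetical code under test: applies a discount fetched from a repository.
def discounted_price(price, repo):
    return price * (1 - repo.get_discount())

class DiscountTests(unittest.TestCase):
    def test_applies_repository_discount(self):
        # The mock stands in for the real repository, the same role
        # Rhino Mocks plays for .NET objects.
        repo = Mock()
        repo.get_discount.return_value = 0.2
        self.assertAlmostEqual(discounted_price(100, repo), 80.0)
        repo.get_discount.assert_called_once()

if __name__ == "__main__":
    unittest.main(exit=False)
```

The mock lets the unit test exercise the pricing logic in isolation, without a database or service behind the repository, which is exactly the "under the skin" interaction testing described above.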
There are other commercial testing suites available, and major software houses use them. To achieve the functionality of a commercial testing suite, you can combine the various techniques that are readily available separately.
I am going to use Team Foundation Server (TFS) and a .NET project to explain the approach.
- Enable TFS for continuous integration using CCC + NAnt or TFS Build.
- Add a TFS rule to run code analysis before check-in, improving the quality of the code.
- Create MSTest or NUnit unit tests for all of the code’s functionality.
- Record these unit tests as test work items in TFS.
- Add a check-in policy to run these unit tests, preventing bugs from passing into the build.
- Set up automated builds and deploy to a staging/testing server.
- Run the unit tests once the automated build is successful.
- Update the test work item statuses after every build and record them.
- Send summary notifications for failed builds and deployment status.
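The final "run tests, then report" steps above can be sketched as a small script. The test command and message format here are hypothetical; in a real pipeline TFS Build would drive this:

```python
import subprocess

def run_unit_tests(command=("dotnet", "test")):
    """Run the test suite once the automated build has succeeded.
    The command is a placeholder for whatever runner the project uses."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode == 0

def build_summary(build_ok, tests_ok):
    """Summary line suitable for a notification e-mail or chat message."""
    status = "SUCCEEDED" if (build_ok and tests_ok) else "FAILED"
    return (f"Build: {'ok' if build_ok else 'failed'}, "
            f"Tests: {'ok' if tests_ok else 'failed'} -> deployment {status}")

print(build_summary(build_ok=True, tests_ok=False))
# -> Build: ok, Tests: failed -> deployment FAILED
```

Keeping the summary to a single line makes it cheap to post after every build, so failures surface immediately rather than at release time.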
Each of these steps could be covered in more depth, but I’ll save that for another post, keeping 80% of my effort for the 20% of articles that matter!