Performance testing is not a luxury, it’s a necessity

Recently I chanced upon a post posing the question of whether Software Testing is a luxury or a necessity. My first thought was that testing should not be a luxury; it’s much more expensive to face an issue when it arises in a live system, so if you can afford to do that, perhaps that’s the true “luxury”. However, testing is now accepted as a fundamental aspect of the software lifecycle and Test-Driven Development (TDD) stresses this aspect.

Unfortunately, software testing is too often understood as functional testing only, and this is a very big limitation. Only if your software will be used by a very small number of users can you avoid caring about performance; only if it manages completely trivial data can you avoid caring about security. Nevertheless, even the more advanced companies that use TDD too often don't consider performance and penetration tests, or at best run them only at the User Acceptance Test (UAT) stage.

Working for a company that is very often asked to run performance tests for clients in the few days before a live release, we are regularly faced with all the problems the development team has ignored. It's hard to have to say "your system can't go live with the expected workload" when the client's marketing team has already advertised the new system release.

Intechnica stresses the performance aspect so much that we now follow a "Performance-Driven Development" approach. Performance requirements are collected together with the functional ones and drive the design in terms of system architecture, software and hardware. Performance tests are then run alongside the unit tests throughout the software lifecycle, following the Continuous Integration practice.

I think that such an extreme approach may not be suitable for everybody, but recently I tested the performance of a new web application for a financial institution. We had already tested it three months earlier, but in those three months the development team had added new features and fixed some issues, and the web interface had changed slightly; as a result, almost all the test scripts became unusable and had to be fixed.

This tells me that performance requirements must be considered throughout the entire development stage. They must be expressed and included in the unit tests, because that is the only place where defined software contracts are exercised. Performance tests can still be run at the UAT stage, but, just as no serious company would run a UAT without first having functionally tested the code, so should you measure performance during development. Additionally, an Application Performance Management (APM) tool is highly advisable, to monitor the application and to find the cause of performance issues rapidly, in development as in production.
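The idea of expressing a performance requirement inside a unit test can be sketched very simply. The Python fragment below is illustrative only — the function, the 50 ms budget and all the names are hypothetical — but it shows the shape: a correctness assertion and a performance assertion living side by side, so a Continuous Integration run fails as soon as either regresses.

```python
import time


def lookup_account(account_id):
    # Hypothetical stand-in for the real operation under test.
    return {"id": account_id, "balance": 100}


# Hypothetical budget taken from the performance requirements,
# not guessed after release.
MAX_SECONDS = 0.05


def test_lookup_is_correct():
    # The usual functional check.
    assert lookup_account(42)["id"] == 42


def test_lookup_meets_performance_budget():
    # The performance contract, enforced on every build.
    start = time.perf_counter()
    lookup_account(42)
    elapsed = time.perf_counter() - start
    assert elapsed < MAX_SECONDS
```

Run under any unit-test runner, the second test turns a performance regression into a failing build rather than a surprise at UAT.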

Is testing a luxury? I'd prefer the luxury of a good dinner to the "luxury" of wasting money on an application found unfit for launch on "go live" day.

This post was contributed by Cristian Vanti, one of our Performance Consultants here at Intechnica.

We’re Hiring at Intechnica!

We’re currently looking to fill several positions here at Intechnica, both in London and at our head office in Manchester. We’re looking for people who share our innovative approach to work; people with sound problem solving skills, a passion for technology and who can take responsibility to get things done.

About the roles

We have a variety of positions currently open:

Senior Performance Consultant/Developer: an experienced developer with strong experience of performance troubleshooting and system design.

C# .Net Developer: a developer who is comfortable developing web-based systems on the .NET platform. However, because the division between server-side and client-side development is blurring, the role will need familiarity with JavaScript technologies and frameworks (such as jQuery and Knockout.js) in order to develop highly interactive web-based applications.

Lead Automation Engineer: someone who has implemented automation tooling and process within a software development organisation.

Solution Assurance Analyst: someone who has experience in capturing complex and detailed business requirements, and testing against them to verify the software meets expectations.

Software Tester: someone who has a proven track record in the testing arena.

Qualities we look for

  • Good communication skills – to interact with our clients and deliver a better product than they had in mind.
  • Collaborative and competent – excels in individual and team projects.
  • Innovative mindset – a keen interest in the cutting edge of technology.
  • Problem solving skills – our projects often depend on high availability systems.

Why you should work with us

We’re at the cutting edge of IT performance, bringing in the best people and using best of breed tools and technologies to help our clients (brands like Asos and Channel 4) speed up their systems, improve their processes and increase their revenues. Our projects are interesting, challenging and often high profile. High performance, high availability systems are our speciality, so if you’re looking to join us, having more than a passing interest in this is a good start.

We are based in Manchester’s popular Northern Quarter, with a highly skilled team of professionals turning the cogs. From Performance Consultants and Engineers to Developers and Testers, we offer a full range of expertise to our clients and are only seeking to hire the best talent out there.

How to apply

To apply for any of these positions, send your cover letter and CV to careers@intechnica.co.uk, stating which position you are interested in applying for. Please note that we will not be responding to any agencies.

Stories from the trench: APM Benefits at Nisa Retail and User Perceptions

Is it better to feel fast, or to actually be fast? Sometimes the performance improvements you make won’t make a difference to those that count – your customers. This post will describe some of the interesting effects we observed during a performance assurance exercise with a client – and describe how user perception is a factor that needs to be both ignored and considered.

Background

Nisa Retail had a complicated order placement system that was getting progressively slower under increasing load. Let's quantify "complicated" in this context: the overall system was made up of a number of legacy systems that had been coupled together. Each of these had an element of custom development layered on progressively over time, and with that an element of undocumented knowledge lost as people moved on. A rewrite was out of the question and the client needed results quickly. We installed an APM solution that allowed us to measure, quantify, evidence and introduce performance improvements.

The Results

There were many small and medium sized improvements that resulted in a collectively large improvement for the user. The following graph neatly sums up the results achieved for Nisa Retail.

Orange line: over a 12-week period the site experienced a 20% increase in page views.
Green line: over the same 12-week period the overall trend in page response time decreased by 46%.

It should be noted that the system was a closed B2B system, meaning the number of users remained fixed over the 12-week period.

Key Points

  • The average response time of a page decreased by 46% over a 12-week period.
  • The number of page views increased by 20%.

So a rather obvious benefit is that by decreasing the page load times, we increased the rate at which the users interacted with the site.

There also appears to be a general macro relationship between average page load time and page views: for every 2.3% decrease in page load time, there was a corresponding 1% increase in page views, and with that an increase in sales. We can now start to quantify to the business the ROI of improving page response times. Don't forget that this is an average across all the pages on the site, but the general correlation can be used as a powerful tool.
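As a back-of-the-envelope check, the 2.3-to-1 ratio falls straight out of the two headline figures. The Python sketch below shows the arithmetic; the 10% projection at the end is purely hypothetical, to illustrate how the ratio could feed an ROI estimate.

```python
# Figures observed over the 12-week period (from the graph above).
response_time_drop_pct = 46.0  # average page response time fell by 46%
page_view_rise_pct = 20.0      # page views rose by 20%

# Rough macro ratio: percentage points of response-time reduction
# per percentage point of extra page views.
ratio = response_time_drop_pct / page_view_rise_pct
print(ratio)  # 2.3

# Hypothetical projection: a further 10% response-time reduction
# would suggest roughly 10 / 2.3 ≈ 4.3% more page views.
projected_view_rise_pct = 10.0 / ratio
```

Treat this as a correlation observed on one site over one period, not a universal constant.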

Something was wrong.

We had numerous benefits fall out of the exercise – fewer phone complaints, better utilization of developer time, increased system stability. But what was most interesting (to me) was this: while the metrics were telling us that we were improving the site, end-user feedback said that what they experienced didn't stack up to the improvements reported by our statistics. Why?

A Lesson Learnt

A key soft lesson learnt concerns the psychology of the user. Users perceive response times very subjectively, with what appeared to be a strong bias towards application entry points. Intechnica had introduced some major performance improvements, but the majority of users didn't perceive them until the performance of the common application entry points was improved. For example, as soon as the response time of the home page improved (from 20 to 5 seconds), users noticed the general improvements that had been introduced further downstream.

This has a number of implications:

  1. Objective measures should always be in place as a reality check; do not rely on user feedback to tell you where you are.
  2. Slow initial entry points to a system taint a user's perception of the rest of the journey.
  3. Perceived user feedback is important – this slightly contradicts the first point, but if users do not feel that the system is improving then you need to investigate why. You may be targeting and improving parts of the system that nobody really notices, cares about or can visibly see*.
  4. To improve perceived performance quickly, first target the entry points users perceive as the most unresponsive. These are potentially cheap initial wins.
  5. Always confirm your statistics with users where possible. Understand the statistics reported; don't take them at face value – e.g. what exactly does "page load time" mean?

*There is an additional layer of complication in what can constitute a page load time. There are different metrics for this, and with good practices the user-perceived page load time can be sped up considerably in some cases. At its simplest, this means pages should be structured to display images and text first, and then load the background items the user isn't initially concerned with – e.g. JavaScript and widgets, especially synchronous ones.

So, by analysing, prioritizing and solving the performance points around the end user perceptions you could gain a higher degree of value for invested effort – A win for everyone involved.

It doesn’t just have to be about page speed

We often get a new headline figure in the performance industry – page load times, how the latest millisecond slowdown will by implication lose you a percentage of your target audience, hit your conversion rate and impact your bottom line. However, in most cases these figures are based on high-volume sites, and I would be wary of such stats when they are being wielded to sell a solution into your business. Take a step back and put them into your own business context (more on this in a different blog post). If your aim is to increase usage and browsing on your site, then there may be alternatives – e.g. making suggestions based on previous search behaviour, time of year or BI, or generally increasing usability.

Hearing what we don’t want to hear

What is often forgotten in the never-ending quest for page speed is end-user perception. This is natural, as it is more difficult to measure. In some cases, altering the perceived page load time may be a cheaper and more cost-effective improvement than actually reducing the page load time. I'd like to see more research and reporting around this. It would also be useful for the industry to be a little more honest: we shout loudly when we improve and see benefit, but I suspect there have been many cases where sites have improved their load times and not seen the expected benefits – and what doesn't conform to our understanding can tell us much about the world and web we occupy. Faster is better, but our customers need to know when faster won't give them the anticipated benefits.

See also: Stoyan Stefanov, Psychology of Performance
Also see this presentation from our senior consultant Richard Bishop for a more detailed analysis of how we achieved performance optimizations using APM technology, and the resulting business benefits.
Agile and QA approaches – Requirements Traceability

Following on from my post two weeks ago about specification by example and application maturity, this piece is about requirements traceability.

Software development processes have traditionally worked from signed artefacts and from signing up to agreed pieces of work. These would normally describe a set amount of development time, some testing time and some support time to resolve defects and help bed in the new application. An important part of this process is a description of what will be delivered, and this is a key document in the specification by example process. When creating the original statement of requirements and describing the user stories in tools such as SpecLog, it is important that all requirements have been captured and that there is a common understanding between the project sponsors and the suppliers of how this functionality will be used. Describing key business flows in terms of the behaviour of the application opens up areas for discussion, allowing the understanding of the supplier and the customer to be explored.

These activities drive the test approach. Traditional testing analysis would follow on from here: often long weeks of business analysts creating specification documents, after which test analysts would follow the same path and develop test packs to examine and validate these requirements. In the meantime, developers would work from the documents and the outline discussions and begin their development approach. In the old days this created a detailed specification of requirements, a detailed development approach and a detailed test approach, all of which had to be approved and signed off. This produces a lot of documents, and the big problem with them is the assumption that the requirements are fully understood at the beginning of the process. Any changes are managed through change requests, which require impact analysis and updates across all of these documents. It is not a flexible process, but although slow it delivers good-quality software.

In the same way that Agile has helped to remove unnecessary documentation in the development and testing of new applications, specification by example looks to broaden this concept to address the requirements themselves. In the area of Quality Assurance, the big challenge with ever more complex systems is measuring which areas of the requirements are being met by which development activities and then validated by which testing activities. In its broadest sense, specification by example attempts to address this with a straightforward approach: tests will be developed for each requirement specified in the user story.

The challenge is what artefacts we need to create and how they fit in with the user stories. For instance, can and will test approach documents be taken straight from the stories created during analysis? The examples and systems discussed at Agile conferences and across the web have been small, with simple business processes and a limited number of functional combinations of application modules, and these systems have been created from scratch or are small improvements. This work hasn't yet been done with existing systems that may not have full documentation, or where new software and changes to existing applications are delivered together.

This is where testing theory and previous processes can supplement the given process. It is important that the relationships between the requirements and stories are understood. What are the most important parts of the application? Which are the most complex? By using traditional requirements traceability it is possible to create an application relationship map, which can then be used to drive the test plan and, more importantly, to manage the delivery of the application functionality. This will help with deciding the critical path and the main release points where key modules form a testable business process.
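At its simplest, such a relationship map is a small matrix relating requirements and their criticality to the user stories that realise them, and from there to the tests that exercise those stories. The Python sketch below is illustrative only — every identifier in it is hypothetical — but it shows how untested, high-criticality requirements can be surfaced to drive the test plan.

```python
# Hypothetical traceability matrix: requirement -> criticality + user stories.
requirements = {
    "REQ-01": {"criticality": "high", "stories": ["US-1", "US-2"]},
    "REQ-02": {"criticality": "medium", "stories": ["US-3"]},
    "REQ-03": {"criticality": "high", "stories": ["US-4"]},
}

# User stories the current test pack actually exercises.
tested_stories = {"US-1", "US-3"}

# High-criticality requirements with untested stories go to the
# top of the test plan.
gaps = [
    req
    for req, info in requirements.items()
    if info["criticality"] == "high"
    and not set(info["stories"]) <= tested_stories
]
print(gaps)  # ['REQ-01', 'REQ-03']
```

Kept this lightweight, the matrix can live alongside the user stories rather than in a heavyweight standalone document.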

Where defects are being seen, this can steer us back to the stories and allow us to understand whether something is missing from the requirements, something has changed, or details need to be more fully understood. This takes us back to the largest flaw in current development/test methodology: when the initial analysis takes place, assumptions are made using the business, test and developer analysts' experience and understanding of the business. These won't necessarily align and, more importantly, they will change as more work is done. It is vital that all of that information is shared – and that is what collaborative processes such as specification by example try to engender. But at the moment the process seeks to implement a one-size-fits-all approach and looks only at basing everything on the user stories. There may well need to be additional process activities.

What we want to find out over the next few months is how requirements traceability works in the specification by example world. It has proved to be a valuable tool in Agile projects, where there isn't the time to record all the details of all the requirements, and in waterfall, where the long periods of development and testing activity can be managed through the matrix. It has weaknesses: it doesn't work well across a large number of conflicting requirements, and it doesn't work well when the business capability being developed splits into small components. But it will be interesting to see what it produces, and in future blogs we will record what we see.

Specification by Example: Defect management in Agile projects

It is often thought that Testing/Quality Assurance is just about finding defects – or, as some people think, playing with the software and seeing what works and what doesn't. As usual, the reality is a lot more involved. The finding of defects is only one aspect of what QA does. When preparing a test approach you need to ask some fundamental questions that appear to have obvious answers, but when you think about them a bit more, you are not so sure.

  • How much testing needs to be carried out before the application is ready to deploy?
  • What quantity of defects found in a test phase demonstrates that the application meets customers’ acceptance criteria?
  • What business activities need to be evaluated to prove that the application will be able to carry out business tasks as required?

The third question is actually the key. Any project will deliver a set of opportunities or a capability that a business needs, and the business will measure the benefit of employing people to write code, and of the time taken to deliver the project, by the value these capabilities add to the profits of the company. This could come from consolidating a number of accounting systems, being able to sell a new service via an electronic marketplace, or being able to manage more effectively the routes that delivery drivers take to deliver stock. All add value to the bottom line. To achieve this, the project has to be very clear, in language that leaves no room for ambiguity, about what new software or changes to existing applications are needed.

Working with the developers, Quality Assurance exercises the software and measures whether the code meets these needs, and this is done by looking at the behaviour of the application rather than trying to take apart all its constituent parts. Because specification by example builds testing into the development process, the lower-level code modules will already have been tested and validated against the requirements, so low-level noise (syntax errors, validation errors, etc.) shouldn't be present. The QA focus when using the application is therefore on validating the user stories and, from that, examining the requirements.

So how many defects found in a test phase mean that the software is acceptable for use? The key measure used by QA is the spread of defects by functional area. Traditionally these areas will have been mapped in the early part of the project via a traceability document, which measures the business criticality and complexity of the requirements and relates them to user stories describing how the new functionality will be used. The challenge in an Agile context is going to be making this lightweight enough that it provides enough information without taking a long time to produce.
The drivers behind the Agile methodology emphasise doing only those tasks that are necessary to drive the release forward, and understanding which tasks sit on the critical path to delivering a release. Traditionally the readiness of an application for service was proved by the level of defects seen in each functional area; traditional QA activities come from a waterfall past where specific quality gates had to be passed and everything moved in slow sequence. In the new methodology it is necessary to do only the minimum amount of testing that proves the project requirements can be met and that the application will provide a reliable and effective level of service. Without the old technique of testing all parts of the application, the focus moves to testing those areas that are critical to the purpose of the change and that support the business capability being delivered. So it is possible to use the old technique, but it has to be put in a new context, making the process lighter and more flexible.

This is just the start of some big changes in software development methodologies. When defects are seen they will have much higher impact, and could have slipped unnoticed through a regression change or an integration point where two modules have been brought together for the first time. The new processes will need to give this information context so that critical issues are fixed quickly without impacting overall delivery times. Testing will focus more on end-to-end process flows (business stories) and less on small components such as data validation and software branch variation. This doesn't mean that such testing won't be done, but it will be done at an earlier point. This is only one area where specification by example will make subtle but fundamental changes to application delivery. Important Agile behaviours such as "fix first time" and "active collaboration" will need to be enforced to drive through quality.
But old measures such as defect latency, burn rate on feature delivery and making decisions about a realistic launch date for the application are going to be equally important. These are going to be exciting times for project methodologies, and it will be interesting to see where we are in six months' time.
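The "spread of defects by functional area" measure discussed above is easy to keep lightweight in an Agile context. The Python sketch below uses an entirely hypothetical defect log, but it shows the idea: count defects per area and let the clusters point at the stories and requirements that need revisiting.

```python
from collections import Counter

# Hypothetical defect log: each entry records the functional
# area in which the defect was found.
defects = [
    "checkout", "checkout", "search", "checkout",
    "login", "search", "checkout",
]

# The spread of defects by functional area. Clusters suggest areas
# where the stories or requirements, not just the code, may be wrong
# or incomplete.
spread = Counter(defects)
print(spread.most_common())  # [('checkout', 4), ('search', 2), ('login', 1)]
```

A tally like this, refreshed each sprint, gives the defect conversation context without the heavyweight quality-gate reporting of the waterfall past.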

The Changing World of Testing

I’m part of Intechnica’s (fairly new) QA team, which brings a new perspective to Intechnica: we come from the testing world, while Intechnica is traditionally a software development (as well as performance assurance) focused company. My background is not only test leadership but also test culture and architecture, working on methodologies and helping them align with where a business is going.

In our world there have been big changes since the banking crash in 2008. Until then, projects were large, and software testing had big budgets to go with big software and big applications. But increasing pressure to make application changes faster and smarter has driven radical changes in how software is developed. Now applications are developed strategically and are being devolved into web services and cloud applications – Software as a Service (SaaS) – where the focus is purely on the service, not on how it is served.

The software engineering world is now focused on application capability, making sure requirements are clear and that they are tested as early as possible. The old job roles of test analyst and test lead, and the follow-on role of Quality Assurance – where people like myself would validate the work of outsourced testing units to ensure that requirements are in line with business expectations – are melting away. Now test is being driven throughout the software lifecycle, from initial requirements to business validation. So what will these roles turn into?

There are quite a few discussion pieces out there now, some predicting that formal test and test groups will go the same way as operations and capacity planning (what I used to do). Why would we need a specific testing activity when testing is part of all project work? This article discusses this possibility: The Shrinking Role of QA

I agree that testing is changing. The question is, how do we become part of the new conversation? I think the old test/QA skill sets still remain important. How will the new software function be used? Will the expected behaviour of the application match the business opportunities outlined in the proposal? More importantly (and this is the bit that is certainly beyond current defensive method thinking right now) are there opportunities to give the users of the project a better product which brings together a couple of requirements in a more structured useful mechanism? Can we deliver exactly on time with just the requirements the customer wants instead of waiting for all the functionality to come together?

Test Analysts, Leads and Architects need to keep doing what they do best: talk through the requirements with all parties involved; work with development teams and help make sure the bigger picture of the overall solution isn’t lost; work with implementation teams on how to get this out to the customers with the appropriate business proving trials; and, most importantly, track and validate the requirements, making sure any changes are done in a structured way that keeps all groups’ views of the changes aligned.

The names will change, but the role will continue to be planted at the heart of the project delivery mechanism.

Performance Testing in the Cloud [Presentation]

Intechnica recently sponsored the British Computer Society’s Special Interest Group in Software Testing (SIGiST) summer conference in London. The SIGiST is a great place to come and listen to the country’s top software testers talk about methodologies, tools, new technology and experiences, as well as to meet others in the world of testing.

View a Photosynth panorama of the SIGiST conference in London

One of the speakers for this summer’s SIGiST conference was Intechnica’s own Richard Bishop, who contributes blog posts for this site. Richard spoke about Intechnica’s findings and observations from the use of cloud platforms in performance testing (we use TrafficSpike, based on Facilita Forecast, to generate load from the cloud for tests, as well as developing and migrating applications for and to the cloud).

Richard Bishop, speaking at the BCS SIGiST summer conference

The presentation was well received, gaining praise on Twitter via the #SIGiST hashtag.

https://twitter.com/webcowgirl/status/215770663586234368

The slides have since been uploaded to SlideShare; view and download them here: