
Performance testing is not a luxury, it’s a necessity

Recently I chanced upon a post posing the question of whether Software Testing is a luxury or a necessity. My first thought was that testing should not be a luxury; it’s much more expensive to face an issue when it arises in a live system, so if you can afford to do that, perhaps that’s the true “luxury”. However, testing is now accepted as a fundamental aspect of the software lifecycle and Test-Driven Development (TDD) stresses this aspect.

Unfortunately, too often software testing is understood only as functional testing, and this is a serious limitation. Only if your software will be used by a very small number of users can you avoid caring about performance; only if it manages completely trivial data can you avoid caring about security. Nevertheless, too often even the more advanced companies that use TDD don’t consider performance and penetration tests; or, at best, they run them only at the User Acceptance Test (UAT) stage.

Working for a company that is very often asked to run performance tests for clients in the few days before a live release, we are regularly faced with all the problems that the development team has ignored. It’s hard to have to say “your system can’t go live with the expected workload” when the client’s marketing team has already advertised the new system release.

Intechnica stresses the performance aspect so much that we now follow a “Performance-Driven Development” approach. Performance requirements are collected together with the functional ones and drive the design in terms of system architecture, software and hardware. The performance tests are then run alongside the unit tests throughout the software lifecycle, following the Continuous Integration practice.

I think that such an extreme approach may not be suitable for everybody, but consider a recent example: I tested the performance of a new web application for a financial institution. We had already tested it 3 months earlier, but during those 3 months the development team had added new features and fixed some issues, and the web interface had changed slightly; as a result, almost all of the test scripts became unusable and had to be fixed.

This tells me that performance requirements must be considered throughout the entire development stage. They must be expressed and included in the unit tests, because that is the only place where defined software contracts are exercised. Performance tests can still be run at the UAT stage, but, just as no serious company would run UAT without having first functionally tested the code, so should you measure performance during development. An Application Performance Management (APM) tool is also highly advisable, to monitor the application and pinpoint the cause of performance issues quickly, in development as well as in production.
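To make this concrete, here is a minimal sketch of what a performance-aware unit test can look like; the endpoint, the 500 ms budget and the use of pytest and requests are illustrative assumptions rather than a prescribed setup. Once a test like this sits in the suite, any change that blows the budget fails the Continuous Integration build just like a functional regression would.

```python
# Minimal sketch only: the URL, the 500 ms budget and the pytest/requests
# choices are assumptions for illustration, not a prescribed toolchain.
import time

import requests

RESPONSE_BUDGET_SECONDS = 0.5  # agreed performance requirement for this endpoint


def test_search_endpoint_meets_response_budget():
    """Fail the CI build if the search endpoint exceeds its response budget."""
    start = time.perf_counter()
    response = requests.get(
        "https://staging.example.com/search",  # hypothetical endpoint
        params={"q": "scarves"},
        timeout=5,
    )
    elapsed = time.perf_counter() - start

    assert response.status_code == 200
    assert elapsed < RESPONSE_BUDGET_SECONDS, (
        f"search took {elapsed:.3f}s, budget is {RESPONSE_BUDGET_SECONDS}s"
    )
```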

Is testing a luxury? I’d prefer the luxury of a good dinner to the “luxury” of wasting money on an application found unfit for launch on go-live day.

This post was contributed by Cristian Vanti, one of our Performance Consultants here at Intechnica.


How much damage does Third Party Content really do to ecommerce sites? [REPORT]

Intechnica recently did some in-depth research into how the performance of ecommerce (specifically fashion retail) websites is being affected by the over-proliferation of third party content, the results of which have been published in our brand new report, “The Impact of Third Party Content on Retail Web Performance”.

For the uninitiated, third party content refers to basically anything on your website served externally – usually adverts, tracking tags, analytics codes, social buttons and so forth. Check here and here for some good background on the dangers of third party content.

If you are already initiated, though, you will already know that this content places an overhead on a website’s performance to varying degrees. In a best case scenario you’re looking at some extra page weight in the form of objects, which are being served from added hosts via extra connections to your users, the result being a few extra milliseconds on your page load time. On the other end of the spectrum, a misbehaving piece of third party content can slow pages to a crawl or even knock your site offline. Remember when everyone blamed Facebook for taking sites like the Huffington Post, Soundcloud and CNN down? Yep. Third party content at its most disruptive.

So what did we want to find out?

We wanted to see how much of an impact third party content was really having on real websites. We went about this by externally monitoring the normal websites and, in parallel, monitoring the same journeys with the third party content filtered out (content we identified based on experience – if you want an open source, ever-growing list of third party content, I recommend ThirdPartyContent.org). It’s also important to note that we measured a “land and search” transactional journey, not just home page response, to get a more realistic measurement. This data let us see which kinds of content are most prevalent and even which element of that content was most to blame for any performance overhead. It also gave us some interesting insights into whether sites that use third party content more conservatively also happen to follow other web performance best practices.
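To give a flavour of what that filtering looks like, here is a simplified sketch, not the actual tooling behind the report: it reads a HAR file exported from a monitoring run and sums the overhead added by any host that is not on a hand-maintained first party list. The host names and file name are assumptions for the example.

```python
# Simplified sketch: split a HAR file's requests into first party and third
# party by hostname, then total the third party overhead. The first party
# host list and the HAR file name are illustrative assumptions.
import json
from urllib.parse import urlparse

FIRST_PARTY_HOSTS = {"www.retailer-example.com", "static.retailer-example.com"}


def third_party_summary(har_path):
    with open(har_path) as f:
        entries = json.load(f)["log"]["entries"]

    third_party = [
        e for e in entries
        if urlparse(e["request"]["url"]).hostname not in FIRST_PARTY_HOSTS
    ]

    return {
        "objects": len(third_party),
        "hosts": len({urlparse(e["request"]["url"]).hostname for e in third_party}),
        "bytes": sum(max(e["response"].get("bodySize", 0), 0) for e in third_party),
        "time_ms": sum(e.get("time", 0) for e in third_party),
    }


if __name__ == "__main__":
    print(third_party_summary("land_and_search.har"))
```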

Some fast facts

There is more detailed data in the report itself, which is available to download in full here. However, let’s look at some key findings. Keep in mind that some of these might seem predictable but it’s nice to have some concrete data, isn’t it?

  1. In general, sites that use more third party content are heavier, use more hosts and connections, and are slower than those that use less third party content.
  2. When third party content is completely filtered out, websites with a lot to begin with see a greater immediate speed boost than those with less to begin with. In fact, many retailers swapped places in the rankings once third party content was taken out of the equation.
  3. Falling in line with the famous “golden rule of web performance”, it’s the extra hosts, connections and objects brought on by third party content that result in slower response times – more so, in fact, than page weight.

Some pretty graphs and tables

I recently saw someone on Twitter express the rule “nt;dr” (no table; didn’t read), so let’s get some data in this post, shall we?

[Chart: average number of each type of third party tag per website]

The chart above shows the average number of each type of third party content present on each of our target websites. As you can see, there is far more variation in adverts than in any other form of third party content, although it’s somewhat surprising to see an average of 1.8 analytics tags on these websites. Serving adverts and the like from multiple hosts impacts performance, as we see in the table below…

[Table: overhead added by third party content (response time, hosts, connections, objects, size) and its correlation with response time]

Our second table tells us a little more about why sites with more third party content tend to be slower. The median, maximum and minimum response time, hosts, connections, objects and size represent only what is added to the sites by third party content, not the overall metrics. We can see that third party content alone is adding up to 18.7 seconds to the response time (a median of 9.4 seconds). What’s also interesting is that there is a stronger correlation between average response time and the number of objects, connections and hosts added than there is between response time and added page weight (0 is no correlation, 1 is absolute correlation).
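For anyone who wants to run the same kind of analysis on their own monitoring data, the correlation figures are straightforward to compute. The sketch below uses the Pearson coefficient and placeholder numbers, not the report’s data; substitute your own per-site measurements.

```python
# Illustrative only: replace the placeholder lists with your own per-site
# measurements. Pearson correlation: 0 = no correlation, 1 = absolute.
import numpy as np

added_response_s = [4.2, 9.4, 6.1, 12.8, 7.5]   # response time added by third parties (placeholder)
added_objects = [30, 85, 55, 120, 70]           # objects added (placeholder)
added_weight_kb = [400, 650, 900, 700, 500]     # page weight added in KB (placeholder)

print("objects vs response time:", np.corrcoef(added_response_s, added_objects)[0, 1])
print("weight  vs response time:", np.corrcoef(added_response_s, added_weight_kb)[0, 1])
```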

[Chart: response times for Retailer A and Retailer B, with and without third party content]

The last chart I’ll show you here is very telling in terms of the real impact of third party content. Red and blue are the full, normally served websites of Retailer A and Retailer B respectively. We can see that red is much slower than blue during this time period. Yellow and green are the same websites (A and B respectively) with all third party content removed. Not only did Retailer A gain a greater speed boost than Retailer B by filtering this content out, but it actually went from being the slower website to being the faster of the two. What’s even more interesting is that Retailer A reported disappointing revenues for this period, while Retailer B posted a strong revenue report.

Read the full report

This is just a sample of some of the data we collected and analysed, so make the jump to our website to read the full report for yourself.



Which retailers had the best mobile and tablet performance at Christmas?

As every retailer knows, Christmas is the most critical time of the year (even if “critical” isn’t quite as catchy as “wonderful”). It’s when retailers have the most to gain and the most to lose. You could make an argument that this is even more true in the case of eCommerce and online retail, since it’s so easy for consumers to open another browser window and take their business elsewhere.

Since we know how important this time of year is to online retail, and we know how much impact site speed has on conversion rates, user satisfaction, bounce rate and so on, we thought it would be a good idea to monitor the performance of some leading retail sites during December and see how they were doing.

What did we measure?

  • Without revealing the full list of retailers, we had a shortlist of 13 leading sites, mostly in the fashion sector.
  • Since mobile and tablet have a growing market share of sales, we decided to dedicate our monitoring to iPads and Samsung Galaxy S4 smartphones over a 3G network.
  • Rather than looking at just the landing page response time, which is a fairly shallow measure, we looked at a two-page “land and search” transactional journey – the search term being “scarves” (it was cold outside, after all). A rough sketch of how such a journey can be scripted appears just after this list.
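As a rough illustration of how a two-page “land and search” journey can be scripted, the sketch below drives a browser with Selenium; the URL and the element selectors are hypothetical, and a real synthetic monitor would also apply the device profiles and 3G throttling described above, which plain Selenium does not do.

```python
# Rough sketch of a "land and search" journey. The URL and selectors are
# hypothetical; production monitoring also emulates the device and network.
import time

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # wait up to 10 s for elements to appear
try:
    start = time.perf_counter()
    driver.get("https://www.retailer-example.com/")        # page 1: land on the home page
    landed = time.perf_counter()

    search_box = driver.find_element(By.NAME, "q")          # hypothetical search box name
    search_box.send_keys("scarves", Keys.RETURN)            # page 2: run the search
    driver.find_element(By.CSS_SELECTOR, ".product-list")   # hypothetical results container
    finished = time.perf_counter()

    print(f"land: {landed - start:.2f}s  search: {finished - landed:.2f}s")
finally:
    driver.quit()
```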

What results did we get?

Here is just a quick snapshot of the type of data we got back when we arrived in the office in January.

  • As you might expect, performance largely followed the trends and patterns of demand, with most sites slowing down on busy shopping days and speeding up after the last postal days had passed.
  • There’s a huge gap between the fastest and slowest sites.
  • Sites which are on average slower are also much more erratic and inconsistent (bigger spikes happening more often).
  • More in the full report!

There was a wide rift between fastest and slowest in baseline performance this Christmas.

Where can you read it?

We thought you’d never ask.

Click here to request your copy of the report.


New case study: Nisa Retail Mobile app on Amazon Web Services

Amazon Web Services have recently published a new AWS Case Study, looking at how Nisa Retail have implemented an innovative mobile application to make their members’ lives easier. Nisa engaged with Intechnica to design and develop the app, which is built on AWS technology.

As an AWS Consulting Partner in the Amazon Partner Network, Intechnica was well positioned to leverage the power and flexibility of AWS to deliver a scalable solution that would not impact on Nisa’s core IT systems such as their Order Capture System (OCS).

The full case study is available to read on the AWS website.

If you need an application built with performance in mind from the outset, or specifically built around the flexibility of the AWS infrastructure, Intechnica can help. Fill in the form below or use our contact page.


Intechnica Technical Director on why fastest is not always best

Throughout December, Stoyan Stefanov‘s Planet Performance blog hosts Performance Calendar – one web performance post a day from different authors, including experts such as Steve Souders, Paul Lewis and Tammy Everts. The 21st day this year featured a post by Intechnica’s own Technical Director, Andy Still.

Andy’s post, entitled “It’s not just about being the fastest…” explains the business barriers to implementing better performance and how these can be overcome. Andy also discusses the difference between “super ultra performance” and “appropriate performance”, and the dangers of over-optimisation.

Read Andy’s post here, and while you’re at it, make sure you go back and read all the other posts in this year’s calendar. It’s a fantastic, must-read resource for those interested in performance!

When tech glitches become business problems

Technology and specifically IT are essential to business growth, but IT can become a double-edged sword. When things go wrong, tech glitches become real business problems.

As people come to expect more from advances in technology, and as those advances improve our day-to-day lives, more and more businesses are looking to innovation to support their growth. As we’ve seen in recent years with the demise of traditional brick-and-mortar high street businesses unable to adapt in time to new digital trends (in the past year alone HMV, Blockbuster and Jessops have been hit hard, with nearly 2,500 stores affected), embracing technology is more important than ever to thrive, especially in the retail space.

However, as important as implementing new technologies is, it’s just as important to get the technology right first time. Investment in IT is now so high that tech glitches are business problems in the most real sense. IT glitches are recognised as a mainstream issue, reported on the front pages of newspapers and, perhaps more significantly, spread virally through digital news sources and social media.

Chaos at Argos

An IT glitch caused chaos at one of the new Argos digital stores.

One recent high-profile example in the news concerned Argos’s groundbreaking new “digital-only” stores. Argos has been a true trailblazer in the “click and collect” genre of retail, and the six new flagship stores are designed to fully embrace the sleek touch-screen experience of the future. However, a technical glitch meant that orders could be placed for collection from these new stores before they had actually opened, leading to a frustrating experience as customers turned up to collect goods only to find closed or unfinished stores.

But it’s not just innovative new initiatives that can cause problems. Even fairly routine progressions and changes can damage the business if not carefully implemented. Take, for example, BrandAlley, who drew the ire of customers after delays in orders being processed. The cause was a switch to a new IT platform, instigated to prepare for international expansion. The IT advancement was necessary to grow, but ended up causing a real business issue. BrandAlley has since given out vouchers worth £25 each to affected customers to save face.

I’ve written on this blog before about performance-specific tech glitches that cost businesses a lot of money and a lot of customer trust – from the Facebook IPO crashing NASDAQ and leading to legal action, to the BT Sport app being unprepared for demand on the first day of the Premier League. Since performance issues are typically much harder to put right than functional issues (just look at the ongoing Healthcare.gov fiasco in the US), it makes a great deal of sense to pay close attention to performance. Just look at how much poor performance can damage brand and revenues.

So with the pace of IT advancement ever quickening, and our appetite for the next advancement growing at least as quickly, making sure these changes don’t do more harm than good is just as important as keeping up with the “wave of the future”. After all, tech glitches ARE business problems.

Who was quickest on the draw at Velocity Europe?

Last week Intechnica were pleased to be involved in Velocity Europe 2013, O’Reilly’s Web Performance and Operations conference, where we sponsored an evening reception – as well as soaking up the sights and sounds of the web performance community.

The conference featured almost too many fascinating talks to mention – which is why we’ve left that to one of our SMEs, Jason Buksh, on his blog – check out his Velocity Europe retrospective.


A sharp shooter takes the quickdraw challenge. Photo: O’Reilly Conferences (CC BY-NC 2.0)

During the Intechnica-sponsored evening event, and at various breaks throughout the conference, we invited attendees to take part in our Quickdraw competition. Contestants were challenged to land a lethal shot on James Bond in the fastest time possible, with a prize for the quickest on the draw.

When the dust cleared, one man stood victorious – Matt Hailey from Skyscanner, who won himself a £100 Amazon voucher by delivering a killing blow to Bond in a lightning-fast 0.34 seconds – significantly faster than the average time of 0.61 seconds. Congratulations Matt!

Passionate about Performance?

Join Intechnica to work as a Performance Engineer/Architect within leading eCommerce, media and financial services organisations.

Find out more on our website