Brace yourself: Black Friday is coming… in 9 weeks. Will your website stay up?

Winter is coming, which means Black Friday is coming. And Cyber Monday… and Boxing Day – the three busiest days of the year for online retail.

Each year more and more shoppers in the UK are becoming aware of the crazy bargains touted by retailers, both in store and online. And if you think Black Friday and Cyber Monday are only relevant in the US, consider these stats from 2013:

  • John Lewis reported that online traffic from midnight to 8am on Black Friday was up 323% compared with other November Fridays.
  • From 7am to 8am, they saw a 1,340% spike in mobile traffic.
  • Even smaller brands are affected: maternity-focused retailer Isabella Oliver saw a 1,200% increase in traffic on Black Friday.
  • Amazon UK reported that Cyber Monday was even bigger than Black Friday, selling 47 items a second.

The Christmas period in general is a big deal for online retail, accounting for 26% of the year’s sales overall. While eCommerce and marketing teams plot the best campaigns and optimisations to capitalise on the season, IT teams are working hard to ensure everything runs smoothly.

This matters, because a crashed website generates no revenue.

So… will your retail website stay up when the traffic starts pouring in?

Won’t elastic, auto-scaling infrastructure keep us going?

There are lots of ways to ensure websites will remain fast and not crash when Christmas shopping fever hits, but eventually it comes down to sheer available capacity. When you have a fixed infrastructure, this looks like the chart below:


The black line is the capacity of the website; the blue line is the traffic level over time. Most of the time you’re not using most of your available capacity – you’re balancing having enough headroom against spending on infrastructure you never use. But in the example above, the blue line (let’s say it’s 7am on Black Friday) peaks above the website’s capacity in the red area, which represents downtime – customers unable to use the site. During this time, everyone already on the site suffers slowdown or gets booted off altogether.

One popular solution is cloud-based auto scaling infrastructure, which looks like this:


The expected result is that you scale your infrastructure up and down with your traffic levels, which means you only pay for what’s being used and you can scale up to react to spikes and peaks in traffic.

This works well, but in practice the most sudden and extreme traffic spikes look more like this:


Because the process of scaling the infrastructure up and down is automated, and it takes several minutes to spin up additional servers (seen in the flat line and red area above), there is lag in elastic auto scaling infrastructure. This means it is still vulnerable to sudden peaks (say, from a TV advert or a celebrity or sponsored tweet), and the website can still experience downtime – even for visitors who arrived before the traffic spike hit.
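To make that lag concrete, here is a toy simulation. All figures (traffic shape, capacity, the 3-minute provisioning lag, the headroom factor) are invented for illustration; this is not a model of any real platform:

```python
# Illustrative sketch: a sudden traffic spike against fixed capacity
# versus auto-scaling capacity with a provisioning lag.

def dropped_requests(traffic, capacity_per_minute):
    """Requests per minute that exceed available capacity."""
    return [max(0, t - c) for t, c in zip(traffic, capacity_per_minute)]

def autoscaled_capacity(traffic, base, lag_minutes, headroom=1.2):
    """Capacity that follows traffic, but only after a provisioning lag."""
    capacity = []
    for minute in range(len(traffic)):
        # The scaler can only react to traffic it saw `lag_minutes` ago.
        seen = traffic[max(0, minute - lag_minutes)]
        capacity.append(max(base, int(seen * headroom)))
    return capacity

# A spike: steady 100 req/min, then a jump to 1,000 req/min at minute 5.
traffic = [100] * 5 + [1000] * 10

fixed = [300] * len(traffic)                              # fixed infrastructure
elastic = autoscaled_capacity(traffic, base=300, lag_minutes=3)

print(sum(dropped_requests(traffic, fixed)))    # heavy losses for the whole spike
print(sum(dropped_requests(traffic, elastic)))  # losses only during the scaling lag
```

Even in this generous sketch, the elastic setup still drops every request above baseline capacity during the minutes it takes to react – exactly the red area in the chart.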

Pulling the rug out from under customers’ feet

The trouble with capacity is that once a website is full, it doesn’t only stop more people from getting in; it effectively slows down or throws off everyone already on the website. Even if a customer has spent 10 minutes filling their basket, servers don’t discriminate and the chances are those baskets will be abandoned.

So what’s your plan?

Fortunately, it’s not too late to invest in an insurance policy against your website becoming overloaded and unable to generate revenue. Intechnica have developed a solution that can manage any overflow in traffic, protecting the website from performance issues while delivering a good, consistent experience to new visitors.

The best part for the IT team is that it’s extremely quick and hassle free to implement – no code changes or extra capacity needed – so it can still be put in place in time for that seasonal peak traffic.

Protecting revenue even at full capacity

The solution is called TrafficDefender, and it works like this:


TrafficDefender protects existing website visitors by deferring potentially overwhelming traffic away from the website, keeping revenue flowing beyond website capacity.

TrafficDefender watches how many visitors are entering and leaving a website (or entering and becoming inactive). Once capacity is reached, new visitors are automatically directed into a queue to get into the website. Instead of getting an error page or nothing at all, they see a branded page showing where they are in the queue, how long they’ll be waiting and whatever else the website owner wants to put on the page (exclusive discount codes, games, product photos etc.).

As soon as an existing visitor’s session ends (either through becoming inactive or navigating away), the next visitor in line is taken straight to the website as promised.

This ensures that you are always delivering a clean, uninterrupted experience to existing customers all the way through their visit, even with the site “over” capacity.
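The admit-or-queue pattern can be sketched in a few lines. This is not TrafficDefender’s actual implementation – just a minimal illustration of the logic described above, with hypothetical names:

```python
# Minimal sketch of a capacity gate: admit visitors up to a limit,
# queue the rest in arrival order, admit the next in line as sessions end.
from collections import deque

class CapacityGate:
    def __init__(self, capacity):
        self.capacity = capacity
        self.active = set()       # visitors currently on the site
        self.queue = deque()      # visitors waiting, in arrival order

    def arrive(self, visitor):
        """Admit the visitor if there is room, otherwise queue them."""
        if len(self.active) < self.capacity:
            self.active.add(visitor)
            return "admitted"
        self.queue.append(visitor)
        return f"queued at position {len(self.queue)}"

    def leave(self, visitor):
        """On session end, admit the next visitor in line."""
        self.active.discard(visitor)
        if self.queue and len(self.active) < self.capacity:
            self.active.add(self.queue.popleft())

gate = CapacityGate(capacity=2)
print(gate.arrive("alice"))    # admitted
print(gate.arrive("bob"))      # admitted
print(gate.arrive("carol"))    # queued at position 1
gate.leave("alice")            # carol is admitted automatically
print("carol" in gate.active)  # True
```

The key design point is that the limit protects everyone already inside: the site never holds more than `capacity` active sessions, so existing visitors keep a responsive experience while the queue absorbs the overflow.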

Webinars coming up in October

Join us at 11am UK time on any Thursday throughout October to hear how TrafficDefender works and how it can keep your website running, even if Christmas peak traffic would otherwise bring it down.

Click here to book your place now.


Performance Testing Mobile Apps: How to capture the complete end-user experience

Intechnica will be part of a live webinar presentation on August 26th (10am UK time) focused on the challenges of capturing mobile end-user experience in performance tests.

Today, so much emphasis is placed on end-user experience, particularly when it comes to mobile, where end users expect the same level of performance from mobile apps as they do from web apps.

In performance testing, the end-user experience of mobile users is heavily affected by the characteristics of the devices and the quality of the network. This makes it difficult for testers to accurately replicate production conditions with conventional testing tools and approaches.

Head of Performance Ian Molyneaux (author of “The Art of Application Performance Testing”, second edition available soon from O’Reilly) will join fellow performance testing expert Henrik Rexed (Neotys) to provide insights on:

  • The challenges of including mobile end-user experience in performance testing
  • The metrics you need to collect from the application, the network and the device
  • How to capture, correlate and analyse the relevant metrics to get a true picture of mobile app performance

Register now for the webinar, or if you can’t make it on the 26th, register anyway to receive a recording once available.



Velocity Europe 2014 registrations open, Intechnica speakers announced

O’Reilly have officially opened registrations for their annual Velocity Europe conference, taking place this year in Barcelona from 17-19 November, with two talks to be contributed by senior Intechnica staff.

Velocity Conference, which takes place each year in Santa Clara, New York, Beijing and Europe, is the premier Web Performance & Operations conference in the world.

We’re delighted that our CTO, Andy Still, along with Head of Performance Ian Molyneaux, will be speaking at the conference. Andy’s 40 minute session is entitled “Mobile Performance: When is Good Practice Irresponsible?”, while Ian will give a 90 minute “Performance Testing 101” tutorial closely tied to his recently published 2nd edition of “The Art of Application Performance Testing”.

Ian Molyneaux and Andy Still will be contributing a tutorial and session respectively at Velocity Europe 2014

An early bird discount rate is available for a limited time from the Velocity website, which also lists the announced schedule to date.

Which sessions are you looking forward to the most? Let us know in the comments.



Gritty performance at the Great Manchester Run from Intechnica team

Well done to the nine members of the Intechnica team who successfully completed the 10k Great Manchester Run yesterday. Despite the heat and the headwind, they all worked incredibly hard and managed a mean time of 55 minutes.

After weeks of office banter, ranging from modesty around running ability to out-and-out claims of superior athletic prowess (much more of the former), it was Nathan who ultimately finished at the front of the pack with an impressive time of 50:48. I’m sure next year they’ll all be pushing to beat his time!

But it really was more about the taking part, as proven by the total of £350 raised on the team’s Just Giving page for local charity Forever Manchester. An amazing result, which you can still add to via their donation page.



Performance testing is not a luxury, it’s a necessity

Recently I chanced upon a post posing the question of whether Software Testing is a luxury or a necessity. My first thought was that testing should not be a luxury; it’s much more expensive to face an issue when it arises in a live system, so if you can afford to do that, perhaps that’s the true “luxury”. However, testing is now accepted as a fundamental aspect of the software lifecycle and Test-Driven Development (TDD) stresses this aspect.

Unfortunately, software testing is too often understood only as functional testing, and this is a very big limitation. Only if your software will be used by a very small number of users can you avoid caring about performance; only if your software manages completely trivial data can you avoid caring about security. Nevertheless, too often even the more advanced companies that use TDD don’t consider performance and penetration tests; or, at best, they run them only at the User Acceptance Test (UAT) stage.

Working for a company that is very often asked to run performance tests for clients in the few days before a live release, we are often faced with all the problems the development team has ignored. It’s hard to have to say “your system can’t go live with the expected workload” when the client’s marketing team has already advertised the new system’s release.

Intechnica stresses the performance aspect so much that we now follow a “Performance-Driven Development” approach. Performance requirements are collected together with the functional ones, informing the design in terms of system architecture, software and hardware. Performance tests are then run together with the unit tests throughout the software lifecycle (following the Continuous Integration practice).
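As a rough sketch of what a performance check running alongside unit tests can look like – the function, data and 0.5-second budget below are hypothetical, not taken from any real project:

```python
# Sketch: a unit test that asserts a performance budget alongside the
# functional assertion, so a regression fails the CI build immediately.
import time

RESPONSE_BUDGET_SECONDS = 0.5  # a performance requirement, captured like any other

def search_products(term):
    # Stand-in for real application code under test.
    return [p for p in ["red scarf", "blue hat", "red gloves"] if term in p]

def test_search_meets_performance_budget():
    start = time.perf_counter()
    results = search_products("red")
    elapsed = time.perf_counter() - start
    assert results == ["red scarf", "red gloves"]  # functional check
    assert elapsed < RESPONSE_BUDGET_SECONDS       # performance check

test_search_meets_performance_budget()
print("performance budget met")
```

In practice the budget would come from the collected performance requirements, and the check would run against a realistic environment rather than an in-process stub, but the principle is the same: performance failures surface in the same place and at the same time as functional ones.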

Such an extreme approach may not be suitable for everybody, but recent experience reinforces it: I recently tested the performance of a new web application for a financial institution. We had already tested it 3 months earlier, but in the meantime the development team had added new features and fixed some issues, the web interface had slightly changed, and as a result almost all the test scripts became unusable and had to be fixed.

This tells me that performance requirements must be considered throughout the entire development stage. They must be expressed and included in the unit tests, because that is the only place where defined software contracts are used. Performance tests can be done at the UAT stage, but, just as no serious company would run a UAT without first having functionally tested the code, so should you measure performance during the development stage. Additionally, an Application Performance Management (APM) tool is highly advisable for monitoring the application and rapidly finding the cause of performance issues, in development as well as in production.

Is testing a luxury? I’d prefer the luxury of a good dinner to the “luxury” of wasting money on an application found unfit for launch on “go live” day.

This post was contributed by Cristian Vanti, one of our Performance Consultants here at Intechnica.


How much damage does Third Party Content really do to ecommerce sites? [REPORT]

Intechnica recently did some in-depth research into how the performance of ecommerce (specifically fashion retail) websites is being affected by the over-proliferation of third party content, the results of which have been published in our brand new report, “The Impact of Third Party Content on Retail Web Performance”.

For the uninitiated, third party content refers to basically anything on your website served externally – usually adverts, tracking tags, analytics codes, social buttons and so forth. Check here and here for some good background on the dangers of third party content.

If you are already initiated, though, you will already know that this content places an overhead on a website’s performance to varying degrees. In a best case scenario you’re looking at some extra page weight in the form of objects, which are being served from added hosts via extra connections to your users, the result being a few extra milliseconds on your page load time. On the other end of the spectrum, a misbehaving piece of third party content can slow pages to a crawl or even knock your site offline. Remember when everyone blamed Facebook for taking sites like the Huffington Post, Soundcloud and CNN down? Yep. Third party content at its most disruptive.

So what did we want to find out?

We wanted to see how much of an impact third party content was having on real websites. We went about this by externally monitoring the performance of the normal websites in parallel with versions with the third party content filtered out (content we identified based on experience). It’s also important to note that we measured a “land and search” transactional journey, not just home page response, to get a more realistic measurement. From this information, we were able to distinguish which kinds of content are most prevalent, and even which element of this content was most to blame for any performance overhead. It also gave us some interesting insights into whether sites that use third party content more conservatively also happen to follow other web performance best practices.

Some fast facts

There is more detailed data in the report itself, which is available to download in full here. However, let’s look at some key findings. Keep in mind that some of these might seem predictable but it’s nice to have some concrete data, isn’t it?

  1. In general, sites that use more third party content are heavier, use more hosts and connections, and are slower than those that use less third party content.
  2. When third party content is completely filtered out, websites with a lot to begin with see a greater immediate speed boost than those with less to begin with. In fact, many retailers swapped places in the rankings once third party content was taken out of the equation.
  3. Falling in line with the famous “golden rule of web performance”, it’s the extra hosts, connections and objects brought on by third party content that result in slower response times – more so, in fact, than page weight.

Some pretty graphs and tables

I recently saw someone on Twitter express the rule “nt;dr” (no table; didn’t read), so let’s get some data in this post, shall we?


The chart above shows the average number of each type of third party content present on each of our target websites. As you can see, there is far more variation in adverts than in any other form of third party content, although it’s somewhat surprising to see an average of 1.8 analytics tags on these websites. Serving adverts etc. from multiple hosts impacts performance, as we see in the table below…


Our second table tells us a little more about why sites with more third party content tend to be slower. The median, maximum and minimum response times, hosts, connections, objects and size represent only what is added to the sites by third party content, not the overall metrics. We can see that third party content alone is adding up to 18.7 seconds to the response time (a median of 9.4 seconds). What’s also interesting is that there is a stronger correlation between average response time and the number of objects, connections and hosts added than there is between response time and added page weight (0 is no correlation, 1 is absolute correlation).
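For readers curious how a correlation figure like those in the table is computed, here is a Pearson correlation sketch. The data points are invented for illustration and are not the report’s measurements:

```python
# Pearson correlation from scratch: covariance divided by the product
# of the standard deviations, giving a value between -1 and 1.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-site measurements: response time (s) vs. objects added.
response = [2.1, 4.8, 9.4, 12.0, 18.7]
objects = [12, 30, 55, 70, 110]

print(round(pearson(response, objects), 2))  # close to 1: strong correlation
```

A value near 1 means the two metrics rise together across the sites measured; a value near 0 would mean added page weight (or objects) tells you little about response time.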


The last chart I’ll show you here is very telling in terms of the real impact of third party content. Red and blue are the full, normally served websites of Retailer A and Retailer B respectively. We can see that red is much slower than blue during this time period. Yellow and green are the same websites (A and B respectively) except with all third party content removed. Not only did Retailer A gain a greater speed boost than Retailer B by filtering this content out, but it actually went from being the slower website to being the faster of the two. What’s even more interesting is that Retailer A reported disappointing revenues for this period, while Retailer B posted a strong revenue report.

Read the full report

This is just a sample of some of the data we collected and analysed, so make the jump to our website to read the full report for yourself.



Which retailers had the best mobile and tablet performance at Christmas?

As every retailer knows, Christmas is the most critical time of the year (even if “critical” isn’t quite as catchy as “wonderful”). It’s when retailers have the most to gain and the most to lose. You could make an argument that this is even more true in the case of eCommerce and online retail, since it’s so easy for consumers to open another browser window and take their business elsewhere.

Since we know how important this time of year is to online retail, and we know how much impact site speed has on conversion rates, user satisfaction, bounce rate etc, we thought it would be a good idea to monitor the performance of some leading retail sites during December and see how they were doing.

What did we measure?

  • Without revealing the full list of retailers, we had a shortlist of 13 leading sites, mostly in the fashion sector.
  • Since mobile and tablet have a growing market share of sales, we decided to dedicate our monitoring to iPads and Samsung Galaxy S4 smartphones over a 3G network.
  • Rather than looking at just the landing page response time, which is a fairly shallow measure, we looked at a two-page “land and search” transactional journey – the search term being “scarves” (it was cold outside, after all).

What results did we get?

Here is just a quick snapshot of the type of data we got back when we arrived in the office in January.

  • As you might expect, performance largely followed the trends and patterns of demand, with most sites slowing down on busy shopping days and speeding up after the last postal days had passed.
  • There’s a huge gap between the fastest and slowest sites.
  • Sites which are on average slower are also much more erratic and inconsistent (bigger spikes happening more often).
  • More in the full report!

There was a wide rift between fastest and slowest in baseline performance this Christmas.


Where can you read it?

We thought you’d never ask.

Click here to request your copy of the report.