Northern Tech Map - GP Bullhound

GP Bullhound recognises Intechnica’s growth on new Northern Tech Map

Intechnica is on the map of growing tech businesses in the North, according to leading independent advisory firm GP Bullhound.

Intechnica is featured in GP Bullhound’s newly launched Northern Tech Map, an interactive tool pulling data from years of research into the growing northern tech movement.

This follows on from Intechnica being named as one of GP Bullhound’s top 50 fastest growing tech businesses in the Northern Tech Awards 2016, alongside businesses like Skyscanner, Metronet, TyreOnTheDrive and TheLADBible.

This has been bolstered by the growth of Intechnica’s TrafficDefender SaaS solution, which has helped clients such as AO World and JD Williams ensure their websites remain online during their busiest sales peaks, as well as Intechnica’s professional services around performance and development.

Intechnica has also become a trusted advisor for businesses during mergers and acquisitions via expert IT due diligence services, working on valuations worth hundreds of millions of pounds.

Click here to view the GP Bullhound Northern Tech Map, here to find out about TrafficDefender, or here to find out about Intechnica’s services.

Are celebrity endorsements doing more harm than good to your retail sales?

Online retailers following the recent trend of signing up celebrities to endorse their products to millions of social media followers run the risk of site-melting bursts of traffic, according to Intechnica co-founder Jeremy Gidlow.

In a talk at GP Bullhound’s event “Online Fashion: Where is the smart money going?” last night, Jeremy pointed to a recent example in which Kylie Jenner drove an enormous surge of demand to the website selling her lipstick range. Because the demand was so great and concentrated at a specific time, the systems behind the website couldn’t cope and the whole site was brought down.

This highlighted a common problem retailers are facing today: Sales reduce margins whilst increasing demand, and the technology needed to cope with the increased demand can be very costly – further cutting into profits.

TrafficDefender is a SaaS solution developed by Intechnica in response to this problem. TrafficDefender allows retail websites to remain online throughout large spikes in web traffic without needing to over-invest in additional IT infrastructure.

Watch Jeremy’s talk below or find out more about TrafficDefender.

Grand National

Intechnica to reveal results of four years of monitoring Grand National betting sites and apps

Intechnica, the IT Performance Experts, will reveal the results of four years’ worth of end-user experience monitoring of the Grand National in a live webinar taking place on Thursday 12th May.

The Grand National drives the largest spike in traffic seen by betting websites and apps each year, with half the adult population of the UK placing a bet on the race.

Each year, many sites are troubled by traffic-related issues and errors caused by the massive interest generated by the Grand National, as evidenced by Intechnica’s monitoring activities since 2013. Common errors include missing content, slow pages and periods of unavailability.

Now, Intechnica are poised to reveal how the market has responded to this challenge, and will once again present findings in a live webinar.

The webinar takes place at 11am BST on Thursday May 12th and is free to attend.

Click here to register.


Intechnica in the top 50 fastest growing tech companies in GP Bullhound Northern Tech Awards 2016

Intechnica are proud to be finalists in the GP Bullhound Northern Tech Awards 2016.

Intechnica joins finalists such as UKFast, Sky Betting & Gaming, Missguided and Skyscanner as one of the top 50 fastest growing tech companies in the North.

The nomination recognises Intechnica’s growth over the past few years, and in particular the growth of TrafficDefender, our innovative traffic management and queuing SaaS product, which has been adopted by businesses such as AO World and JD Williams.

The awards ceremony takes place in Liverpool – watch this space for more information.


End User Monitoring – Reports of EUM’s Death Have Been Greatly Exaggerated

This post originally appeared on APMDigest.

Once upon a time (as they say) client-side performance was a relatively straightforward matter. The principles were known (or at least available – thank you, Steve Souders et al), and the parameters surrounding delivery, whilst generally limited in modern terms (IE5/Netscape, dial-up connectivity, anyone?), were at least reasonably predictable.

This didn’t mean that enough people addressed client-side performance (then or now, for that matter), despite the alleged 80% of delivery time spent on the user’s machine and the undoubted association between application performance and business outcomes.

From a monitoring and analysis point of view, synthetic external testing (or end user monitoring) did the job. Much has been written (not least by myself) on the need to apply best practice and to select your tooling appropriately. The advent of “real user monitoring” (RUM) came some 10 years ago – a move at first decried, then rapidly embraced, by most of the “standalone” external test vendors. The undoubted advantages of real user monitoring in terms of breadth of coverage and granular visibility into multiple user endpoints – geography, OS, device, browser – tended for a time to mask the different, though complementary, strengths of consistent, repeated performance monitoring at page or individual (e.g. third-party) object level.

Fast forward to today, though, and the situation demands a variety of approaches to cope with the extreme diversity of delivery conditions. The rise and rise of mobile (just as one example, a major UK retailer quoted over 60% of digital orders as coming from mobile devices during 2015/16 peak trading) brings many challenges to Front-End Optimization (FEO) practice. These include the diversity of device types and versions, browsers, and constrained connectivity conditions.

This situation is compounded by the development of the applications themselves. As far as the web is concerned, monitoring challenges are introduced by, amongst other things, Single Page Applications (either full or partial), “server push” content, and mobile “WebApps” driven by service worker interactions. Mobile applications, whether native or hybrid, present their own analysis challenges, which I will also address in subsequent posts.

This already rich mix is further complicated by business demands for more on-site content – multimedia and other rich content, exotic fonts, and more. Increasingly large amounts of client side logic, whether as part of SPAs or otherwise, demand focused attention to avoid unacceptable performance in edge case conditions.

As if this wasn’t enough, the (final!) emergence of HTTP/2 introduces both advantages and anti-patterns relative to former best practice.

The primitive simplicity of page onload navigation timing endpoints has moved beyond irrelevance to become positively misleading, regardless of the type of tool used.
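
To make that concrete, below is a minimal sketch of how a RUM script might time an in-app route change in a Single Page Application using the User Timing API, since no onload event fires for such “soft navigations”. The mark names, the /rum endpoint and the shape of the beacon payload are illustrative assumptions, not a description of any particular vendor’s agent:

    // Illustrative only: time an SPA "soft navigation" with User Timing marks,
    // because in-app route changes never trigger a page onload.
    function onRouteChangeStart(): void {
      performance.mark('route-change-start');
    }

    function onRouteChangeComplete(viewName: string): void {
      performance.mark('route-change-end');
      performance.measure('route-change', 'route-change-start', 'route-change-end');

      const measures = performance.getEntriesByName('route-change', 'measure');
      const duration = measures[measures.length - 1].duration;

      // Post the timing to a (hypothetical) RUM collection endpoint.
      navigator.sendBeacon('/rum', JSON.stringify({ view: viewName, duration }));

      // Clear marks and measures so repeated navigations do not accumulate.
      performance.clearMarks();
      performance.clearMeasures();
    }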

So, these changes require an increased subtlety of approach, combined with a range of tools to ensure that FEO recommendations are both relevant and effective.

I will provide some thoughts in subsequent blogs as to effective FEO approaches to derive maximum business benefit in each of these cases.

The bottom line is, however, that FEO is more important than ever in ensuring optimal business outcomes from digital channels.


Choosing the right APM – A fool with a tool is still a fool

I remember the days when, to find someone arguing about a niche technology on the Internet, you had to use a BBS. Now everything that is published can be commented on via dozens of social networks. Links for Facebook, LinkedIn, Pinterest, Twitter and so on are present everywhere, and everybody can express their thoughts. Sometimes the comments are even more interesting than the post itself.

As I’m passionate about software performance, I recently stumbled upon a post about Application Performance Management (APM) tools and their features. It was interesting because almost every person involved in the discussion worked for a company selling a specific tool and, of course, pushed the magnificent features of their own product, sometimes playing down the competitors and making a few errors in the process.

It was also interesting because I had the chance to find out about Application Performance Management products that I didn’t know at all, and I realised that the market is growing fast. There are a lot of different products, each with very specific features, so now more than ever it’s very hard to say that one product is better than another. In my opinion, you can’t say that any single product is the best; it depends on which features are important for your specific needs.

So my mind went back to when I was an Enterprise Architect for a tier-1 telco, where one of my duties was software selection. When a company like that decides to buy a product that can cost several million pounds, it explores the market, analyses the products, and evaluates them against a list of requirements. When a company decides to buy a tool, that tool must create value, satisfy specific needs, and ultimately solve problems. To make a long story short, it must implement a strategy.

What still surprises me is that the performance culture isn’t yet widespread, and often managers buy software or services that are very appealing or trendy, but aren’t actually an element of any strategy.

Web performance is a war that must be fought every day. Every day customers ask for new features and expect quicker systems. You can’t treat an Application Performance Management tool as a magic wand that will solve all your problems forever. First comes the strategy, then the budget, and only then can you look at the market to choose your tools. This is a process we often help our customers to understand.

Otherwise, you risk becoming a fool with a tool.


This post was written by Cristian Vanti, a Solution Architect at Intechnica. You can connect with him on LinkedIn.

19 Ways to Guarantee Website Uptime on Black Friday


Make sure you don’t inadvertently close up shop on Black Friday

The predicted £1bn sales figure for Black Friday has made website availability and performance a board level concern for online retailers, with preparations for this year’s event starting as soon as last year’s ended for some. At Intechnica we have been working alongside several leading UK retailers to help them be fully prepared for whatever Black Friday brings.

Without revealing all of our methods, we’ve compiled a quick checklist of things retailers should have done to plan and prepare for Black Friday, along with some last-minute checks and actions they might still be able to consider. How many have you thought about, and do you feel fully prepared?

Planning – In the lead up to Black Friday

  1. Assign an owner and team dedicated fully to the specific task of preparing the website for Black Friday.
  2. Agree on the budget – performance doesn’t come for free. How much is uptime and speedy response times worth to the business on Black Friday? (We can help you figure this out!)
  3. Find out how much traffic is expected and its nature. Work closely with the marketing department to establish this, based on planned activity along with overall trends. This information should include baseline traffic levels, the size of peaks, how quickly they are expected to hit, where the traffic is likely to come from and how users are expected to behave on the site.
  4. Measure how much traffic the site can handle and where the bottlenecks are through thorough performance testing and analysis.
  5. Validate that your test results make sense and are realistic!
  6. Identify existing means to scale up the website’s capacity, e.g. capacity planning, cloud hosting, failover capacity, disaster recovery etc.
  7. Optimise performance in quick development sprints, focusing on low-hanging fruit, and test the results regularly.
  8. Move hits away from the domain using CDNs etc.
  9. Reduce the size of traffic hits, e.g. optimise image sizes, minify JavaScript etc.
  10. Cache content as close to the browser as possible, e.g. on a webserver rather than in the database (see the caching sketch after this list).
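
As a rough illustration of point 10, the sketch below assumes a Node.js/Express web tier and a hypothetical loadProductsFromDatabase function: the result is cached in memory on the web server, and a Cache-Control header lets browsers and CDNs cache it too. It is a sketch of the idea, not a prescription for any particular stack:

    import express from 'express';

    const app = express();

    // Hypothetical slow database call we want to avoid repeating on every request.
    async function loadProductsFromDatabase(): Promise<object[]> {
      return [{ sku: 'example-sku', price: 9.99 }];
    }

    let cachedProducts: object[] | null = null;
    let cachedAt = 0;
    const CACHE_TTL_MS = 60 * 1000; // refresh the server-side cache once a minute

    app.get('/products', async (_req, res) => {
      const now = Date.now();
      if (!cachedProducts || now - cachedAt > CACHE_TTL_MS) {
        cachedProducts = await loadProductsFromDatabase();
        cachedAt = now;
      }

      // Let browsers and CDNs cache the response for 60 seconds as well.
      res.set('Cache-Control', 'public, max-age=60');
      res.json(cachedProducts);
    });

    app.listen(3000);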

Preparation – Last minute checks

  1. Ask “what could possibly go wrong?”
  2. Ask what the impact of such failures would be.
  3. Ask how you would know if such a failure was about to occur or in progress.
  4. Ask what could be done to mitigate this!

On the day

  1. Make a list of non-essential functionality that could be quickly switched off to boost performance if the site is struggling, e.g. predictive site search, live chat, etc. (a simple toggle sketch follows this list).
  2. Empower team members to be able to deal with problems quickly without needing to wade through red tape.
  3. Make sure monitoring is in place and running (real user monitoring, business metrics, social media etc.).
  4. Know who you can contact in a pinch within the business and with third parties – have a list of numbers and emails at the ready.
  5. Have an insurance policy in place in case all else fails.
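
As a sketch of point 1 above, non-essential features can sit behind simple toggles so that switching them off is a configuration change rather than an emergency deploy. The flag and environment variable names below are made up purely for illustration:

    // Illustrative feature-toggle sketch: flags are read from environment
    // variables at startup, so a feature can be disabled with a config change
    // and a restart rather than a code change.
    const featureFlags: Record<string, boolean> = {
      predictiveSearch: process.env.FEATURE_PREDICTIVE_SEARCH !== 'off',
      liveChat: process.env.FEATURE_LIVE_CHAT !== 'off',
    };

    function isEnabled(feature: string): boolean {
      return featureFlags[feature] === true;
    }

    // Elsewhere in the site code, only do the extra work when the flag is on.
    if (isEnabled('predictiveSearch')) {
      // ...call the search suggestion service...
    }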

What if all else does fail?

Many top retailers suffered very public failures last Black Friday, with sites crashing under much higher demand than anticipated. Fortunately, we have developed TrafficDefender, an insurance policy for when websites do reach bursting point come Black Friday.

TrafficDefender removes the risk of complete website outages if capacity can’t keep up with demand, and gives a better experience to any overflow of visitors than a generic error message by placing excess visitors into an orderly queue.

Even if you have followed our checklist from top to bottom, do you have an insurance policy in case all else fails?