Shift Left – Don’t wait until delivery to test things like Performance and Security

One of the more positive outcomes of the COVID-19 crisis is that people are thinking of “the Future of Work” in the Mutable Enterprise and of business continuity in the face of the next pandemic or whatever. This is giving an impetus towards digital transformation, but (as Bloor consultant Tim Connolly points out in Remote working: how strong is your trust culture?, 16 March 2020) this has to be done carefully, with due regard to maintaining digital quality and Trust in the new digital channels.

Which is why I was listening to a webinar from Eran Kinsbruner (Chief Evangelist, Perforce Perfecto) about moving testing into the cloud because (Eran says), “to ensure business continuity at all times, teams need to utilize a cloud infrastructure that is always on, available, secure, and scalable”.

I’d agree, and Perfecto is an important player at scale. However, I was particularly taken by a question about “non-functional” testing (NFT) in the Q&A after the Webinar.

“Non-functional testing” isn’t testing that doesn’t work. It is testing against the general expectations of customers that digital channels/apps will be trustworthy and usable – secure, resilient, reliable, performant, fraud resistant, offering a good User Experience and so on. In other words, it tests the (often unexpressed) customer and business requirements that aren’t “functional requirements” (hence the name) – the requirements that are built as automated business functions such as “display catalogue”, “make sale” etc.

Eran commented that NFT was important and that Perfecto had many capabilities that supported it. He also made the excellent point that it must “shift left” – in other words, you start it as near to the beginning of the project as possible; you don’t wait until everything is built (when it is too late to find out that a digital channel has systemic security or performance issues).

I was interested to see Chris O’Malley (CEO, Compuware) say much the same about performance testing, in the context of Mainframe DevOps: “By automating shift-left performance testing, your teams can improve agility, deliver higher quality applications, reduce development costs and deliver better customer experiences”.
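To make “automating shift-left performance testing” concrete, here is a minimal sketch of the kind of unit-level performance check that can run in a CI pipeline from the very first build. The function name and latency budget are invented for illustration; a real team would point this at its own business functions and agreed budgets:

```python
import time

# Hypothetical business function standing in for real application logic.
def lookup_catalogue_item(item_id: int) -> dict:
    return {"id": item_id, "name": f"item-{item_id}"}

LATENCY_BUDGET_S = 0.05  # illustrative per-call performance budget

def test_lookup_meets_latency_budget():
    # Time many calls and check the mean stays inside the budget,
    # so a performance regression fails the build early, not at delivery.
    start = time.perf_counter()
    iterations = 100
    for _ in range(iterations):
        lookup_catalogue_item(42)
    mean_latency = (time.perf_counter() - start) / iterations
    assert mean_latency < LATENCY_BUDGET_S, (
        f"mean latency {mean_latency:.4f}s exceeds budget {LATENCY_BUDGET_S}s"
    )
```

The point is not the (trivial) timing code but where it sits: in the same automated test suite as the functional tests, run on every commit, so performance problems surface while the design is still cheap to change.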

Nevertheless, I can still find apparently respectable web sources which imply that NFT is done after functional testing (FT), which I think is misleading. In fact, I think the traditional “design, then build, then test, then check non-functional stuff if you still have time” approach is plain wrong – partly because it encourages poor testing when deadlines slip, but mainly because it is wasteful. If you discover that the basic architecture or design of a system is insecure just before you go live, you may have to throw most of it away and rebuild, as well as impacting the business with a missed deadline.

So, how do you test things like security and performance before you’ve built everything? Well, no doubt the software vendors I’ve referenced will help with specifics; but, broadly, you can look at designs and requirements, identify performance, security etc. antipatterns and get rid of them early on, before they are coded. If the requirements include, for example, remote access through the firewall with nothing but a name check against a list of authorised people held, in clear, on the browser, this is wrong on so many levels that waiting for the security officer to throw it out just before (hopefully) it goes live would be silly. Fixing such a mess before anybody starts coding is much cheaper than waiting until it is embedded in code; and a skilled developer, with help from automation, can identify many potential issues very early on.
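To illustrate how automation can help before any code exists, here is a deliberately naïve sketch of scanning requirement statements for known security antipattern keywords. The rule names and keyword lists are invented for this example; a real tool would use a curated rule base, not two hard-coded phrases:

```python
# Each rule: (rule name, keywords that must ALL appear in the requirement text).
# These rules are illustrative only, keyed to the firewall example above.
RULES = [
    ("credentials-stored-in-clear", ["in clear"]),
    ("authentication-on-the-client", ["browser", "name check"]),
]

def scan_requirement(text: str) -> list:
    """Return the names of antipattern rules that the requirement trips."""
    low = text.lower()
    return [name for name, words in RULES if all(w in low for w in words)]

req = ("Remote access through the firewall with nothing but a name check "
       "against a list of authorised people held, in clear, on the browser.")
print(scan_requirement(req))
# Both illustrative rules fire on this requirement, flagging it for review
# at the requirements stage - before anyone codes it.
```

Crude as it is, a check like this runs in seconds against a requirements document, which is exactly the economics of shifting left: the earlier the antipattern is caught, the cheaper the fix.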

Now, what is in a name? Well, I think “Non-Functional Testing” has unfortunate baggage. It implies that it is less important than testing business functions and many people are unsure exactly what is included under that heading. There is an alternative name “UX/DX (User Experience/ Developer Experience) Testing”, but that term, to most people, misses out some stuff that still needs testing.

It’s difficult, because most techies say NFT and know roughly what it means (I hope), but I think the term misleads business stakeholders, who speak ordinary English.

My favoured terms would be Customer (or business) Requirements Testing vs Customer (or business) Expectations Testing (where Expectations are the, often unstated, non-functional requirements). Of course, if these requirements really are unstated, the first stage of Customer Expectations Testing is to get the expectations stated, or you have nothing to test against.

Do readers have my problems with the term “non-functional testing”? We expect (assume) that our systems will be performant, resilient, secure, easy to use, maintainable etc., without always specifically saying so. Perhaps the term should be System Assumptions Testing…

David Norfolk

My current main client is Bloor Research International, where I am Practice Leader with responsibility for Development and Governance. I am also Executive Editor (on a freelance basis) for Croner's IT Policy and Procedures (a part-work on IT policies), and I am on the committee of the BCS Configuration Management Specialist Group (BCS-CMSG). I became Associate Editor with The Register online magazine – a courtesy title, as I write on a freelance basis – in 2005. Register Developer, a spin-off title, started at the end of 2005, and I was launch editor for this (with Martin Banks). I helped plan, document and photograph the CMMI Made Practical conference at the IoD, London in 2005 (http://ww.cmminews.com). I have also written many research reports, including one on IT Governance for Thorogood, and was freelance Co-Editor (and part owner) of Application Development Advisor (a magazine, www.appdevadvisor.co.uk, now defunct) for several years.

Before I became a journalist in 1992, I worked for Swiss Bank Corporation (SBC). At various times I was responsible for Systems Development Method for the London operation and for the Technical Risk Management framework in Internal Control, and was Network Manager for Corporate group. I carried out a major risk evaluation for PC systems connecting across the Bank’s perimeter to external systems and prioritised major security issues for resolution by the Bank’s top management in London. I also formulated a Security Policy for London Branch and designed a secure NetWare network for the Personnel Dept.

Before 1988 I was an Advisory Systems Engineer at Bank of America, Croydon, in database administration (DBA) on COBOL-based IMS business systems. Before 1982, I worked in the Australian Public Service, first as a DBA in the Dept of Health (responsible for IMS mainframe systems) and latterly as a Senior Research Officer 2 in the Bureau of Transport Economics.
Specialties: I have the ability to extract the essence of significant technical developments and present it for general consumption, at various levels, without compromising the underlying technical truth.
