In this post I will argue for what I believe should be the minimum level of capability for Enterprise IT in 2020.
I use the term Minimal Viable Vision (MVV) because there is a propensity in the financial sector (in which I work), perhaps more than in other sectors, for IT to make claims that it rarely manages to fully deliver. The reasons for this have as much to do with the business wishing for the impossible as with overly ambitious employees attempting to provide it. There is also the tacit acknowledgement that the claims made for new strategies must be unrealistic in order to survive the board approval process.
In this instance I haven’t had to suffer board approval, so here’s where I think Enterprise IT must be by 2020 if it is to become an efficient service provider to the rest of the business, as opposed to an embarrassing cost centre.
Minimal Viable Vision Statement for 2020
Where would we be without a vision statement? Surely lost in the fog and din of confusion and, at best, navigating the Enterprise into the Doldrums. Our MVV is thus:
Recognising that the essence of technology is to increase the capacity and capability of human action, we will eliminate all IT that does not generate value for our customers, employees, shareholders and other stakeholders
Let’s be honest about Enterprise IT vision statements. Striving for excellence, being a world-class provider of IT products and services, being a recognised leader in this, that and the other: all of these statements, while expressing positive aspirations, do little to convey the purpose of IT or reflect the reality of its position within the organisation, i.e. that of a cost centre and often a source of frustration.
Our MVV statement sets a clear goal for IT which, if executed correctly, will result in far fewer under-performing assets, whether they are people, processes or technology, and will reallocate resources to those areas of IT that can clearly demonstrate value generation. I’ll discuss some of the Key Performance Indicators we can use to measure value later, but in a nutshell, customers, employees, shareholders and other stakeholders define what value really is.
A day spent with Enterprise IT in 2020
Perhaps the easiest way to describe what success looks like is to describe a day spent with Enterprise IT in 2020, in short-story format. Before I begin, let me stress that I am not going to describe everything I think will be possible by 2020, but rather what is pragmatically achievable given the complexity of transforming the minefield of existing IT in global financial institutions.
I’m a developer working in the engineering function of a trading desk. My main way of generating value is to deliver the applications or new features that my desk demands as quickly as possible. My strategy for doing this is to reuse as much as I can viably get away with, write as little code as possible (following the principle of NoDev) and get my code promoted into production as fast as possible, by virtue of speedy, event-driven automated functional and non-functional testing and timely approval processes.
My working day starts at home in London; there is no need to be present on site, as I’m not supporting the trading desk today, and I can use the hour saved in transit to be more productive. I have several meetings, but these are all attended remotely; in fact, arranging face-to-face meetings is now frowned upon by the team and just about everyone else.
Some of the middle management seem to be resisting the culture of distributed co-operation, but in the delivery-focused engineering function of the trading desk they get short shrift for attempting to haul people into the office to attend their meetings: a very inefficient and costly working practice.
I generally spend three days a week on the trading floor, the other two working from wherever happens to be the most productive place when I’m not directly supporting the desk or covering for a colleague’s absence.
I log into my desktop using my own device to check messages from various channels using my preferred message-reading application. I then open my application platform portal, which I use to manage my entire application IT stack. I review the results of the various tests executed against the last code commits submitted through the enterprise SDLC process, and view the web interface of the running application to see a change my colleague in New York pushed overnight.
I browse the source repository, the binary repository, my Java application log output and performance analytics, the database and the messaging server, all from this single portal. Only last year, in 2019, some of these technologies had to be accessed via shortcuts in my web browser, and they very much felt like the separate portals they were. I even needed to log in to them separately, which was somewhat reminiscent of the 90s…
I receive a notification that I haven’t been viewing the logs from one of our UAT environments for 4 weeks. I don’t need to do anything, because my other activity suggests I will probably access them in the next 4 weeks; however, if I genuinely don’t need access any more, I should click the link, or make the appropriate gesture, to revoke my access rights. I think to myself, “great, the best user experience for an administrative system is not experiencing it at all: the NoIT principle in action,” and ignore the notification.
My task for the day is to integrate the output of my application with a downstream component for UAT testing, configured to run in Singapore. When I say downstream component, it is actually a number of different applications, including a database, but all assembled, pre-integrated and self-contained in one deployable unit. It used to take weeks for Networks to process and implement the change requests needed to allow different applications to communicate across network boundaries, and even longer if firewall ports needed to be opened. All that changed with the introduction of the third-party PaaS and the Software Defined Network technology that we deployed to the PaaS in 2017. Our on-boarding and provisioning processes now fully automate network configuration.
From the portal application store I select the downstream component and spin it up with the configuration from Singapore. The application store is where we can find third-party vendor applications as well as applications developed in house by the various communities. I can review statistics for every product and service available: who owns it, how many projects use it, ratings, forums, instructions, how-tos and, perhaps most importantly, its live performance index. This is a single number that represents SLA compliance, security compliance and resource consumption statistics. We use the index to help decide between applications that fulfil similar functions. In the long run the lower-performing applications naturally lose adoption and are eventually decommissioned. This helps to reduce IT duplication organically, but senior management have also used the index to arrive at decommissioning decisions far more quickly.
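The post doesn’t specify how the live performance index is calculated, so here is only a rough sketch of one way it might work, assuming each input is normalised to the range 0–1 and blended with fixed weights into a single 0–100 score. The class name, weights and figures are all hypothetical:

```java
// Hypothetical sketch: assumes each compliance input is normalised to 0.0-1.0
// and blended with fixed, illustrative weights into one 0-100 number.
public class PerformanceIndex {

    static final double SLA_WEIGHT = 0.5;      // SLA compliance weighted highest
    static final double SECURITY_WEIGHT = 0.3; // security compliance
    static final double RESOURCE_WEIGHT = 0.2; // resource-consumption efficiency

    static double index(double sla, double security, double resource) {
        double score = SLA_WEIGHT * sla
                     + SECURITY_WEIGHT * security
                     + RESOURCE_WEIGHT * resource;
        return Math.round(score * 1000.0) / 10.0; // one decimal place, 0.0-100.0
    }

    public static void main(String[] args) {
        // e.g. 99% SLA compliance, fully security compliant, 80% resource efficiency
        System.out.println(index(0.99, 1.0, 0.80)); // prints 95.5
    }
}
```

Weighting the SLA term highest reflects the idea that a chronically unavailable service should sink in the rankings regardless of how little it costs to run.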
Everything I have described is hosted on our hybrid cloud, which is made up of third-party PaaS vendors and our own proprietary API. It’s really the PaaS that allows the different teams in the Bank to quickly deploy their workloads for development, testing and production. The PaaS was, and remains, the key enabling technology for delivering our strategy, and it consumes commodity IaaS from the open market; we can consume IaaS from any of the major cloud vendors at the flick of a switch.
Anyway, I make a couple of changes to my application and can see the downstream component kick into life. Excellent: job done in about an hour. I ping the community that runs the downstream application to let them know that it’s great when an application works without hours spent troubleshooting its installation, and give them a five-star rating. On the community page I sign up for a live-streamed open Q&A event scheduled for tomorrow, where I can ask some questions about the operation of the downstream component in London which, according to the road-map, goes live in production in three weeks.
I spend the rest of the day coding, regularly checking in my updated code to watch the impact on the automated performance and load testing. I try out a reflective API, which is very flexible, as opposed to the hard-coded version, which would be faster. I decide to run with the reflective implementation as it meets the response times stipulated in the SLA.
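The flexibility-versus-speed trade-off between the two versions can be sketched as follows; the Pricer class, method name and figures are hypothetical, purely to illustrate the choice described above:

```java
import java.lang.reflect.Method;

// Hypothetical sketch of the trade-off: a hard-coded call versus the same
// call resolved reflectively at runtime.
public class PricingClient {

    public static class Pricer {
        public double price(double notional) { return notional * 0.01; } // illustrative 1% fee
    }

    // Hard-coded: the fastest option, but adding an operation means a code change.
    static double priceDirect(Pricer p, double notional) {
        return p.price(notional);
    }

    // Reflective: slower per call (method lookup plus boxing), but the operation
    // name can come from configuration, so new operations need no recompilation.
    static double priceReflective(Object target, String op, double notional) throws Exception {
        Method m = target.getClass().getMethod(op, double.class);
        return (double) m.invoke(target, notional);
    }

    public static void main(String[] args) throws Exception {
        Pricer p = new Pricer();
        System.out.println(priceDirect(p, 1_000_000));
        System.out.println(priceReflective(p, "price", 1_000_000));
    }
}
```

The reflective path pays for its flexibility with a lookup and boxing overhead on every call, which is exactly what the SLA’s response-time figures have to absorb.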
Before finishing for the day I have a look at the Runtime Application Self Protection (RASP) logs to see if there is any interesting activity. Our Cyber Security team are informed if any attempts are made to compromise security, and I like to have a look at the latest attempts to hack us. There are some interesting command-line injection attempts, all of which failed at the point where they tried to execute /usr/bin/bash and /usr/sbin/genccode. Our Java code runs on a RASP-enabled JVM and is configured to simply block any attempt by the application to execute a binary (exec or spawn calls). It also looks like the latest Critical Patch Update (CPU) from Oracle has been applied virtually to all our Java infrastructure. We only apply binary patches annually now, as they require downtime; no more 90-day patching cycle for Java. The virtual patches are applied at runtime by the clever RASP-enabled JVM, without any impact on the running application. It is worth mentioning that RASP has taken us significantly closer to a perimeterless security model.
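Real RASP products instrument the JVM itself rather than application code, so the following is only an illustrative stand-in for the deny-all-exec policy described above: a guard that an instrumented exec call site might consult. All names here are hypothetical.

```java
// Illustrative only: real RASP hooks the JVM, not application code.
// This guard models the policy described above: every attempt by the
// application to execute a binary is refused.
public class ExecGuard {

    static final boolean EXEC_ALLOWED = false; // policy: block all exec/spawn calls

    static Process guardedExec(String command) throws java.io.IOException {
        if (!EXEC_ALLOWED) {
            // A real RASP agent would also raise an alert for the Cyber Security team.
            throw new SecurityException("exec blocked by runtime policy: " + command);
        }
        return Runtime.getRuntime().exec(new String[] { command });
    }
}
```

The payoff of enforcing this at the runtime layer is that injection attempts fail at the exec boundary no matter which vulnerability got the attacker that far.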
Just as I am about to finish up, I receive a notification of all the components that have been upgraded in my technology stack, including the OS. Since all of the DevOps circuit testing ran successfully, the upgrades were propagated throughout our environments automatically by our underlying third-party PaaS technology, with only one approval required to upgrade production. I remember what a nightmare OS upgrades were in the past, requiring endless planning meetings, approvals and general acrobatics. The disruption these activities caused the business was, as we decided in 2015, unacceptable.
The Long and Short of IT
I’m sure many of you will think that the above sets expectations very low for Enterprise IT, and maybe in your organisation what I have described broadly exists today. But in all honesty, if you can spend two days out of five dedicated to coding in the enterprise, you’re doing well; the rest of the time is spent chasing people to approve and provision the infrastructure needed to get the day job done, or dealing with the impact of downtime on non-production infrastructure.
In my next post on this subject I’ll relate the short story to the future-state technology stack, its architecture and capabilities, which will include:
- A single Application Portal and API
- Application Market/Store
- Community Software and Support
- Pre-integrated Products
- Enterprise SDLC and Automated DevOps
- Runtime Application Self Protection
- Perimeterless Security
- Evergreen IT, always up to date without any disruption to the business
- Hybrid Hosting Capability (Monolithic as well as 12 Factor/Cloud Native)
- All hosted on Hybrid Cloud enabled by PaaS
- Software Defined Everything (maybe only Software Defined Something by 2020).
The above reads more like a wish list than the pragmatically achievable for 2020, but in most instances I’ll be referring to a nascent capability.
Also expect KPIs for IT based upon:
- Adoption of PaaS hosted products and services
- IT’s responsiveness to business demands, reduced from months to days
- Cost reduction
And, finally, how to realise the vision with Trivector Transformation.