As strong as the weakest link
Every business has an IT environment consisting of infrastructure, middleware and applications that are connected and dependent on each other. Every single component can act as a constraint on the entire system. Since one limiting factor can influence the entire process, and thus impact the user experience, it is important to identify it and systematically improve it until it no longer functions as a bottleneck. This theory of constraints, where you are only as strong as your weakest link, is the basis of our full-stack vertical philosophy: an end-to-end approach that integrates all layers within a vertical and automates as many steps in that vertical as possible to stay in control. A vertical can be anything: Software Development, a mobile solution on public cloud, a High Performance Computing (HPC) solution, or a Hadoop or SAP HANA vertical for heavy analytics.
What defines our end-to-end vertical approach? Three terms: automation, DevOps collaboration, and analytics. The focus of our end-to-end approach is to automate the complete application landscape using infra-as-code. Environments are deployed automatically from predefined templates, and are therefore fast and uniform, giving the end user a flying start. To realize this, a variety of teams are involved in managing the infrastructure, middleware and applications; teams that used to operate separately. But end-to-end also means an agile approach: looking at the complete landscape from a multidisciplinary angle (DevOps), with the customer in the role of Product Owner. Only then are you able to manage and optimize an entire vertical, and really start to add business value in terms of productivity, quality and costs. It follows that a metrical, and thus analytical, view of all data from the landscape is also part of the end-to-end approach. Once the automation is in place and we can seamlessly control all configurations, it makes even more sense to drive this further through consistent use of analytics.
Enabling you to achieve higher productivity, better quality and lower costs
One of the advantages of our end-to-end approach is that fixes, upgrades, improvements and scaling can be deployed automatically through code. IT environments for R&D departments, for instance, regularly execute jobs that immediately demand huge compute and/or storage capacity. In an infra-as-code managed environment you can run standard templates to scale up your server capacity instantly. Or, even better, use your data and analytics to fire triggers based on a series of conditions and run that template automatically. And, important from a cost control point of view, automatically scale down again as soon as load returns to normal.
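As a minimal sketch of such a trigger, the decision logic can be reduced to a few conditions that select which predefined template to run. All names and thresholds here are illustrative assumptions, stand-ins for whatever monitoring source and infra-as-code tooling a given vertical uses:

```python
# Hypothetical sketch: trigger-based scaling driven by monitored load.
# Thresholds and template names are assumptions, not a real API.

SCALE_UP_THRESHOLD = 50    # queued jobs that justify extra capacity
SCALE_DOWN_THRESHOLD = 5   # load level at which extra capacity is released

def evaluate_scaling(queue_depth: int, extra_nodes: int) -> str:
    """Decide which predefined template to run for the current load."""
    if queue_depth > SCALE_UP_THRESHOLD:
        return "scale-up-template"      # add compute capacity instantly
    if queue_depth < SCALE_DOWN_THRESHOLD and extra_nodes > 0:
        return "scale-down-template"    # release capacity to control costs
    return "no-op"                      # load is normal, do nothing
```

Evaluated periodically against live metrics, this is what turns a manual scaling decision into an automatic one, including the cost-saving scale-down.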
Tracking down scenarios that cause bottlenecks for end users
Simply monitoring metrics doesn’t really make your environment smarter; it only makes you better informed. The next step is to tie the metrics to the behavior you want. Our end-to-end analytics combines correlated data with user experience. How? First, you must figure out which conditions, such as responsiveness, availability or transaction performance, impact the user experience at the end-user side of the application. That means you truly need to understand your end users’ way of working. So, don’t talk about technical details but about their experiences: “My simulations take too much time” or “It takes a long time before my application is started”. These statements must then be translated to the right metrics and baselined against feasible requirements.
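The translation step above can be sketched as a simple mapping from user statements to metrics with agreed baselines. The metric names and threshold values are assumptions for illustration; in practice they come from the feasibility discussion with the end user:

```python
# Illustrative only: translating end-user statements into measurable
# baselines. Metric names and thresholds are assumed, not prescribed.

complaint_to_metric = {
    "my simulations take too much time": {
        "metric": "simulation_duration_seconds",
        "baseline": 600,   # feasible requirement agreed with the user
    },
    "it takes a long time before my application is started": {
        "metric": "app_startup_seconds",
        "baseline": 30,
    },
}

def violates_baseline(statement: str, measured: float) -> bool:
    """Check a measured value against the baseline for a user complaint."""
    requirement = complaint_to_metric[statement.lower()]
    return measured > requirement["baseline"]
```

Once each complaint has a metric and a baseline, “slow” stops being an opinion and becomes a measurable condition you can monitor and act on.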
Best practice is to collect all sorts of data (e.g. network and compute stats, license usage) from the full vertical stack, from the networking layer up to the application layer. Evaluating the data in the simulation example, we could pinpoint a few scenarios that caused the user to experience a ‘slow simulation’: there was a lot of traffic on the server and the network was congested at the time, or, more difficult to address, the simulation was simply slow due to its size. Diving a bit deeper: the user was working on a very busy WiFi hotspot and various other users started compute-intensive simulations at the same time, causing a 15% slower simulation. Such an example shows that you need substantial data, accessible in one place for analysis, before you can start improving. The first step is to know what is happening at the front (i.e. understand your end user); only then can you start measuring, analyzing and, as a final step, optimizing.
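A sketch of this scenario analysis, assuming the full-stack data has been joined into one record per simulation run (the field names and thresholds are hypothetical):

```python
# Sketch under assumed data: one record per simulation run, combining
# network, compute and application metrics from the full vertical stack.

def slow_simulation_causes(run: dict) -> list:
    """Flag plausible causes behind a 'slow simulation' complaint."""
    causes = []
    if run["network_utilization"] > 0.9:
        causes.append("congested network")          # e.g. busy WiFi hotspot
    if run["concurrent_heavy_jobs"] > 3:
        causes.append("competing compute-intensive simulations")
    if run["model_size_gb"] > 50:
        causes.append("simulation is inherently large")
    return causes
```

Only because all three layers are measured in one place can the analysis distinguish a congested network from a genuinely large simulation, which leads to very different improvement actions.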
Using data to optimize
It is our job to find out how much load a system can handle and continuously tweak it to improve the end user experience. By managing the vertical end-to-end and applying data science, we provide our agile teams with insights to remove waste and improve accuracy while operating. Take, for example, optimizing peak load effects, where a peak could be defined as a single job that claims a high number of cores within a time span of less than a minute. Feedback from end users, combined with data, tells us when peak loads cause performance issues. This information is then fed into our scaling algorithm so that we can predict performance-impacting peaks and scale automatically according to the users’ needs. Evidently, the longer we monitor the data, the better we get to know the end users’ way of working and relate it to the parameters as configured or measured in the vertical, which in turn enables us to anticipate their needs faster. Applying advanced data science techniques such as deep learning offers us the means to model the behavior of the vertical even better and thereby predict its future behavior.
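The peak definition given above translates directly into a filter over a job log, which can then feed a scaling algorithm. The concrete thresholds for “a high number of cores” are assumptions here:

```python
# Sketch of the peak definition from the text: a single job claiming a
# high number of cores within less than a minute. Thresholds are assumed.

PEAK_CORES = 64        # "high number of cores" (illustrative value)
PEAK_WINDOW_S = 60     # "time span of less than a minute"

def is_peak(cores_claimed: int, ramp_up_seconds: float) -> bool:
    """Classify a job start as a potentially performance-impacting peak."""
    return cores_claimed >= PEAK_CORES and ramp_up_seconds < PEAK_WINDOW_S

def peaks(job_log: list) -> list:
    """Filter a job log for peaks to feed into the scaling algorithm."""
    return [job for job in job_log if is_peak(job["cores"], job["ramp_up_s"])]
```

Historical peaks extracted this way are exactly the labeled events a predictive model, whether a simple heuristic or a deep learning model, would be trained on.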
Every business has an IT environment consisting of infrastructure, middleware and applications that are connected and dependent on each other. By working together in an end-to-end vertical approach, you can automate complete application landscapes using infra-as-code, freeing up time to focus on improving business results. We believe in team effort, DevOps ways of working, collaborating with our customers, and using analytics to increase productivity by combining engineering and data science with a persistent focus on business value, costs and quality. Digital transformation for IT.