The DevOps Dilemma

Are we focusing on resource efficiency to the detriment of flow?

As many DevOps and Agile teams know all too well, teamwork makes the dream work when it comes to flow efficiency. There are few things more satisfying than an efficient DevOps operation and ticking items off the to-do list. But with today’s DevOps teams often stretched thin, it’s easy to focus on the wrong things and lose sight of the bigger picture. We’re talking about resource efficiency versus flow efficiency.

Prioritizing resource efficiency over flow efficiency could be holding teams back and undermining big-picture progress. In this article, we’ll discuss why examining and measuring how items flow through the system is just as important as assessing individual efficiency.

A better way to track teamwork

Focusing on the output of an individual contributor in a value stream could actually be harming the overall performance of the system. It might seem counterintuitive, but DevOps teams must look at the bigger picture – in other words, overall organizational efficiency – before highlighting and breaking down individual resource inefficiencies.

To achieve this, DevOps teams need to start monitoring the right things. Tracking time spent on coding projects is useful, for example, but it only measures individual output, so you’re less likely to see the full impact of the team’s collective work.

Instead of measuring individual resource input, managers could consider monitoring cycle time. Examining the duration from the start to the finish of each piece of work gives a better picture of flow efficiency before homing in on individual output. Aging is another metric worth exploring: looking at how long an item sits at a given stage allows managers to make better decisions and shift allocations accordingly.
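As a rough illustration of what this could look like in practice, the sketch below computes cycle time and aging from simple start/finish timestamps. The data structure and field names are hypothetical rather than taken from any particular tracking tool.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical work-item record; field names are illustrative only.
@dataclass
class WorkItem:
    key: str
    started: datetime
    finished: Optional[datetime] = None  # None while the item is still open

def cycle_time(item: WorkItem) -> Optional[timedelta]:
    """Elapsed time from start to finish; None if the item is still open."""
    if item.finished is None:
        return None
    return item.finished - item.started

def aging(item: WorkItem, now: datetime) -> timedelta:
    """How long an item has been sitting in the system so far."""
    end = item.finished or now
    return end - item.started

items = [
    WorkItem("APP-101", datetime(2024, 3, 1), datetime(2024, 3, 8)),
    WorkItem("APP-102", datetime(2024, 3, 4)),  # still in progress
]

now = datetime(2024, 3, 12)
for item in items:
    print(item.key, "cycle time:", cycle_time(item), "| age:", aging(item, now))
```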

Likewise, it could also be worth monitoring WIP (work-in-progress) levels at each stage and aiming to keep them low. Reducing batch sizes – both the size of stories and the number of items moved between stages in the value stream at once – could mean a steadier rate of progression. It’s also good practice to ensure that items progress all the way through the value stream to completion before allocating new tasks to the same team member.
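A minimal sketch of what monitoring WIP against limits might look like, assuming a simple snapshot of which item sits in which stage; the stage names and limits here are illustrative, not prescriptive.

```python
from collections import Counter

# Hypothetical board snapshot: item -> current stage.
board = {
    "APP-101": "In Progress",
    "APP-102": "In Progress",
    "APP-103": "Testing",
    "APP-104": "Testing",
    "APP-105": "Testing",
    "APP-106": "Review",
}

# Assumed WIP limits per stage; real limits come from the team's working agreement.
wip_limits = {"In Progress": 3, "Testing": 2, "Review": 2}

wip = Counter(board.values())
for stage, limit in wip_limits.items():
    count = wip.get(stage, 0)
    status = "OVER LIMIT" if count > limit else "ok"
    print(f"{stage}: {count}/{limit} {status}")
```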

Mitigating work starvation

One key challenge arises when developers focus on completing their work in its entirety, holding off on releasing tasks to the next phase until everything on their list is done. This creates bottlenecks, which in turn waste resources, time, and money.

Switching to smaller batch sizes could help to mitigate this issue. Large batch sizes often lead to ‘starvation’ of work in the testing or implementation areas of the value stream and tend to increase cycle time, because the amount of work on someone’s plate at any given time can warp their sense of efficiency. Smaller batches enable speedier feedback on smaller iterations of new features and updates, allowing the project to progress more quickly overall.
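A back-of-the-envelope sketch of the effect: if work is handed over only when a whole batch is done, the first feedback arrives only after the entire batch has been worked through. This deliberately naive model assumes one item is worked at a time.

```python
def first_feedback_day(batch_size: int, days_per_item: float = 1.0) -> float:
    """Day on which the first batch reaches testing and feedback can begin,
    assuming one item is worked at a time (a deliberately naive model)."""
    return batch_size * days_per_item

# Ten items of work, handed over in progressively smaller batches.
for batch in (10, 5, 2, 1):
    print(f"batch size {batch:>2}: first feedback after {first_feedback_day(batch):g} day(s)")
```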

Making visibility a priority

To truly eliminate (or at least reduce) work starvation and achieve a smoother collective effort, the whole team must gain better visibility of the entire process. Having a big-picture view of the progress and status of the full value stream is essential to streamlining the flow of tasks through the product team.

Implementing a value stream management platform can lead to much greater clarity, enabling better visibility and control over every team, tool, and pipeline throughout the organization.

With the right software delivery dashboards, managers can better examine the rate of value delivery against desired business outcomes. More specifically, being able to analyze value stream flow metrics means businesses can view their overall production through a wider lens, supporting better knowledge and stronger decision-making.

These flow metrics can also provide better insight into the organization’s workflows in general. Naturally, achieving better consistency is the ultimate goal. With tools such as cumulative flow diagrams (CFDs), managers can see how efficiently work is progressing through the workflow.

Thanks to the clear way in which a CFD presents a project’s data, every team and individual member can see whether everything is flowing well, with no glitches, bottlenecks, or periods of work starvation. Likewise, bulges, inconsistencies, and discrepancies in the chart signal to managers that tasks are getting held up, not being completed, or not being passed on to the next phase.

Occasionally, managers may notice that a band in a CFD flattens or disappears altogether. That means someone is not getting work passed on from others, or a team member is holding on to their batch of work. Because the chart is cumulative, its curves never decline, but managers will clearly see the areas where they need to focus on improving flow efficiency. By looking at the whole value stream in this way, project managers can synchronize their team’s tasks effectively and allocate duties so that everyone is working in tandem.
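For teams that want to build this view from raw data, the sketch below shows one way a CFD’s bands could be derived from stage-entry events; the events and stage names are made up for illustration, and a real implementation would pull them from a work-tracking tool’s history.

```python
from datetime import date, timedelta

# Illustrative stage-entry events: (item, stage, date entered).
STAGES = ["To Do", "In Progress", "Done"]
events = [
    ("APP-101", "To Do", date(2024, 3, 1)),
    ("APP-101", "In Progress", date(2024, 3, 2)),
    ("APP-101", "Done", date(2024, 3, 6)),
    ("APP-102", "To Do", date(2024, 3, 2)),
    ("APP-102", "In Progress", date(2024, 3, 5)),
    ("APP-103", "To Do", date(2024, 3, 3)),
]

def cfd_counts(events, stages, start, end):
    """For each day, count how many items have entered each stage on or before that day.
    Plotting these per-stage arrival counts gives the bands of a CFD; the vertical gap
    between adjacent curves is the number of items currently sitting in the earlier stage."""
    day, rows = start, []
    while day <= end:
        row = {
            stage: sum(1 for _, s, d in events if s == stage and d <= day)
            for stage in stages
        }
        rows.append((day, row))
        day += timedelta(days=1)
    return rows

for day, row in cfd_counts(events, STAGES, date(2024, 3, 1), date(2024, 3, 7)):
    print(day, row)
```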

Essentially, many developer teams are unwittingly damaging their business’s overall efficiency simply by not seeing the bigger picture and focusing on resource efficiency, which often leads to flow inefficiency.

At a time when software development is becoming increasingly competitive, Agile and DevOps professionals must move away from an individual approach to value delivery and switch to a more system-centric way of managing, in order to optimize long-term flow efficiency.

Bob Davis

Bob Davis, CMO at Plutora, has more than 30 years of engineering, marketing and sales management experience with high technology organisations from emerging start-ups to global 500 corporations. Before joining Plutora, Bob was the Chief Marketing Officer at Atlantis Computing, a provider of Software Defined and Hyper Converged solutions for enterprise customers. He has propelled company growth at data storage and IT management companies including Kaseya (co-founder, acquired by Insight Venture Partners), Sentilla, CA, Netreon (acquired by CA), Novell and Intel.
