Like many large and historic organizations, Erste Bank has faced challenges digitizing operations. This paper examines challenges faced at George, Erste Bank’s online and mobile banking application, that we think are relevant to others. It describes how we tackled siloed structures by integrating Operations experts directly into development Clusters, implemented shift-left strategies to reduce handoffs, and used Value Stream Mapping to identify inefficiencies—ultimately enabling shared ownership, faster delivery, and closer collaboration.
1. INTRODUCTION
Erste Bank has a history going back over 200 years and is one of the largest banks in Austria and the CEE region. From a simple savings bank founded in Austria in 1819, it has grown into a banking giant with more than 16 million customers and 45,000 employees in 7 countries. Throughout its long history, the bank has gone through many changes and transformations to respond to changing market conditions.
Digitization has been one of the most important areas of focus over the last decades, and banks have had to adjust to increased online and mobile banking. Erste Bank recognized early on that this would become important and embarked on a bold strategy – rather than just building a web or mobile application within the existing structures and processes of the bank, they established a startup within the bank and, as a result, George was born. With a focus on design and usability, George quickly became the best and most user-friendly banking application in Austria and has since been rolled out to 6 other countries.
Since George started out as just an idea with a very small team in 2012, it has grown rapidly and undergone many changes. Today the app has over 10 million users, and there are currently more than 30 development teams dedicated to improving George. As more teams were added over the years, George has undergone numerous organizational changes. One part of the organization that was generally not involved in the various transformations was Operations, which formed its own silo, and continued to work in a rather traditional manner. In this experience report, we examine the problems that arose from this arrangement and how we have tried to tackle them.
The current organizational setup can be seen in Figure 1 below. Development teams are organized into 7 Clusters, such as Payments, Products, etc. Each of these Clusters is made up of several teams.
Across the Clusters, we have chapters organized around technical expertise. For example, Business Analysts (BAs) work within each Cluster, but also form their own chapter to make sure that standards and practices are aligned. Underlying all of this are the supporting departments, which interface with the Clusters, but are organized differently. Operations is one of those departments, which was seen as fully separate from development.
Carmen Gruber, carmen.gruber@erstegroup.com
Julian Beyer, julian.beyer@erstegroup.com
Balint Puster, balintpeter.puster@erstegroup.com
Copyright 2025 is held by the authors.

Figure 1. Organizational set-up with Operations located outside of the Clusters.
The aim of incorporating the above skills in the development teams was to give teams full ownership of their domain. However, the fact that Operations remained a separate department hampered effective collaboration and prevented full ownership. Communication between Operations and the development teams was limited and artificial: mostly asynchronous, via Jira tickets, and rarely face-to-face. Tasks were not always aligned between teams, and dependencies were not always clearly identified and communicated. As Figure 1 shows, other expertise also exists outside of the Clusters. However, the dependencies between the Clusters and Operations are the most critical, which is why we have directed our focus there. While there are other challenges as well, we do not aim to address them in this paper.
2. PROBLEM
Over time, as the complexity and scope of George has grown, the siloed nature of the Operations department has increasingly led to problems. What worked reasonably well for a smaller organization began to show fault lines as the number of people developing and maintaining George grew. The main issues we identified are the following:
2.1 Communication
Almost all communication between Development and Operations teams happened via Jira tickets. This led to constant “ticket ping-pong,” miscommunication, delays, blame-shifting, and frustration on both sides. Development teams could not understand why Operations was slow or unresponsive, while Operations had to manage an overwhelming number of systems they barely understood, with a ticket backlog that stretched for the next year and a half. The lack of real communication and collaboration also meant that development teams sometimes started work on new services, only to be blocked by something they needed from Operations. On the other hand, Operations often felt bombarded with last-minute requests and tight deadlines for tasks they had no prior knowledge of. This mutual frustration resulted in increasingly complicated processes, with each side insisting that the other follow strict protocols. Unsurprisingly, this only made matters worse.
2.2 Hand-offs
The separation between Development and Operations teams led to over-engineered processes and numerous hand-offs, resulting in a “throw-it-over-the-wall” mentality. A value stream mapping exercise revealed that deploying a new microservice to production involved 7 different teams and over 15 hand-offs between 4 different Jira projects. This is shown in Figure 2 below, with each colored sticky note representing a different team, and therefore a hand-off. It is a simplified view, but it gives a sense of how complex the process was.

Figure 2. Simplified Value Stream Map of delivery process.
This surprised some in management but helped explain why releases to production were so slow. As noted in The DevOps Handbook: “Even under the best of circumstances, some knowledge is inevitably lost with each handoff” (Kim et al., 2016, p. 120). Even with a conservative estimate of 10% knowledge loss per hand-off, by the end of our release process only about a fifth of the original knowledge remained. This left those further along the chain working almost blind, disconnected from the context of the problem. This was seen, for example, in a real situation where a deployment request was moved between teams for two weeks, only to be cancelled on the day of the deployment because the Ops person responsible for deploying was missing a single value in Jira. At this point in the process, he relied solely on Jira being filled out correctly, and the entire deployment fell apart when this was not the case.
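The arithmetic behind this estimate is easy to verify. A minimal sketch in Python (the 10% loss rate and the 15 hand-offs come from the text above; the function name and the constant-loss assumption are ours):

```python
def knowledge_remaining(handoffs: int, loss_per_handoff: float = 0.10) -> float:
    """Fraction of the original context that survives a chain of hand-offs,
    assuming a constant fractional loss at each step."""
    return (1 - loss_per_handoff) ** handoffs

# With the 15 hand-offs from our release process:
print(f"{knowledge_remaining(15):.0%}")  # prints "21%"
```

Compounding is what makes long hand-off chains so destructive: each individual loss looks tolerable, but fifteen of them in a row leave only about a fifth of the context intact.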
2.3 Lacking Ownership
Another critical consequence of the disconnect between Development and Operations was the lack of technical ownership. Developers felt responsible for their code but not for ensuring that it ran smoothly in the environments, that errors were properly handled, or that performance was optimized. Their responsibility ended once the feature code was written. In firefighting or incident management, this lack of ownership caused serious issues. Operations staff who were on-call got woken up in the middle of the night to fix problems with services failing to deploy, often with no documentation or context from the team to help solve the issue. The result: rollbacks and escalations from management about delays on the Operations side. Afterwards, potential solutions for these incidents were often half-heartedly prioritized by the development teams, who were already focused on their next feature delivery. Phrases like, “I’m here to write code, not fix operations problems,” were not uncommon.
3. SOLUTION
To address the problems outlined above, George decided to move toward closer collaboration between the Development and Operations teams. The term DevOps was intentionally not used: it is interpreted in so many different ways and already carries so much emotional baggage that we found it to be more of a hindrance than a help in achieving our goals. However, we wanted to take an iterative approach, so we designed the initiative to have a few specific areas of focus. One of those was to integrate Operations experts who are shared across multiple teams in our Clusters, thereby providing direct expertise as close as possible to the teams.
To have clarity around our goal, we first worked with management to craft a Statement of Why that could be shared with everyone and was intended to serve as our north star. It was intentionally formulated to be ambitious, and to provide direction rather than something that could be quickly and easily achieved. In the process of refining our purpose statement, we also came up with a name for the initiative: TOM-CAT (Target Operating Model – Care, Accelerate, Transform). Our Statement of Why for TOM-CAT is the following:
We empower our teams to take full ownership to build, deliver and maintain better George products.
With this objective in mind, we defined a few key areas to begin our journey towards a new target operating model that enables full product ownership.
3.1 Shift Left
To understand where we could start to eliminate hand-offs, we returned to the Value Stream Map that we had already completed, which exposed the complexity of the release process in starkly visual terms. Thinking about how to simplify the process proved to be extremely daunting, so we settled on a shift-left approach by asking ourselves “what is the first hand-off we can eliminate from the beginning of this process to shift the entire process one step to the left?”. One of the first things we identified was that a separate team was creating the Jira ticket for the deployment of new microservices to various environments. After investigating why there was a handoff, the sad but simple answer was: because nobody else wanted to take the responsibility. This separation of the team doing the actual work and the team writing the ticket often led to a lengthy back-and-forth because the team writing the ticket did not have the necessary information and thus created tickets that were not useful to the dev team. Not only did this cost time, but it also created frustrations on both sides. This made it an easy hand-off to eliminate and allowed a first shift towards the left. We also discovered other simple, routine tasks that had been inexplicably outsourced to other teams and began working to bring them back to where they belonged; namely to the team that had the most knowledge and was closest to the actual work.
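The value of the map comes from separating active work from waiting between teams. A toy model of this (the step names and hour figures below are illustrative, not our measured values):

```python
from dataclasses import dataclass

@dataclass
class Step:
    name: str
    process_hours: float  # time spent actively working on the step
    wait_hours: float     # queue time before the next team picks it up

def lead_time(steps: list[Step]) -> float:
    """Total elapsed time from request to done."""
    return sum(s.process_hours + s.wait_hours for s in steps)

def flow_efficiency(steps: list[Step]) -> float:
    """Share of the lead time spent on actual work rather than waiting."""
    return sum(s.process_hours for s in steps) / lead_time(steps)

release = [
    Step("write deployment ticket", 1, 40),
    Step("review and approve", 2, 24),
    Step("deploy to environment", 2, 8),
]
```

In maps like ours, most of the lead time sits in the wait columns between teams, which is why eliminating a hand-off entirely (deleting a Step, not shaving its process time) moves the needle far more than speeding up the work itself.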
3.2 ClusterOps
As described in Section 2, the lack of communication and collaboration between Dev and Ops teams working in their own silos led to a lot of inefficiency and frustration, in addition to making anything even remotely resembling end-to-end ownership virtually impossible. Bringing developers and operations closer together thus became a priority. We started very small by identifying a friendly development team with a new service to roll out and moved a willing person from operations directly into this team. Starting with just one team allowed us to have a small pilot to learn from. It was important to us that the Ops person join the Dev team fully and take part in all of the Scrum ceremonies as a regular member of the team. One early learning was that we would need to do Scrum trainings for the people from Operations, since they had never worked in an agile team. We also realized that we would need to set up some guidelines to help teams navigate this new way of working.
We also knew that just moving people from the Ops team into a Dev team wouldn’t substantially improve things if we didn’t also work on distributing knowledge within the team. Simply having an Ops person in the team doing all the same tasks as before would only be moving the bottleneck, albeit with some improvements in the length of communication loops. We therefore also set up some guidelines and templates for knowledge sharing within the teams.
After two months of gathering information and improvements from our initial pilot team, we began rolling it out on a broader scale. Due to limited resources, it was not possible for every team to have a dedicated Ops person. Instead, as our dev teams are organized into Clusters, we slowly started to move most of the people from the Ops department into the Clusters, while simultaneously creating a new Central Ops team. While Cluster-based Ops focus on product-specific concerns, some operational tasks are best handled at a higher level. This central team sets organization-wide standards, oversees high-level monitoring and alerting, and manages cross-cutting initiatives like compliance, site reliability, or security guidelines. By centralizing these responsibilities, we ensure consistency and best practices across all Clusters, while also allowing each Cluster to remain focused on its own product or service. Additionally, we introduced a new Ops Chapter to align operations standards and practices across Clusters.

Figure 3. Operations moved into Clusters and organized into a Chapter.
The intent behind this was firstly to bring Dev and Ops closer together, collaborating regularly towards achieving the Cluster goals. We hoped to improve not only communication by having more direct contact, but also greater understanding for the full life cycle of building, deploying, maintaining, and operating software solutions – and a transfer of knowledge and skills so that the teams could begin owning more of the process, together.
3.3 Automation
In addition to addressing hand-offs and communication gaps, automation became a key topic in the TOM-CAT initiative as well. While we already have many tools—automated pipelines, deployment scripts, and monitoring solutions—automation alone was not enough. What we discovered is that processes, not tools, were often the bottleneck. Many tasks still required human coordination: when to deploy, how to share information, or how to handle approvals. In some cases, steps like generating deployment notes or handling certificates were still done manually, even though the technical foundation for automation existed. To fully benefit from automation, we realized we needed to rethink responsibilities and workflows. Only by simplifying and clarifying who owns which part of the process could we make automation truly effective and scalable.
4. EXPERIENCES SO FAR
We began our TOM-CAT initiative in March 2024. Over the past year, we have had successes, failures, and many insights and learnings along the way. In this section, we examine the extent to which we have mitigated the problems identified in Section 2, and what challenges we have faced.
4.1 Handoffs
We have seen some successes in reducing hand-offs, partially through some re-organization and partially through automation. We consolidated two teams that were both responsible for different parts of release governance into one team. This means that they now work closely together and are aligned in their approach, unlike before when they reported to different managers and rarely spoke directly to one another.
Additionally, we have automated the collecting of change logs and release notes. This was previously done manually, which was error-prone and often involved some back-and-forth to check that it was correct. As an automated process, these hand-offs are no longer necessary. Another improvement that we are working on is to split up the app config, which is currently handled in a monolithic way. This has resulted in lacking ownership of configurations, and many handovers between development teams, cloud platform teams, and operations. Splitting this up so that every service has its own config that is owned directly by the team developing the service will eliminate these handoffs and leave the responsibility for these files where it belongs. Teams can then automate their own configurations to the extent that it makes sense, but our focus is on eliminating the hand-off in order to enable automation in the first place. As with everything, this is a pilot that needs the right balance between centralization and fragmentation. We will monitor closely to see if this approach leads to increased overhead or divergent configuration practices that become unmaintainable.
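We cannot reproduce our actual pipeline here, but the core idea of the change-log automation can be sketched as follows, assuming commit messages carry Jira-style ticket keys (the function, regex, and output format are our simplification, not the production code):

```python
import re
from collections import defaultdict

TICKET_RE = re.compile(r"\b([A-Z][A-Z0-9]*-\d+)\b")  # Jira-style keys, e.g. PAY-1234

def build_release_notes(commit_messages: list[str]) -> str:
    """Group commit messages by Jira project key into Markdown release notes,
    replacing the manual copy-and-check round trips described above."""
    grouped: dict[str, list[str]] = defaultdict(list)
    for msg in commit_messages:
        match = TICKET_RE.search(msg)
        project = match.group(1).split("-")[0] if match else "UNTRACKED"
        grouped[project].append(msg.strip())
    sections = []
    for project in sorted(grouped):
        sections.append(f"## {project}")
        sections.extend(f"- {msg}" for msg in grouped[project])
    return "\n".join(sections)
```

Because the notes are derived directly from the commits, the back-and-forth to verify correctness disappears: if a change is in the release, it is in the notes.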
One learning has been that automation in and of itself is not a panacea for our problems. Although automation is often seen as a silver bullet to create seamless processes, much of the complexity that we face cannot be solved with automation. Looking at our landscape, we already have a lot of the automation that we need, but our main constraint is the complexity of our environments, which we cannot easily change due to agreements with our country banks. It is the coordination with our country peers that is our biggest challenge, which cannot be automated away. Thus, we have recognized that we need to simplify our human processes before automation really can be effective. In tandem with this, we also realized that we need to be mindful of what should be automated versus what we should just get rid of. Automation is expensive and needs to be maintained. Automating processes that should not exist in the first place is massively wasteful. Therefore, our focus is on shifting left first and automating second.
The concept of shifting left also helped us to introduce a mindset change. Looking at our change management and release process made a full DevOps implementation feel daunting, even impossible. This was demotivating and made it feel like a useless endeavor. However, by introducing an incremental shift-left approach, we could focus on small steps, and the fact that even a 10% decrease in handoffs would be a massive improvement. This helped to create focus because each small improvement feels achievable and has helped us to prevent perfection from being the enemy of progress.
4.2 Communication
Having Operations team members work directly in Clusters with development teams has led to shorter feedback cycles, less ticket ping-pong, and more direct communication. It may sound almost too simple to be true, but you’re far less likely to make a teammate wait than someone outside the team. You’ll also talk to a teammate differently than you would to someone in another department— you’re more direct and pragmatic. Most importantly, with each conversation, you gain new knowledge about the context, the product, and its technologies. As a result, once Dev and Ops have solved an issue together a few times, either role can handle it independently moving forward.
We can point to several instances where the roll-out of a new service was cut down from quarters to less than a month. Additionally, some long-standing tasks that never had enough priority previously were tackled and quickly resolved once Operations could focus on a specific product suite, and the necessity of the task was understood. Previously, Ops was bombarded with issues from all the different product suites, and the responsible person changed constantly. Being able to now focus just on a specific part of the product has reduced context-switching and promoted a feeling of competence and ownership for that particular domain. As a result, we have seen a greater willingness to jump in and directly solve problems that previously would have been shuffled around the Ops department without anyone feeling particularly responsible.
These results clearly demonstrate lean principles in action. By eliminating unnecessary handoffs and allowing operations to focus on one product suite, teams can quickly identify and solve issues, cutting lead times dramatically. It also provides focus, and with it a greater sense of ownership.
As such, tickets that had been open for months were suddenly resolved within hours. This showed the impact that just having people together in daily stand-ups had and was positively perceived from all sides. Figure 4 below shows the cycle time for the Financial Health Cluster, which was the first Cluster to integrate Operations. Although we did see an initial increase in both the average and median cycle time at first, in the past months we have seen a significant improvement as the teams have found their rhythm.

Figure 4. Cycle Time development in the Financial Health Cluster before and after Operations integrated into Cluster, indicated by red line.
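The averages and medians plotted in Figure 4 are computed in the usual way from per-ticket start and finish dates; a minimal version (which Jira statuses count as “started” and “done” is a team-level choice we omit here):

```python
from datetime import date
from statistics import mean, median

def cycle_times_in_days(tickets: list[tuple[date, date]]) -> tuple[float, float]:
    """Average and median cycle time, in calendar days, for a list of
    (work_started, work_finished) date pairs."""
    durations = [(finished - started).days for started, finished in tickets]
    return mean(durations), median(durations)
```

We track both statistics because they diverge in useful ways: a handful of stuck tickets drags the average up while the median stays flat, which is exactly the pattern we watched for during the transition period.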
Although we have seen some successes in terms of shorter communication cycles and Clusters/teams distributing knowledge of some operations tasks, on the whole knowledge transfer within the teams and Clusters has proven to be a challenge. On the one hand, we have found it hard to carve out sufficient time for knowledge transfers. Unsurprisingly, teams have often taken the rather easy way out and have just treated their new team member as their dedicated Ops person who they could simply throw Ops tasks at. Because the Ops department previously also worked in a very traditional manner, this has not been met with much resistance. This means that in some cases we have simply moved a bottleneck from outside the team to inside the team but have failed to minimize or remove the bottleneck. We see a continued need to drive a mindset change in this regard, and to have patience for the time needed to transfer knowledge.
We also learned that effective communication would require more than simply putting people together. The success of changing the mindsets of individuals coming from an Ops background to a more agile way of thinking was and still is strongly dependent on the individual. Despite Scrum training, some view working in Scrum as just another way of working in a project management framework, while others have embraced agile working more. A factor here is likely the zeal and effort with which Clusters have initially adopted this new model. Those who were more successful in embracing their Ops person also see better results in terms of changing mindsets.
Integration of Ops has been uneven across Clusters. Some Clusters have embraced the new approach and made real efforts to incorporate operations into the teams, while others have made half-hearted attempts at best. Some Clusters took weeks to even reach out to their Ops person, didn’t organize a kick-off with them, and failed to invite them to the relevant meetings and ceremonies. This naturally resulted in a bad start with low levels of motivation, as it signaled that the Cluster was not serious about implementing any changes. To tackle this, we have begun to think about ways to incentivize more desired behaviors, as well as ways to ensure a minimum standard across Clusters.
4.3 General Learnings
In addition to those mentioned above, we have a few general learnings to share. Firstly, our initial 2-month pilot with just one team was too short. We should have had a longer pilot phase to collect insights, ideally with a few teams instead of just one. From there, it may have been wiser to do a staggered roll-out rather than a big bang. This likely would have allowed more time to effectively communicate and support both the Clusters and Operations to manage this change. Indeed, the entire initiative should have been communicated better in general. Clumsy communication led to early problems that could have been avoided. Lastly, we learned how important it is to have the right people in key positions. The driver of our automation initiative proved to be very thoughtful not only about how to automate things, but crucially, WHAT to automate and what to just get rid of instead. This ability to take a step back and see the larger picture has been invaluable.
Moreover, we have found the shift-left approach very valuable. It has allowed for small but iterative improvements and continues to help us identify the next process step to improve. Starting with low-hanging fruit like eliminating the handoff between one team writing a Jira ticket for another team also helped us to gain momentum and notch some early successes. In general, starting small and experimenting has served us well, and is something we endeavor to do more of in the future.
ACKNOWLEDGMENTS
First and most importantly, we want to acknowledge and thank all the individuals in our organization who have been part of TOM-CAT. Without their willingness and support, none of this would have been possible. We also thank the Agile Alliance for considering this topic important enough to share with others. Lastly, we thank our shepherd Filipe Correia for his guidance in helping to improve this paper.
REFERENCES
Gall, Michael and Federico Pigni. “Taking DevOps mainstream: a critical review and conceptual framework”. European Journal of Information Systems. 14 Nov. 2021, pp. 548-567.
Kim, Gene, et al. The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations. IT Revolution Press, 2016.
Jiménez, M., et al. “DevOps’ Shift-Left in Practice: An Industrial Case of Application”. In: Bruel, J.M., Mazzara, M., Meyer, B. (eds) Software Engineering Aspects of Continuous Development and New Paradigms of Software Production and Deployment. DEVOPS 2018. Lecture Notes in Computer Science, vol 11350. Springer, Cham. https://doi.org/10.1007/978-3-030-06019-0_16
Manos, Tony. “Value Stream Mapping – an Introduction”. Quality Progress, vol 39, issue 6, 2006, pp. 64-69.