In this episode of CGI’s From AI to ROI podcast series, host Fred Miskawi, Vice-President - AI Innovation Expert Services Lead at CGI, is joined by Steve Zemanick and Kevin Beaugrand, each a Director, Consulting Expert in AI.

Together, they explore the transformative impact of generative and agentic AI across the software development life cycle (SDLC)—from coding and testing to deployment, operations, and scaling.

The discussion covers where AI delivers measurable ROI, along with lessons learned and best practices from real-world implementations. It also highlights how human-AI collaboration and agentic AI are redefining how enterprise software is built, maintained and evolved.

Key takeaways from this episode:
 

1. The value of AI is maximized when applied across the entire SDLC, not just code generation.

Generative and agentic AI tools are transforming software development, but their true power lies in enhancing every development stage, from ideation and requirements to testing, deployment and maintenance. While tools like GitHub Copilot accelerate coding, organizations must adopt a holistic, human-in-the-loop approach to see meaningful outcomes in speed, quality and reliability.

According to Kevin, it’s not just about producing more code. It’s also about ensuring it’s maintainable, secure and aligned with the broader system.

2. Human-in-the-loop design is essential for building trust, ensuring quality, mitigating risk and scaling.

Despite increasing automation capabilities, human oversight remains critical. It is essential to design workflows that keep humans in the loop to mitigate risks like technical debt, compliance gaps or misaligned output. Effective adoption and use of AI across the SDLC depends on robust quality gates, human-in-the-loop design and clear measurement of impact.

As Fred points out, just because you can automate something does not mean that you should.

3. Real-world programs deliver value and ROI faster, but only with governance, coaching and change management.

CGI’s client engagements demonstrate that using AI across the SDLC can improve delivery timelines and increase quality. However, AI tools and agentic frameworks only succeed if people know how to use them properly and effectively, and if they are motivated and empowered to do so. This requires organizational change management, upskilling and internal communities of practice.

“Every project is different. Developers need to know not just what to automate, but how to build tools that match their specific context,” says Kevin.

4. AI is transforming what is possible with software development.

AI is not just enhancing existing development processes; it is also opening new possibilities for software development. For example, Steve highlights the potential to drastically reduce the technical debt of legacy systems and the importance of addressing vulnerabilities to improve maintainability.

Additionally, Kevin discusses the possibility of developing business-focused applications directly for the business, where the priority is fast ROI and value rather than long-term maintainability. However, he notes that there will still be a need for applications that remain high quality, secure and maintainable over time.

5. The future will require a new kind of developer who aligns human intent with agentic intelligence.

Looking ahead, the role of the developer is evolving into that of an “alignment engineer,” someone who can curate context, frame intent and ensure the outputs of AI agents meet both technical and business expectations. Agentic AI will enable rapid application prototyping, but maintaining enterprise-grade quality and security will remain a human-led responsibility.

Learn more and subscribe

Explore more episodes of From AI to ROI and learn how AI is transforming enterprises and government organizations. Visit cgi.com/ai for insights, resources and updates on AI-powered strategies.

Read the transcript
 

Introduction

Fred Miskawi

If you're a senior leader today, your desk is crowded with promises about AI. Promises to cut costs, boost productivity and transform your business. But you and I both know that the reality is never that simple, especially when it comes to the complex value production engine that we call the software development life cycle. Anyone can go out and buy an AI copilot. That's the easy part. The hard part is weaving this intelligence into the very fabric of how you build, test and secure software.

In this latest episode of our podcast, we're discussing how to get this right, how to navigate this real world of complexities of AI-driven development. And I'm convinced that the conversation is no longer about whether AI will change things, but about how we lead through that change and how we interlace human and AI agents together.

And to guide us, I'm joined by two of CGI's leading experts in applying AI to the SDLC, Kevin Beaugrand and Steve Zemanick, whom I work with on a regular basis here at CGI, hailing from both sides of the Atlantic today.

Together, we bring you direct experience from major client accounts, and we're going to share the lessons learned from those implementations and from scaling them. Today, we're going to go beyond the hype to give you the strategic insights needed to turn AI ambition into tangible ROI. Steve, welcome to the podcast. I'm going to hand over to you for an introduction and a little background on yourself.

Steve Zemanick

Awesome. Thanks, Fred. Glad to be part of today's conversation. I'm a director within CGI's emerging technology practice, and I focus on AI strategy and innovation for our clients. One of the top use cases that we've been focused on is how generative AI can help our clients across the SDLC. So, I spend a significant amount of time in this space exploring what's possible and also really focusing on delivering ROI and value across the value chain. I'm interested in continuing the conversation today and happy to be part of it.

Fred Miskawi

Thank you, Steve. Kevin, what about you?

Kevin Beaugrand

Yes, thank you. Thank you. I'm Kevin Beaugrand. I'm working at CGI as a Technical Architect Director for AI. I'm mainly focused on how AI can be brought into software development and on how we can also build AI agents and AI tools for software developers to increase their productivity while maintaining the quality of our delivery. I think the discussion today will be very interesting.

Fred Miskawi

Very on topic. Thank you, Kevin. And I'm Fred Miskawi. I lead our AI Innovation Expert Services at CGI globally. I believe AI's true power isn't necessarily in replacing us but in augmenting our capabilities to deliver speed to trusted value.

We've got three different segments. Segment one, AI's transformative impact on software development. Segment two, we're going to talk about real-world examples: what's happening within our client environments, digging a little deeper into the real lessons learned from these accounts. And then in segment three, we're going to cover best practices and lessons learned. What does it take to go from implementation to scale? And what's next for agentic AI?

AI's transformative impact on software development

Fred Miskawi

All right, so let's get started in the first segment, AI's transformative impact on software development. Let's start with you, Steve. What's included in that SDLC and what makes it one of the areas most impacted by this technology?

Steve Zemanick

What makes it one of the areas most impacted by AI? The reason is that the tooling has become very strong, and not only for software development: it also helps product owners refine the requirements and the activities that need to be completed by the development team, adds robustness to testing, and makes it possible to deploy and support the system much more effectively by leveraging knowledge bases. The power of generative AI within the SDLC has given us the superpower you're referencing, enabling us to deliver code faster, with a higher degree of quality.

Fred Miskawi

Thank you, Steve. We know that these tools can accelerate development processes, the generation of code or the brainstorming needed around a topic. But what you're pointing to is that it's important to look at it in a broader fashion.

Kevin, from your perspective, why would you want to focus across the entire SDLC rather than just development?

Kevin Beaugrand

Yes, regarding the SDLC, we can see that the tools used to provide more productivity are not yet at the same maturity level for every stage of software development. For example, at the conception and design stage, there is no single AI-for-design methodology yet. Teams juggle prototyping tools, LLM-powered whiteboarding and idea-generation tools, but without a clear practice.

For coding and testing, yes, our tools have reached a good maturity level. But just because you can increase your productivity and produce more code does not mean you no longer have to review it. It is very important to say that the AI generation process can make mistakes. We also have to keep in mind that we have a responsibility to ensure that every piece of code brought into a project is maintainable and secure. We have to keep that control at every stage and keep humans in the loop throughout the process. It is the same for deployment and operations, because you don't want to automatically update and redeploy your application for every issue you face during software development. You want to be sure that some operations are not simply automated but are carried out with control.

Fred Miskawi

And I say on a regular basis, Kevin, that just because you can automate does not mean you should automate. What we're seeing in terms of industry trends is that we're getting to a point where, for certain streams, it is possible to fully automate from concept all the way to production. But I think what you're pointing to is that even though you can automate, sometimes you should not. And it ties directly into quality and into alignment with what is expected. From your perspective, Kevin, how do we make sure that we design the human into that loop? How do we make sure that we balance human involvement, human intent, framing and context curation with what those agents can do?

Kevin Beaugrand

Our objective is always to identify the kinds of risk we take on when we automate a task or a workflow. Given that risk, it is also important to define the measures to apply during the workflow to ensure that the decision is made by a human with sufficient information, with insights provided by AI perhaps, but also with their own knowledge, habits and experience.

Fred Miskawi

Steve, when you're looking at the risk we see in leveraging these solutions, the technical debt that can easily and quickly be generated and accumulate, how do we manage that?

Steve Zemanick

Sure, so the risk is, as you mentioned, technical debt. We have this ability to generate many lines of code very quickly, and having a human, an expert software developer, review all of it would take a significant amount of time. What we've specifically done is look at the complete value chain of the software development life cycle. When we think about developing code, we want high coverage rates with strong unit testing and integration testing using synthetic data; that helps build quality and trust. Then, when we look at the DevOps pipeline, having quality gates for static code analysis and dynamic code analysis helps us identify whether there are any vulnerabilities or security risks in the code that need to be addressed. And having that robust DevOps pipeline, applying good CI/CD principles, helps us stop bad or low-quality code from getting into production or being committed into our repositories. We build that trust in layers, and we also want to apply best-of-breed generative AI technologies to reviewing that code as well. We'll use those tools to review the PRs, making sure that we're able to expedite that review from a quality perspective.
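To make the layered quality gates Steve describes a little more concrete, here is a minimal sketch of a gate script that a CI/CD pipeline could run before allowing a merge. The report file names, JSON fields and thresholds are illustrative assumptions, not the actual tooling or formats used on any engagement.

```python
# quality_gate.py - illustrative sketch of a CI quality gate (hypothetical report formats)
import json
import sys

# Assumed thresholds; real projects would tune these per repository.
MIN_COVERAGE = 80.0           # minimum unit-test line coverage (%)
MAX_HIGH_SEVERITY_ISSUES = 0  # no high-severity static-analysis findings allowed


def load_report(path: str) -> dict:
    """Load a JSON report produced by an earlier pipeline stage."""
    with open(path, encoding="utf-8") as fh:
        return json.load(fh)


def main() -> int:
    coverage = load_report("coverage.json")            # e.g. output of a coverage tool
    static_scan = load_report("static_analysis.json")  # e.g. output of a SAST scanner

    failures = []

    line_coverage = coverage.get("line_coverage_percent", 0.0)
    if line_coverage < MIN_COVERAGE:
        failures.append(f"Line coverage {line_coverage}% is below the {MIN_COVERAGE}% threshold")

    high_issues = [
        finding for finding in static_scan.get("findings", [])
        if finding.get("severity") == "HIGH"
    ]
    if len(high_issues) > MAX_HIGH_SEVERITY_ISSUES:
        failures.append(f"{len(high_issues)} high-severity static-analysis findings")

    if failures:
        print("Quality gate FAILED:")
        for reason in failures:
            print(f"  - {reason}")
        return 1  # a non-zero exit code blocks the merge in the pipeline

    print("Quality gate passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The point of the sketch is the layering Steve mentions: generated code only reaches the repository after automated gates and a human review have both passed.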

Fred Miskawi

Yeah, and we'll get into some of these techniques as well; I think the audience might be interested. Before we do, though, I've got a question for you, Steve, on that topic: what can the technology not do? What are some of the things that we want to do but aren't quite able to do today?

Steve Zemanick

Sure, so we all want to solve the biggest challenges as quickly as possible. Many organizations have millions and millions of lines of COBOL code, for example, in production. They may not fully understand the rules in there, and they may not have the technology skill sets needed to maintain such a large code base. So for many of our clients, converting from COBOL to modern solutions, such as COBOL to Java, is a frequent pathway we see. But we also see clients looking to understand those 15 to 20 million lines of code by generating documentation, so they can understand what's there. And that's a big task in itself, right? Even if you were to generate high-quality documentation, you would then need somebody to review it, make sure it's accurate, and make sure it's written in the style you want, one that aligns to the usability of those tech docs. That's one quick example. And obviously, it's better to take a step back and do that work in tranches, in a more controlled way, to ensure the quality of the results you're receiving from the auto-generated documentation or code, whatever that may be, is sufficient and robust. So, that's one area. I wouldn't flip it like a light switch. I would say at this point, be more incremental and more pointed about what the outcomes are.
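As an illustration of working "in tranches" rather than flipping a light switch, the sketch below batches a legacy codebase into small groups of files and asks an LLM to draft documentation for one tranche at a time, leaving a human review step between batches. The OpenAI-compatible client, model name, prompt and directory layout are assumptions for illustration only, not the approach used with any specific client.

```python
# doc_tranches.py - illustrative sketch: document a legacy codebase one tranche at a time
from pathlib import Path
from openai import OpenAI  # assumes an OpenAI-compatible API; any LLM client would do

client = OpenAI()   # reads OPENAI_API_KEY from the environment
TRANCHE_SIZE = 10   # source files per tranche, kept small enough for human review


def tranches(files: list[Path], size: int):
    """Yield successive groups of files so documentation is produced incrementally."""
    for start in range(0, len(files), size):
        yield files[start:start + size]


def draft_documentation(source: str) -> str:
    """Ask the model for a first-draft summary; a human reviews it before acceptance."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice; use whatever is approved
        messages=[
            {"role": "system",
             "content": "Summarize what this legacy program does: its inputs, "
                        "outputs and business rules, in plain language."},
            {"role": "user", "content": source},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    output_dir = Path("docs")
    output_dir.mkdir(exist_ok=True)
    cobol_files = sorted(Path("legacy_src").glob("*.cbl"))  # hypothetical source tree

    for batch_number, batch in enumerate(tranches(cobol_files, TRANCHE_SIZE), start=1):
        for path in batch:
            draft = draft_documentation(path.read_text(errors="ignore"))
            (output_dir / f"{path.stem}.md").write_text(draft)
        print(f"Tranche {batch_number} drafted; pausing for human review before continuing.")
        break  # in practice, resume only after reviewers sign off on this tranche
```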

Real world examples leveraging AI in the software development life cycle

Fred Miskawi

Great point. And that brings us to segment number two: real-world examples of how CGI and our clients are leveraging AI in the software development life cycle. The three of us are involved in one particular global account in manufacturing. Let's dig a little deeper into the lessons learned from that account.

Steve, from your perspective, based on what you've seen and the guidance you've provided to our teams and our client teams, because we work in tandem, what are some of the lessons learned you've come across as a result of working on that account?

Steve Zemanick

Sure, and the goal for us was to complete the project three months faster than originally planned, right? And to do that with the same team in place, not adding members to or removing members from that project team. The way we approached it was, one, understanding the baseline of where we are today. We needed to really understand: can we measure the impact that generative AI is making across our development activities? We wanted a good understanding of the throughput and velocity of the current team and of their ways of working, so that the technical developers were able to integrate generative AI tools into them. So, we implemented GitHub Copilot as a development assistant, but we also wanted to look at other techniques as well: how could we use agentic frameworks and agentic ways of producing high-quality code, ultimately reducing the amount of effort in especially tedious development activities? For instance, we had a focused effort to go from Figma directly to Angular code that adhered to corporate design policies. We also needed to upgrade a backend based on a very legacy Java framework to Spring Boot. How do we do that? And how do we use the right APIs along the way? It's not just migrating the functionality one for one but also using the right APIs that are performant and appropriate for the latest versions of Java. We took that multi-layered approach across different areas in order to really make an impact on the project schedule and bring it in. That's our approach on that.

Fred Miskawi

And the client was amazing. We have a trust-based relationship. And I liked how we were presenting the proof points from an ROI perspective. And then, as we keep improving and keep showing the value that a holistic approach to this brings to the table, we are able to say together, “You know what, I think we can do it.”

Kevin, what about you on your side of the Atlantic? What are some of the experiences you've gained as a result of working with a client and the efforts that you're taking on within different parts of the organization?

Kevin Beaugrand

Yes, for this customer, we are working with them to develop and maintain a thousand applications for their industry. Those applications are meant to provide an accurate digital representation of their production systems, so it is important for them to be sure we have sufficient quality before an application moves to production. Together with this customer, we developed a dual adoption framework that balances velocity and control. The first objective is global enablement and experimentation: we provide ways to upskill our teams, from the business analysts to the developers and the support analysts, so they can use generative AI tools correctly and efficiently in their daily tasks. And for some critical applications, we are measuring our impact on these projects by following key indicators.

Just before, Steve mentioned indicators like lead time and change failure rate for application stability metrics. We also track the quality of what the business analysts produce, which can impact the whole software development life cycle and beyond. So, to sum up, we have this accelerate-and-control approach, where we have to manage both the productivity and the quality of what we deliver. It is very important for us and for our customers to deliver a clear ROI. Within that program we also provide coaching on the use of AI for their teams, and we run communities of practice to share AI practices and to increase knowledge of what we can do and how to do it for their tasks.
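For readers who want to see what "following key indicators" can look like in practice, here is a minimal sketch that computes two of the indicators mentioned here, lead time for changes and change failure rate, from a list of deployment records. The record format and values are assumptions for illustration; a real program would pull this data from its delivery and incident tooling.

```python
# delivery_metrics.py - illustrative sketch of two delivery indicators
from datetime import datetime

# Hypothetical deployment records; in practice these come from CI/CD and incident tools.
deployments = [
    {"committed_at": "2025-03-01T09:00", "deployed_at": "2025-03-03T17:00", "caused_incident": False},
    {"committed_at": "2025-03-04T10:30", "deployed_at": "2025-03-05T12:00", "caused_incident": True},
    {"committed_at": "2025-03-06T08:15", "deployed_at": "2025-03-06T18:45", "caused_incident": False},
]


def lead_time_hours(record: dict) -> float:
    """Hours from code committed to code running in production."""
    committed = datetime.fromisoformat(record["committed_at"])
    deployed = datetime.fromisoformat(record["deployed_at"])
    return (deployed - committed).total_seconds() / 3600


average_lead_time = sum(lead_time_hours(d) for d in deployments) / len(deployments)
change_failure_rate = sum(d["caused_incident"] for d in deployments) / len(deployments)

print(f"Average lead time for changes: {average_lead_time:.1f} hours")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

Tracking these indicators before and after introducing AI tooling is one way to show the clear ROI the program aims for.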

Fred Miskawi

The idea is we're all improving together. We're leveraging these solutions as part of the life cycle so that we can all succeed together.

When we talk about measuring ROI, we're talking about these various tools that can do some amazing things. And now we start talking about vibe coding, which means that someone who's not necessarily a developer on a day-to-day basis, like me, can start producing a lot of really cool, interesting value. But when you look at these solutions, our clients can use them too. They can sign up for a subscription. So, what value do we bring?

Steve Zemanick

That's a great question. And we've been working with generative AI across our own IP, our own software, at CGI as well. The first thing we bring is that experience. We've been there and done it, for clients who are just starting on the journey of using software development assistants. We have those lessons learned. We know how to navigate those waters. We also try to stay as up to date as possible on all the different platforms out there that support the ecosystem. And then we do our own benchmarking as well, based on the use cases that matter very much to CGI and our clients.

And then the second thing we do is training and coaching, making sure that developers are enabled to use the new tools the right way, since they're so accessible.

And then thirdly, change management is a huge part of this. We're going to see zero efficiency gains if folks aren't using the tools at their disposal, right? We want to make sure that, based on the persona and the behavioral dynamics of each developer and technical user, they're incentivized to use these tools and that it makes their life better. One way we do that is through periodic polling and surveys to make sure that their perceived efficiency is up and that their satisfaction is also up, right? We want them to be happy, and we want to make their experience of developing software a pleasure. By doing that, we feel that's a key to maintaining adoption as well and to maximizing the investment in the tools provided.

Fred Miskawi

Yeah, great point, Steve. And Kevin, we see organizational change management as a critical component of the acceleration of the deployment of this technology across our teams, our client teams. What are some of the techniques that you've employed to make sure that the developers, the testers, the business analysts that you work with learn how to best use these solutions to benefit them?

Kevin Beaugrand

Yes, the main technique we have employed is to create dedicated learning paths for them, explaining how these kinds of tools can be used for their daily tasks. They don't know how their job will change or which way it will shift. We aim to show them the key features they can use in GitHub Copilot, Microsoft Copilot, ChatGPT and so on, and to tell them what kinds of tasks they can automate safely.

For developers especially, we also aim to give them the capability to build their own tools, because we can create agents and bring those agents into GitHub Copilot to provide more automation for their tasks. We also want to give them the keys to build their own systems and tools specifically for their projects, because every project is specific. GitHub Copilot won't work ideally with every kind of project, and we want to be sure they can adapt these tools to their projects, their specifics and their practices.

Best practices learned from implementation and scaling

Fred Miskawi

And that, I think, Kevin, brings us into the third segment of the podcast, where we talk about best practices and lessons learned from implementation to scale. One aspect I think we haven't necessarily covered just yet is how we scale. We bring this to production. We can do things like what we call quick gen with vibe coding: rapid development of a prototype or an MVP, which can pile up technical debt.

But we've got to take whatever it is that we're working on in tandem with the agents, and we've got to bring it to production. We've got to scale it. We've got to maintain it. And as we know, the life cycle of the software extends well beyond that initial deployment to production. So, Kevin, how do we scale? How do we make sure that this is solid and secure in production?

Kevin Beaugrand

As Steve mentioned, we already have some quality gates in place for our projects. To scale the use of AI for more productivity, we have to be sure that every quality and security gate is in place at every stage of the project. We also have to be sure that our teammates have sufficient skills.

Sufficient skills also means telling them how they can be confident in the agent working with them and how they should make a decision based on the agent's work. My real question is how we maintain developers' skills and upskill them, so that they can use AI agents for development but can also maintain the application, and the agents themselves, according to software development best practices.

Fred Miskawi

Yeah, and that's the human component: adaptability, right? I think there are really three key skills everyone needs to exercise over the coming years. Number one is adaptability, the ability to relearn, because in some cases we've produced models that have been thrown away six months later. Number two is context curation, which is where I see the best developers succeed and grow in this space, especially as we go beyond into production and into scaling: the ability to understand what level of information to pass into the solution so that it can be successful.

And number three is intent framing: being able to articulate exactly what the goal is. What is the mission? What is the intent of the particular interaction, or the outcome we're looking to achieve in a particular production environment? And Steve, from your perspective, anything else we may have left out? How do we make sure that, as we're deploying this value in production, we continue to be able to maintain, operate, scale and potentially reinvent later?

Steve Zemanick

I guess I would reiterate adoption, right? Organizational change management, ensuring adoption, but that has to be measured, right? Making sure that we have usage analytics for these tools, making sure that they are indeed being used. How they're being used is also interesting for adoption: is the tool being used more as a reference, or is it generating code?

If it generates code, what percentage of that code is making its way into the code base and being accepted, right? We want to have those data points to ensure that we're trending in the right direction. And if there are teams that are using it more than others, we want to see how they're using it so we can do that knowledge sharing. Having leaderboards and different game mechanics is also really a way to ensure it scales across teams. Keeping it fun, but also being super productive at the end of the day: that's a magical formula right there. I think those are just two fun key components to look at.
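As a concrete illustration of the adoption data points described here, the sketch below computes a suggestion acceptance rate and a per-team breakdown from a hypothetical export of assistant usage events. The field names and figures are assumptions for illustration, not an actual GitHub Copilot metrics schema.

```python
# adoption_metrics.py - illustrative sketch of AI-assistant adoption analytics
from collections import defaultdict

# Hypothetical usage events; a real program would export these from its tooling's metrics API.
events = [
    {"team": "payments",  "suggested_lines": 120, "accepted_lines": 78},
    {"team": "payments",  "suggested_lines": 60,  "accepted_lines": 12},
    {"team": "logistics", "suggested_lines": 200, "accepted_lines": 150},
]

# Aggregate suggested and accepted lines per team.
totals = defaultdict(lambda: {"suggested": 0, "accepted": 0})
for event in events:
    totals[event["team"]]["suggested"] += event["suggested_lines"]
    totals[event["team"]]["accepted"] += event["accepted_lines"]

for team, counts in sorted(totals.items()):
    rate = counts["accepted"] / counts["suggested"] if counts["suggested"] else 0.0
    print(f"{team}: {rate:.0%} of suggested lines accepted "
          f"({counts['accepted']}/{counts['suggested']})")

# Teams with unusually high acceptance rates are candidates for knowledge sharing;
# unusually low rates may signal a training gap or poor tooling fit for that project.
```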

What’s next?

Fred Miskawi

And now, let's cover the future of this particular aspect of our industry: where the technology is going, what we feel, and what kinds of gaps might be filled by the technology as we move forward. Steve, going back to that topic and the work you've done with different clients, where do you see this technology going? Fast forward a year, for example: what level of difference in value will we be experiencing a year from now compared to today?

Steve Zemanick

It's always exciting to think about the future with AI. You mentioned a year; I think six months, and I could be completely wrong on this front, but the potential is just so significant. It's mind blowing. I think that we'll drastically reduce technical debt, and many of our clients' concerns around the vulnerabilities, issues and maintainability problems within the hundreds and thousands of codebases out there could very well fall by the wayside. At the very least, I think we'll know where the issues are, and then we'll be able to prioritize which ones need to be addressed very quickly. So, that's exciting.

Vibe coding continues to be very popular, and it's interesting to think where that might go next. The other part is that right now we're creating full systems with prompts using agentic development, but those apps are fairly simple, a couple of web pages. The next step is being able to feed it a set of requirements with sufficient detail, probably generative AI assisted and possibly generated. We'll see much more robustness, I think, around the feature completeness of apps that are developed agentically: a lot less churn and more fluidity across that whole value chain of using these tools is what I envision. And then, what does that outcome look like? Well, it's whatever we put into the request. Our intentions will be realized, I think, in a way that more closely aligns with what we intended. That context curation will definitely mature, as you mentioned earlier.

Fred Miskawi

Thank you, Steve. Kevin, what about you? Let's fast forward a year. A lot has happened in a year, multiple waves of changes. We've probably switched our tool sets a couple of times by the time we get to that point. What do you think that future looks like?

Kevin Beaugrand

From my point of view, we might see an increase in developing fast, quick applications for business purposes, directly for the business. Those applications might not be as maintainable as the kinds of applications we build today, but they might provide useful value directly to the business, more than today, and provide more ROI for them.

At the same time, I think we will still have specific applications where we have to maintain the quality, maintain the security of the system and maintain the application over time. It will be interesting to see this change and how developers can work in these two ways. Maybe there will be two kinds of developers, by the way.

Fred Miskawi

Alignment engineers. I like to think of it as the way we're converging quality engineers and software engineers into these alignment engineers. And for me, Kevin, Steve, it will be interesting to watch, or to listen to this, a year from now. But to me, there is a movement right now, the idea that you can push a button and the entire SDLC will be automated and, boom, you're done. And that push for a hundred percent automation, I think, is the wrong goal.

The goal that we should have in our industry and beyond is to understand when the human is required to make sure that we get trusted value. Where is the human role involved in the process? And I think when we fast forward a year from now, we're going to have much finer tuned control and understanding of where the human is critical to that process and critical for years to come.

Thank you both, it was amazing. But as we close this latest podcast episode, to me, it's clear that the conversation around AI and software development has to evolve. As we look forward to the future of more autonomous systems, the real challenge isn't just keeping up with the technology, it's about building the human framework around it.

Kevin, Steve, it was a pleasure.

Thank you for being here. And to our listeners, thank you for your attention.