How To Ensure Project Success Through
Risk-Adjusted Scheduling

Join InEight on this webinar as we break down how our risk-adjusted scheduling feature can help your next project succeed.

Australian-based general contractor Decmil is transforming the planning and execution of its long-term CAPEX projects through the use of InEight Schedule to perform risk-adjusted scheduling. 

In this conversation-based webinar, you’ll learn how Decmil is improving productivity and minimizing risk by leveraging an interval planning solution that’s collaborative, instantaneous and highly predictive. You’ll also find out how this new approach significantly increases execution confidence and the achievability of the original project CPM schedule.

Transcript

Michael:

Welcome everyone to today’s livestream event, How to Ensure Project Success Through Risk-Adjusted Scheduling. Michael Maslen’s my name. I’m the APAC solution director for InEight, and I’m your facilitator for today’s session. We’re coming at you live from Sydney, Australia where we’re talking to Decmil, one of Australia’s premier CAPEX project delivery providers. Decmil offers complex multidisciplinary construction and engineering solutions to a wide range of clients in the resources and infrastructure industries. The company prides itself on delivering outstanding project management and delivery, regardless of scale and complexity.

Michael:

Today’s event is brought to you by InEight, the world’s newest tier-one technology solutions provider to organizations that specialize in capital projects. Serving more than 750 customers worldwide, InEight helps owners, contractors and engineers minimize risk, control costs, increase efficiency and provide greater visibility via connected data in a unique and contemporary way for superior project outcomes. Our host today is Dr. Dan Patterson, chief design officer at InEight, where he’s responsible for product strategy and innovation for the InEight platform. Dan is a globally recognized project analytics pioneer and three-time serial entrepreneur in the project management space, bringing both Pertmaster and Acumen to the masses. His most recent venture was BASIS, which was acquired by InEight in 2018 and subsequently rereleased as InEight Schedule to lead the industry with AI-enabled planning, scheduling, and risk capabilities.

Michael:

Dan’s greatest expertise is in complex project management, particularly advanced scheduling, risk management, project analytics and applying artificial intelligence to reduce risk and increase profitability. Dr. Dan is talking to us from deep in the heart of Texas in the USA. Joining Dan this morning is Reuben Burns, group manager of project controls at Decmil, where he is currently overseeing implementation of the InEight platform throughout the entire Decmil business. Reuben is a chartered construction manager, MCIOB, and project controls professional with a career spanning more than 18 years and dozens of large projects in both the UK and here in Australia.

Michael:

He has led delivery of dozens of multimillion-dollar projects across a wide range of sectors. With a passion for digital utilization and BIM, Reuben specializes in creating and implementing planning, 4D planning and integrated project control solutions to enhance construction management models and help resolve disputes. Rounding out our speaker roster this morning is Áine Flannery, Group Manager of Commercial and Risk at Decmil, where she provides continuous analysis of project performance. She reports on Decmil’s commercial, project control and overall risk functions by identifying potential for risk, working with projects to respond quickly to those risks and then implementing countermeasures. Áine has more than 15 years’ experience in the resource and construction industries, is a qualified financial advisor and a member of the Royal Institution of Chartered Surveyors, and has assisted in delivering complex multibillion-dollar projects across the oil and gas, mining, renewables and infrastructure sectors.

Michael:

Reuben and Áine are talking to us from Perth, Australia. Now, before we get started, I’ll remind everyone, you have the opportunity throughout today’s event to ask questions via the chat box. Please feel free to key them in, and we’ll answer as many of them as we can during the Q&A period. And with that, I’ll hand off to Dr. Dan. Dr. Dan, over to you.

Dr. Dan Patterson:

Thank you, Michael. And a very big thanks for the opportunity to spend some time this afternoon, this evening or this morning, depending on where you’re dialing in from, discussing a topic that I am incredibly passionate about. I’ve given over 20 years of my professional career to looking at how we can improve the science of what we call CPM scheduling. And so today, to talk about adding the dynamic, the dimension, of risk onto scheduling, I’m especially excited. And Reuben, Áine, the three of us go back a long way working in the cost and schedule space. So thank you both for spending the time, and I’m looking forward to our discussion.

Dr. Dan Patterson:

So before we get into the details, maybe we start with you, Reuben: a little background as to your role at Decmil and how that relates back to risk-adjusted scheduling.

Reuben:

Yeah, thanks Dan. So, I joined Decmil in January of last year. And I think it’s fair to say my primary focus in that time has really been around the implementation of the entire InEight suite. And as we’ve broken the back of that project in the project controls space, there’s been a natural shift of focus back to more conventional time, cost and risk management. So, yeah, we’re now placing a concerted effort on getting Schedule live on projects and in the pre-contract space. So it’s a timely chat to be having.

Dr. Dan Patterson:

It’s interesting, you mentioned pre-contract there, and I think this is something that the industry is finally waking up to, is the concept of actually developing a true CPM schedule in the very, very early stages of a project versus closer to execution.

Reuben:

Yeah, I think everyone in our space will be familiar with the fact that tender schedules are far too often thrown together in a tight tender period, and perhaps not given the love and attention they deserve once contracts are awarded and teams are mobilizing on site, and it’s almost too late at that point. So, placing extra emphasis on the front end, getting our schedules where they need to be, and at least having a firm understanding of where they are risk-adjusted, stands us in better stead to deliver on our promise and hopefully gives our clients more certainty and confidence. So, yeah, I think it’s equally important in the pre-contract space.

Dr. Dan Patterson:

I agree. And Áine, if I’m not mistaken, you have quite the reputation with regards to enterprise risk and also cost management.

Áine Flannery:

Yeah, well, to be honest Dan, I’ve been with Decmil now just over a year. And part of that role, from a group perspective, is giving that support to the project teams, and, as you rightly allude, from the pre-contract space right through delivery to project closeout. We’ve found that all too often you can have your schedule, but if you haven’t really built the risk in from the outset, as Reuben pointed out, you run into trouble. Sometimes we’re given a date by our client, and from that date we’re scrambling to try and make sure we can put our program together and hit those dates. It’s not always as structured as it should be, and that’s the industry as a whole. We’re always brought back to those dates. So it’s quite tough, between both scheduling and cost, to try and get that balance right within the programs in the pre-contract space, and then also try to win the business.

Dr. Dan Patterson:

I agree, and I think we’ve come a long way in the last 20 years. Monte Carlo and risk analysis have been around for a long time. But in the early days, we as software vendors, and many projects using the tools, treated cost risk and schedule risk almost completely independently: you build a cost risk model, you build a schedule risk model. Thankfully, probably in the last three or four years, technology and understanding have evolved to the point where we realize that a cost risk model is partially influenced and driven by the schedule itself. And so I think it’s so important that the two models are intertwined, joined at the hip. That way, you can see the degree of cost risk exposure coming from potential schedule overrun.
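
Dan’s point about the two models being joined at the hip can be illustrated with a toy Monte Carlo sketch (the numbers, names and model here are purely hypothetical, not InEight Schedule’s implementation): time-dependent costs such as site overheads scale with the simulated duration, so schedule overrun flows directly into the cost distribution.

```python
import random

random.seed(42)

def sample_duration():
    # Hypothetical schedule risk model: a 300-day plan with
    # triangular uncertainty between 280 and 400 days.
    return random.triangular(280, 400, 300)

def sample_cost(duration_days):
    # Direct costs carry their own uncertainty...
    direct = random.triangular(9.0e6, 12.0e6, 10.0e6)
    # ...while time-dependent costs (site overheads, prelims)
    # scale with the simulated duration, linking the two models.
    daily_overhead = 15_000
    return direct + daily_overhead * duration_days

# One integrated iteration samples a duration first, then derives cost.
costs = sorted(sample_cost(sample_duration()) for _ in range(10_000))
p50, p80 = costs[5_000], costs[8_000]
print(f"P50 cost: ${p50 / 1e6:.2f}M  P80 cost: ${p80 / 1e6:.2f}M")
```

A one-way flow like this (duration sampled first, cost derived from it) is the simplest way to capture the coupling; a fuller model would also correlate the direct-cost and duration uncertainties.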

Áine Flannery:

Absolutely. Because I think, in the industry, a lot of people have always thought primarily cost is king, really. So you’re looking at the cost risks straightaway, not necessarily the schedule risks. And if so, they’ve been quite independent. There hasn’t really been a collaborative effort between the teams, which is what Reuben and I are now trying to work on. And with the help of the InEight Schedule suite, we’re finding a lot of pluses within that. There’s always been this degree of prolongation if the schedule gets moved out, but maybe that doesn’t pass straight back through to the quantity surveyors or the commercial people. They’re focusing on the actual change events that happen, but not necessarily picking up that prolongation that naturally flows through with any schedule risk. So that’s the beauty of the system now: having the two of those combined.

Dr. Dan Patterson:

And I think it’s been somewhat of a mindset shift, a change in the way we think about risk. In the early days, yeah, it was fine to have separate models, but now the realization of tying schedule to cost is so much more valuable. So, Reuben, talking of Schedule: the risk stuff is fantastic, and we’ll talk a little bit about risk insight and what it brings to the table later in the session. But let’s just chat a little bit about the importance of building what I call a structurally sound schedule even before we get to the recipes. I’ve preached for years that a schedule mustn’t have any missing logic or hard constraints that cause negative float, things like that. And I know at Decmil, you guys are very pure and passionate about making sure that the underlying schedules you plug into your risk models are sound in that respect.

Reuben:

Yeah, yeah. 100%. And I think schedule quality and the health of the schedule is paramount before you even start to think about schedule risk analysis. Without it, the results are going to be flawed. So we’re making that the first step in our approach to any schedule risk analysis we carry out. But in general, too, I don’t think we’d be alone; I’m sure there are many out there who have come across schedules that really are produced just to tick the box and satisfy a contractual provision at the start of the project. They haven’t got the mechanics, the CPM mechanics, to effectively manage time. And that really plays out and becomes apparent when you start to look at any sort of forensic work.

Reuben:

And I think when you look back on projects retrospectively, or try to analyze any form of delay, that’s when it really becomes apparent to the wider audience how important it is. Because that’s when it really hurts. And I’ve been witnessing that of late. I like to think now we’ve got a really good quality team of planners who understand that, who get it. And we typically run some fairly basic DCMA-type tests on our schedules going into tenders, and each month, just to maintain the level of quality that we should. So that level of quality is definitely a key factor in everything, not just risk: planning and project controls across the board.
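
The “DCMA-type tests” Reuben mentions (the DCMA 14-point schedule assessment) can be approximated with a few simple checks: missing logic, negative float, hard constraints and excessive durations. A minimal sketch, assuming a hypothetical activity-record format rather than any tool’s actual export schema:

```python
# Two illustrative activities; field names are made up for this sketch.
activities = [
    {"id": "A100", "duration": 10, "preds": [],       "succs": ["A200"], "total_float": 5,  "hard_constraint": False},
    {"id": "A200", "duration": 95, "preds": ["A100"], "succs": [],       "total_float": -3, "hard_constraint": True},
]

def health_checks(acts):
    n = len(acts)
    # Missing logic: activities with neither predecessors nor successors.
    missing_logic = [a["id"] for a in acts if not a["preds"] and not a["succs"]]
    neg_float     = [a["id"] for a in acts if a["total_float"] < 0]
    constrained   = [a["id"] for a in acts if a["hard_constraint"]]
    # DCMA flags activity durations longer than 44 working days.
    long_tasks    = [a["id"] for a in acts if a["duration"] > 44]
    return {
        "missing_logic_pct": 100 * len(missing_logic) / n,
        "negative_float_ids": neg_float,
        "hard_constraint_ids": constrained,
        "high_duration_ids": long_tasks,
    }

report = health_checks(activities)
print(report)
```

Real assessments apply thresholds (for example, no more than 5% of activities with missing logic) across the full 14 metrics; this only shows the mechanics.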

Dr. Dan Patterson:

And I think too, from an owner perspective, if the owner can see that you’ve gone to the lengths of doing those checks and balances, that schedule critique, I genuinely believe it helps you during the bidding process, somewhat irrespective of how you fare on cost and schedule against other competing bids. The fact that you’ve gone through that diligence process.

Reuben:

Yeah, yeah. I think every project’s unique. And the fact is, we see all too often these completion-date constraints handed down as deliverables in a tender document. You’re instantly on the back foot, and it sort of forces you to ignore the forward pass, because you’re working backwards straightaway. And I think that pushes planners into a position where they need to try and constrain, or add certain odd logic into programs, simply to satisfy a conforming bid. So that’s a challenge in the sort of contracting model you’re involved in on a certain job. If you’ve got to make use of constraints to fit in with someone else’s program, for example, we see that fairly often. And so, on the other side, I don’t think it is quite as easy as just running a DCMA test and making sure there’s a lot of green and not much red. I think we’ve got to get a little bit more dynamic about the way we approach schedule quality. But we’re definitely using Schedule to enhance our process, and we’ve seen some early wins already.

Dr. Dan Patterson:

That’s excellent. So let’s talk a little bit about the evolution of scheduling and risk analysis from a collaboration perspective. Again, I think in the old days, the planner or scheduler would sit in front of his or her PC, and they would literally own the schedule, right? And their inputs were the basis of forming that forecast. And, thankfully, in more recent years came the recognition that planning, estimating and forecasting is, of course, a collaborative effort. So finally, the tools have caught up with that. In recent years, probably two years ago, we introduced the concept of what we call the ‘markup process’, where yes, a planner or scheduler or cost estimator can still develop their underlying forecasts, but then they can push that out to the team and ask for their feedback. And those team members get to do that in their own layer, what we call a ‘markup layer.’

Reuben:

Personally, I’m not a seasoned risk practitioner. A year ago my grasp of schedule risk analysis was good but more theoretical than practical. And what’s been really enjoyable is seeing the other project team members go through that process too. I think the collaborative markup process allows them to take it offline, go through it in their own space, and then come back in a collaborative environment and talk about what they’ve done, what others have done, and what it actually means for the overall risk profile of the project. It also flushes out quality feedback from different views on the project. By inviting commercial heads, supervisors, superintendents, project managers, and even senior management, you get different views: they go instantly to different areas of the program and focus on different components. And I think by opening it up to the wider audience, you get more substance in the risk markup process. Whether you choose to adopt that or not comes in the next workshop stage. But you’ve captured that markup. That’s really quite powerful.

Dr. Dan Patterson:

So Reuben, I made two notes there. You touched first of all on overcoming the complexity of risk analysis and the likes of Monte Carlo simulation. And again, in the early days, we rightly or wrongly, probably wrongly, exposed the team members to that complexity. We would ask them, ‘Well, can you define the scope of work by defining a range of minimum, most likely and maximum, and the distribution type, whether it’s triangular or uniform?’ And recently it’s been realized team members don’t care and shouldn’t care about the mathematical modeling. Their expertise is, ‘Well, I’ve worked on this kind of project before, and I understand how long this scope of work takes and how much money is involved, and the quantities and the labor and the materials.’ And that’s the knowledge that we’re trying to extract.
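
The three-point estimates Dan describes are what the team supplies; the distribution choice and the simulation itself stay hidden from them. A minimal sketch of what happens under the hood, with hypothetical values and a deliberately simple serial chain of activities:

```python
import random

random.seed(1)

# Three-point estimates captured from the team (made-up values):
# the team gives min / most likely / max; the tool, not the team,
# handles the sampling distribution.
scope = {
    "earthworks": {"min": 20, "ml": 25, "max": 40, "dist": "triangular"},
    "structure":  {"min": 30, "ml": 35, "max": 50, "dist": "triangular"},
    "fitout":     {"min": 15, "ml": 18, "max": 30, "dist": "uniform"},
}

def sample(est):
    if est["dist"] == "triangular":
        return random.triangular(est["min"], est["max"], est["ml"])
    return random.uniform(est["min"], est["max"])  # uniform ignores the mode

# Simple serial chain: total duration is the sum of the three scopes.
totals = sorted(sum(sample(e) for e in scope.values()) for _ in range(10_000))
print(f"Deterministic plan: {25 + 35 + 18} days")
print(f"P80 risk-adjusted:  {totals[8_000]:.0f} days")
```

The gap between the deterministic total and the P80 figure is the kind of risk adjustment the webinar title refers to.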

Dr. Dan Patterson:

And so making that markup interface as easy as possible and shielding the team members from the mathematical complexity, I think, has been really key. And then secondly, the concept of voice. We’ve talked about this before: in days gone by, it was the loudest voice, or the person highest up the org chart, who would have the strongest input into the risk model, which just wasn’t fair and wasn’t right. And so, as you said, being able to independently go off and provide your input, with that input equally weighted, and then having a constructive, interactive schedule review is… It sounds obvious, but it’s a much more effective way of capturing the true range of inputs to go into the model.

Reuben:

Yeah, bias is huge. Because you’ve got certain egos and you’ve got the project team members with a certain strategy. You know, the results of this process of what you’re doing are going to be presented at the end-of-month project review, and it will have a direct impact on the numbers you’re reporting. So there are a lot of reasons why the input through the markup and review process could be skewed. So yeah, we looked at that as a tool to get through that, to see past bias, to see past any strategy of certain members. And that’s been a real success. So we make sure that there’s… At the moment, there’s a minimum of two project team members, but I think as the software spreads through the business and people become more familiar with it, that will naturally expand to a wider team or audience being invited into that review process. And I think the more the merrier. The trick is how you then adopt or reject that feedback, and how you sort of work that process out in the final kind of workshop.

Dr. Dan Patterson:

So the concept of enabling team members to contribute and collaborate, I think, in some ways validates the schedule. And then on the other side of the fence, we’ve introduced what we call ‘augmented intelligence,’ or AI. That’s really more of a calibration exercise where, over time, the computer can capture past projects at Decmil, store them in what we call a Knowledge Library, and then, in the interactive schedule review you alluded to, not only can a facilitator look at and agree on the inputs from the humans, but the computer also acts as a safety net. Because it can go back and benchmark against those prior projects and give you a very strong indication as to whether you’re within the realms of normality, either for the base schedule itself or for the human input.

Reuben:

Yeah, yeah. And we’ve somewhat matured in our understanding and grasp of that concept. Since about mid last year, we started populating as-built programs into the Knowledge Library and ‘decorating’ them, I believe is the term, with the smarts and lessons learned to date. We’ve been actively doing that in the background in our test environment, and it’s built up quite nicely over time. Now that we understand how to use it, we’re shifting to our end-state production environment, and it’s about drip-feeding across what was of value in the previous Knowledge Library and making that work. So it’s not something that’s come into full use at a project level just yet, but there’s a good body of data already populated that’s ready to bring in from an AI point of view. I think over the coming months we’ll start to see the benefits, because Decmil as a business is uniquely positioned to use that kind of platform: we typically do projects that are fairly similar in scope, size and complexity. Mining camps, main-roads-type jobs and renewables, wind farms, solar farms. They’re all fairly similar in structure and somewhat repetitive. So I think the Knowledge Library and smart suggestions will really play out.

Dr. Dan Patterson:

And I think what you’ll find very interesting as you go forward is the concept of machine learning. As the Decmil Knowledge Library makes its suggestions, the team can either adopt those suggestions or reject them and push back. Having the knowledge, the ‘smarts’ as you call it, automatically adjust based on the behavior and feedback that you as the project teams give that Knowledge Library will, I think, prove a really interesting journey.

Reuben:

I’m hoping it will come full circle back to the first thing we talked about, and it will help us reverse engineer durations for certain tenders and opportunities, and allow us to provide a slightly more objective view on durations to potential clients. For us, it’s a great opportunity to add additional value on early-contractor-involvement-type work, where we can share past lessons and past productivity and performance data. My hope is that it will add just as much value in the pre-contract space and at the conceptual stage.

Dr. Dan Patterson:

Again, I think it’s going to give you more and more defense and justification as to the bids that you’re putting forward. So.

Reuben:

Yeah.

Dr. Dan Patterson:

Let’s talk a little bit about reporting. In days gone by, I think we suffered as an industry from overly complicating the report, certainly with regards to risk. And what I’ve learned is that, at the end of the day, there are really two key risk reports. There’s what we call the ‘What,’ or the ‘risk histogram,’ which tells you, ‘Okay, your risk exposure is x and your range is y and here’s your confidence level.’ And then there’s the ‘Why,’ which is, ‘Well, which areas of my schedule or cost estimate, or which risk events, are causing that particular level of risk exposure?’ And I know you guys have some thoughts on how to present the results upwards to executive stakeholders and leadership.

Reuben:

Yeah, yeah. I think working in the project controls space, you can get pretty sick of charts and graphs quite quickly unless they’re actually meaningful. They tend to blur the lines a lot. And for reporting purposes, you need to get to a balance where what you’re looking at makes sense and quickly informs the decisions you need to make. And I think a lot of that is ‘less is more,’ especially when you’re talking about risk and the mechanics of Monte Carlo simulations, tornado charts and the like.

Reuben:

We need to group the output at a fairly high level into real, meaningful trends, if you like. And we’re just doing that now. Consistency of reporting is also a big one for us. Prior to looking at InEight, we worked in different regions and different sectors, and as is fairly consistent in the construction industry, you have a certain level of turnover. So you have a lot of different project managers with different ways of doing things, and that translates into a lot of different types of reports coming through. I think half the battle is getting some form of consistency and meaningful data. So we’re looking at Schedule right now, at the reporting out of the Monte Carlo simulation, the tornado-graph-type reports, and asking what executives really need to know as a snapshot. They need to know what they need to know to make decisions. If nothing needs to happen, they don’t need to know about it. But they really need data trends and analytics that are meaningful, that prompt decisions or discussion.

Dr. Dan Patterson:

So, you bring up the topic of trending. And before we touch on that, I think one of the great steps forward we’ve made as an industry in the world of risk management is the realization that reporting is less about risk, because risk only tells you the bad stuff. At the end of the day, you really want to understand your confidence of achieving the forecast you’ve put together, from both a cost and a schedule perspective. And to add to that, on the trending front: don’t just do a one-off risk analysis and say, ‘I’m done.’ And Áine, I know this is something we’ve talked about, this concept of doing repeated risk analyses and looking at the trend of confidence over time, using that as a yardstick almost in the same way you would use, for example, earned value metrics.

Áine Flannery:

Yes, and I think that’s been key for us. When we previously spoke, I think we talked about some jobs where a lot of change events had happened. And while we would have done a risk analysis, the schedule was getting updated from a scope-of-work point of view and that change wasn’t getting fed back to the planner. So the two were running concurrently rather than as a collaborative approach to the risk, and we were not picking up the change as it was live.

Áine Flannery:

So what we’ve tried to do now, as we said with the markups, is encourage that they happen on a monthly basis, and that they’re reviewed by end of month at an executive level. The point being that we predominantly have a live risk analysis on both a cost and a schedule basis. We don’t want to end up three-quarters of the way into the job and then look retrospectively to see, ‘Okay, where did it go wrong?’ The beauty of the system now is that by having that information live, we’re able to mitigate and have the foresight to see what’s coming down the road. Because at the start, when you do your risk analysis, obviously you can’t pick up everything. As the job evolves, things are happening and changing. And if you don’t keep that information live, and you just do your risk schedule and put it on the shelf, you’re going to get into trouble. And these days, with the contracting model, you really have to get your notices in to the client on both the cost and the schedule side. So, if you’re not keeping those live, you can already be time-barred on top of carrying the risk. So it’s really important for us.

Dr. Dan Patterson:

It’s interesting, you use the concept of live documents, live schedules, live risk models, and I agree 100%. I’ve always believed that you don’t just build a schedule or a cost estimate or a risk model. This thing is a living, breathing entity that should be continued all the way from concept select through early stage, through award, through detailed design and execution, all the way to closeout. And these days terms like ‘Digital Thread’ are coming into play, which represent this continuity of information. Because, as you said, in the early stages you may only have a limited understanding of what those risks are, and then you evolve them over time. Not having to start from scratch every time you do that risk analysis is, again, a huge benefit.

Áine Flannery:

Yeah, I mean, it’s going to feed into the Knowledge Library too. So, everything that you can learn, if you’ve got another job starting at the same time and you’re a little bit further down the road in one of your other projects, as long as you’re keeping that live and you’re feeding that Knowledge Library, then you’re getting those early wins on your other projects as well. So it’s quite circular, the benefits.

Dr. Dan Patterson:

Yeah. We’re really excited about the concept of the Knowledge Library and this augmented intelligence that helps make those suggestions. And honestly, I’m less excited about the technology itself; I’m more excited about what it can actually bring. Because that feedback loop is so powerful. Anything we can do to avoid a project starting from scratch, with a blank sheet of paper and a blank CPM schedule, and instead reuse lessons learned, even lessons learned from projects that went terribly wrong, is okay. Because-

Áine Flannery:

Yes. That’s where most of the lessons may be.

Dr. Dan Patterson:

Exactly. So.

Áine Flannery:

Yeah, we have a lot of people in Decmil. We’re lucky we’ve got people who have 30-35+ years of experience. And as you said, the system or the technology is one thing to bring it together, but you can’t buy that experience. And we’re really lucky with people in the infrastructure industries specifically, and some of that knowledge is coming through to our engineers. It’s just key for us to be able to then build a program in a different state or on a different project, and they can go into that Knowledge Library and see straightaway, ‘Okay, this is what you need to be looking out for.’ You can’t buy 35 years of experience. So yeah, Reuben and I have been keen to try to lock some of those down, and it’s going to be quite exciting in the future for us to be able to work with the teams and deliver some of those knowledge tags once the schedules have been done.

Dr. Dan Patterson:

So you mentioned knowledge tags: the concept of being able to classify historical knowledge and say, ‘Well, this project was a certain type or in a certain geographical location.’ That way, when the computer goes off and makes its suggestions, because it understands the context of the project you’re working on today, it can factor in that historical information and say, ‘No, you worked with a certain subcontractor previously; their track record was mediocre at best; therefore, we’re going to make an adjustment accordingly.’ So again, the concept of knowledge tags is definitely a big help.

Áine Flannery:

Yeah.

Dr. Dan Patterson:

So Reuben, I know in our previous discussions you’ve come up with a lot of really good ideas and you’ve been asking when some of these things are going to make it into the real world. So maybe we can chat a little bit about your vision going forward – maybe Reuben on the schedule side and Áine on the cost side – where would you like this to go as an industry? And what are some of the pain points today that you feel we should still be addressing?

Reuben:

There’s one thing we talked about, which I’m not entirely sure how to frame right now, but one thing we’d like to look at is that, unfortunately, claims happen and disputes happen. And when they do, they involve quite an intense body of work to retrospectively look at the programs and look at cause and effect. We’ve had some experience recently where we’ve tried to be quite smart by looking at SPI trends and CPI trends on projects, and at how certain factors on a project really influence loss of productivity and efficiency in areas. And certain things we’ve been looking at, we don’t want to let go; we want to sort of frame that, take it forward, and then what we’ve been…

Reuben:

What we were thinking the other day is if you could somehow sort of track SPI and general productivity across, say, various windows or looking back on the as-built schedule, and then do some comparative analysis as to risks you forecasted at the beginning. Obviously, we can’t push schedules that have been and gone. But what we want to do is, when we add discrete risks as part of the markup process, we want to maintain them. When were they raised? What did they inform us of? And don’t just let them roll throughout the project. Look back and see how they’ve influenced productivity. So if you report a risk in the first month and then the following month, you’re actually updating your risk model with an updated progress schedule, how did that risk play out? What did the productivity against that activity show us? And if we could start looking at that, then I think you can start to use the risk framework or the risk model for more than just risk management. For communication purposes.
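
Reuben’s idea of tracking SPI across reporting windows and tying it back to forecast risks can be sketched very simply. SPI is earned value divided by planned value for the window; the monthly figures and the 0.9 alert threshold below are illustrative only:

```python
# Windowed SPI tracking with hypothetical monthly $k figures.
periods = [
    # (month, planned_value, earned_value)
    ("Jan", 400, 390),
    ("Feb", 500, 430),   # a discrete risk was raised against this scope
    ("Mar", 550, 440),
    ("Apr", 500, 490),   # mitigation in place
]

for month, pv, ev in periods:
    spi = ev / pv  # schedule performance index for the window
    flag = "  <- investigate: did a forecast risk materialise?" if spi < 0.9 else ""
    print(f"{month}: SPI = {spi:.2f}{flag}")
```

Tying each flagged window back to the discrete risks raised against that scope is the comparative analysis Reuben describes: did the risk play out, and was the contingency held against it enough?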

Dr. Dan Patterson:

So, I have a huge smile right now, Reuben. We could probably talk about this for three hours, and I know we only have a few minutes left. But it’s so interesting because when you talk to people about projects that overran, first of all, they blame execution. And then secondly, they say, ‘Oh, it’s because of poor productivity.’ Well, poor productivity is not the root cause of poor execution. As you said, it’s factors that occur – whether it’s risk, scope change, so on and so forth – that result in productivity being less than what we forecast. And so rolling that back, if we built our risk models, surely then we should be able to correlate the areas of high risk with the areas that resulted in poor productivity, so that the next time around, the computer can say, ‘Hey, look, you’re probably going to have less than optimal productivity in this area, because historically this risk happened, this risk happened, these quantity growths occurred,’ and things like that. This would really get us into next-generation risk analysis, because you’re tying field execution, as-builts and actuals back into that feedback loop to the Knowledge Library, which helps with the risk analysis.

Reuben:

Yeah, and also for progressive adjustment in terms of how you deal with a certain risk. So if you have a lot of risk events in your program and then they’ve been reviewed and then there’s this general discussion as to, ‘Right, how do we treat that? How do we actually adopt that risk in terms of what additional costs are we going to forecast?’ You make that call in the first month, let’s say, and then if you move into the second, third, fourth month and you realize that actually, perhaps the contingency we put aside to deal with that risk isn’t quite enough. Look at our productivity, look at the impact of it. We need to treat this a little bit more seriously and really have a deeper look at what’s happened here. And I think that’s a proactive step to keep going back to these risks, keep looking at them. I don’t know how that’s gonna work just yet. But that’s your job, I think.

Dr. Dan Patterson:

So, maybe a question or comment on your… I mean, I think now we’re getting into the treatment of risk and risk mitigation. And I think if we can provide more insight into a cost benefit of planned mitigation strategies, that helps with sort of the proactive approach to risk rather than reactive.

Áine Flannery:

Absolutely, and we’re starting to use that a little bit within the change module. Because we’re obviously getting the issues live from the guys in the field through compliance – picking up issues, and that can be an issue they think is going to come up but hasn’t yet. We’ve hit the issue with the foresight of what potentially could come down the road, and we’re using those guys to give us the kind of rough-order-of-magnitude estimate of what they think it may cost, and then working with the quantity surveyors and the commercial people to see that as a change event. For me, my wish in terms of schedule would be somehow to interconnect that with the schedule model, so that we could see how those potential change events, if they do impact the schedule, the cost that they would incur as well. So that you’re not having to do that separately, but they would be integrated, so those change events could potentially flow through, and then we would see the risk from a cost point of view – both from that specific change event and from the prolongation of the schedule. And it would be reporting from that point of view, similar to the Monte Carlo that we currently use for schedule, but from a cost perspective.

Dr. Dan Patterson:

Okay.

Reuben:

That’s one thing I think, I won’t say we at Decmil, but generally, the industry is pretty poor at. And that’s capturing change from a program point of view. And so I think that’s exactly right. If we can bridge the gap between the change management workflow or system and the schedule, to prompt that additional scope to be generated and input into the schedules with the correct logic, I think that’s a big win. Because we’re always trying to perform this analysis, and it ends up being on original scope because it’s not dynamic; it hasn’t changed with the way that things played out. So yeah, if there’s ever software that can help us in our behavior and our approach to planning, let’s definitely… Change management is the big one that we would benefit from.

Dr. Dan Patterson:

I’ve always described change as the anti-plan. The two always fight each other. And so with that, we are, in some ways, de-emphasizing the concept of a traditional risk register and moving more towards what I think of as an overarching project register, which contains not only negative risk in the form of threats and positive risk in the form of opportunities, but pulling in those changes, those change requests, scope changes, quantity variations. Because essentially, at the end of the day, whether it’s a risk or a change, or whatever it is, it’s a lever that’s being adjusted that you need to reflect in that forecast. And so, we are definitely moving down the path of incorporating change into these risk-adjusted forecasting models.

Áine Flannery:

Yeah, that would be great.

Dr. Dan Patterson:

Well, I know we’re coming up on…

Reuben:

What’s the date for that to happen?

Dr. Dan Patterson:

Reuben, you know you can’t ask me questions like that.

Reuben:

Of course.

Dr. Dan Patterson:

So, with that, I think we’re gonna hand it back to you, Michael for Q&A.

Michael:

Absolutely. So there were some excellent points in that discussion, particularly as it relates to change management and, in fact, early warnings as being a key attribute of projects and project success. And thank you for that handover, Dan. And yes, we do want to make sure that we answer some of the questions that are coming in. The first question on the chat is for you, Dr. Dan, and it’s relating to… The question is, ‘How is the capability that you’ve described in the session this morning different from other tools in the marketplace? For example, Primavera Risk Analysis?’

Dr. Dan Patterson:

Right. Interesting question because, obviously, we’ve been very heavily involved in the development of those other tools over the last 20 years. I think the honest answer is all of the products use a very similar simulation-type approach. In other words, they’re emulating the execution of the project over and over, thousands of times, and then looking at the distribution of results. I think where we’ve really taken a step back so that we can take a huge leap forward is in eliminating the complexity of the statistics and Monte Carlo analysis. We’ve eliminated or removed that from the team members, such that team members feel a lot more comfortable now in being able to participate in the building of these risk models. And then I think the second biggest difference is our embracing of technology, specifically around augmented intelligence, and finally realizing that if we can digitalize the team’s expertise, then it’s actually very simple for the computer to turn around and provide that expertise in a digital format back to the next project.
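The simulation-type approach Dan describes can be sketched in a few lines: give each activity an uncertain duration, emulate the project thousands of times, and read percentiles off the sorted results. This is a minimal illustration with hypothetical durations for a simple serial chain, not InEight’s implementation.

```python
import random

def simulate(activities, iterations=5000, seed=42):
    """Monte Carlo over serial activities with triangular durations.

    activities: list of (min, most_likely, max) durations in days.
    Returns the sorted list of simulated total durations.
    """
    rng = random.Random(seed)
    totals = []
    for _ in range(iterations):
        # random.triangular takes (low, high, mode).
        total = sum(rng.triangular(lo, hi, ml) for lo, ml, hi in activities)
        totals.append(total)
    return sorted(totals)

# (min, most-likely, max) durations for three serial activities.
activities = [(8, 10, 15), (18, 20, 30), (4, 5, 9)]
results = simulate(activities)

p50 = results[len(results) // 2]
p80 = results[int(len(results) * 0.8)]
print(f"P50 finish: {p50:.1f} days, P80 finish: {p80:.1f} days")
```

Real schedule risk tools simulate the full CPM network rather than a serial sum, but the principle – thousands of emulated executions, then a distribution of results – is the same.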

Michael:

Excellent. Excellent. The next question on the chat here is for Reuben. And Reuben, this question relates to external stakeholder management. So obviously, Reuben, at Decmil, clients impose schedule constraints that, on occasion, you might believe are unrealistic. So how do you… The question is how do you manage those expectations? And do you submit schedules to the clients that comply with the contract? Or do you submit something that’s, in your opinion, more realistic? And if it’s the latter, how do you, if you like, manage and sell that to the client?

Reuben:

Yeah, I don’t think there’s a broad answer to that. I think it depends on what the tender is, who the client is. And I think contractors generally, not speaking on behalf of Decmil, will have a different appetite for how they approach that, depending on what the market is doing, what their business as a whole is doing and how much they want that job. But what I can say is we’re trying to see past that. So we’re trying to have a neutral view in our approach. And obviously, there are harsh realities and consequences of submitting non-conforming bids. You just simply have to conform to a bid. So more often than not, our programs will have to satisfy a nominated completion date.

Reuben:

But it becomes very evident through the tender process and when we’re planning the works that if it’s not achievable, and if we move into the space of that confidence level dropping to a point where we’re unsure whether we can actually deliver on this, that’s when we need to really take a stand and present some feedback to the client, or potential clients, I believe. And the only way to do that is objectively, with a very clear view on where the risk sits. And I think over time, we’re going to start to use that more and more effectively. And the more we do, and the more we talk to potential clients about exposure on completion dates with that level of substance and clarity and transparency, I think it will become a more confident discussion. And a healthy one as well. And I know Dan’s view on the world when we talk of confidence, and that’s probably a big thing. Should tenders be based on a confidence level more so than a completion date?

Reuben:

I haven’t got the answer. And there are certain practicalities around that. Obviously, contractors want to win work. And I think it would be unfair to make contractors submit a confidence level at this stage, and I think it’s probably a second-wave discussion upon contract award when you’re trying to establish the approved construction baseline maybe. I think that’s probably an appropriate time where contractors have been awarded a job based on merit, and then they can have that next-level discussion on certainty levels and completion dates. I think that’s probably where it will go first.

Michael:

Excellent, excellent. I did have a question for Áine, but it seems like she’s having some technical…

Reuben:

I think she’s having…

Michael:

Technical difficulties. But I’ll pose the question to you Reuben, and I’m sure you’d be able to answer this one. I guess those… The strategy, the rules, the policies and the processes around your risk analysis and schedule analysis workshops, they need to be formulated and agreed upon before those risk workshops are carried out. So the question’s really around how do you, I guess, set that expectation and communicate those new working methods using technology as an underpinning with regards to those processes around risk workshops? So, really, it’s how are you using, I guess, technology to, firstly, facilitate those risk workshops? But also, what about the, I guess, the human element of the strategy and rules and processes and expectations that you need to set with the team before those are carried out?

Reuben:

Yeah, yeah. See, we want to make it simple. And we want to make it comfortable and engaging. And we don’t want to overuse terms like ‘AI’ and ‘machine learning’ and the super-technological side of things, because I think that sort of puts people on the back foot. And I think it can also scare execs and senior management off of what they spend their money on. But when you see it in practice, it’s way different. It’s way more simplistic, it’s logical, it makes sense and people embrace it. So the way we’re approaching workshops is fairly open; we don’t want some extensive process, procedure and guideline that people have to stick to to govern it. We want to make it open and show that the platform’s fairly intuitive and open to all to get involved with. We do follow a simple procedure, and that’s basically four or five steps we’ve got.

Reuben:

First of which is a schedule quality check. And then we look at uncertainty first; that’s the first sort of wave of feedback, if you like, so project team members comment on durations and apply a certainty value against them. Then we look at getting them to input discrete risks. And from there, there’s an initial review and then a more collective, collaborative review before we produce the analytics and the report. So, I mean, simplistically, that’s the kind of structure. And we don’t want to overcomplicate it too much at this stage, but I’m sure over the coming months and years, there’ll be tweaks to that. And the end users will probably be more informed than us at that stage, and they’ll be telling us how we can refine that process.

Michael:

Excellent. I had another question for both you, Reuben, and for Áine. I’m not sure to whom to pose this, but I’ll pose it openly to the both of you. So you’ve talked about the fundamentals of creating the knowledge base, that library of schedules, the golden way, the best way that you’ve run your projects. So what are some of the, I guess, some of the lessons learned and some of the techniques that you use to, I guess firstly, instantiate that Knowledge Library and, as well as that, how do you go about keeping that fresh and current as you’re using the solution more and more over time?

Áine Flannery:

I think for us, Michael, obviously we’re quite new in our journey in terms of InEight Schedule. So we first, as I kind of alluded to earlier, we’ve used some of our senior members who have been around for a very long time to get the foundation right. So we’ve been building it with information from key individuals within the business. We then started trialing new projects that Decmil have recently been awarded that we’re running the whole InEight suite out on. And so we’re in a very early stage. So in terms of key lessons learned from live projects, that’s one of the exciting things that we’re looking forward to, feeding that back in through the Knowledge Library. And so to date, we’ve used historical projects, the dos and don’ts and lessons learned. As said, we’ve used people’s experience from both project controls through to commercial, because a lot of the time, we found with the Knowledge Library, even now, Reuben jumps to construction or the delivery.

Áine Flannery:

I naturally jump to, ‘When is the contract going to be awarded?’ The procurement can’t be done in that small space of time, so don’t even think that you’re going to be laying concrete by x date, because just as much as you want it to be there, there are always going to be issues in terms of commercial clarifications, etc. So it’s just that, as we’ve said, it’s that collaborative approach to building a Knowledge Library. No one person is going to look at it from a very well-rounded experience. And that’s what we can all bring to it: our expertise from our own areas, and that’s how we’ve built it up so far. But we do look forward, when we’ve got it up and running, to the back end of jobs, where we can learn those lessons and also add them into the library. But we’re doing it as live as we can. So, as I said, those jobs are quite in the early phase at the moment. But any lessons that we’re learning from them, we give them the knowledge tag and then feed them back into the library.

Michael:

Excellent. And, Dr. Dan, just to sort of round out that discussion there about building the Knowledge Library, what sort of insider experience would you offer to the attendees to build on Áine’s point there around tools, techniques, key considerations and so forth, around how some of the attendees could, with the appropriate tools in their own organizations, build on or start to build a Knowledge Library and have that, I guess, granularity and clarity of information?

Dr. Dan Patterson:

And it’s a question we get asked very frequently. I think, first of all, it’s important to address a concept a lot of people use the phrase ‘Big Data’ for. Suggesting we need hundreds or thousands of historical data sets in order to do meaningful pattern recognition is actually not true. In fact, and I push this quite hard, you don’t need a very large data set for that Knowledge Library to be useful. We have many organizations that literally start with an unpopulated Knowledge Library. And the way it works is very simple. In the first schedule review, as the team members, the collaborators, are providing their input, that input behind the scenes automatically gets scraped from the project and populated into the Knowledge Library. So in the extreme case where there is no historical information, the Knowledge Library is self-populated. Going more towards the midpoint and the norm, you take a select number of historical projects that typically have both a baseline schedule and also an as-built. And, again, it’s okay that the as-built doesn’t necessarily match the original baseline, because that in itself is useful: the delta between the two gives you insight into historical performance. The baseline schedule says we can progress at rate x.

Dr. Dan Patterson:

In reality, though, the as-built, the delta between the as-built and the baseline, will tell you the variance in that productivity. That variance in itself is super useful in the Knowledge Library. So I think, really, the biggest takeaway is don’t be intimidated as an organization by having to build this all-encompassing, historical knowledge base. That concept isn’t really applicable and, in many ways, wouldn’t help anyhow.
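The baseline-versus-as-built comparison Dan describes reduces to a simple ratio per activity: planned duration over actual duration gives a productivity factor that can seed a Knowledge Library even from a handful of projects. The activity names and durations below are purely illustrative.

```python
# Sketch of seeding a knowledge library from baseline/as-built deltas.

def productivity_factor(baseline_days, asbuilt_days):
    """Planned / actual duration; below 1.0 means slower than baseline."""
    return baseline_days / asbuilt_days

# (baseline days, as-built days) per activity from one historical project.
history = {
    "excavation": (20, 25),  # planned 20 days, actually took 25
    "concrete": (30, 33),
    "steelwork": (15, 15),
}

library = {name: productivity_factor(b, a) for name, (b, a) in history.items()}
for name, factor in library.items():
    print(f"{name}: factor {factor:.2f}")
```

Applied forward, a factor of 0.80 on excavation suggests the next baseline’s excavation durations deserve extra uncertainty or contingency – the variance itself is the useful knowledge.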

Michael:

Excellent.

Reuben:

But from my point of view, I think it will grow in two ways at Decmil. One’s organic and one’s more manufactured. The organic one is that Áine and I are going to catch up with InEight fortnightly. And the purpose of that is to look at schedule risk analyses that are ongoing, that are live, and we’ll decide which risks that have been populated we should add into the Knowledge Library and bring in for future use. I think that’s just a kind of rolling wave effect in the Knowledge Library, how that’s going to build up in detail. But then we will add, there’s been a fair discussion on actually… Once projects are completed, that tail end of a project can be pretty tiring for a lot of people, the final push. And once teams demob and they’ve got a chance to look back on the project, I think we should use Schedule and the Knowledge Library to run proper lessons-learned workshops and have that hard discussion about what went wrong. And that needs to involve project team members and senior management, maybe even execs, to really have a cold, hard look at the facts, and draw out the positives as well and capture them in that type of approach too.

Dr. Dan Patterson:

Makes a lot of sense.

Michael:

Yep, that’s a very, very powerful takeaway, guys. Thank you very much. And that takes us to the end of the time that we have today. For those of you who didn’t get your questions answered, we’ll be compiling them after today’s event and then sharing them in a follow-up email to all attendees. Thank you everybody for attending today’s event, How to Ensure Project Success Through Risk-Adjusted Scheduling. And of course, a very special thank you to our speakers, Dr. Dan Patterson from InEight and Reuben Burns and Áine Flannery from Decmil. We appreciate you taking the time to be with us today. And this concludes today’s livestream. I’ve been your facilitator for today, Michael Maslen from InEight. Have a great day everyone and stay healthy. Thank you.

Dr. Dan Patterson:

Thanks, guys.

Áine Flannery:

Thank you.

Reuben:

Thank you.