Why Everyone in Construction Should Be a Great Forecaster

May 3, 2023

Originally aired on 5/17/2023 | 59 Minute Watch Time
As a construction professional, you are already aware of the crucial role accurate forecasting plays in your capital projects. You know that a well-executed forecast can protect funds and guard against lost revenue, ensure compliance with protocols and contractual requirements, and give you greater control over delivery costs and project payments. But short of a crystal ball, forecasting can be frustrating at best and disastrous to a project if you get it wrong. It may be tempting to just “leave it to the experts.” However, that’s not your best course of action, and here’s why.

In truth, everyone on a construction project uses some element of forecasting, whether dealing with cost or revenue, scope or duration, or labor and resource allocation. Therefore, everyone should aspire to be a great forecaster. But where do you start, and what does superior forecasting look like?

Join InEight’s Product Director, John Upton, and Client Success Director, Megan Siefker, plus special guest, certified Superforecaster and CEO of Good Judgment, Warren Hatch, as they explore and explain what it takes to become a great forecaster no matter where you are in your construction journey.

In this webinar, topics covered will include:

  • How looking into your project’s “rearview mirror” can lead to your best future outcomes
  • Avoiding the dreaded optimism bias by realizing that your data doesn’t lie
  • Why accounting for the future always means accounting for costs within time-phased forecasts
  • The best way to integrate scope, cost, and schedule for timelier, smarter decisions
  • What it means to be a Superforecaster, and how you can start using their best techniques now

Transcript

Ellen McCurtin:   

Good afternoon, and welcome to this webinar, Why Everyone in Construction Should Be a Great Forecaster. This event is brought to you by Engineering News-Record and sponsored by InEight. I’m your moderator, Ellen McCurtin, custom content editor at ENR. Thanks for joining us today. As a construction professional, you’re already aware of the crucial role that accurate forecasting plays in your capital projects, but short of a crystal ball, forecasting can be frustrating at best and disastrous to a project if you get it wrong. While it might be tempting to just leave it to the experts, in truth, everyone on a project uses some element of forecasting, whether dealing with cost or revenue, scope or duration, or labor and resource allocation.

Today we’ll explore what it takes to become a great forecaster, no matter where you are in your construction journey. Now let’s get to our presenters. John Upton is a product director for InEight. He spent over 15 years holding positions such as field engineer and superintendent before delving into the construction technology space. Since then, he’s focused on helping customers bring projects in on time and under budget by strategizing product development and managing roadmaps for the project cost management and field execution management solutions at InEight.

Megan Siefker is a client success director for InEight. She works with construction companies across all markets to map business processes and provide innovative solutions to optimize workflows and drive overall project success. Megan specializes in earned value management and helping companies define standard processes and use of InEight technology to ensure project certainty. Warren Hatch is CEO of Good Judgment, Inc. He joined Good Judgment as a volunteer forecaster in the research project sponsored by the US government, became a Superforecaster, and is now CEO of the commercial successor, Good Judgment Inc., a world leader in applying innovative probabilistic solutions to real-world decisions to forecast the future. Warren has assisted governments and the private sector in improving their foresight by quantifying uncertainty. I’ll rejoin our presenters at the end to answer your questions that come in throughout the webinar, so don’t forget to submit them in the Q&A section of your webinar console. And now I’d like to hand it over to today’s first presenter, Warren Hatch.

 

Warren Hatch: 

Thanks, everybody, for joining us today. For this portion, what I’d like to do is just briefly share a little bit of the background about what Good Judgment is, and why you should be interested in it. My father, by the way, was a civil engineer, so I grew up around engineering and construction. He was a highway engineer in particular, so I’ve got a lot of respect for the many challenges that all of you face. We’ve also done some work with construction and engineering firms with the company, and I’d be happy to talk about some of that as well. And then I just want to walk you through some of the steps that you might find useful as you’re thinking about an uncertain future. And really, that’s all forecasting is, the way we think about it: just thinking about an uncertain future using probabilities.

And so in that sense, pretty much every decision you’re thinking about that you need to make is a forecast. You’re going to make a choice about what to do with an implied probability, whether it’s explicit or not, that it’s going to improve the odds that you get what you want from that decision. And what superforecasting is, is basically a process, a validated, empirically driven process, to improve the odds that you get your desired outcome.

And it came out of a research project that you can see here on a slide. About a decade ago, the US government was really looking deeply for ways to improve the skill sets of intelligence analysts after some pretty significant forecasting failures with 9/11 and weapons of mass destruction and other things. So they launched this big initiative to see if there’s a way to improve on the wisdom of the crowd. Wisdom of the crowd works. That’s a great way to get a rough-hewn forecast about anything, an estimate of jelly beans in a jar, or what’s going to be the most likely candidate to win the next US election, and all kinds of things.

So the question was, can it be done better? And there were a number of research teams. Most of them did not do that well. The team that came from the University of Pennsylvania, led by Barbara Mellers and Phil Tetlock, did very well. And that’s what you can see in the red bars, where they were exceeding, by a wide margin, the goals the US government had set to improve on the wisdom of the crowd. The others, the best of the rest in the gray bars, did not do so well. So they shut down the competition and focused the remaining resources for the last two years of the project to see if there were other things that the team at Good Judgment could do.

And the approach that they took, as I found out later, was really interesting. Rather than have a big idea about what works, they had a lot of small ideas of things that might work. And when you add them all up, they can become a significant improvement in forecast accuracy. What we really mean by that is simply this: you get the best probability estimate as early as possible. And it’s that time advantage that is really valuable. You get an early warning signal that things maybe aren’t going the way you expect. You get a very early read that perhaps an opportunity is arising that you might otherwise miss. And that’s really what it’s all about, to improve the judgment that you’ll be making and improve the possibility of you understanding the issues that you’re grappling with on a day-to-day basis.

And it’s a process. And a process is a checklist. And it’s good to have a checklist for anything that’s process oriented, including forecasting. And you’ll have your own list. But here’s some things that we found in the research and in the company are really valuable to have on any checklist. And I’ll just run through some quick examples that we can explore in more detail later.

The first is the idea of starting with a base rate. So when we’re first confronted with a question, whether it’s “Is this project going to be done on time?” or “Who’s going to win the next election?” or anything along those lines, what usually happens is people will focus on the question itself: the specifics of the project, the personalities involved in the election. And what we know is that it’s better to take a step back, get outside of the question itself, and look for examples from history. How do things like this usually occur? And let those comparison classes (base rate is another term that gets used) tell us something about how often things like this happen.

And then look for the specifics. Why might this time be different? And it might be that this time is different. But it’s important that that be a conclusion that’s reached rather than an assumption that is made. The example that Phil gives in his book is imagine you go to a wedding, and somebody’s sitting next to you and says, “So what are the odds that this will succeed, will last?” And after you get over the horror of being asked such a rude question, most people are going to say, “Well, it’s a happy couple. Of course it’s a match made in heaven. It’ll last forever.”

But the thing is, and this is the point that Phil makes and that base rates show us: imagine you go to 100 weddings; not all of them are going to make it. What you find is that maybe half of them will. That’s the base rate historically, about half of marriages end in divorce. Same thing with projects. How often do projects end on time? That’s the place to start, and then start looking at your specific projects. So, step one: base rate. Really important. If you do nothing else, that’s going to pay dividends right away.
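To make that first step concrete, here is a minimal Python sketch of the outside view. The project records, field names, and the size of the adjustment are all invented for illustration; the point is only that the base rate comes from historical frequency before any project-specific judgment is layered on.

```python
# Hypothetical history of comparable projects: did each finish on time?
past_projects = [
    {"name": "Bridge A",  "on_time": True},
    {"name": "Highway B", "on_time": False},
    {"name": "Plant C",   "on_time": False},
    {"name": "Culvert D", "on_time": True},
    {"name": "Depot E",   "on_time": False},
]

# Step 1: the base rate is simply the historical frequency of the outcome.
base_rate = sum(p["on_time"] for p in past_projects) / len(past_projects)
print(f"Outside view: {base_rate:.0%} of comparable projects finished on time")

# Only after anchoring on the base rate do we adjust for project specifics,
# and that adjustment should be a deliberate, documented conclusion.
adjustment = 0.05   # e.g., experienced crew, familiar scope (assumed)
forecast = min(max(base_rate + adjustment, 0.0), 1.0)
print(f"Adjusted forecast after the inside view: {forecast:.0%}")
```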

The next step is to write down your forecast. So attach a probability, write it down, and provide a reason for it, a very brief one that forces you to crystallize your thinking. It also allows you to go back when you get an answer to whatever you’re forecasting, and see if your thinking aligned with reality. Were you right for the right reasons?

And it also allows you, and this is the third thing, to compare your reasoning with other people. You can pool information. We all have limited information, but by having different perspectives and pooling them together, we can all benefit. Teams end up being a very important way of getting a very accurate forecast. That’s the third step. And ideally you can do this anonymously, so we’re not getting anchored (that’s the term) on the team leader or somebody who’s very loud or anything. Make it as anonymous as possible. You can even just write it down on a piece of paper and circulate it, or put it in a spreadsheet, so that we don’t know who’s saying what. Very valuable that way.

And now, once you’ve had the benefit of everybody’s different perspectives, fourth step: make an update. If the circumstances change, or you have new information, or someone has a more compelling explanation that would cause you to change your forecast, well, you want to go ahead and do it. And in fact, if you know nothing else about two people than that one has updated and one has not updated on any particular forecast, that’s all you need to know: the person who’s making updates is going to be the person forecasting more accurately. So those are four things.

And then the fifth, to really make a payoff, is you want to keep score. Get feedback, find out when you say an 80% probability of a project being completed on time, is that actually the case 80% of the time, which means 20% of the time it isn’t. And now one of the things that follows from this is when we’re thinking probabilistically is that when we are saying something between zero and 100, in a probabilistic world, we will look like geniuses a certain amount of the time, but we may look not so smart the other part of the time. So if we say 80% probability on 10 different events, we should see an outcome that occurs eight times, but it won’t occur two times. And that can sometimes be uncomfortable, but that’s the way we get the feedback to become more accurate and get confidence in the probabilistic forecast that we’re making.
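Here is a small sketch of what “keeping score” could look like in practice, using made-up forecasts and outcomes: a Brier score for overall accuracy, plus a calibration check of whether events tagged 80% actually happened about 80% of the time. The scoring conventions here are standard, but the data and bucketing are purely illustrative.

```python
from collections import defaultdict

# Hypothetical record of probabilistic forecasts and what actually happened.
# probability = forecast that the event occurs; outcome = 1 if it did, else 0.
history = [
    (0.8, 1), (0.8, 1), (0.8, 0), (0.8, 1), (0.8, 1),
    (0.6, 1), (0.6, 0), (0.3, 0), (0.3, 0), (0.3, 1),
]

# Brier score: mean squared error of the probabilities (lower is better).
brier = sum((p - o) ** 2 for p, o in history) / len(history)
print(f"Brier score: {brier:.3f}")

# Calibration: for each stated probability, how often did the event occur?
buckets = defaultdict(list)
for p, o in history:
    buckets[p].append(o)
for p in sorted(buckets):
    hit_rate = sum(buckets[p]) / len(buckets[p])
    print(f"Said {p:.0%} -> happened {hit_rate:.0%} of the time")
```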

So those are five key steps. Your own list will be lengthier, but those five things are things you can do actually pretty quickly. And if you run through them even quickly, you’re going to end up with a better result over time. So that’s a very simple building block. And from that you can do increasingly sophisticated things. The real world is more complicated than just a simple five step process, but you can use that process to tackle even more complex fuzzier topics out in the world.

And one example of that is to think about scenarios. And in a sense, a lot of very complex projects are themselves scenarios. We’re expecting things to play out in a certain way. And way upstream in the planning stages we may even have multiple scenarios about how a project may unfold. And scenario analysis is a creative process where we’re really challenging ourselves to come up with all the possible worlds that might be out there. And we might even ask ourselves, what are the more plausible worlds to inform the decisions that we need to be making?

But then there’s one more step that we can take, and this is what super forecasting can help provide, is to narrow that set down to what are the more probable worlds that we should really focus on for decision makers. Because just because something sounds plausible does not at all make it probable. And taking that extra step can save a lot of time way upstream, so that we can focus our resources more efficiently. And that’s what this graphic is showing there.

And just a stylized example of how this can work: we can better understand the probabilities of a scenario by breaking the scenario down into smaller questions. Scenarios, almost by definition, are very complex. It’d be very difficult to come up with a single question that would tell us everything about a major project, a major scenario, or anything real world like that. But what we can do is come up with smaller diagnostic questions. Each can tell us something meaningful at a very narrow level and wouldn’t on its own really be decisive. But when you add them all together, and if they’re all moving in the same direction, they can give us more confidence that something is more probable than something else. And this is kind of a stylized way to think about it, and we can go into more specific examples later on.

So let’s break down a scenario, pick anything, and we might come up with five different questions. So here’s one I was talking about earlier today, and it’s outside of construction, which might be helpful to get the idea is let’s think about what’s going to be happening next in Ukraine and Russia, right? So will there be peace between Russia and Ukraine, whatever that means. That’s a big scenario. How do we define what peace looks like? We’re all going to have different interpretations of what that might be. But we can have smaller questions. We could say, for instance, one is will a ceasefire be announced between them? Will there be 120 days without a fatality? Those sorts of things. And those are small diagnostic questions that on their own really won’t tell us if there’s peace. But you put them all together, and maybe they’ll start to give us more confidence about how probable that scenario will be.

So we’ll have a series of questions, then we pose them to our crowd, and we’re going to get probabilities. Now we have probabilities. We’re getting a better sense of what things are looking like. But we can do one more thing. We can go to experts and say, “Well, how important is this question compared to that question?” Because not all questions are going to be equally important. And experts can help us decide how much weight to attach.

So here’s just a random example. Maybe the first question gets double weight, the next one gets a one-fifth weight. Now we have probabilities that we can multiply by weights and create an index. And that’s what I’ve got showing over here on the right. And for a decision maker now, it’s all getting wrapped up into that one index. And if it’s something that’s important to monitor over time, as those individual questions get updated, the index level will change. And if it changes, that’s a flag for decision makers to pay more attention. If it doesn’t change, then they can focus on something else. So that’s kind of a stylized example of how it can work. It starts small, it’s very simple. It can be very effective at a very simple, small level. And it can become very helpful as things get more and more complex, to take complex real-world situations and turn them into forecastable propositions. So those were the key things I wanted to hit on here. Now let me hand over to John to run a poll.
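A stylized version of the index Warren describes could look like the sketch below. The questions, probabilities, and weights are invented; the mechanics are simply crowd probabilities on diagnostic questions multiplied by expert-assigned weights and rolled up into one number that moves when the crowd updates.

```python
# Hypothetical diagnostic questions for one scenario, each with a
# crowd-sourced probability and an expert-assigned importance weight.
questions = [
    {"text": "Ceasefire announced?",         "probability": 0.35, "weight": 2.0},
    {"text": "120 days without a fatality?", "probability": 0.10, "weight": 0.2},
    {"text": "Formal talks resume?",         "probability": 0.55, "weight": 1.0},
]

def scenario_index(questions):
    """Weighted average of question probabilities, scaled to 0-100."""
    total_weight = sum(q["weight"] for q in questions)
    weighted = sum(q["probability"] * q["weight"] for q in questions)
    return 100 * weighted / total_weight

print(f"Scenario index: {scenario_index(questions):.1f}")

# When the crowd updates a question, the index moves; a change is the
# flag for decision makers to pay closer attention.
questions[0]["probability"] = 0.50
print(f"After an update: {scenario_index(questions):.1f}")
```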

 

John Upton:

Thanks Warren, I appreciate it. Thanks for giving us some insight into what super forecasting means and the analytical approach to super forecasting. I guess before we get to the poll, I was going to ask, artificial intelligence is gaining a lot of traction in our industry. It’s a huge buzzword. And I was curious in your research and findings, what’s the right level of AI versus HI, human intelligence, and what does that ratio look like in your findings?

 

Warren Hatch: 

That’s a great question, and certainly very timely. We’re all thinking about AI in different worlds. We certainly are, over in the Good Judgment world, and we’re thinking about it in a few ways. And we’re also getting a lot of inquiry. So there’s some people who are quite concerned about the risks represented by AI to the future of humanity. So low probability, high impact events on the negative side. Others are thinking about the low probability, high impact event on the positive side, or the opportunities.

And what sometimes happens in the discourse that we see is that there’s kind of a conflict that’s built up between the two, that it’s either artificial intelligence or human intelligence. And where we land is, well, just like with other innovations in the past, the Boolean is not so much “or,” it’s “and.” When these new innovations come along, whether it’s a typewriter or a computer or big data or artificial intelligence, I think the way forward is going to be finding combinations of the two.

And what we see from the base rate in these sorts of innovations is that machines, whether it’s a typewriter or a computer or artificial intelligence, can take over a lot of the tough analytical tasks, the computational tasks, and allow us more time to make the judgment about what we do with that information. What are the consequences? How should we think about how it fits into human agency? And while I suppose there’s a risk that artificial intelligence may take over that role for us, I think that’s a case to be seen in the future. For now, we’re on the side of seeing lots of opportunity. And what that mix looks like, like with these other innovations, will no doubt depend on the context. But that it will be a part of the way we think about engaging in decision making and judgment and forecasting. I don’t think that’s in doubt and very much look forward to seeing how that unfolds.

 

John Upton:  

I agree. All right. Well, thank you very much, Warren. Before we go on to our next topic, we wanted to throw a poll out to the audience, basically asking where do you capture or maintain your forecast today? We’ll give you a few moments to answer this question. But are you meeting up with a buddy over some cold beverages? Do you have the most mind-boggling Excel sheets in the world? Or do you have your own IT shop where you’re doing your own forecasting? Or have you bitten the bullet and purchased a third-party solution to help you with your forecast? We’ll give it a few more moments.

Got a few results still coming in. People might be changing their minds. All right, 54% in Excel. And I don’t think that’s surprising to anybody on the call. Excel’s a powerful tool. The disadvantages are around connecting the data: how you get the values in and out of Excel, the brain power that created that Excel sheet and all those formulas, and how that transitions from project to project. But yeah, I don’t think that’s surprising.

And what we’re going to transition to next is talking about forecasting more at the construction level of things. I wanted to throw a magic eight ball in as an option on this poll to see if anyone would bite, but I didn’t. We all wish that we had a crystal ball to tell us what our forecast is going to be, but we don’t. Instead we have to rely on other data points to help predict those future events.

So in order to identify these other data points, Megan and myself are going to take you on a ride, a journey of forecasting if you will. So before anything starts, we have to look into the past. We’re going to look in the rear view mirror, see what’s behind us. Because at the beginning of a project or even an operation, we don’t know what we don’t know at that point. We’re relying heavily on the estimate as really our baseline forecast.

And we’re very hopeful that those estimators took into consideration all the risks and contingencies, as well as leveraged as-built data from other projects with similar scopes of work, in order to get the best pricing and the most accurate budgets for us to utilize on the project. And while estimators aren’t 100% accurate all the time, there are tools and opportunities to leverage existing data within the organization to help provide the most accurate estimate, and that’s what we call benchmarking. So Megan, did you want to elaborate a little bit on what benchmarking is [inaudible]?

 

Megan Siefker:

Yeah, so I think it’s really important that we highlight that the estimate is your initial forecast, the foundation of your forecast. Until you start performing work, that’s going to be your baseline. And so, to be a great forecaster, first you’ve got to be a really great estimator. And what helps with that is what we call benchmarking, which is just looking at historical past data points.

How you kind of start as an organization going down the path of capturing enough data to start benchmarking is you’ve got to start by classifying your operations. So classifying could be a standard coding structure. That’s typically the easiest way to do it. But if you think of it like a tagging mechanism where I can tag the same type of work that I’m doing on every project so that I can start referencing that specific type of work when I estimate future projects.

First, that might be at a high level. So we’re doing foundations, we’re doing trenching, we’re doing piping. But eventually, you want to get to a point where that classification also contains descriptors for the different variables that are driving your cost. So things like the type and depth of a foundation, the size and material classification of pipe. Adding those descriptors into your coding structure then can allow you to really compare to true like work in the past that you’ve done, and give you better predictable unit rates as the outcome.

The other key there is unit rates. So I could look at five different jobs, and maybe the piping costs range from $500 million down to $5 million. The reason for that is likely the amount of piping on each job. So you’ve got to get to a point where you have unit rates, which then requires some form of measurable quantity. And if you think of it like house shopping, there are key metrics in house shopping that you can look at to compare whether a house is affordable or not: we look at dollars per square foot, dollars per acre, things like that. So we want to get to a point where your operations in construction have metrics like that that you can use to compare, such as man-hours per foot. How much were we installing it for on these past five projects that were very similar and had the same type of work? That’s the end goal that allows us to have more accurate estimates, which then contributes to more accurate forecasting.
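As a rough sketch of that benchmarking idea, the snippet below assumes a simple tagged history of operations (the classification codes, quantities, and hours are all invented) and rolls it up into unit rates such as man-hours per unit that a future estimate could reference.

```python
from collections import defaultdict

# Hypothetical as-built records tagged with a classification code.
history = [
    {"code": "PIPE-12IN-STEEL", "quantity_ft": 1200, "man_hours": 960},
    {"code": "PIPE-12IN-STEEL", "quantity_ft": 800,  "man_hours": 700},
    {"code": "FNDN-2-4FT",      "quantity_cy": 450,  "man_hours": 1350},
]

# Roll up quantities and hours per classification, then compute unit rates.
totals = defaultdict(lambda: {"qty": 0.0, "hrs": 0.0})
for rec in history:
    qty = rec.get("quantity_ft") or rec.get("quantity_cy")
    totals[rec["code"]]["qty"] += qty
    totals[rec["code"]]["hrs"] += rec["man_hours"]

for code, t in totals.items():
    print(f"{code}: {t['hrs'] / t['qty']:.2f} man-hours per unit")

# A new estimate can then price like work from like history:
new_pipe_ft = 2000
rate = totals["PIPE-12IN-STEEL"]["hrs"] / totals["PIPE-12IN-STEEL"]["qty"]
print(f"Estimated man-hours for new pipe scope: {new_pipe_ft * rate:.0f}")
```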

 

John Upton:  

Yes, I agree, Megan. And in addition to benchmarking, you can even leverage work that’s been done on a project already that could influence future work on the same project. For example, if you had three cast-in-place box culverts on a project spread across the duration, and the first one came in 10% over budget, well, what’s the likelihood that the other two are going to be much better without understanding any additional risks? So why not just forecast those two additional box culverts at 10% over budget until you begin that actual work, more variables are defined, and you can adjust accordingly? It’s better to forecast those overruns sooner than later so you have time to pivot and adjust. And like Warren mentioned earlier, everything’s about timing. The earlier we see these forecast fluctuations, the better, so that you have time to pivot and make the necessary changes.
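A minimal sketch of that box-culvert example, with invented budget figures: once the first culvert comes in 10% over, the same performance factor is carried onto the remaining culverts’ budgets until better information arrives.

```python
# Hypothetical budgets for three similar cast-in-place box culverts.
budgets = {"Culvert 1": 500_000, "Culvert 2": 450_000, "Culvert 3": 520_000}
actual_culvert_1 = 550_000  # came in 10% over budget

# Performance factor observed on the completed work.
factor = actual_culvert_1 / budgets["Culvert 1"]

# Forecast the remaining culverts at the same factor until actual work
# begins and more variables are defined.
forecast = {"Culvert 1": actual_culvert_1}
for name in ("Culvert 2", "Culvert 3"):
    forecast[name] = budgets[name] * factor

total_budget = sum(budgets.values())
total_forecast = sum(forecast.values())
print(f"Budget {total_budget:,.0f} vs forecast {total_forecast:,.0f} "
      f"({total_forecast / total_budget - 1:+.1%})")
```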

But as we move forward now, back in the cab of the car, we’re looking at where we’re at currently, utilizing the data that we have. The primary objective in forecasting is to be timely and accurate. In order to do that, you need real-time data that’s accurate. And you’ll notice the window’s a rose-colored shade. We did that because we have our rose-colored glasses on, and we have the dreaded optimism bias. I can speak to that personally as a superintendent out in the field, more times than I’d like to admit: you’re working an operation where, per your budgeted units, you’re supposed to do it for $5 a square foot, and you’re coming in at $7 a square foot for the first 60% of the operation.

Well, me as a proud superintendent, I tell myself my crew’s going to improve, because I don’t want to trust the data, or this and that happened. But rarely does that ever turn out well. So in that scenario I’d be better off using a three-week rolling average straight-line forecast. It utilizes the data that’s at my fingertips, and it’s very objective versus my subjective view of the operation.
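Here is one way a three-week rolling average straight-line forecast could be computed, a hedged sketch with made-up weekly production data: the last three weeks of actual unit cost are averaged and applied to the remaining quantity, regardless of how optimistic the superintendent feels.

```python
# Hypothetical weekly reports for one operation: (sq ft placed, cost).
weekly = [(900, 5400), (1000, 6800), (1100, 7700), (950, 6650)]
budget_rate = 5.00            # $/sq ft from the estimate
total_quantity = 10_000       # sq ft in the operation

# Three-week rolling average of the actual unit rate (objective).
recent = weekly[-3:]
rolling_rate = sum(cost for _, cost in recent) / sum(qty for qty, _ in recent)

installed = sum(qty for qty, _ in weekly)
spent = sum(cost for _, cost in weekly)
remaining = total_quantity - installed

# Straight-line the rolling rate across the remaining quantity.
estimate_at_completion = spent + remaining * rolling_rate
print(f"Rolling rate: ${rolling_rate:.2f}/sq ft vs budget ${budget_rate:.2f}")
print(f"Estimate at completion: ${estimate_at_completion:,.0f} "
      f"vs budget ${budget_rate * total_quantity:,.0f}")
```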

The other thing is more and more projects today have an abundance of data. I mean we’re getting it from drones, we’re getting it from sensors, we’re getting it from everywhere. And all of this data is great. But in reality, if you don’t know how to interpret it, it’s just a bunch of white noise. So Megan, can you shed any light on the optimism bias?

 

Megan Siefker:

Yeah. I was going to say, I’ve sat in way too many forecast review meetings where superintendents told me they’re almost done with something, and then next month it’s the same story, and the forecast just kind of continues to creep along and grow as you process that. And I think optimism bias is such a natural tendency. We all think something’s going to be easier than it is. I know with just house projects, my husband tells me something’s going to take two days; three months later, I’m still wondering when it’s going to be done. And with the house project, we’re probably not doing our due diligence on the estimation front, and we generally don’t have as much past experience. But we can’t let those types of excuses enter the construction space, because we do have the past experience and we have ways to correct this optimism bias.

And the key there from my perspective is really quantities and using measurable quantities to derive how much scope is remaining. I think a lot of times I see with customers you’re using percent completes for everything. Which is a great starting point, and you’ve got to get something to understand what’s our earned value, where are we roughly at? But when you ask someone for a percent complete, typical answer is 75%, 80%. You don’t often get a 73.9% complete because no one’s giving you that value. But if you’re talking a $100 million job, if I am reporting I’m 75% complete, but I’m really only 73% complete, well there’s a $2 million swing right there in our forecast. And so we’ve got to get to a place where we can have more measurable quantities that are driving what’s remaining.

And measurable quantities just means I could send an auditor out to the field and get the same results that someone else reported. So multiple people can report on the same measurable quantity. And so I think that’s really a key to eliminating optimism bias. It’s hard to hide from factual numbers that can be audited. So take a look at your operations. Not everything is going to be able to be defined by some measurable cubic yards of concrete or linear foot of pipe. Understanding that that’s not as easy as it sounds. With everything, there’s indirects, there’s engineering, there’s different things. But you’ve got to look for what are some different variables that we can inject so that we can kind of predict where are we at from a more objective space than just subjectively throwing out percent completes, and using that as our basis for forecasting.
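To show how much a rounded percent complete can move a forecast, here is a small sketch with assumed numbers, echoing the $100 million example above: earned value from audited quantities versus a subjective percent complete.

```python
contract_value = 100_000_000  # hypothetical job size

# Subjective claim from the field versus a quantity-based measurement.
reported_pct = 0.75
installed_qty, total_qty = 36_500, 50_000   # e.g., linear feet (assumed)
measured_pct = installed_qty / total_qty    # 73%

earned_reported = contract_value * reported_pct
earned_measured = contract_value * measured_pct

print(f"Reported: {reported_pct:.1%} -> earned ${earned_reported:,.0f}")
print(f"Measured: {measured_pct:.1%} -> earned ${earned_measured:,.0f}")
print(f"Swing in earned value: ${earned_reported - earned_measured:,.0f}")
```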

 

John Upton: 

Yeah. And I think-

 

Megan Siefker: 

I think we’re going to-

 

John Upton:

… another point to add-

 

Megan Siefker:

Oh, go ahead.

 

John Upton:    

… I guess, sorry Megan, is that with an integrated solution, which is obviously the ideal state, you’re getting information in a timely manner, and hopefully it’s accurate, based on the users inputting that data. But the goal, to be an accurate or a great forecaster, in my opinion, is having that data available for you to analyze. You’re not worried about the inputs and everything else that comes with that, but you have more time to focus on the numbers, analyze them, and discuss them with your team.

As Warren mentioned, this isn’t an individual activity. It should be collaborative with the team. And if you’re too busy trying to input values, whether that’s claiming quantities or claiming goods receipts, you need that data available and at your fingertips so that you can analyze it, rather than spending time searching Excel. I still get kind of leery when I see Excel because that brings me back to my previous life. But you don’t always get real time data with Excel. You’re relying on maybe a paper time card from your foreman or an invoice that’s been in your foreman’s truck for three weeks, and you didn’t even see it. Just trying to capture that data and get it into one consolidated area is a nightmare if it’s not integrated. So I think one of the main points is having that data available, and spending your time analyzing it rather than trying to capture it, or find where all those details are coming from.

Then lastly, we’re looking to the future. And we’re not promoting anything negative here; we’ll just call this a traffic jam up ahead on the navigation. We’re not foreshadowing an accident by any means. But looking into the future: we’ve looked at the past, we’ve looked at the present, and now we’re going to look out ahead in this journey of our forecast. And it’s really about situational awareness. Going back to my cast-in-place box culverts, maybe that first box culvert was basically greenfield. I was all by myself, no other trades around, easy work. But looking at my 90-day look-ahead, or your CPM schedule, you realize that there are going to be five other trades in the same area, and now my laydown yard isn’t right next to the operation, it’s three blocks away. That’s the type of forward looking you need to plan and work through in order to facilitate an accurate forecast for those last two CIP box culverts.

Other foreshadowing events: maybe you have labor escalations based on union agreements, or you’re working in northern Alberta, Canada, and you have a winter shutdown for two months. We’ve talked a lot about forecasting cost. But really, depending on your market, or whether you’re the owner or the contractor or the subcontractor, forecasting covers price, cost, and resources. What’s the resource demand for your crews over the next three-week look-ahead? Same with equipment. People and equipment are obviously valuable resources, and forecasting the need for those is almost as important as what your end cost is.

Because you might start a project with company-run equipment; and I come from a self-perform contractor world, so a lot of my examples are based on that. There are times where we go into a project using our company-run equipment, but then, lo and behold, we get another mega job and we’ve got to pull that equipment over to that job. So now the remaining percent complete for that operation is going to have to be completed with outside rentals, and obviously we have to take that new, updated unit rate for that equipment into consideration for our estimate-to-complete forecast. Megan, did you have anything around risk?

 

Megan Siefker:

Yeah, absolutely. So I think it’s about identifying some of those risks. A really good way to keep track of upcoming risk is, from an estimating standpoint, we usually do a risk matrix and do a diligent job of understanding all the potential risks that we’re going to face on this project and assessing the probability of each. But you’ve got to continuously look at that. So month over month you should be checking in on that risk matrix, updating it, adding things to it, and understanding whether those risk events have happened or not, or whether that risk has since passed and didn’t occur.

And then I think the other thing, too, when we’re forecasting, is that we often put all of our focus on where the areas of risk are. Obviously that is the most important. But we also want to leave room to look at the areas of opportunity, which is really where we can beat our margins in some areas. And the tough part is, we saw most of the audience is using Excel, where you’ve got your one forecast. One of the things I really love about the InEight project cost management solutions is that they allow you to maintain multiple forecasts.

So where we need to be submitting a more conservative forecast so that we’re not seeing big forecast fluctuations month over month, you can also maintain a more aggressive forecast, you might call that your financial plan, that has those areas of opportunity identified. So we understand this is maybe our most likely forecast that we’re going to use as our conservative forecast for corporate level reporting, but where are some areas where we have some opportunity? Let’s note those, and then target that, and what do we need to do to get to that point so that we are making those savings and identifying those?

So I think it’s really important not just to do your general forecasting, but also to look to those opportunities. How do we have a second or third forecast that we’re maintaining in the background to make sure that we are capitalizing on all the opportunities that do exist, really going through the project scope with a fine-tooth comb and understanding, okay, superintendents, what are you going to be responsible for and on the hook for to meet those opportunities? We don’t want to just make everything so easy with a super conservative forecast, and then, great, at the end of the day our forecast didn’t change, but we also didn’t improve any of our margins. So that’s one area where you can see some upside, if you do have that ability to maintain a couple of options for your forecast. And of course you want to be submitting the most likely one for financial reporting.

 

John Upton:

Yeah, I agree. I mean, that collaboration is a huge component within the forecasting world. And I guess in closing, to be a great forecaster, you can’t work in a silo. Warren talked about it, and we’ve hit on it a bunch of times: it’s a team effort, and communication and collaboration are key. And lastly, you have to trust the data. A lot of times you’ve not only got to trust it, you’ve got to be able to analyze it, and that’s hard if you have so much data that it overwhelms you and you haven’t built out your reporting. We talk a lot with customers about starting with the end in mind. What do you want to report on? Once you know that, then you know the details and data that you need to capture to fulfill that. But you have to trust the data. Don’t let optimism bias drive you crazy and down a dead-end road. And then as Megan mentioned, we need objective results. Being subjective is great, it’s good to have your opinions, but trust the data and be objective with your analysis. So with that, I will turn it over to Ellen.

 

Ellen McCurtin:

Thank you so much. And thank you all for a great presentation and the great information that you shared with us today. Before we have our presenters address some of your questions, I’d like to remind everyone that we would love your feedback. So please take a few moments to complete our webinar survey, which you will see on your screen now, or you’ll be redirected to it at the end of the program. And now for our first question. Jim asks, “What is the best way to normalize your existing data?” I’d just like to turn it over to the group, actually.

 

Megan Siefker:

Yeah, so I might jump in. I kind of talked about that a little bit in the benchmarking discussion. But to normalize the data so that you can compare it and then use it for future work, I think you’ve really got to define what’s driving the cost, or the man-hours, in that data, so that you have the same variables in each setting and can get down to a more detailed level. So take a look at your operations: how could I define this operation, and then make that definition of an operation the same on the next job I go to? Am I including scaffolding, am I including the forklift cost in that? So really take a fine-tooth-comb look at what is included in the operation that’s developing the costs for it, so that when I go to the next project and use it as a reference point, I’m including those same items. And that does take some effort, for sure.

So I mean we usually suggest customers start more at a high level, and then build up to that level of detail. As you start to get more data in, then you start to realize, okay, maybe we need more detail here. So that’s kind of baby steps in that process because it is a long journey to get to that level of sophistication where exactly what’s included in a unit rate, and what’s been done in the past. But it’s highly beneficial once you get to that point.

 

John Upton: 

Yeah. I might add that you’ve got to determine where you want to manage that budget. Do you care that your cost versus budget is losing at foundations as a whole? Or are you looking at foundations that are one to two feet thick or two to four feet thick? As Megan mentioned, there’s a lot of organizational work, and trying to get the entire organization on the same page is probably the most challenging part. I know at my previous employer, when we were trying to normalize our data across tens or twenties of districts, trying to get the civil guys and the structure guys and all the different disciplines to agree on the right level of detail was not easy.

But after a few rounds of back and forth, and creating a team at the organizational level to drive that, it’s paying dividends. Because now not only can I see what my unit rate is on my current project, but I can compare that to five other jobs that are doing the exact same type of work. And if their unit rates are beating mine, I now also have contact information or links to their work packages to see what kind of crew makeup they have and what equipment they’re using. Having that interconnected solution really makes it easy to drive additional details out of the system to then incorporate into your operations to hopefully improve what you’ve got going.

 

Ellen McCurtin:    

Thank you both so much for that answer. And I’d like to also turn this one over to John and Megan. AJ asks, “Do you have any tips for forecasting subcontractors that often lack the detailed progressing to get a clear picture?”

 

John Upton:

Well, that’s a great question. And again, coming from my background with a self-perform contractor, we took a lot of pride in managing our self-perform work to the nth degree of detail. The same could not be said for our subcontractors. But there’s been a big push in recent years to start tracking those subs to the same level of granularity that we hold ourselves accountable to. So there are a few enhancements or features within the InEight tools now that let you track your subcontractors at that more granular level of detail.

So POs typically are high-level line items that we now allow you to break down into more granular detail, which we call a schedule of values. That breaks down the PO, which might be one lump sum, into 1,000 pounds of rebar or 100 square feet of formwork. Having those additional details allows us to get a much more accurate percent complete and earned value. As Megan was alluding to before, it’s tough to track a single lump sum line item, but if you break it down into the additional details and have measurable, objective quantities, such as how many linear feet or how many square feet, and you can track those on a daily or weekly basis, that’s going to drive earned value and also give you the remaining value that’s left on that contract. Also, being able to include pending change orders in your forecast is another big ask that we’ve seen and included in our software, so that you’re getting the full picture: not only what’s been completed, but anything that’s potentially outstanding that you could still factor into your forecast.
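A sketch of that schedule-of-values idea, with hypothetical line items and values: the lump-sum PO is broken into measurable lines, percent complete and earned value are rolled up from quantities, and pending change orders are added so the forecast shows the full picture.

```python
# Hypothetical schedule of values for one subcontract PO.
sov_lines = [
    {"desc": "Rebar (lb)",       "total_qty": 1000, "installed": 600, "value": 80_000},
    {"desc": "Formwork (sq ft)", "total_qty": 100,  "installed": 40,  "value": 50_000},
]
pending_change_orders = [12_000]   # not yet executed, but likely (assumed)

earned = sum(line["value"] * line["installed"] / line["total_qty"] for line in sov_lines)
contract = sum(line["value"] for line in sov_lines)
pct_complete = earned / contract

forecast_at_completion = contract + sum(pending_change_orders)
remaining = forecast_at_completion - earned

print(f"Percent complete: {pct_complete:.1%}")
print(f"Earned to date: ${earned:,.0f}; remaining: ${remaining:,.0f}")
print(f"Forecast at completion incl. pending COs: ${forecast_at_completion:,.0f}")
```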

 

Ellen McCurtin: 

Thank you so much for that answer. Larry asks, “How are they teaching this topic of forecasting in the CM schools today?”

 

Megan Siefker:

Yeah. So I probably can’t speak to how every school is teaching, and I know I come from a CM background from too many years ago to count. But I think there is kind of a focus in CM on managing contractors, more contract based. What we delve more into today with quantity-based forecasting is really looking at the actual work that’s being performed. And when you start getting into earned value management, I think those are some key lessons that hopefully are being taught in our CM schools. I know at InEight we do outreach programs. So we link up with a school like ASU, and we’ll go in and teach some classes and bring in some real-world experience, as all of the folks on our teams are from the construction industry, have been out on job sites in the field, and can help teach some of those lessons. So we’re always looking for opportunities to do that. But I think these forecasting topics and estimating topics definitely are, hopefully, being taught in schools today.

 

Ellen McCurtin:  

Thank you so much for that answer. If we could scroll back to slide six, Sean asks a question regarding slide six. He says, “Regarding the question clusters used in the scenario analysis slide six, are these like survey questions answered by SMEs about the scenario, or is it something else?”

 

Warren Hatch: 

A lot of this process can have a useful division of labor. Coming up with the right questions to be asking is itself a skill. To really be penetrating what is it we need to understand, what should we be asking ourselves? And it might be an entirely different population from the people who are assigning probabilities to those questions. And typically, the first task, coming up with the questions, those would be subject matter experts. They will help us understand how we got where we are. They’re the ones with the experience, they’re the ones with the very sophisticated models of how the world can work, whether it’s in their heads or in an Excel sheet or a dividend discount model. And so they’ll surface the questions, and they’re also the ones who can help us understand how much importance to attach to those questions.

But one thing that’s really interesting is that the people who are very good at coming up with the questions, the experts, are not always so good at coming up with the probabilities, for the very same reason that they have models of the world. And so when things are in a lot of flux, there’s a lot of uncertainty in particular about something, their models by definition will filter things out that might have not mattered in the past but matter more now. And having people who are very good at thinking about probabilities and identifying maybe subtle shifts out there, without those fixed models, they’re going to be better at capturing those new things that we might want to be factoring in when we’re coming up with the probabilities. So when it comes time to put together an index, we have both. We have the experts to attach the weights, we’ve got skilled generalists who are coming up with the forecast, and we put it together.

 

Ellen McCurtin:

Thank you so much for that answer. Another one for you Warren, are experts good forecasters?

 

Warren Hatch:

Well, that follows right on from that. And what we have seen in a lot of the work that we have done is that experts tend to have, well, an optimism bias. They tend to overweight the probabilities of events within their area of expertise. We see that with AI: people who are deeply immersed in AI attach greater weights to whatever the question is. They think it’s more important than skilled generalists do. We see that in the military if we’re forecasting conflict; it tends to be higher with people with a military background. We see it with civil servants, we see it in finance, we see it everywhere. And what’s really interesting is that they tend to attach higher weights.

The next thing that we see, if we can observe their forecasts, get an answer, and keep score, is that they tend not to do as well in their scores. They’re not as accurate as skilled generalists. So that’s another reason that you want to have both. But here’s the really fascinating thing that we’re seeing: experts who apply themselves and make forecasts on a lot of things, including areas in their domain expertise, and get that feedback and get better, can become good forecasters in areas of their domain expertise. So that’s a long-winded way of saying that experts at the outset are not necessarily good forecasters. We don’t know unless they have a track record. What we see is that when they have a track record, they tend to lag, but they can close the gap, just like anybody else can get better.

 

Ellen McCurtin: 

Thank you so much for that answer. William asks, “Historic data evaluation. Is this data from your successful or unsuccessful bids, or from other sources?”

 

Megan Siefker:

Yes, I think ideally you, of course, can pull from past estimates to get historical data. But really the best data is what did we actually perform? So it’s looking at completed jobs, or jobs that are already in progress, and have hit a certain percentage complete on operations. And using that real-time data, that’s not just what we estimated, but what did we actually install this at? So that’s where we pull our historical benchmarks in from the project execution side, and they make the full circle trip back to our estimating tool to provide that actual feedback. So you could even compare at the end of the job, here’s how it was estimated, here’s how they actually performed, and what are the more realistic rates that I want to use now in future estimates?

 

Ellen McCurtin:  

Thank you. Another question for you, Megan, and also John. Larry says, or he comments that, “Starting with a very detailed accurate schedule of values will help in determining percentage complete. And do you agree with this? If not, or if yes, why or why not?”

 

John Upton:

I mean, I would agree. I think the reality is that it may be challenging to get a subcontractor to buy into going down to that level versus the higher-level buckets of just line items on a purchase order. But if you can work with the subcontractor to break their contract down into that level of detail, that would be great, and good for you if you’re able to do that.

 

Ellen McCurtin:

Thank you. Another question for you. In your opinion, what’s the most important variable you can focus on while creating a forecast?

 

Megan Siefker:

Yeah, so I’ll jump in there just because I’m always so passionate about earned value management, which is I think the basis of that is quantity driven. So getting your quantities right, that’s going to define your whole scope. And whether that quantity is a measurable quantity that we can very clearly define, or whether it’s a unit of time for indirects, or a number of deliverables for engineering, I think being able to fully understand what your scope is so that you can have some accurate values to represent what has been completed to date and what is the true scope that is remaining, that’s going to be the biggest bulk that drives then getting your accurate forecast values.

 

John Upton: 

Yeah. And I definitely can’t argue with that. But if I had to pick something else, maybe the second most important, especially from a self-perform perspective: you hear lots of organizations looking at productivity factors. Are you making your budget? Are you on schedule? But that only tells you half the story. So we have multiple variables or values that we can compare. In addition to productivity factor, we capture compensation factor: how are you paying your guys compared to your budgeted dollars per man-hour? So not only can you tell from a PF whether you’re ahead of schedule or behind schedule, but from a CF, the compensation factor, whether you’re paying your guys more or less than you estimated or budgeted for.

And then to round that all out, because each of those only tells one side of the story, we have what we call a labor efficiency index. That’s just the product of the two. So you might be ahead of schedule, but you’re paying your guys more than you had budgeted for, whether that’s because you got journeymen instead of apprentices, or you’re working a lot more overtime to try to get back on schedule; there are multiple factors for why you could be paying your guys more. But yeah, productivity and compensation, and what we call the labor efficiency index, I think are other valuable data points for driving an accurate forecast.
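As described here, the labor efficiency index is just the product of the productivity and compensation factors. The sketch below uses invented numbers, and the ratio conventions (greater than 1.0 meaning better than budget) are an assumption, since exact definitions vary by company.

```python
# Hypothetical labor data for one operation.
earned_man_hours = 1_200      # budgeted hours for the work actually installed
actual_man_hours = 1_150
budget_rate = 52.00           # budgeted $ per man-hour
actual_rate = 58.50           # actual average $ per man-hour (e.g., OT, journeymen)

# Conventions vary; here >1.0 means better than budget for both factors.
productivity_factor = earned_man_hours / actual_man_hours
compensation_factor = budget_rate / actual_rate

# Labor efficiency index: the product of the two, i.e., earned labor dollars
# over actual labor dollars.
labor_efficiency_index = productivity_factor * compensation_factor

print(f"PF  = {productivity_factor:.2f}  (ahead of budget on hours)")
print(f"CF  = {compensation_factor:.2f}  (paying more than budgeted)")
print(f"LEI = {labor_efficiency_index:.2f}")
```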

 

Ellen McCurtin:

Thank you so much for that answer. And last question I’m going to put out to the group, how should we think about low probability high impact risks?

 

Warren Hatch: 

I’ll kick off. And this is one of the fascinating areas of some of the newer research that Phil Tetlock and his colleagues and others are doing, because most of the kinds of forecasts, most but not all, are kind of within a normal distribution. We see some things go this way, some things go that way, but it’s kind of normally distributed. And our task is to come up with a high-quality forecast about just where that is within the distribution. But there are also times when there are things that are super low probability that would have a huge impact if they occurred. And what are the different ways to think about that?

And I’ll just mention two ways that are pretty fruitful. One is that with a low-probability thing, there may be things that would occur beforehand. So if we’re concerned, for instance, about a recurrence of a pandemic, which is low probability, high impact, what are the sorts of things that we might expect to see before there’s an actual pandemic? There might be an outbreak of some new disease, or there might be a pickup in chatter on social media in certain parts of the world, those sorts of things. And those can become forecast questions within the normal distribution.

But another way to think about it is being in the tails themselves. When we’re looking for reference classes or base rates, or asking what history tells us about these things, rather than thinking in terms of a regular distribution, we might look for instances of a power law distribution, because where those occur, we may find the tails start to get thicker. We see that sometimes with earthquakes and volcanoes and the like. And we can use those sorts of models to think in terms of a power law distribution instead when we’re looking for reference classes.

 

Ellen McCurtin:

Well, that is all the time that we have for questions today. Please join me in thanking John Upton, Megan Siefker, and Warren Hatch for their presentations, as well as our sponsor InEight. If you have any additional questions or comments, please don’t hesitate to click the email us button on the webinar console, and we’ll share them with our presenters so that they can respond directly to you. If you didn’t have a chance to fill it out earlier, you’ll be redirected to the post-event survey shortly. We look forward to hearing how to make our programs work better for you. Please visit enr.com/webinars for the archive of this presentation to share with your colleagues, as well as information about our upcoming events. Thanks again for trusting us with your time and have a great day.


Related Resources

Sundt’s Blueprint to Building the Most Skilled Workforce in America

Since its founding in 1890, Sundt Construction has led the North American construction industry. Today, the full-service contractor continues to maintain its place at the forefront of innovation as it leads the most skilled workforce in America.

Through initiatives emphasizing career awareness, comprehensive training, and inclusive environments, Sundt Construction continues to integrate people, processes, and technology to ensure quality performance and foster a culture of excellence and innovation.

Join InEight’s own CPO Brad Barth, along with a suite of Sundt’s development leaders in an exploration of how attentive organizations can build elite teams despite a global workforce shortage.

This webinar will discuss how organizations can:

  • Obtain ideas on effective workforce development strategies.
  • Standardize processes across disciplines for enhanced operational efficiency.
  • Harness construction technology for superior operational excellence.
  • Gain insights into assembling and nurturing a top-tier construction team.
Why Your Most Complex Projects Demand Next-Level Document Control

As labor shortages and changing project landscapes continue to complicate delivery, organizations worldwide experience greater pressure to improve performance, identify efficiencies, and protect their profits.

Next-level document control offers teams the means to rein in documentation and workflow processes and reclaim the nearly 12% of every project cost that goes to rework. Join Brad Barth, Chief Product Officer at InEight, along with industry leaders from Graham, Sundt Construction, and Orion as they share insights and experiences highlighting their document control evolution.

This expert panel discussion will include:

  • Navigating unique document control challenges
  • Managing cross-organizational workflows
  • Finding the balance between standardization and customization
  • Maintaining accountability at every level

Don’t miss your chance to learn from the capital construction experts and bring renewed control to your projects.

Better Planning, Better Outcomes: See InEight Schedule in Action

Revolutionize your scheduling process. Join InEight VP of Product Nate St. John as he demonstrates our re-engineered scheduling solution, InEight Schedule. Experience how this innovative approach to planning invites more stakeholders into the planning process to help teams anticipate risks, build more reliable schedules, and deliver project certainty.

InEight Schedule offers:

  • Collaborative Markup Features: Enable seamless communication and feedback among team members, to support improved collaboration, efficiency and schedule accuracy.
  • Integrated Lookahead Planning: Anticipate and address potential scheduling conflicts in advance, ensuring smoother project execution.
  • Practical AI: Build your organizational knowledge base and suggestion engine to provide intelligent recommendations and insights that enhance your scheduling decisions.

This webinar promises a comprehensive understanding of InEight Schedule’s innovative features and how they can transform your scheduling workflows. Don’t miss out on this opportunity to define the future of scheduling with InEight Schedule.