Plan, Don’t Panic – Part 3

Originally aired on 4/16/2020

33 Minutes

With two information-packed episodes of Plan, Don’t Panic in the books, we conclude the series with more discussion on successful project planning while working remotely.

Dan and Paul return to answer questions such as: What do you do as a lead planner/scheduler when you can’t reach your colleagues and project stakeholders? And, what if you’re a new planner who doesn’t have the history with your new company to grasp historical perspective?

Also, get ready to hear about “Boring AI” and why these concepts have proven to be some of the most impactful applications of this type of technology. You’ll learn that Boring AI actually translates into some exciting results for you and your planning team.

TRANSCRIPT

Paul Self:
Hey, everyone. It’s Dan and Paul. We’re going to give it just a minute or two. We got a message saying that it was taking a little bit longer than normal for GoToWebinar to start. So we’re going to hang here for just a second and allow people to continue to get in and join. We’ll start in about a minute, minute and a half.

Paul Self:
Dan, you should be playing some elevator music in the background while we’re waiting.

Dan Patterson:
I can try. [crosstalk 00:01:03]

Paul Self:
All right. Well, let’s … There we go. We could try that. We could try that. All right. I think we’re good now. Let’s go on and jump right into it. Week three of our Plan, Don’t Panic series here, and week number five of working from home. I am going a little stir crazy here. I am thankful for one thing. We’ve done probably 60, 70 video conference calls at this point, and it’s clear that some of our colleagues are taking advantage of this as an opportunity to skimp on personal hygiene. I won’t name names, but they know who I’m talking about. I also hear that’s why Zoom delayed their new scratch-and-sniff feature. You know, like the stickers you used to have when you were a kid.

Dan Patterson:
Well, I’m not sure how to respond to that. I know for one I’m getting very bushy. I need a haircut. I think I’m on week seven of not getting my hair cut. But, I did go for a run before this, Paul. So, I have showered and even put on some man perfume for it.

Paul Self:
That’s too bad we don’t have Zoom’s new scratch-and-sniff feature. Moving on, we wanted to accomplish a couple of things today. One, shed a little light on the real-world value behind the adoption of AI solutions for, specifically, our projects, and two, make sure organizations have a roadmap for introducing some of those basic capabilities now instead of waiting for the future to do so.

Paul Self:
I thought we’d start out with … So when I talk to my friends and family about the fact that I have the opportunity to work with AI solutions, they move very quickly to some exciting examples, things like autonomous vehicles. That’s always the number one thing they come up with. Even my brother, who’s a project manager, he comes up with the, “Oh, what about those trucks that mining sites use that are now autonomous?”

Dan Patterson:
That’s interesting, Paul. Because while I am passionately excited about what I do for a living, when I try and explain it to my friends and family, they just tune out and move on to a different discussion.

Paul Self:
So, what’s up with this FedEx truck that you asked me to include on this?

Dan Patterson:
Oh, yes. You asked me to give an example of exciting AI, so there’s a picture of my front yard and a FedEx truck. On a serious note, the reason why I think this is so fascinating … So I, like many people, use a home security system by Google. It’s called Nest. They recently upgraded the intelligence in the system. They call it Nest IQ. What it is, Paul, is the camera is smart enough now to … It’s using pattern recognition to pick up things like familiar faces and also familiar objects. What it does is it eliminates, for example, false positives and doesn’t set off your burglar alarm when the FedEx guy comes up to your front door.

Dan Patterson:
You can even set it … And this is really cool. You can set it, for example, if it starts to recognize the Amazon guy, you can link it to your home automation lock so that the front door opens and the Amazon man can put the parcel in the house, close the door, and the parcel isn’t left outside. It’s really cool stuff.

Paul Self:
That is cool stuff. On a scale of 1 to 10, that’s probably a six and a halfish on the exciting scale, I would say it was good. It was good.

Dan Patterson:
I’d give it an 11.

Paul Self:
But, that’s why I believe … Honestly, that’s pretty cool stuff. I have to kind of break it to them that I actually work with boring AI solutions. I have to give an example of something they can relate to, because they aren’t project managers. Booking.com is the example I always go to. Their version of boring AI is the fact that they use a bot to answer customer inquiries. And they’ve reached the point where 50% of the thousands of inquiries they get each day are answered by a bot. That’s kind of boring AI.

Dan Patterson:
Yeah, yeah. Paul, I’ve already tuned out. But, I didn’t like the fact that you said the stuff that we do is boring AI because it’s not.

Paul Self:
Honestly, I believe it’s actually pretty exciting stuff. There is a study that PwC conducted that kind of confirmed that as well. In 2020, much of the AI excitement is going to come from results that are going to sound kind of mundane: incremental productivity gains, improvements in in-house processes. The key is creating solutions that can navigate a whole bunch of internal systems and look at a whole bunch of data behind the scenes, and then deliver the information we need back very quickly, and do so in a manner that we can actually take action on it as individuals, as humans.

Dan Patterson:
So not to toot our own horn, matey, but you highlighted those four or five. So, we manage risk very well. We automate routine tasks. In fact, we actually-

Paul Self:
We do.

Dan Patterson:
… [crosstalk 00:06:28] generation of [inaudible 00:06:28] literally. We help people make better decisions. And gathering forward-looking intelligence. I think that means predictive analytics. And if it does, we do a little bit of that as well.

Paul Self:
I agree. So, we’re squeezing lots of transactions and time out of our processes and delivering better results while we’re doing it. I think we’re genuinely on to some exciting concepts here and things that we put in place.

Dan Patterson:
What are the percentages?

Paul Self:
The percentages are ultimately where these individuals are going to make investments in 2020, based upon a study they did of executives. They said, “Rank the top three areas where you’re going to make an AI-related investment and the benefit you expect to get.” And this is what the executives they interviewed came back with.

Dan Patterson:
That’s cool. Well, I think if nothing else, it’s proof that what we believe in and what we’re pursuing in the project management space is meaningful, right?

Paul Self:
It is. It is.

Dan Patterson:
I guess tying that to what we do, I think in the last couple of sessions, we focused very heavily on risk, risk management, and what we call risk intelligence. I know last week we took a bit of a deeper dive into AI, or augmented intelligence. Given this is the third and final session in our little podcast series, I think what we really want to do today is bring together all of those different domain areas: how augmented intelligence and the risk intelligence piece that we talked through fit together, and then adding to that the concept of human intelligence. I know we touched on the computer being able to be a portal for project collaborators to provide their expert opinion, and then capture that expert opinion, throw it into the mix with the AI historical stuff, and then make informed decisions and suggestions. I think what we’re going to walk through and talk through in the presentation today is the convergence of AI, HI, and RI.

Dan Patterson:
I think the other thing that’s really, really important to note, and we haven’t touched on this to date, is this concept of front-end planning and what I call field execution planning. Unfortunately, they both have the acronyms FEP. But in the old days, I think project management theory was you plan the work, and then you work the plan. It was very sequential. And to the point where I think I was brought up to believe that once you’ve done your planning, as a planner, you’re done. You then throw it over the fence to the execution and the construction folks and off they go. What I’ve realized in recent years is that’s actually complete and utter rubbish. Because, yes, of course you do planning in the planning phase. But when you get to the construction and execution and even closeout phase, on a daily basis, you’re still planning. You’re still re-planning. And it’s because you’re reacting to the reality of what’s happening in the field.

Dan Patterson:
So, this emerging science of field execution planning, I think, is uber exciting. And again, if we can leverage what is being captured in terms of as-builts in the field, if those as-builts can be fed back into the AI engine, then that helps with what we call machine learning, because the computer is absorbing reality literally out in the field, and that’s making the predictions on future projects more accurate.

Paul Self:
We promised everyone today a roadmap for how to adopt AI in the real world in a pragmatic manner and demonstrate how they can do that quickly. I know you took a little bit different approach at the onset when developing an AI solution. And there are two primary schools of thought, one around big data and one around the approach that we adopted with InEight Schedule. So, why not go the big data route, and why go the route that we chose?

Dan Patterson:
Before we do that, are you going to let me do my PowerPoint slides that I didn’t do last week or you’re not [inaudible 00:10:52].

Paul Self:
No. No. Let’s get into product.

Dan Patterson:
All right.

Paul Self:
Let’s show people how this can actually work in the real world.

Dan Patterson:
All right. Well, let me answer your question first of all, so this concept of different approaches to AI. The underlying thinking of AI, Paul, is, really, it’s about intelligent pattern recognition. There’s different ways of doing that. There’s neural nets. Neural nets in many ways rely on what we call big data. A neural network is incredibly powerful if you have very, very large data sets. The problem with a neural network approach is that it’s very much a black box approach in that, yes, it can pick up a pattern, but the computer can’t tell you why it’s picking up that pattern. It’ll just say, “Hey, I found the answer,” but it won’t be able to say, “Well, this is why.”

Dan Patterson:
Now, the alternative approach to pattern recognition in AI is more of a knowledge-based or a knowledge library-based approach where you have what we call an inference engine. The inference engine, Paul, what it does is it uses multiple attributes to try and make those pattern matches. The benefit of the inference engine approach over a neural network, certainly in project planning and scheduling, is that when the computer comes up with its suggestion or answer, not only does it come up with a suggestion, it will actually tell you why it believes in the suggestion that it’s making. Does that make sense?
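
[Editor’s note: the multi-attribute, explainable matching Dan describes can be sketched roughly as below. This is an illustrative toy, not InEight’s actual engine; the attribute names and weights are assumptions.]

```python
# A minimal sketch of an explainable inference-engine match: score candidate
# activities across several attributes and report *why* a match was suggested,
# rather than returning a black-box answer. Weights are purely illustrative --
# note that description alone carries less weight than structural context.
WEIGHTS = {"wbs": 0.4, "phase": 0.3, "description": 0.2, "location": 0.1}

def match(query, candidate):
    """Return (score, reasons) so the suggestion comes with its rationale."""
    score, reasons = 0.0, []
    for attr, weight in WEIGHTS.items():
        if query.get(attr) and query[attr] == candidate.get(attr):
            score += weight
            reasons.append(f"{attr} matched ({candidate[attr]!r})")
    return score, reasons

def best_match(query, library):
    """Pick the highest-scoring candidate from the knowledge library."""
    scored = [(match(query, c), c) for c in library]
    (score, reasons), best = max(scored, key=lambda s: s[0][0])
    return best, score, reasons
```

The point of the `reasons` list is exactly the difference Dan draws against a neural net: the answer arrives together with the attributes that produced it.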

Paul Self:
It does. It does. But, that’s got to come from somewhere, right? I mean, there’s got to be a brain for this AI engine to draw from. So, where does this knowledge come from?

Dan Patterson:
The knowledge and the brain are two different things. The knowledge can be as simple as historical CPM schedules: Primavera, Microsoft Project schedules. I’ll just give you a quick example. This is simply just capturing either an as-built, a baseline, an in-progress, or a completed schedule. That’s one type of knowledge.

Dan Patterson:
I think, though, what we realized early on was let’s not just stop at what the computer has been provided previously. So, we took things like the concept of capturing an as-built schedule and said, “Well, if the computer can understand the deliverables that are associated with the work, the as-built work, then you can simply do a calculation. You divide one by the other, and you get productivity rates.” Well, that starts to get quite intelligent. Because now, all we have to do is throw a historical schedule in the knowledge library. The knowledge library will automatically scrape the durations, the costs. It knows the units, or the quantities rather. And from that, it can derive historical productivity rates. Well, that’s absolutely huge, right?
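
[Editor’s note: the divide-one-by-the-other calculation Dan describes amounts to the sketch below. Field names are illustrative, not the product’s actual schema.]

```python
# Deriving historical productivity rates from as-built activities:
# units installed divided by as-built working days.

def productivity_rates(as_built_activities):
    """Map activity description -> units installed per working day."""
    rates = {}
    for act in as_built_activities:
        # Skip activities with no quantity or a zero duration -- no rate
        # can be derived from them.
        if act["duration_days"] > 0 and act.get("quantity"):
            rates[act["description"]] = act["quantity"] / act["duration_days"]
    return rates

# e.g. 400 units welded over 10 as-built days -> 40 units/day
rates = productivity_rates(
    [{"description": "Weld pipe", "quantity": 400, "duration_days": 10}]
)
```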

Paul Self:
It is. It is. That’s kind of how we start to store all of that information. What else do we have outside of just historical schedules and productivity rates? What else do we need?

Dan Patterson:
Sure. I get asked a lot, “Well, should we just store the good stuff?” My answer is no. Because if the computer is aware of the bad stuff … So, bad stuff is typically in the form of threats, and threats get modeled in things like your risk register, where you have your risk event. You have a probability, an impact, blah, blah, blah, blah, blah. Now, if the computer is aware of this bad stuff, and not only that, if it is aware of where in previous projects those risks previously happened, then, again, when we start to look forward and build a new plan, the computer should be able to say, “Hey, you should be aware of the fact that historically these things have happened. These things have hurt us from a cost and a schedule perspective,” and actually build that into the forecast, whether it’s a cost forecast and/or a schedule forecast as well.

Dan Patterson:
Now, you asked about the different types of data that get stored in the knowledge library. Not to bore you senseless, but, I mean, there’s multiple types, even to the point where, if you want, you can scrape your organizational resource pool and store that. Really, the concept here is that we are storing different types of information. We’re moving away from traditional structured SQL databases into a more flexible, unstructured-type data storage environment. Again, the benefit there is when the inference engine gets its hooks into that knowledge, it’s not confined to certain fields, certain field types, so on and so forth.

Dan Patterson:
And in fact, this concept of the inference engine I think is so fascinating because we’ve got these multiple attributes. If I asked you, “Paul, what’s probably the most common field you’d go and use to search when trying to do an activity match?”

Paul Self:
Yeah, you’d look at description, right?

Dan Patterson:
Well, description, in many ways, is actually less important than things like the associated WBS or where in the project the activity lies, the phase, so on and so forth. And in addition, we’ve also extended the concept … We haven’t just come up with, say, a dozen attributes, right? Different organizations focus on or have varying degrees of emphasis. They may segment their projects by location, by project type, by contract type. It’s like the type of subcontractor that they use. So by allowing the organization to model those attributes, the computer, believe it or not, can actually use these as additional hooks when it goes to cast the net out and try and come back with an informed set of results.

Paul Self:
So, we can easily … This is a knowledge library with our history. Just take a Primavera schedule, a bunch of Primavera schedules, put them into the knowledge library, establish a couple of knowledge tags that are consistent with how I contextualize the projects that I work on within the organization. What do I do once I’m done with that? So, great, I’ve established … I have my knowledge sitting there. How do I take advantage of it?

Dan Patterson:
So really, I think what you’re asking is, okay, this is great. We digitized our organizational project management and construction knowledge. Now what do we do with it?

Paul Self:
Now what do we do with it? Yep.

Dan Patterson:
This is where you actually leverage the brains, as you call it, the inference engine of the computer. Again, we by design took a really different approach to building a CPM schedule. If I just call this Dan Project, very creative, I’m going to give it an estimated duration, an associated estimated cost. What I’m doing here-

Paul Self:
This is a wizard kind of … This is a wizard-driven approach? And why take that approach? Why not just start, like we do in every other scheduling tool, with a blank slate?

Dan Patterson:
The reason being, Paul, is what I’m doing is I’m actually giving the computer context here. I told it the project duration, and I’m giving it some indication as to the type of project, the geographical location, even things like the contract type. What I’m doing is I’m giving the computer some hints as to those hooks that I referred to for the inference engine. Then from that, the computer will actually come back with suggested … You can call them templates or frameworks, but building out the CPM schedule. What is so powerful about this is it’s going off. It’s coming back with a suggested, in this case, WBS structure. It’s also coming back with suggested durations.

Dan Patterson:
Now, in many instances, the level of detail that you may have in your historical schedule may be too detailed if you’re starting with a … Ideally, you should do a top-down plan and then marry it up with a bottom-up.

Paul Self:
Sure.

Dan Patterson:
So, you can actually ask the computer to roll up the levels to the level to which you are comfortable to build out your structure. Again, this is AI smarts here. It’s doing that roll up, that expansion of detail, for you. And the net result is you end up with not only a WBS structure here, but you end up with a WBS structure that has what we define as planning packages, which have associated deliverables and historical quantities. I can look at things like their historical cost. And again, what is so cool about this, Paul, is it’s looking at whether those costs were derived bottom-up or top-down as plug values.

Paul Self:
Sure.

Dan Patterson:
Then, as we start to build out the schedule, what I envisioned from day one in this thing was I didn’t want people just to start with either a blank sheet of paper or basically plan kind of in the wild west. The whole concept of this thing is plan with AI guidance.

Paul Self:
I get why this is helpful. I understand that now I have an outline. It can be as simple as a WBS with some durations and cost tied to it. But, I mean, while that’s helpful, it only gets so far, right? I mean, I still need a detailed plan to execute against.

Dan Patterson:
Okay. You’re challenging the brilliance of this thing a little bit.

Paul Self:
Just a little bit.

Dan Patterson:
I mean, if you want, you can go the old-fashioned route and just create CPM activities just like you would in any of the commercial tools that have been around for donkey’s years, right? Not very exciting. Not very innovative. The usual activity, logic type of stuff creation.

Dan Patterson:
Where we really focused was we said, “Look, let’s build in accelerators.” So against, let’s say, permitting here, this is our top-down forecast for permitting. Well, I know that I have seven permits that I need to obtain. Well, I’m just going to tell the computer how many permits and hit build. And kaboom. The computer’s going to sequence them for me. And not only that, the computer’s saying, “Hey, Dan, you missed what we call a precedence logic link here.” So not only is it accelerating the creation of activities, it’s also acting as my mentor, if you like, for what we call sound structural integrity. So if I tie off that dangling or open start there, it will sign off, and it will get rid of the error.
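
[Editor’s note: the structural-integrity check Dan mentions, flagging dangling logic, can be sketched as below. This is a simplified illustration; in a real CPM network the project start and finish milestones are legitimately open-ended.]

```python
# Flag "dangling" activities: ones with an open start (no predecessor)
# or an open finish (no successor) in the precedence network.

def dangling_activities(activities, links):
    """activities: iterable of activity IDs; links: list of (pred, succ) pairs."""
    has_pred = {succ for _, succ in links}   # activities that have a predecessor
    has_succ = {pred for pred, _ in links}   # activities that have a successor
    issues = {}
    for act_id in activities:
        problems = []
        if act_id not in has_pred:
            problems.append("open start (no predecessor)")
        if act_id not in has_succ:
            problems.append("open finish (no successor)")
        if problems:
            issues[act_id] = problems
    return issues
```

Tying off an open end (adding the missing link) removes the activity from the `issues` report, which is the “sign off, get rid of the error” behavior Dan describes.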

Paul Self:
Coming back to one of the things we talked about … One of the areas in which organizations are investing in AI is to help employees make better decisions, more informed decisions. This falls kind of right into that vein. And not only that. So yeah, you made it easier and faster, but you also … Are we guiding them in terms of … I don’t know what this green box that’s sitting around all of it means.

Dan Patterson:
The green box, this is indicating that a project where you got preconstruction with design and permitting as the predecessor scope here, typically, then procurement would fall within the realms of this green box. So it’s not just guiding me through duration, it’s actually guiding me through what we call phasing as well. For example, here, construction typically would fall within this phase.

Dan Patterson:
But, let’s go back to procurement. Because at this point, I haven’t actually detailed out my procurement scope. I’ve just got my fabrication of my modules and then submittals, approvals, blah, blah, blah, blah, blah. Again, I could use kind of the accelerator and build out my activities manually or … And this is really where the inference engine and the AI smarts come into their own. On the right-hand side here, the computer is saying, “Hey, Dan, hold on. Before you go off and manually do this on your own, take into account the fact that historically this is how long it took for those corresponding procurement activities.”

Dan Patterson:
And actually, what I can do, Paul, is I can actually drill down. And not only does the computer give me the duration, it will also show me that risks occurred on procurement. It shows me the duration. It shows me the logic links to the point where I can embed that logical link … sorry, that subnet into the schedule and actually see how that’s going to show now. Remember your question earlier about the little green rectangle, Paul?

Paul Self:
Yeah.

Dan Patterson:
This is great. I’ve got a subnet, but my subnet is sitting outside of where we typically would expect to see procurement in order to tie into the preconstruction stuff. So again, this is a guide for me to say, “Hey, let’s go ahead and push out the schedule.” Interesting, now, this is actually good. Because what’s happened is … Well, it’s good in a bad way, I guess, because I’m getting a warning saying, “Hey, Dan, your procurement scope now is later than it should be in order to satisfy the upcoming construction scope.” It’s this balance, Paul, if you like, of … It’s like rules of thumb, right? It’s saying, “Look, yes, your procurement activities are good, but they are too late to satisfy the upcoming construction scope.”

Dan Patterson:
And this gets us into the world of the likes of what’s being touted as advanced work packaging, or AWP, where instead of working left to right, you work right to left. And everything you do is driven towards satisfying this construction or execution scope. So in this particular example-

Paul Self:
So it’s almost like a constraint on your project at the front-end planning phase.

Dan Patterson:
It’s kind of like a constraint. What we’re trying to do is end up with a deconstrained execution phase. So if I were to adjust the lag on this, then I would start to pull that back. I would actually see that eventually, now, look, I’m within the realms. It’s gone from red to green. I’m still a little bit late, but I’m early enough to satisfy my upcoming construction scope. I mean, honestly, the simplicity almost hides the power of what this thing is doing.

Paul Self:
That’s interesting. We’ve got a question that came in from one of the folks that’s on the line, and that’s … So when we were looking at the knowledge library, when we were pulling in the subnet, as you’re doing here, that comes from a source from that knowledge library, do you have to create your own? Do we ship the knowledge library with a whole bunch of shared data in it?

Dan Patterson:
I think the question is pertaining to what I call community knowledge. So, does the tool set come with a predefined library? Is that a correct understanding?

Paul Self:
That’s basically the question. Yep.

Dan Patterson:
So by design, no. The reason being is … First of all, I don’t think we are smart enough to understand all of the nuances of multiple industries, whether it’s Govcon or commercial buildings or infrastructure, so on and so forth. Secondly, and this has been pointed out certainly by several contracting organizations that have said, “Look, we want to compare against our own historical stuff, but we’re less interested in comparing to the rest of the world or other organizations, especially if it means you have to cleanse and sanitize the data.” In other words, coming back and telling me I’m in the top 50th percentile doesn’t help me. Coming back and telling me that I’m better than I was previously on my last project is much more useful.

Dan Patterson:
We haven’t tried. I think it’s a little pretentious almost to try and believe that we could come up with this global knowledge library of stuff. Again, we’re taking a very different approach. And no, the organization is better off loading in their own historical data.

Paul Self:
And categorize that data in the right way so they can make use of it.

Dan Patterson:
Exactly. Just as you were doing that, look, my schedule now … By the way, my preconstruction, my procurement, my construction are all in lovely sequence. Look. Green, green, green.

Paul Self:
Nice. I see that. I see that. Surely, the team members that we need … We got a planner scheduler. We talk about the collaborative aspect of the planning process and the need for feedback from our discipline leads. Surely, they don’t have to go through this same process, right?

Dan Patterson:
No, they don’t. And by design, they don’t. Not to spend too much time on this, but we’ve got this simplified version. It’s a subsection of the scope. The idea is that perhaps Dan as a contributor can simply come along and say, “You know what? Yes, I’m very comfortable with certain parts of the schedule,” or, “No, I’m concerned. I need more time.” Or perhaps on my structural scope here for steel deliveries, I’m concerned about a particular risk event here. So again, I’m pulling from the knowledge library, so I’m not having to just start from scratch. I can associate that risk with that particular element.

Dan Patterson:
Then, again, kind of going full circle and kind of closing the loop on this whole planning life cycle, that contribution, along with other contributions from other team members, all ends up in what we call the interactive schedule review view. It’s no coincidence. Look, there’s my contribution. There’s a couple of other contributions. That gives me my what we call our uncertainty distribution. There’s the risk event that I just identified as a contributor. What we’re doing here is we’re consolidating and bringing together all of those expert opinions in a digital environment.

Paul Self:
That’s super. That’s super interesting. I get why you, with your deep background in risk analysis and all the risk stuff, love this. But, I mean, isn’t this just kind of some traditional Monte Carlo simulation-type capabilities?

Dan Patterson:
No, it’s not. I think we’ve actually turned the concept of Monte Carlo on its head, and the reason is this. In the old days, you’d have to take your schedule, load it into a third-party risk tool, go through this ridiculous process of coming up with uncertainty ranges, do the risk mapping, blah, blah, blah, blah, blah. This environment is completely different.

Dan Patterson:
We’re capturing team member expertise. That forms the inputs of the uncertainties. The risk events are already identified. And not only that, Paul. The analysis itself, I can do multiple what-if analyses at the click of a little lever here. All I did here was I said, “Look, account for those uncertainties. Account for those risk events.” And I think this is the most powerful thing we’ve ever done in terms of risk analysis here. This concept of looking at the schedule in real time through a risk-adjusted lens, I can choose any scenario, whether it’s the best-case scenario at P0 or the worst-case scenario at P100. Most organizations, and we’ve been pushing this for years now, at least go for a 75% certainty. So, this dynamic lens allows me to see the impact of those uncertainties and the risk events.

Dan Patterson:
Then, really to close it off, I can look at … And again, I’m using a modified Monte Carlo analysis here. We got rid of the whole random concept. I still get my ranging, my best case through worst case. I get my corresponding contingency. And again, from a reporting perspective, gone are the days where we’d have these ridiculous correlation-based tornado charts. This is so powerful. Because at a click of the button … Or two clicks, actually. But at two clicks, I can see not only which activities are my risk hotspots, I can see whether those activities are hotspots because of the fact we’ve got an aggressive schedule, in other words the green chunks, or is it because of specific risks, the red chunks. I can drill down and look at those and see those specific risks.
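
[Editor’s note: the percentile idea, reading a P75 duration off a simulated distribution and taking the gap above the deterministic estimate as contingency, can be sketched as below. For illustration this uses plain random Monte Carlo sampling; the “modified,” non-random method Dan mentions works differently under the hood.]

```python
# Back-of-envelope schedule risk analysis: sample each activity's duration
# from a triangular (min / most-likely / max) range, sum the serial chain,
# then read durations off the sorted distribution at a chosen certainty level.
import random

def simulate_duration(activities, iterations=5000, seed=42):
    """activities: list of (min, most_likely, max) day ranges, run in series."""
    rng = random.Random(seed)
    return sorted(
        sum(rng.triangular(lo, hi, ml) for lo, ml, hi in activities)
        for _ in range(iterations)
    )

def p_value(samples, p):
    """Duration at the given certainty level, e.g. p=0.75 for P75."""
    return samples[int(p * (len(samples) - 1))]

# e.g. contingency = p_value(samples, 0.75) - deterministic_duration
```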

Dan Patterson:
And right now, I’m looking way down in the weeds at the activity level. Very often, I’d want to say, “Well, okay, is it construction? Is it permitting?” Actually, this is really interesting. I would’ve thought my biggest risk was in construction. Well, actually, the early phase permitting and early site work here is actually one of the biggest risk drivers. So again, the insight that this thing gives is night and day to what we used to be able to offer. Again, it’s partly AI, but it’s also the HI and the RI piece as well.

Paul Self:
I get why this helps organizations with the predictive side of what they would like to get from AI. I get why it squeezes some of those transactions out of the process. So, you took us through the effort of building a plan and leveraging some of those AI capabilities. You took us through the fact that we can get the human interaction as part of that planning process from team members that aren’t necessarily planners. We can look at a potential range of future outcomes and in real time view my project through different risk lenses. But, did you really stop there? Because so often, the plan at this point in time just gets thrown away and the team in the field executes against whatever plan they come up with.

Dan Patterson:
This goes back to the concept of front-end planning versus field execution planning, or FEP versus FEP.

Paul Self:
FEP.

Dan Patterson:
But to accommodate that, we have what we call an interval planning module here. And again, let’s go down to the structural activities that I was working with earlier. We were looking at floor two. What this is showing, Paul, is remember earlier we had those activities and we had risks associated with them. That CPM schedule is like big picture. It’s a two-year look ahead. Well, in reality, in the field, you’re not going to look out that far. You’re going to do your one-week, two-week, three-week, 30-day, 90-day-type look ahead.

Dan Patterson:
What this enables me to do as, for example, a foreman or a superintendent is take the CPM schedule that the planner put together and … Let’s take steel deliveries first, and then we’ll do decking. In this one, I could just create a very simple set of steps. What this will do, Paul, is it’ll create daily steps for that steel deliveries activity.

Dan Patterson:
Now, more usefully, let’s go to this longer activity here. I can select a specific crew. And the crew, this is where it becomes really powerful. This is where the AI engine comes into its own again. Because the productivity rates that we saw earlier in the knowledge library, it’s coming back and saying, “Okay, this welding crew working eight hours a day, two people, this is how much they’ve historically been able to achieve.” Because of that, it calculates the output.

Dan Patterson:
Now, what’s really nice about this is, if for whatever reason we know that the output, or rather the deliverables that need to be installed, is different, all we have to do is type in that quantity, and boom. I don’t know if you noticed. Look, the duration updated on the right-hand side.

Paul Self:
The duration updated. Yeah. So, it gave us-

Dan Patterson:
The computer is reverse … Sorry.

Paul Self:
I was going to say it gave us the initial guidance, and then it allowed me to take action on it.

Dan Patterson:
Yep. It’s reverse calculating the duration based on historical productivity. I can even override the number of people in the crew and say, “Actually, in this case, three people in the crew.” Boom. It updates the duration. It’s linking the number of people that are available with productivity rates. Then, I can subsequently start to manually schedule this stuff so I’m not constrained, for example, to my CPM schedule. You’ll notice as I start to fall outside of the realms of my CPM schedule, the blue bars, the computer’s going to highlight this in orange and say, “Okay, Dan, you’re out in the field. I’m going to let you do this stuff. But just bear in mind you’re not actually in alignment with the big picture CPM schedule.”
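The reverse calculation Dan walks through, duration backed out from quantity, crew size, hours per day, and a historical productivity rate, is straightforward arithmetic. The sketch below is a hedged illustration of that relationship; the function name, parameter names, and the sample numbers are invented for the example, not values from the product’s knowledge library.

```python
import math

def duration_days(quantity, crew_size, hours_per_day, units_per_person_hour):
    """Back-calculate an activity's duration from historical productivity.

    quantity: total units of work to install (e.g. linear feet of weld)
    units_per_person_hour: the crew type's historical productivity rate
    """
    output_per_day = crew_size * hours_per_day * units_per_person_hour
    return math.ceil(quantity / output_per_day)

# Two welders at 8 hours/day installing 480 units at 5 units/person-hour:
d2 = duration_days(480, crew_size=2, hours_per_day=8, units_per_person_hour=5)  # 6 days
# Overriding the crew to three people shortens the duration, as in the demo:
d3 = duration_days(480, crew_size=3, hours_per_day=8, units_per_person_hour=5)  # 4 days
```

Changing either the quantity or the crew size re-derives the duration, which is exactly the “type in the quantity, and boom” behavior described above.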

Paul Self:
So making sure that you maintain alignment with it. So instead of throwing the plan away when we begin execution, we can continue to rely on the plan through the execution phase.

Dan Patterson:
Exactly. Yep, yep. Sorry.

Paul Self:
I mean, that’s an interesting step forward, right?

Dan Patterson:
It is. And it’s also a step away from CPM scheduling. I think for too many years we’ve tried to ram CPM down the throat of field execution. The reality is, out in the field, they’re going to be planning on a daily basis on a whiteboard or a grease board in a construction trailer. They’re not going to want to use a complicated piece of CPM software, trying to wade through Gantt charts that have 10,000 activities. The magic here is we’re marrying that big picture, long-term, CAPEX plan with the simplistic daily plan.

Paul Self:
Kind of taking us to full life cycle, right?

Dan Patterson:
It is. And again, I think … Not sure I can even say this on a webinar. I’m almost embarrassed that it’s taken us so damn long to realize that planning isn’t all about front-end planning. Field execution planning is just as important, but the vehicle for doing FEP and FEP is very, very different. CPM meets interval planning.

Paul Self:
That’s great.

Dan Patterson:
And again, I think we harped on about this last week, and certainly in the current climate that we’re in, risk is, of course, a hot topic. But why would we ever try and put forward a forecast that we don’t have a high degree of certainty of achieving? This concept of real-time risk analysis, modified Monte Carlo, driving towards at least a 75% certainty, it makes absolute sense.
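The “drive towards at least 75% certainty” idea can be illustrated with a toy Monte Carlo run: sample each activity’s duration many times, sum the chain, and commit to the 75th-percentile result rather than the best case. This is a minimal sketch using triangular distributions over a serial chain of tasks; it is not InEight Schedule’s modified Monte Carlo engine, and all the task estimates are made up for the example.

```python
import random

def p75_total_duration(task_ranges, trials=10_000, seed=42):
    """Estimate the 75th-percentile total duration of sequential tasks.

    task_ranges: list of (min, most_likely, max) day estimates per task.
    Each trial samples every task from a triangular distribution and
    sums the chain; the result is the P75 of the simulated totals.
    """
    random.seed(seed)  # fixed seed so the sketch is repeatable
    totals = sorted(
        sum(random.triangular(lo, hi, ml) for lo, ml, hi in task_ranges)
        for _ in range(trials)
    )
    return totals[int(0.75 * trials)]

# Three sequential activities with (min, most likely, max) day estimates:
tasks = [(8, 10, 15), (4, 5, 9), (18, 20, 30)]
p75 = p75_total_duration(tasks)  # the duration to commit to at 75% confidence
```

The P75 total will sit above the sum of the most-likely estimates (35 days here), which is the whole point: the naive single-point schedule is usually a forecast you have well under 75% certainty of achieving.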

Dan Patterson:
Then, I think the other thing that is still evolving, and we’re in the early stages, but we’re having a lot of fun with, is this concept of CPM schedules are all about work. Yeah, that’s great. But then, when it comes to execution, you have to have labor, materials, equipment in order to execute that work. So now we’re starting to push the boundaries with AI in what used to be called kind of resource management, so telling me, the planner, how many people I need to assign to this in order to achieve the work. And you started to see that in the interval planning that I showed you where the computer was saying, “This is how much output you can expect based on the hours per day based on the crew size.”

Paul Self:
That brings us kind of towards what we talked about at the very beginning, where organizations are actually expecting to get benefit from these boring AI concepts that, to you and me, are actually pretty exciting concepts. So, thanks for letting me jab at you a little bit while we were going through that.

Paul Self:
But, talking through them just real quick, so managing risk, that’s clearly what we do. And to answer one of the questions that came in during the session as well, that risk component is embedded within InEight Schedule. It’s part of the product. It’s part of the experience we expect every user to have an opportunity to go through, correct?

Dan Patterson:
By design, we don’t even allow it to be turned off. You should not schedule in absence of looking at your schedule through the lens of risk.

Paul Self:
I love that. I love that. The automation of routine tasks, we no longer have to manually search our historical projects for tasks, right? I mean, we’re automating routine tasks, squeezing that hugely time-consuming piece out of the process. And as a result, we’re removing the human errors that we’re typically prone to make when we’re trying to do that, and helping us make better decisions as part of that plan development process, all with the lens of … Go on.

Dan Patterson:
Sorry, Paul. On the decision-making, I mean, we’re actually helping. Or, the approach of AI, RI, HI is helping with bidding as well. Because the intelligence is coming back and giving insight into over- or under-allocation of contingency, which gets you into commercial margin erosion and things like that. So, it’s not just decision-making during planning and execution. It’s actually decision-making before the project even becomes a project.

Paul Self:
I mean, and that’s exactly what the risk intelligence component is intended to do for us. It’s to help us predict a range of potential future outcomes. Looking at InEight Schedule, it’s a real-world application of AI. Now, albeit we term it boring AI concepts, it can be used today. Well, I term it boring AI concepts.

Dan Patterson:
I was going to say, for the record, I think you are terming it that. I take offense to that, and I think it’s very exciting. I think what we’re doing is … And I know you agree with this. But, I think what we’re doing is unquestionably helping projects drive a higher degree of confidence in their forecasts and put together better schedule forecasts.

Paul Self:
And it’s something that you can use today. You don’t have to wait for some sort of futuristic AI solution. This is a pragmatic approach to utilizing AI inside your organization today, and it delivers a better planning experience.

Dan Patterson:
It’s the real deal. This is a commercially available and widely used application that started off as kind of a wild sketch on a whiteboard, what, three, three and a half years ago. I think you still have-

Paul Self:
Three years ago.

Dan Patterson:
… the photograph.

Paul Self:
I do.

Dan Patterson:
We’ve come on a long way. I’m equally excited about where we’re going in the future with this thing as well.

Paul Self:
Great. Thank you. I appreciate everyone joining for our third installment. We’ll do this again. We’ll do this again sometime soon. If this was interesting to you, you can join us again for … Watch our own AJ Waters and Construction Business Owner to learn how to simplify the process of adopting new digital technologies. That’ll take place on April 23rd at 2:00 PM. You can register at ineight.com/upcomingevent. I’m sorry, upcoming webcast. We look forward to continuing the conversation. Thanks very much, Dan.

Dan Patterson:
Thank you. Very welcome.
