Hello and welcome to this webinar, “Your Journey from Digital Thread to Digital Twin.” This event is brought to you by Engineering News-Record and sponsored by InEight. Hi, I’m Scott Seltz, publisher of ENR, your moderator for today’s webinar. Thank you for joining us. Today we’ll discuss the flow and enrichment of information, known as a digital thread, from early capital planning through delivery to post-construction operations via a digital twin, which organizes the artifacts of design and construction into a single, structured repository of mission-critical information. We’ll dive into how, together, these help drive project predictability.
Please welcome today’s presenters, both joining us from InEight: Dr. Dan Patterson, Chief Design Officer, and Max Risenhoover, Executive Vice President. As a globally recognized project analytics thought leader and software entrepreneur, Dan Patterson has over 20 years of experience building project management software companies. He founded BASIS, a company that developed an artificial intelligence planning software tool that was acquired by InEight in 2018. Max Risenhoover joined InEight’s leadership team in 2018 and is responsible for solutions in the virtual design and construction, quality and commissioning, and advanced work packaging categories. He came to InEight following the company’s acquisition of M-SIX, which he founded. And before we begin today’s presentation, please enjoy this brief instructional video on how to use your webinar console.
Welcome to this webinar. Before we begin the presentation, I want to provide you with a few housekeeping items. On your screen you will see a Taskbar with icons. Each icon is assigned to a particular element of today’s webinar. Click on the person icon to learn more about today’s speakers. Throughout the presentation, you can network with others, or submit questions to the speakers in the Q&A and chat box next to the slides. Download resources from the cloud icon. After the webinar is over, please take our survey to tell us how we did. Today’s event is being recorded and archived and will be available within 24 hours. For on-demand questions or comments, send us an email by clicking, “Need help? Email us.” If you experience any technical issues today, please refresh your browser by hitting F5 for PC, or Command+R for Mac. And now I’m excited to turn it over to today’s moderator.
Don’t forget to submit your questions during the presentation and later in the program, our presenters will address as many as possible. Today’s event is being recorded and archived on enr.com/webinars so you can share this presentation with your colleagues. And now I’m excited to turn it over to today’s first presenter, Dr. Dan Patterson. Dan?
Dr. Dan Patterson:
Well, thank you, Scott. And first of all, hello, and welcome, everybody. I know Max and I are very excited to share our thoughts and insights on what we describe and define as a digital thread and a digital twin. And I think it’s probably worth starting off with the fact that the combination of a digital thread and a digital twin, or DT squared, as we like to call the combination, are relatively new concepts, but in the short period of time that they have been leveraged and utilized, the results from adopting a digital thread and a digital twin have been very, very encouraging.
And I think, really, one of the reasons why they are so powerful is the fact that, together, they’re overcoming the age-old challenge of both capturing information during a project lifecycle and also leveraging, understanding, and analyzing that information. Historically there’s been a very ad hoc, disjointed approach to knowledge management. And Max, I know this topic is very close to your heart, but just before I hand over to you, in very simplistic, high-level terms, I like to think of a digital thread as really a representation of the work that the contractor is executing. Whereas on the other side of the equation, the digital twin, I see that pertaining more to the owner organization who, while they’re interested in the progress and the status of the contractor, ultimately really care about the delivery of the asset that they’ve asked the contractor to build.
Max Risenhoover:

Well, as you mentioned, these terms are being shared and they’re evolving; they mean different things to different teams. That’s a sign that it’s a relatively new concept for a lot of people. Really, the term, the idea of a digital thread, originated in military aviation and manufacturing. And I think in our industry, we’ve often looked at adjacent industries for good ideas and best practices around how we can improve the way our teams build the complex projects that we build.
And I think it was something that, when I was founding M-SIX and trying to raise money back in 2007, before the venture capitalists started to see our industry as a really important area to invest in, when I was trying to raise money on Sand Hill Road in Silicon Valley, there was kind of an arrogant attitude of, “Why can’t the construction industry be more like manufacturing? Why can’t it be more orderly?” And they really didn’t get that we have most of the same challenges of a complex flow of information from team to team to build these incredibly complex projects that we do. But it’s done in the mud, with a pickup game of different teams from project to project and different systems from project to project. So I think, really, the idea of the digital thread is to try to remove friction from the handoff of information throughout the building lifecycle and across traditionally siloed teams.
So, really excited to have started to do that in a crude way back in those years when my company began and refined it over the years, and it’s such a natural and organic participant in the desire to start with the end in mind, and to start using another piece of jargon that’s been used a lot, the digital twin, the idea of delivering to our customers, to the owners, a replica of all of the things that were built and all the information needed to operate that facility post construction. So there’s a very natural and organic handoff of information that’s collected all the way back in early design, throughout construction, throughout quality and commissioning, and then handed off to post-construction operations. So they’re evolving terms, but it’s a really exciting area to get in. And we’re excited to show you some of the stuff we’re up to and participate in this kind of group conversation we’re having about what this means.
Dr. Dan Patterson:
And Max, I think it’s really interesting that you bring up the comparison of the construction industry to arguably more technology-embracing industries such as manufacturing. I think when you look back over the traditional project management/construction management profile, the relationship between an owner and a contractor has been somewhat firewalled in many ways. The contractor historically, I think, has worked under the guise of a closed book. Even in environments where there is willingness between an owner and a contractor to share information, because of the lack of protocols and mediums to share it easily in a frictionless manner, as you alluded to, I think it’s been a real challenge.
And I think also, under the pressure of trying to get the job done, historically there has really been an absence of a record of what was done, what issues cropped up, the level of effort expended, the materials used, and so on. Thankfully, I think because of the likes of digital twins and digital threads, there is a rapidly changing and emerging way of thinking about the concept of integrated project delivery, where the owner and the contractor embark on a project as if they are part of the same business or venture, and I think that is breaking down those barriers to communication.
I know later on we’re going to touch on the concept of forecasting and tracking, again, both from an owner and a contractor perspective, through what I call the lens of risk. And so instead of looking at the project from really a best case scenario, I think finally the realization today is that actually it’s okay to take into account some of the potentially bad things that could happen during execution. And so because of that, it’s even more important that we track those potentially bad things in our digital thread. And then again, the importance of being able to not only capture information, but visualize information and really move away from very long pages of tabular-type visualization more to an intuitive 3D modeling-type environment where we can decorate that model with the work attributes, the asset attributes, I think is highly effective. So very excited that finally the construction industry is moving in this direction.
I think, just to quickly add to that as well, one very important point to note and, again, you touched on this in the introduction, is that this frictionless flow of information can and does actually happen throughout the project lifecycle now. Again, traditional project management theory was: we plan the work, then work the plan, and then the project’s finished. Well, that thought process has thankfully evolved. This sounds a bit silly, but a project actually is a project even before it becomes a project. During the concept select or very early pre-planning phase, before project sanction or FID (final investment decision), there is the concept of, “Well, if we can reuse historical information, that will help us form a better cost and schedule forecast.” And then from that very early point, all the way through detailed design, execution, construction or installation, through commissioning and startup, and even flowing into operations, and I know you’re going to talk about the huge value of a digital twin in the handover to operations, this concept of information flow throughout the entire project lifecycle I think is absolutely key.
And then, during that project lifecycle, there’s the concept of the digital thread being continuously enriched… again, you’re going to laugh, Max, but I like the analogy of an acorn in the pre-planning phase that develops into a sapling and then a well-established tree throughout the project lifecycle. And to facilitate that growth and that increasing depth of information, we have frameworks and, again, rapidly emerging methodologies such as advanced work packaging, or AWP, whereby the thinking is we start top-down: we break out the scope of the project at a very high level into construction work areas, or CWAs, and then as the project progresses and more information is available, we further break out those work areas into what we call construction work packages.
And I know, Max, you’re going to talk through the concepts of not only developing those construction work packs, but also what is needed prior to the execution phase in the form of timely procurement and, in turn, timely engineering and design, such that the construction execution part of the project is what we call constraint free. I think the other huge benefit of DT squared is that it’s also driving the marriage between what I call front-end planning and field execution workface planning. Again, traditional project management, I think, has had the mentality of, “Well, we’re going to do all of our planning before execution, and then we’re going to flip the switch into execution mode.” Well, that’s not the smartest way of managing a project, as we’ve seen. There is huge value in continuous, iterative, more and more detailed planning all the way through execution and closeout. So, Max, I know, again, a topic very close to your heart there, so I’m going to hand over to you to walk us through more of the platform concept to support the digital thread.
Max Risenhoover:

Well, and there’s a parallel there in the sense that this is a transformation where all of these teams are trying to have a smoother flow of information from discipline to discipline. We’re doing that within the InEight platform as well. And the siloed teams, they’re not truly isolated; it’s not that they don’t communicate with each other, it’s just that there’s some human nature that makes it a little easier to work on the information within your silo than to work on a smooth integration with other teams who depend on that information. So there’s the idea of a one-way flow and a waterfall approach. As a software developer, I’ve witnessed the transformation over the last couple of decades from a waterfall approach, where you move in one direction only from design through the execution and testing and delivery of software. There are parallels with the building industry and, in fact, a lot of the software folks that I work with borrowed terms and processes from the building industry.
But what many, many software teams move toward is this idea of an agile kind of collaboration that has more iteration and more back and forth and more frequent flow. So if, traditionally, these siloed teams communicate by collecting a bunch of information and then kind of throwing it over the wall in a way that, “Yes. The next in line can ingest it and do something with it,” there’s enough structure to it that, obviously, we’ve been able to build buildings and factories and chip fabs and things very successfully for a long time. But there’s enough friction in that interchange that it doesn’t happen frequently. And when change happens, which it inevitably does, it becomes a little harder to have that flow go in reverse, and have some adjustment made and then flow back down the stream.
So within our platform, we’re going through this exact same transformation, we’re adding more and more powerful integrations between all the solutions on the platform. And whether it’s software communicating directly via API, or it’s a more manual process between teams, in essence, the process is the same. We define data contracts, we get shared terminology, we define naming conventions, and shared data structures and the cadence that we’re expecting those pieces of information to be handed off between teams and building lifecycle phases. And so within our software we’re doing that as well.
And even though you don’t have to embrace advanced work packaging to use our software, there are a lot of great ideas in it, and the CII is doing great work formalizing some of the concepts of the digital thread. So while there’s nothing in our software that requires teams to embrace advanced work packaging, most well-run projects use best practices similar to AWP. So one of the places we started was by creating a shared definition of the hierarchy of AWP data structures, from coarse levels of detail like construction work areas, to construction work packages, to installation work packages, to daily plans, and having a free flow of information such that we’re indifferent about what team and what software is used to create items in that structure.
And depending upon your project, depending on your industry, depending upon comfort with various kinds of technology, this might begin in the schedule, it might begin with a model, it might begin with more traditional processes, it might be herding together a bunch of Excel spreadsheets. But what we’ve tried to do is create this flow of information where, depending upon your role, depending upon the kind of project, you can use the right tool to create these data structures, and then anyone else can update them in their tool. So simple idea, easy to say, but hard to deliver and execute. But we’re making great progress with this, we’re starting to be able to demonstrate really, really powerful stuff. So excited to show that.
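The AWP hierarchy Max describes, coarse to fine, can be sketched as a simple shared data structure that any tool on a platform could create or update. This is a minimal illustration; the class and field names here are hypothetical, not InEight’s actual schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch of an AWP hierarchy, coarse to fine.
# These class and field names are hypothetical, not InEight's schema.

@dataclass
class DailyPlan:
    plan_id: str
    crew: str = ""
    tasks: list[str] = field(default_factory=list)

@dataclass
class InstallationWorkPackage:
    iwp_id: str
    daily_plans: list[DailyPlan] = field(default_factory=list)

@dataclass
class ConstructionWorkPackage:
    cwp_id: str
    iwps: list[InstallationWorkPackage] = field(default_factory=list)

@dataclass
class ConstructionWorkArea:
    cwa_id: str
    cwps: list[ConstructionWorkPackage] = field(default_factory=list)

# Any tool (schedule, model, spreadsheet import) can create or update
# nodes in this shared structure; every consumer walks it the same way.
area = ConstructionWorkArea("CWA-01")
area.cwps.append(ConstructionWorkPackage("CWP-01-A"))
area.cwps[0].iwps.append(InstallationWorkPackage("IWP-01-A-001"))
```

Because the structure is shared, it doesn’t matter which tool or team created a given node; downstream tools read and update the same containers.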
And it’s facilitated by a messaging bus. Basically, when any of these tools create or update these data structures, they’re loosely coupled with each other; each solution is loosely coupled. They can publish a change and subscribe to different changes, and that’s why, if you just look at the database level, we wouldn’t know the difference between an installation work package that was defined in the model, from the planning tool on the InEight platform, or from the schedule. So this is something that we’re building on, building more and more powerful ways to have that information flow back and forth between teams across this mesh of tools.
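The loosely coupled publish/subscribe pattern Max describes can be sketched in a few lines: each tool only knows topics on the bus, never the other tools. This is a generic sketch of the pattern, not InEight’s implementation; the topic names and payloads are invented.

```python
from collections import defaultdict

# Minimal publish/subscribe bus: each solution is loosely coupled and
# knows only topics, not the other tools. Names are illustrative.
class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = MessageBus()
received = []

# The schedule tool subscribes to IWP changes from any source.
bus.subscribe("iwp.updated", received.append)

# The model tool publishes a change; it doesn't know who is listening.
bus.publish("iwp.updated", {"iwp_id": "IWP-01-A-001", "status": "released"})
```

The design benefit is exactly the indifference Max mentions: at the data level, an installation work package updated from the model is indistinguishable from one updated from the schedule.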
Dr. Dan Patterson:
So, Max, you’ve actually got my mind racing now. You brought up the analogies with software development and moving from a traditional waterfall to a more agile environment. It’s interesting because, historically, I think it’s fair to say that major CapEx projects always considered engineering, then procurement, then construction; it was highly sequential, or waterfall-esque, in nature. And I think one of the downfalls of that approach has been that even before you get to day one of construction or execution, the chance of those preceding engineering and procurement elements all completing and handing over to construction on time, with all of that sequential uncertainty and risk stacking up, the probability was very, very low.
And so I really like your concept of working backwards and packaging up the execution scope into those construction work areas and then breaking those down into construction work packages; it starts to introduce more of a degree of parallelism. It’s like old-fashioned computer processors: if you gave a computer 10 tasks, it would do task one through task 10 in sequence, whereas now, with multiple cores, you can execute those 10 instructions in parallel, which reduces the dependency and the risk of delay through that dependency. So the concepts are very, very cool and they make absolute sense.
I think, just touching on that project lifecycle that we introduced, the concept of pre-planning: one of the big benefits of developing the digital thread and digital twin, or DT squared, is that we have the luxury of stepping away from traditional critical path scheduling and saying, “Okay, let’s start with a high-level, top-down cost and schedule forecast.” And again, with recent technologies, advances, and ways of thinking, the likes of AI, artificial or, as I prefer to describe it, augmented intelligence, if the computer now, for the first time, can start to intelligently mine historical as-built performance on analogous scope, then surely we should take advantage of that and use it, if nothing else, as a benchmark to establish that top-down initial thread. And then the same thing on the initial twin, Max. On the right-hand side, maybe you can walk me through the schematic where you’ve got stage one through stage three.
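Dan’s idea of mining historical as-built performance on analogous scope for a top-down benchmark can be sketched very simply: filter past projects to analogous scopes, average their unit rates, and scale to the new scope. The data, field names, and rates below are entirely invented for illustration; real tools would use far richer models.

```python
# Sketch of a top-down benchmark from historical as-built data:
# find analogous past scopes and use their average unit rates as a
# starting forecast. All data and field names here are invented.
history = [
    {"scope": "terminal", "sq_ft": 1_000_000, "cost": 1.8e9, "months": 48},
    {"scope": "terminal", "sq_ft": 1_200_000, "cost": 2.3e9, "months": 54},
    {"scope": "hangar",   "sq_ft": 200_000,   "cost": 0.2e9, "months": 18},
]

def benchmark(scope: str, sq_ft: float) -> dict:
    analogous = [h for h in history if h["scope"] == scope]
    rate = sum(h["cost"] / h["sq_ft"] for h in analogous) / len(analogous)
    pace = sum(h["months"] / h["sq_ft"] for h in analogous) / len(analogous)
    return {"cost": rate * sq_ft, "months": pace * sq_ft}

# A top-down starting point for a hypothetical 900,000 sq ft terminal;
# a benchmark to sanity-check against, not a detailed plan.
est = benchmark("terminal", 900_000)
```

As Dan says, this is a benchmark to establish the initial thread, which detailed bottom-up planning then refines.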
Max Risenhoover:

Sure. First, I have to say that your team and my team joined InEight around the same time. The example I’m going to use to demonstrate, very briefly but in the live software, because some of this is more compelling and easier to understand when you see concrete examples, was born from the very first really challenging, high-profile project that my team worked on back in 2012 or so: the Bradley West Gates at the Tom Bradley International Terminal at LAX. We didn’t use the term digital twin back then, but the Walsh Austin joint venture that was building that $2 billion terminal went to the owner and said, “Our contracts obligate us to deliver all this information that you need for post-construction in this,” basically a shipping container filled with DVDs and as-built models and stacks of O&M manuals and warranty information and things like that, “and you know that you’re going to have people typing for a year to put all that information into your CMMS system.” They happened to use IBM Maximo.
And the JV basically said, “Well, what if we delivered all of that information in a structured database, in a way that you intuitively know you’re way more likely to take advantage of?” And that was kind of the birthplace of something that we would call a digital twin today and didn’t at that time. And we got to do it a few times, both there on that campus and with other mega projects and really complex situations where, in essence, we were trying to work backwards from day-one readiness of delivering all that information to ops. I think that in today’s world, we would do it differently because of the thrust of advanced work packaging, whether you use that term specifically or not; maybe vertical-build folks don’t use that prescriptive definition all that much. But in essence, you’re trying to define these processes in a way that ensures you’re collecting information and doing high-quality work safely, on budget, on schedule, et cetera.
And AWP is really powerful, because when we decompose this big complex thing into finer and finer levels of detail and containers, that gives us the opportunity to make sure that folks in the field are doing efficient work, by removing constraints. We don’t have enough time to go deep into AWP, and I’m sensing that I’m already spending too much time on it at the moment. So I’m going to go ahead and give you some concrete examples of what we learned along the way making this happen. First of all, I would say that this idea of a digital twin doesn’t have to mean a 3D model is at the middle of it. It’s all the information that the facility owner or the owner of the project needs post-construction, but an as-built model is a nice visual index to all that information. I’m going to go ahead and start sharing my screen and show you that example that I mentioned earlier.
So this is the Bradley West Terminal at LAX. Over time, we actually became the standard for the owner to aggregate all of this information for the entire capital program there, so we have the entire campus. But this is the one where I have some good examples in a separate environment where I’m not going to step on the toes of anyone in production. And the idea is that we don’t want to link information directly to model elements; we want to link to a shared definition of materials, equipment, locations, issues, inspections, and commissioning workflow steps, in part because design-intent models are valuable for a while, and then they’re replaced by fabrication models, which are often replaced by as-built models. We don’t want to repeat that work each time geometry changes. And even more importantly, there are plenty of cases where things aren’t modeled that we very much care about in the digital twin.
So a really simple example is you might have a breaker panel that’s modeled in three dimensions, but it would be unlikely to have all the breakers in that panel modeled in three dimensions and yet in a digital twin, we want to understand the relationship of where are the breakers, what equipment is fed by them, can we walk up and down these connected systems from the office or in the field from a mobile device, for troubleshooting, for finding information. So we’ve done a lot of great work to be able to decouple this idea of physical stuff represented by vertices and polygons, and XYZ points in a point cloud, and just let them be the kinds of things that we can link together in this very useful way, with or without 3D. That said, we’re really proud of this 3D engine that we built, and it’s been very, very useful.
So back in 2012, as I said, we didn’t use the term digital twin. But here we have the Bradley Terminal, and if you were looking over my shoulder, you would see that this is very, very interactive. It’s basically a video game, with a frame rate of around 30 frames a second, and there’s a ton of information here. If I turn off the architecture and show the systems, from a distance you can tell that this is an enormous amount of information. This is every system, every sprinkler head, every motor in the baggage handling system, every people mover, escalator, elevator, et cetera.
Plus all the information that’s useful to this owner post-construction: documents, photos, issues, O&M manuals, warranty information, et cetera. To make this a little more manageable and to look in a little more detail at what we mean by digital thread and digital twin, which is really just integrations and a smooth flow of information, let me go to the core terminal’s Level One, Systems Only view. And there’s still plenty down here on this lower level of the core terminal; we see all the baggage handling systems. I’m going to try to remember to pause here because I think over the webcast, the view kind of resolves over a few seconds. In fact, here are all the different systems-
Dr. Dan Patterson:
Oh. It’s rendering, right, Max?
Max Risenhoover:

Yes. To make this a little simpler, I can turn off electrical and fire protection. And so now you can see that we’ve got each system. And if what we’re saying is important to a digital twin, or a digital thread feeding into a digital twin, is the idea of integrations, one way to talk about that is connecting quantities from a model, because that’s one of the things models are great at, with cost and schedule. And I’ll turn back on these other disciplines. For example, in InEight Estimate, the cost tool, I created this kind of pretend throwaway estimate. It’s not detailed or accurate, but it is a container, fed from the model, with a level of detail that allows us to publish quantities from the model to the estimate and then create a link between them.
So if I turn on these features to select in the model and to frame the selected cost items in the model, and I go into this isolate mode where we only see the selected thing, if I wanted to see all the mechanical duct systems, I can just click that row. If I wanted to get a little more detailed and expand it, I could see just the supply ducts, the return ducts, outside air, and the exhaust. So we can look at the quantity, see the connection to cost and schedule, and use it as a way to sanity-check and make sure everything’s complete. Here are all the AHUs, the air handlers, on level one of the core of Bradley International Terminal. I could go look at an individual one if I wanted to.
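The publish-and-link step Max demonstrates, model quantities flowing into estimate line items with a link kept between them, can be sketched like this. The system IDs, quantities, and unit costs are invented for illustration; this is the pattern, not InEight’s data model.

```python
# Sketch of publishing quantities from a model takeoff to estimate
# line items, keeping a link between them. IDs and fields are invented.
model_takeoff = {
    "supply_duct": {"qty_lf": 12_400},
    "return_duct": {"qty_lf": 9_800},
}

estimate = {}  # estimate rows keyed by the same system id

def publish_quantities(takeoff, estimate, unit_costs):
    for system, data in takeoff.items():
        row = estimate.setdefault(system, {"qty_lf": 0, "cost": 0.0})
        row["qty_lf"] = data["qty_lf"]             # quantity flows from model
        row["cost"] = data["qty_lf"] * unit_costs[system]
        row["model_link"] = system                 # link back for isolate/highlight

publish_quantities(model_takeoff, estimate,
                   {"supply_duct": 14.0, "return_duct": 12.5})
```

The kept link is what enables the round trip Max shows: click an estimate row and the model isolates the matching systems, or select in the model and frame the matching cost items.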
Dr. Dan Patterson:
Max, this blows me away every time I see this. I think from the outside looking in, I think, it’s incredible because you started almost looking through the eyes of the owner with that 3D rendering, the digital twin. But now the ability to pull in what I think of as the elements associated with work, whether it’s quantities, or risk, or planned, or even actualized durations from the schedule, you’re marrying up the digital thread entities that we captured, again, in the scheduling tool or the cost estimating tool, the fact that they’re now directly linked to the digital twin, and the way you’re visualizing this is just incredible.
Max Risenhoover:

Well, thank you for saying that. The super exciting and cool thing about it is that whatever the right level of detail or granularity is for a project, and it doesn’t even have to be one agreed-upon hierarchy for the life of a project, because different disciplines tend to think about projects in different ways, whatever the container is that makes sense, we can use it as a place to link information. So to dovetail a little with some of the things you said earlier, it might be a work package or a location or a unit. Again, the coarseness or fineness of this container doesn’t matter.
So if I were to say, “All right, I want to know what the schedule believes the risk is for a particular discipline,” I could go ahead and show you two of the risk visual reports that I built. And frankly, I don’t know enough about it to speak intelligently about it; you taught me just enough to be dangerous. But what I can show you is what we call a visual report. So this risk review that had a p25 confidence, that is something you could probably speak more intelligently about than I can. What we’re doing is draping it over the model using the granularity of package that makes sense for a team. It could be incredibly fine-grained or it could be incredibly coarse-grained, like a UniFormat code. But this is telling us that… Dan, you have a question?
Dr. Dan Patterson:
Sorry. I mean, again, this is just incredible, because prior to this visualization, in the world of risk we have this concept of what we call p-values, where p0 is the best case and p100 is the worst case. And honestly, in the risk world, we haven’t been creative in how to report those, what I call risk hotspots. And immediately here you’re showing me, I think, the p75 or p25 values. Being able to visualize those risk hotspots, from both the cost perspective and the schedule perspective, in this 3D visualization, is mind-blowing.
Max Risenhoover:

Well, and the cool thing is that in the InEight platform, in InEight Model, we don’t throw anything away. So here is what we thought of as the risk profile as of March 15th at p25: there’s a handful of things we’re concerned with; most of it we’re not. This visualization is basically saying anything that’s likely to meet schedule is just semi-transparent and gray. Green means it’s off by a small amount, and as the heat map gets more and more red, it’s more and more of a concern. So the p25 on March 15th looks like this, but the p75 on March 15th looked like this. And because these are stored in the project, we can always go back and review what we thought as of March 15th versus the p25 on May 19th, yesterday, or the p75. And I made it dramatic, and our project is in trouble now, because I wanted to make it more colorful. Obviously, this is fake data.
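The coloring rule Max describes, on-schedule elements semi-transparent gray, small slips green, shading toward red as the forecast slip grows, can be sketched as a simple mapping from forecast slip to a heat-map color. The thresholds, color names, and package IDs below are invented for illustration; a real tool would use a continuous scale.

```python
# Sketch of the heat-map rule described above: elements likely to meet
# schedule render gray, small slips green, shading toward red as the
# forecast slip grows. Thresholds and color names are invented.
def risk_color(slip_days: float) -> str:
    if slip_days <= 0:
        return "gray"        # on or ahead of schedule
    if slip_days <= 5:
        return "green"       # minor concern
    if slip_days <= 15:
        return "orange"      # growing concern
    return "red"             # hotspot

# Forecast slip per work package at a given p-level (e.g. the p75
# snapshot on a given date); data is fake, as in the demo.
p75_slip = {"CWP-01-A": -2, "CWP-01-B": 3, "CWP-02-A": 22}
colors = {wp: risk_color(days) for wp, days in p75_slip.items()}
```

Draping `colors` over the model at the chosen package granularity gives exactly the hotspot view being demonstrated.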
Dr. Dan Patterson:
It’s fake data, but incredibly compelling, because now you’re introducing yet another dimension, which is trending over time. Now, I’ve preached for many years that telling me my risk exposure is X is interesting, but a little bit irrelevant. What I really want to know is, over time, is that risk exposure getting better or worse? If it’s getting better, I’m increasing control over execution; if it’s getting worse, I’m losing control. So the trending you’re demonstrating here, from both a high-risk p75 and a low-risk p25 perspective, having that trending insight and seeing it visually is something we’ve never been able to do before.
Max Risenhoover:

Well, my team and I built a 4D, time-based visualization proof of concept back in 2012. And it was powerful, it was interesting, but the amount of work it took to create it in the first place, and then almost having to repeat all that work anytime there was a significant revision, meant that your return on that investment was really questionable. It took a lot of effort to build, a lot of latency and manual effort, and by the time it was completed, it didn’t reflect the plan anymore.
And so we kind of shelved it until we had more powerful tools. And we’re very, very close to unveiling the updated notion of this where, in addition to draping the model with risk and status and all the other things we’ve been talking about, we’re going to unveil an incredibly powerful way to fly it over time and see what the plan looks like at various points and at various risk profiles. And, in the middle of execution, to show plan versus actual and highlight any deviation between them. So having this very visual way of understanding tabular data is something we’re getting close to making so automated as to be irresistible.
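The plan-versus-actual deviation highlighting described here reduces to a simple computation per container: compare planned and actual dates and flag any nonzero difference. The dates and package IDs below are illustrative only.

```python
# Sketch of plan-versus-actual deviation flagging, per work package.
# Dates and package IDs are invented for illustration.
from datetime import date

planned = {"CWP-01-A": date(2021, 5, 1), "CWP-01-B": date(2021, 5, 10)}
actual  = {"CWP-01-A": date(2021, 5, 1), "CWP-01-B": date(2021, 5, 18)}

def deviations(planned, actual):
    # Positive days = finished late; zero = on plan.
    return {wp: (actual[wp] - planned[wp]).days for wp in planned}

# Highlight only the packages that deviated from plan.
flags = {wp: d for wp, d in deviations(planned, actual).items() if d != 0}
```

Feeding `flags` into a heat map over the model, snapshot by snapshot, is the trending view Dan is asking about: the same computation repeated over time shows whether control is being gained or lost.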
Dr. Dan Patterson:
That leads me to another question, and I hope I’m not putting you on the spot here, but following on from your example of what I described as risk hotspots: because this environment is capturing and tied to that digital thread, is it realistic to assume that from those risk hotspots you can drill down and look at the associated quantities, or find out who the subcontractor associated with that particular element was? Or look at the planned cost versus the actual cost? Again, this concept of jumping between dimensions through this threaded flow of information.
Max Risenhoover:
Absolutely. This idea of shared data structures means that if you discover a hotspot and make an adjustment in one tool, that information flows smoothly to all the other tools. So if a quantity changes or a date changes, everyone stays on the same page.
Dr. Dan Patterson:
Very, very cool. And it seems even over a virtual meeting here, the responsiveness is amazing.
Max Risenhoover:
Well, we’ve done a lot of work to be able to handle mega-project scale. We thought an international terminal would be incredibly challenging; we then thought the whole airport would be incredibly challenging. And then it turned out that high-volume semiconductor chip fabs were three to five times the challenge of airports, so we’ve worked on scale for a long time. But we’ve also worked on connections. So the last thing I’ll show in this live environment is this idea that whether the relevant documents and the relevant rows in the database were created in one tool or another, we want to be able to get to them easily.
So for example, in this return air system, I feel like we had a number of different… No. This must be exhaust fans. So I want to select the tag of this exhaust fan… I see in the metadata it’s called EF-C1.1… double-click on that, and now I see this exhaust fan that’s part of the system we were just looking at. I can tab and see it isolated; I can see what it’s connected to. I can see documents that are linked to it, so if I double-click on this, the label says it’s a photo, and I see a photo of that item. I see warranty information. And I can go from any of these documents to find any of the other elements that match.
So this multi-dimensional linkage, being able to have structured relationships between information of all kinds, is really the heart of our philosophy here. Our team internally, and we’re all software nerds and proud of it, has generically defined all these things as nouns. We have tools to link nouns to nouns, and workflow items are verbs that act on those nouns. So we’ve built this incredibly flexible system that we’re eager to show off. But I will resist the temptation to keep showing more and more things. We’re at a summary level in this webinar, so I’ll hand it back to the slides for us to wrap up.
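The "nouns linked to nouns" idea above can be sketched in a few lines: every artifact (model element, photo, warranty document) is a generic noun, and navigation like "element to photo" is just walking typed links. The classes and tag names here are illustrative assumptions, not InEight's actual data model.

```python
# Minimal sketch of the generic "nouns" linkage Max describes.
# Everything is a Noun; structured relationships are plain links.
from dataclasses import dataclass, field

@dataclass
class Noun:
    kind: str                          # e.g. "element", "photo", "document"
    tag: str                           # e.g. "EF-C1.1"
    links: list = field(default_factory=list)

def link(a: Noun, b: Noun) -> None:
    """Create a bidirectional structured relationship between two nouns."""
    a.links.append(b)
    b.links.append(a)

def related(noun: Noun, kind: str) -> list:
    """Navigate from any noun to its linked nouns of a given kind."""
    return [n for n in noun.links if n.kind == kind]

# Usage: jump from the exhaust fan element to its photo or warranty.
fan = Noun("element", "EF-C1.1")
photo = Noun("photo", "EF-C1.1-site-photo")
warranty = Noun("document", "EF-C1.1-warranty")
link(fan, photo)
link(fan, warranty)
```

Because links are bidirectional, the same structure supports going from a document back to every matching element, the navigation shown in the demo.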
Dr. Dan Patterson:
Sure. Just before we move away from the model example there, Max, I think another very valuable capability of this type of solution is being able to pinpoint what I call geospatial clashes, in other words, places where different crews may be required or scheduled to work not only at the same time but, even worse, at the same physical location. Being able to pinpoint those geospatial clashes feeds into actually updating the digital thread in the form of cost and schedule, because we can then take those clashes, feed them back in as risk factors, and by running our more traditional schedule risk analysis, the impact of those geospatial clashes actually gets reflected in what we call the risk-adjusted schedule.
So it’s a continuous improvement loop: the model highlighted the risk hotspots; in turn, the risk hotspots were tied to those geospatial clash points, or pinch points; and they themselves feed back into the risk model. It’s this concept, as you said in the introduction, of a continuous bi-directional flow of information that is continuously updating the forecast through to completion. And talking of forecasting through to completion, I think one of the huge benefits of this type of approach is that a digital thread isn’t just a record of what has been done in the past in the field, capturing as-built information.
Taking that as-built information and feeding it back into what we call the Knowledge Library actually helps with forward-looking planning, because it helps recalibrate and benchmark the remaining work. For example, we can ask, “How realistic is the remaining work with regards to productivity rates?” Or, “Historically, have we seen quantity growth there?” Or, “During the pre-con commissioning stage, has E&I historically been an issue?” So for me, it really boils down to this bi-directional flow of capturing what’s happened in the past and then using that to help better forecast what we have left.
Max Risenhoover:
Yeah. That’s incredibly powerful. And understanding what’s happening live, in real time, is obviously an enormous concern for these kinds of complex projects, and this same architecture lends itself to that. The idea is a messaging bus sitting in the middle of the various tools, so progress can be communicated from any of them: on some projects progress might come from claiming quantities in the field, on others from an update from a third-party system. Either way, you take information from one tool, publish it to the messaging bus, and share it with the other solutions so they can visualize and communicate it in a way that makes sense for their context and their team.
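The messaging-bus pattern just described can be sketched as a minimal publish/subscribe mechanism: one tool publishes a progress update to a shared topic, and every subscribed solution receives the same message and renders it in its own context. The topic name and message shape here are assumptions for illustration, not an actual InEight interface.

```python
# Minimal in-process publish/subscribe sketch of the messaging bus
# pattern described above. Topic and field names are hypothetical.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Fan the message out to every tool listening on this topic.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []

# Two decoupled "tools" each subscribe to progress updates.
bus.subscribe("progress", lambda m: received.append(("model-viewer", m)))
bus.subscribe("progress", lambda m: received.append(("schedule", m)))

# A field tablet claims a quantity; both tools see the same update.
bus.publish("progress", {"tag": "EF-C1.1", "percent_complete": 40})
```

The point of the decoupling is that the publisher (an iPad in the field, or a third-party system) never needs to know which tools are listening.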
So in this case, we’re showing how the model can communicate progress that was triggered by a decoupled solution, in this case, progress claimed out in the field from an iPad. I mentioned this earlier, and I’ll say it briefly because it could be its own topic. What we found, working backwards, was that the best place to collect the information the owner needs in post-construction, and the best way to measure it to ensure it’s complete and accurate, was to bake the collection of documents, information, and metadata into the quality and commissioning program. That way we can report against it, see who owes us what information, understand when it’s complete, and know with confidence that on day one we’ll be able to hand over all the information necessary for post-construction operation. I’d love to talk in more detail about that, but we want to make sure we have time for some questions. You want to wrap it up, Dan?
Dr. Dan Patterson:
Yeah. Before we open the forum up to Q&A, Max, and without oversimplifying what we’ve walked through, because we’ve walked through some pretty amazing stuff: the key concepts are the digital thread as a permanent digital record of the work needed and executed to build the asset; then, probably even more valuably, a digital representation of the characteristics of that asset through the digital twin; and those two tied together by a frictionless flow of information.
And that frictionless flow of information largely comes about, thankfully, because we are now moving away from multiple point solutions, with all the overhead and complexity of moving semi-compatible or incompatible datasets around, and toward what I call a one-stop-shop single platform where all those dimensions, whether it’s cost, schedule, quantities, risk, and so forth, live within the same ecosystem. That is a massive, massive enabler. We’ve focused in this session largely on the project phase of the asset, but certainly from an owner perspective, we’re seeing huge interest in taking the digital thread and the digital twin and having them become an asset during the operational phase itself. They can help with planning the frequency and cadence of things like planned shutdowns, turnarounds, and maintenance, and so on. So while this is definitely a huge step forward for the project phase, let’s not forget it’s also very, very valuable for the asset as a whole.
Max Risenhoover:
And it’s been exciting seeing that come to life with LAX, in the sense of being able to coordinate multiple huge projects on that same campus. The idea of a path of construction might make sense for a single project, but these projects are competing for space and resources across a tight environment like an airport… I don’t know if you remember airports; I remember going to the airport. I think we’ll do that again one day.
Just that ability to see who’s stepping on whose toes, and to adjust what were separate projects to reflect that information, is incredibly powerful as well. So all of this flow of information, whether it’s across our platform or integrating with third-party solutions, because it’s impractical to have everyone in one monolithic environment, means that building tools to be compatible with other systems is also part of our philosophy. I can’t wait to see more examples of that out in the real world and to have another conversation like this one, with even more detail and even more exciting results to show.
Dr. Dan Patterson:
For sure. So I think we have Scott back online who, I believe, is going to facilitate our Q&A.
Scott Seltz:
Yes, you do. Thank you, Dan and Max. That was a great presentation. I know we have a limited amount of time, so I’m going to try to pose a couple of questions to you in the few minutes we have remaining. One question that stood out is, “Are you able to track material status in the model?”
Max Risenhoover:
We can. And in fact, that’s a great segue from the point about not necessarily being able to rely on all the information coming from a single environment. InEight does not have a procurement system; we have a great integration with Jovix, and we plan to have other integrations. My team and I have defined our architecture on the InEight Model side, and the rest of InEight has done similar things in the solutions they’re responsible for, doing a great job of defining these things generically. So just as we showed visualizing risk earlier, you can visualize procurement status; that’s something we’ve done in an AWP context before.
In essence, any data contract, any schema, to use the technical term for a definition of a set of related fields of information, we can define using some of the stock definitions in our platform or from customer definitions. So it’s very easy to say, “All right. If a customer has built a proprietary procurement management system, or if there’s a commercial solution, we should be able to build a very simple and automated flow to get the latest projected dates for when materials will be available on site.” And not only visualize that in an eye-candy sort of way, but actually have it influence our planning, potentially adjusting, for example, the sequence of installation work packages and daily plans. So yes, I think we’ve got a great solution for integrating with material management systems and procurement in general.
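The "data contract" idea can be sketched as a schema that names the related fields a material-status integration must provide, plus a small validator run on each record arriving from a third-party procurement system before it flows on to planning. The field names below are illustrative assumptions, not a real InEight or Jovix schema.

```python
# Hedged sketch of a data contract for material status. Field names
# are hypothetical, chosen only to illustrate the pattern.
MATERIAL_STATUS_SCHEMA = {
    "tag": str,               # model element tag, e.g. "EF-C1.1"
    "po_number": str,         # purchase order reference
    "projected_onsite": str,  # ISO date the material should arrive
}

def validate(record: dict, schema: dict) -> dict:
    """Return the record if it satisfies the schema, else raise."""
    for field_name, field_type in schema.items():
        if field_name not in record:
            raise ValueError(f"missing field: {field_name}")
        if not isinstance(record[field_name], field_type):
            raise TypeError(f"bad type for field: {field_name}")
    return record

# A record from a hypothetical third-party procurement feed:
incoming = {"tag": "EF-C1.1", "po_number": "PO-1042",
            "projected_onsite": "2021-07-01"}
validated = validate(incoming, MATERIAL_STATUS_SCHEMA)
```

Defining the contract once, whether from a stock definition or a customer's own, is what lets a proprietary system and a commercial one feed the same visualization and planning logic.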
Scott Seltz:
Right. Unfortunately, that’s all the time we have for questions today. Please join me in thanking Dan Patterson and Max Risenhoover for their presentation, as well as our sponsor, InEight. If we didn’t get to your question, or you have additional questions, don’t worry: click the Email Us button on the console, and we’ll share them with our presenters so they can respond to you directly.
Please note that you can find additional resources from InEight through the download tab on your webinar console, that’s the little tile with the cloud and the arrows.