TRANSCRIPT


Jordan Brooks:

All right. Welcome, everyone. I think we can go ahead and get started here. Thanks for joining us today. Our webinar, as it says, is going to focus on Managing Bias in Project Controls. A couple admin items here. Remember to post questions in the Q&A box that’s provided through Zoom, not the chat box. Make sure you’re putting those in the Q&A.

At the end, hopefully, we’ll get enough time to address some of those, as long as we don’t run long on our end. Also, be sure to set some time aside at the end of the webinar; there’s going to be a survey through a QR code that we’d like you to go through and answer. If you’ve got the time, that would be appreciated.

We’re going to start out with a quick poll to give us an idea of who we’re talking to in the audience today. We’ll go ahead and start that poll, give everyone a little bit of time, and then look at the results. All right. Let’s go ahead and look at those results now. Looks like it’s a pretty good spread. That’s good to note.

Okay. Good. That just gives Aaron and me an idea of who we’re talking to in the audience. Nothing more than that. I appreciate you answering it. We’ll move forward from there right into introductions here.

With us today, myself, Jordan Brooks. I’m the product manager at InEight for Scheduling and Risk Management. I spent the better part of 10 years in the industry actually adding bias to project control strategies on numerous projects prior to joining InEight. Not sure if that qualifies me to talk about how to manage bias, but we’ll give it a go today.

As I said with me today is Aaron Cohen. I’ll let him do his intro here.

 

Aaron Cohen:

Thanks, Jordan. Good morning, everybody. My name is Aaron Cohen. I am the Estimate Product Director with InEight. Very similar story: in a previous part of my life, I spent probably about 15 years as an estimator and project manager. Yeah. I guess I was adding bias into projects before I even knew bias was a thing.

Right now, I’m working with InEight and also working in academia, teaching some college courses in project management and estimating as well. Excited to be here this morning and talk about this topic.

 

Jordan Brooks:

All right. Last admin piece here before we jump into the webinar itself. A quick blurb on InEight, for those of you who have not heard of us: we’re a platform solution that helps our clients manage project information, contracts and change, cost and schedule, and design and execution, as it says on the screen here.

If you’re curious about us, have not heard about us, or just want more information, we will have a booth at the AACE conference and expo this June in Chicago. If you get a chance, make sure you come by and check us out there. Ask any questions you may have that we don’t address in this webinar, or anything beyond that.

All right. That takes care of the admin duties here. We’ll jump into the objective of this webinar, what we want some of those key takeaways to be for the participants who’ve joined us today.

First and foremost, we’re going to go through and establish what bias is and what levels of bias we commonly see in the industry today. This will help us set the foundation for the rest of the webinar. That’ll be the first point. Next we’ll look at the impact that bias has in the industry. We’ll give some high-level examples on this, just some brief scenarios that could arise or that we’ve seen before in the industry.

Then once we get to the next bullet point in the webinar, that’s when we’ll start focusing on how we manage that bias we’re seeing, the kind that creeps into project controls. We’ll review what part data, especially big data, plays nowadays with bias. That’s a big hot button in this webinar today.

Then we want to get into specific strategies for managing that bias. We’ll establish what bias is, where we see it, where it’s sneaking into the industry, and then how we manage it. Then last but certainly not least, I’ll turn it over to Aaron. He’ll give some insight into what he sees coming in the future that will move the needle when managing bias in project controls.

Long-winded way there of saying, “Let’s look at how we can manage bias in project controls.” Part one of that first bullet point we talked about: we want to describe what bias really means. The definition I found is a disproportionate weight in favor of or against an idea or thing, which is a pretty simple, reasonable definition to me. Nothing crazy there.

I think most of us that are on this webinar probably know what bias is nowadays. The definition, like I said, is not complicated. Even if we didn’t know the definition, we, for the most part, know that bias exists in our day-to-day lives. I’ll let everyone quickly process what we’re saying, because that’s the basis for the webinar.

We’re accepting that bias exists in the industry when it comes to decision-making and project control strategies. We’ve covered the definition of bias. Let’s continue establishing the basis of bias for our purpose today with the levels, and we’ll look at where we’re seeing those in the industry.

The levels of bias we see in the industry take on many forms and come from different points of a workplace structure. The first one we want to mention is organizational bias. I have it in a downward hierarchy here, with that organizational bias at the top, which we’re saying in this scenario is bias from the past experience of a specific organization.

Organizations absolutely have past experience. They’re going to have some type of inherent bias with that. Even the organization’s values, the ones they may hold near and dear, could create some bias for them. Good or bad bias. There’s nothing wrong with having values, every organization has them, but it’s going to lend itself to some biases in some cases.

These can also have a heavy hand in the systemic bias you see in your organization’s project control strategies. They’re absolutely going to present themselves there, which I’ve seen mentioned in many different AACE presentations, papers, and books. This is nothing new.

To anyone familiar with AACE especially, or really any other project management organization, bias is called out; we accept that it exists. The next level we want to call out is leadership bias, as you see on the screen here. This is bias imposed from leadership’s opinions or expectations.

This can come from any level of leadership, not just management. Excuse me. We can see this oftentimes in group planning sessions where feedback is key. Sometimes you have that loudest-voice-in-the-room effect, where the leader is saying, “No. This is the way it should be,” and everyone just accepts that and moves forward. Obviously, there’s some bias in that.

After that, we want to call out group bias. As you can see, this is a little bit different than the example I gave you with leadership bias. We talked about how someone in a leadership position can impact a group, but how can a group impact individual thinking?

We call this bias from groupthink; think risk workshops done with an entire project team in one room. It becomes less about the loudest voice in the room and more about feedback that’s not being relayed because someone doesn’t want to have a dissenting opinion, which we see quite often in the industry today.

The last one we want to call out on screen, and for this webinar’s purposes, is human bias. Obviously, this is the one that everyone accepts. All humans have bias. Everyone’s personal experiences are different, so your bias is going to be different. Bias at this level is human nature, and it’s not the same from one individual to the next, which is expected.

Aaron, any levels of bias you have seen in your experience in the industry today?

 

Aaron Cohen:

Yeah. I think all of the categorization that you have there is accurate. Maybe approaching it from a little bit more of a practical angle, what I’ve seen anyway would be some examples that most people can relate to.

Most construction companies have an estimating process. A lot of them actually have dedicated estimators doing that function. When you think about what it takes to drive revenue for a company, you have to estimate your work. That function largely falls to estimators.

Depending on the type of work that you do, you can often take a project and get into a very large amount of detail: understanding what the requirements are, foreshadowing what you think it’s going to take to actually build the work, understanding what the risks are and how to mitigate or overcome them.

You spend a lot of time getting very invested in a project, and you tend to have a bias toward an optimistic outlook on how the project’s going to go. What can often be overlooked in that case is a realistic approach to what some of the unknowns may be. Specifically, if it’s a project where maybe you don’t have as much experience as on another, or you’ve got some conditions that are unknown at the time that you’re bidding.

All sorts of things can color your lens as to how you approach that. A lot of companies will try to balance that out with a bid review process where they have somebody that comes in that’s not as familiar with the job, that can challenge the assumptions that have been made and make sure that the company has a good approach going in.

Then I think you can take the exact same company and flip the script on that: take a look at the field execution side, where you’ve got your project management team. I think project managers are notorious for being overly conservative in projections about how a project may end up, what still could happen, and the things that could still go wrong, even though you’re 80% of the way through it, you’re beating budget, and everything’s going great.

There’s really that positional bias, if you will: the way that your function or your role in an organization can color the lens of the way you see the world and how you see things playing out. Yeah. All different levels. I think one of the most important things is to just be aware that, as Jordan said, we all have it. We all carry bias with us to some degree or another, even if we don’t think we do or don’t recognize it.

 

Jordan Brooks:

Absolutely. Yeah. All great points, Aaron. I’m going to reiterate what you said from the practical perspective with slightly more technical terms here. At each of these levels there’s a natural tendency toward optimism bias, which Aaron pointed out, the overconfidence effect, and biases in probability estimation.

There’s political bias, remuneration system bias, and many others; biases such as anchoring, cognitive inertia, confirmation bias, and the planning fallacy all come into play here. All of this has been identified before in AACE technical papers and at conferences that I’ve been at. You hear these quite often.

What we really want to look at is: what impact do these biases have on a project, and how can we help manage them at different levels?

Now let’s look at the impact those biases can have. A very simple example would be when, estimating a project, the team uses the best unit rate or man-hour rate that organization’s ever achieved. This in turn is going to create sensitivity to low production rates at execution time on operations, and it’s going to create sensitivity to low durations in your schedule.

Obviously, as we all know, durations in the schedule should be representative of the unit rates from your estimate. This scenario could also possibly lead to low contingency in the estimate to cover those risks. If these are not offset by a proper risk identification process, it’s setting that project up for difficult times during execution.
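To put rough numbers on that example, here’s a minimal sketch, with hypothetical quantities and rates of our own choosing, of how an estimate built on the best-ever man-hour rate flows straight into an optimistic schedule duration:

```python
# A minimal sketch (hypothetical numbers) of how an optimistic unit rate
# propagates from the estimate into the schedule duration.

quantity = 10_000          # units of work, e.g., cubic yards of concrete
best_ever_rate = 0.8       # man-hours per unit: the best the org ever achieved
typical_rate = 1.1         # man-hours per unit: median across past projects
crew_hours_per_day = 80    # a 10-person crew working 8-hour days

optimistic_days = quantity * best_ever_rate / crew_hours_per_day   # 100 days
realistic_days = quantity * typical_rate / crew_hours_per_day      # 137.5 days

print(f"Scheduled duration (best-ever rate): {optimistic_days:.0f} days")
print(f"Likely duration (typical rate):      {realistic_days:.1f} days")
print(f"Hidden exposure: {realistic_days - optimistic_days:.1f} days")
```

With these made-up numbers, the optimistic rate hides more than a month of exposure on a single operation, exposure that contingency would have to absorb.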

This is a simple example of how something done at estimating time could snowball into something much, much bigger for that project team. Something we’ve seen used in the industry is benchmarking, which we’re showing an example of on screen here. This will help alleviate scenarios similar to the one I just mentioned.

But bias obviously does exist in benchmarking as well. This isn’t something that completely removes bias; it just helps manage it. It’s one way, and that’s what we’re focused on today. The simple answer in the scenario we just talked about would be for the project team to just make unbiased decisions.

But again, by our very human nature, as we touched on earlier, we are all biased. It’s very hard to avoid something that’s inherent to us as human beings. That leaves us with a question: what can organizations and project teams do to minimize the impact of bias in project controls specifically?

This is going to bring us to the meat of our discussion. What methods and tools are out there to help organizations and project teams manage bias in decision-making? Where should project control strategies begin to address those biases? Where do you even start to address the issue of bias when it exists at all the levels we’ve talked about? There’s a lot to think through.

We’ll start with what we see as the foundation of being able to manage this bias. We touched on it a little bit early on: data. Bias in data is becoming, again, another hot button. We’re touching all the hot buttons in this webinar. But how do organizations use data to manage bias when it relates to project controls? That’s really what we want to focus on.

Like we’ve already been leading up toward … No cliffhangers really at this point. What the industry is becoming more and more keen on is what decreases bias when making decisions. The biggest factor in decreasing bias in a decision, from what we’ve seen today, is data. More data equates to more informed decisions for a user.

Obviously, that data is coming from many backgrounds. More organizations in the industry are on the data collection train today than we’ve seen at any other time previously, and I think that will just continue to grow. All organizations are wanting their data, and wanting it now, for multiple reasons.

This data provides the workplace structure we mentioned earlier with invaluable decision-making abilities. Organizations can look at a collection of captured data and decrease bias in decisions through concrete information.

Nowadays, organizations are turning their attention to getting suggestions from machine-learning-enabled systems, which obviously need data to operate. These are generalized statements, but it’s easy to show there’s a correlation between data and the decrease in bias in decision-making.

I’m listing them out here. Per numerous technical articles, books, and journal articles, using data to provide suggestions in conjunction with human subject matter expert decision-making is really where managing bias becomes something of a reality.

There’s a sweet spot between having those AI and machine learning suggestions that you’re getting from that data lake you have, and then integrating them with that human subject matter expert coming in and saying, “This is what we should use. This is the correct data we need to look at, and we need to make our decisions based off this.”
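As a rough illustration of that sweet spot, here’s a minimal sketch, with a hypothetical knowledge base and function names of our own, of a data-driven suggestion that a human subject matter expert then reviews and can override:

```python
# A minimal sketch, with hypothetical data, of the "sweet spot" described
# above: a suggestion pulled from historical data, reviewed by a human SME.

from statistics import median

# Hypothetical knowledge base: production rates (units/day) from past
# projects, already cleansed and normalized, keyed by scope and contract type.
knowledge_base = {
    ("structural concrete", "lump sum"): [95, 102, 88, 110, 97],
    ("structural concrete", "unit price"): [85, 90, 78],
}

def suggest_rate(scope: str, contract_type: str) -> float:
    """Suggest a production rate from history: the median, not the best ever."""
    return median(knowledge_base[(scope, contract_type)])

suggestion = suggest_rate("structural concrete", "lump sum")  # 97

# The human SME reviews the suggestion and may override it, recording why,
# so the overrides themselves can later be reviewed for bias.
sme_override = None  # e.g., 90.0 if the SME knows a site-specific constraint
chosen_rate = sme_override if sme_override is not None else suggestion
print(f"suggested: {suggestion} units/day, used: {chosen_rate} units/day")
```

The design point is that the suggestion anchors the conversation on the historical median rather than the best-ever number, and the human stays in the loop with a recorded reason whenever they depart from it.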

Then you can start to establish somewhat of a workflow to help the organization or that project team build in processes that will help them manage bias. Up to this slide, we’ve stayed high level. We don’t want to dive in too far. As we mentioned, it’s just about managing bias. We accept bias. We’re not going to get into some crazy theorem about bias and where it comes from or anything like that.

But with the short time we do have, we want to look at a sample workflow we put together, one we’ve seen in the industry today or that could be used in the industry today, and it’s going to make the concept of managing bias a reality.

This is the basis of the workflow here. It’s a common workflow for building a project control strategy, especially today, through a recommended process. This is the one we like to see, the one we’ve seen some of our clients using, and it’s helping them out greatly.

It all starts with a step I mentioned earlier, and it encompasses a whittling down of items to give project teams realistic results: compile a data set. We spoke earlier on gathering this data. We’re going to go through what it looks like to normalize it. The amount of data isn’t necessarily a deterministic value.

You don’t need a specific amount of data to say you have enough to get suggestions from, let’s say, machine learning. Even if you’re just using reports to make decisions off of, you’re still using that data to help avoid or manage bias.

There’s not really a sweet spot for how much data is enough data. That’s something to keep in mind. But the more data at your disposal, the less biased those machine learning suggestions, or the reports you come up with, will be. The more, the better, a lot of the time.

This leads to something teams should address to make this workflow most effective. The more data you gather, the greater the need to qualify what is good data becomes. We all know the sayings on what type of deliverables are produced when the items going in aren’t quality items. That’s a soft way of referencing the age-old one that I’m not going to sing, that I’m not going to go into.

But the better the items coming in, the higher quality those are, the better the results on the back end are going to be: those deliverables, those reports, the suggestions from machine learning. It’s important that as you get more and more data, you make sure it’s quality data that you can use.

As I touched on earlier, as you get more data and gathering it becomes something your organization does regularly, data normalization and data cleansing become a need, a must at this point. Another way of saying this: the more data that is collected, the greater the need to cleanse this data becomes.
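As a minimal sketch of what that cleansing can look like, with hypothetical records and thresholds of our own choosing, here’s one way to drop incomplete entries and flag suspect rates for human review rather than silently deleting them:

```python
# A minimal sketch (hypothetical records and thresholds) of upfront data
# cleansing: drop incomplete entries and flag suspect rates for review.

from statistics import median

# Hypothetical raw records: (project, work type, contract type, man-hours/unit)
raw = [
    ("P-101", "earthwork", "lump sum", 0.42),
    ("P-102", "earthwork", "lump sum", 0.47),
    ("P-103", "earthwork", "lump sum", None),   # incomplete -> dropped
    ("P-104", "earthwork", "lump sum", 2.90),   # likely a data-entry error
    ("P-105", "earthwork", "lump sum", 0.45),
]

complete = [r for r in raw if r[3] is not None]
med = median(r[3] for r in complete)

# Flag anything more than 3x (or under 1/3 of) the median for manual review
# instead of silently deleting it; a human should confirm it's bad data.
clean = [r for r in complete if med / 3 <= r[3] <= med * 3]
review = [r for r in complete if not (med / 3 <= r[3] <= med * 3)]

print("usable records:", clean)
print("flagged for review:", review)
```

The 3x-median screen here is an arbitrary illustration; the point is that the screening rule is explicit and the questionable record gets a human look instead of quietly skewing, or vanishing from, the knowledge base.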

Through methods of determining what data is available, mining this data, and then normalizing this data, organizations can do upfront work that’s going to provide project teams with a foundation for improving the management of bias. We see the data gathering happening at an organizational level. That’s where we see the sweet spot: organizations should be gathering their data, housing that data, and then using that data.

Whether that be through reporting again, or through machine learning suggestions, either way you can use that data for your benefit. Then there’s separation by types of work, types of projects, or types of contracts. What other qualifiers have you seen in these scenarios, Aaron?

 

Aaron Cohen:

Just all sorts. I mean, if you look at a company where they manage their business and leverage KPIs, you can get down to all the different scopes of work and what it is that makes your organization successful, and define success. There’s that statistical correlation between the things that you’re doing and the projects that have been successful.

I think, yeah, you’ve got type of contract, type of project, type of work; there’s any number of things that companies have managed to key in on and identify as the real key indicators of success.

 

Jordan Brooks:

Absolutely. Something we’ve seen becoming used more and more in the industry is outside services, organizations set up to help with that data cleansing or that knowledge base setup. It goes a long way toward ensuring tools like augmented intelligence or risk workshops, or even those reports, are successful.

Organizations like to bring in someone from the outside to avoid what we talked about earlier, organizational bias. Obviously, if you’re bringing someone in from outside your organization, they’re going to have different opinions on what data is important. Those qualifiers that Aaron mentioned earlier, they may have another idea of what to use.

We’ve seen that used very successfully in the industry today. Excuse me. Avoiding bias in a risk workshop is a hot topic in the risk world, and that often requires guidance to help manage as well. That’s another place we’ve seen outside consultants come into organizations: to help them run those risk workshops.

For one, you go partway to avoiding that groupthink or leadership bias that could come in. You’ve got someone else leading that risk workshop, and they can drive the conversation away from anyone trying to be the loudest voice in the room.

A lot of the time, those outside services can go a long way in helping not only with setting up what data you should store and what is good data, or what they perceive as good data, with an opinion that’s unbiased toward the organization. They can also help get away from certain voices that drive conversations in those risk workshops.

Next in the workflow, now that we have good data to use, is building the plan. This is something that’s going to happen whether you have data or not. There’s got to be a plan, and it’s got to be a good plan, a successful plan. As we talked about earlier, a lot of times these can be overly optimistic, but you’ve got to have a plan.

That plan could be your estimate or schedule, anything else; we’re talking about project controls here, nothing specific when we say building a plan. It could even be your control budget. You’re going to use tools that allow organizations to take advantage of the data collected upfront in that first step, possibly with augmented intelligence and suggestions, which will help remove bias in plan building by calling back to that cleansed data.

You’ve got your data. You can put it into a knowledge library, a knowledge base. Then with that augmented intelligence, that machine learning, or even the reports like we talked about if you don’t quite have machine learning yet, you’re going to get better suggestions and better, less biased reports. This can also decrease the time it takes to establish the plan.

With things like suggestions, we’ve seen a decrease in how long it takes to, let’s say, build a schedule. A lot of the time you’ll have that estimate, but it’s not tied to the schedule; a scheduler’s got to go create that schedule separately, disconnected from the estimate.

If you have suggestions coming in based off data that could be flowing from the estimate, possibly in the future, then that decreases the amount of time it takes to get that information and build that plan. It’ll also give you insight into decision-making when the human element of this workflow comes into play.

By compiling data into a library that allows a machine learning engine to make suggestions to users, like we’ve been touching on here quite often, you’re giving them the opportunity to use that data with bias managed, bias that could otherwise be introduced from individual or group thinking.

As presented earlier, the more data offered, the more refined these suggestions become. I know that as you get more and more data, you also have the chance of increasing certain biases. Again, help from outside organizations, someone coming in and taking a fresh look at it, can really help with that.

You’re never going to fully remove bias. It’s something that you have to learn to manage, like we’ve been talking about here. It’s especially helpful if that data has been normalized. All of this factors into building a plan with risk data potentially already considered.

We’re showing that in this scenario: in the workflow process we’re working through, risk data is included. If it’s included and the engine is making suggestions based off of that, you’re already taking out more bias, because these suggestions include risks and cover those risks with less biased decision-making already built in.

As more data is added to the workflow, you can see how less bias remains; more bias is removed through the suggestions of that machine learning, if that’s the route you’re going down, and also in the reports you could be looking at. The more data you have, and the more distinct types of that data, planning data, risk data, the more it’s going to decrease the amount of bias you see.

Third step in here. We’ve gone through compiling the data set. We’ve built the plan. Next in the workflow is getting feedback. This is where we think it’s vital to managing a lot of that bias. Again, I’m going to try to avoid saying “removing.” It’s human nature for me to say that, but you’re managing your bias, not removing it.

The feedback stage is where we’re going to introduce those human subject matter experts that I talked about. You see it in a lot of AACE technical papers and presentations: they’ll talk about that mixture of the data, the machine learning suggestions, and the human subject matter experts coming together. That’s what you want to see in a process like this.

Then, in conjunction with this data and AI suggestions, those human subject matter experts can produce a deliverable that has managed the bias and can be used to further manage your project controls. You’re seeing a symbiotic relationship between that human and that AI, those machine learning suggestions.

Then, by allowing users to come into a tool where they can give feedback, in this scenario we’re talking about a schedule, on uncertainties and risks without having to be, say, in a meeting with their peers or with a leader, you can manage bias right there by taking out groupthink, by taking out the loudest voice in the room. They can go in and give their own feedback in a bubble and say, “This is what I really think is going to happen.”

Again, you’re managing bias right there. Just by separating people from each other, some of that bias is being managed at the very beginning, and you’re setting yourselves up for a positive impact in the future. Feedback on schedules usually spans an organization, and it can include individuals outside the organization also.

Like we’ve talked about, those people outside the organization have opinions that are important for the job. A lot of times they’re a stakeholder in the job, so their opinion matters. Anything from estimators, project managers, superintendents, subcontractors, vendors, third-party designers, owners, GCs: we’ve seen all of these individuals and organizations involved in gathering feedback, and it’s a vital part of planning.

Managing the bias that can creep into these feedback sessions is important, so organizations have as much unbiased data as they can going into the next steps of the workflow. The approach in this scenario, augmented intelligence in the plan building step combined with human decisions, is an approach recommended in the technical article “Principles for Quantitative Project Risk Management.”

I also recommend the book “Risk: A User’s Guide.” Organizations can capture contextual analytics and creative problem-solving from the suggestions and feedback.

Last but not least on my workflow here, this brings us to the final step of this, and this is important to note, iterative process of planning and managing bias, which is to risk-adjust your plan. Using feedback in an organization or project team’s risk register, we risk-adjust plans through simulations that incorporate uncertainties and risks.

Then, through these risk simulations, the results are intended to give deliverables that are backed by data, with bias managed as much as possible, and with the understanding that some of the feedback gathered will inherently carry bias. A point here is to call out the inclusion of the subject matter experts.

Having more than one feedback source in the get-feedback stage will help to manage bias that is inherent to the uncertainties and risks used in these simulations. In that feedback stage where the human subject matter experts are giving their feedback on uncertainties and risks, there’s obviously some bias. The more feedback you get, the better: from individuals within the organization, outside the organization, stakeholders on the project. You want their opinions on these.
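As a rough sketch of what that risk-adjust step might look like, assuming triangular distributions and made-up SME ranges of our own, here’s how individually gathered three-point estimates can be pooled so no single voice dominates the simulated result:

```python
# A minimal sketch (made-up ranges) of the risk-adjust step: three-point
# estimates gathered individually from several SMEs are pooled, and a simple
# Monte Carlo simulation turns them into P50/P80 durations for one activity.

import random

# Hypothetical feedback: (low, most likely, high) durations in days,
# collected from each SME separately to avoid groupthink.
sme_feedback = [
    (20, 25, 35),   # scheduler
    (22, 28, 40),   # superintendent
    (18, 24, 30),   # estimator
]

N = 10_000
samples = []
for _ in range(N):
    # Each iteration samples from a randomly chosen SME's triangular
    # distribution, giving every voice equal weight in the result.
    low, mode, high = random.choice(sme_feedback)
    samples.append(random.triangular(low, high, mode))

samples.sort()
p50 = samples[int(0.50 * N)]
p80 = samples[int(0.80 * N)]
print(f"P50 duration: {p50:.1f} days, P80 duration: {p80:.1f} days")
```

Muting feedback judged non-SME, as discussed later, would simply mean dropping that entry from the pooled list before simulating; the trade-off is that each removal reintroduces a bit of judgment bias in exchange for cleaner inputs.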

This workflow is also not intended to be a one-time thing, like I mentioned earlier. Projects, situations, scenarios, they’re constantly changing. To manage bias on a project, it’s important to make this an iterative process, not something the project team or organization goes through one time, executes on that initial plan, and that’s it. That’s not what it’s intended for.

Again, it’s worth noting that this doesn’t apply only to, let’s say, a schedule, though that’s what we’ve shown on screen. It’s a workflow that can currently be applied to estimates as well, and even control budgets. It’s important not to use this on just one facet of your project controls; use it in all facets of that process and that strategy.

That brings us to the last of our bullet points here, and that’s advancements in managing bias. I’m going to turn it over to Aaron. He’s going to give some insight into what he sees coming, what’s going to move the needle in the future for managing bias in project controls.

 

Aaron Cohen:

Yeah. Thanks, Jordan. We’ve got a lot of really great questions coming in, I think. We’ll be sure to leave some time here so we can go through those. But I just wanted to talk briefly about what we’re seeing in the industry, and maybe what I’m not seeing happen yet. Data collection, I think, is going to be the biggest thing we’re going to have to contend with moving forward and evolving.

I mean, we’ve got robotic dogs running around the job site doing laser scans and giving us more information than we’ve ever had. All sorts of sensors, all sorts of equipment, all sorts of things that give us access and the ability to do things. We’re going to have data coming out of our ears. We’re not going to know what to do with it all, which is a vastly different issue than the one we had to contend with when you contrast it to where we came from.

Looking at our business and our industry and how it’s growing, we all know we have issues with labor. We all know we have issues with getting people on site who are knowledgeable about how to build the work. I don’t know how you replace the 30-year-experience subject matter expert, the guy with boots in the field who knows how work gets built.

Those people are becoming fewer and farther between. They’re being replaced by a younger generation of very knowledgeable technologists, but they don’t necessarily have the same experience. It’s always fascinating to me: you get this guy that’s able to walk out onto a job and he just knows, looking at it, whether it’s a good job or a bad job, what’s right and what’s wrong.

He can’t necessarily always articulate the problems, but he’s the one you want to get that influence and input from. When those guys start going away, we start increasing risk in our work without necessarily knowing it. The data can augment that.

Now, what I don’t expect is that these robot dogs are going to all of a sudden collect so much data that we’ve got a predictive AI machine learning engine that can just tell us everything we need to know, so we don’t need to know about construction anymore. I don’t think that’s ever going to happen.

But when those folks who don’t know what they don’t know have a lot more data at their disposal, they’re going to be able to understand some of those things. The example I like to use: how many of you have ever jumped in your car and had your GPS navigation system tell you that you might want to take an alternative route because there’s traffic ahead that’s going to cause a 20-minute delay?

You didn’t necessarily go and ask, “Hey, is there traffic up ahead?” It just watched your patterns, knew what you were trying to do, and predicted that might be valuable information for you. When we talk about predictive analytics and machine learning and all that type of stuff, it’s great at recognizing patterns.

But what it’s not necessarily great at is putting those patterns in the context of what we need: to help somebody that’s working on managing the bias in a project, that’s trying to come up with an estimate with reasonable productivity rates, that’s trying to manage a budget and forecast a cost at completion, to have some pattern recognition in place that’s going to alert people to things that maybe they weren’t looking for.

We’ve got a lot of businesses that have been very, very successful over the years by watching a set of key performance indicators or key metrics: if these things are in line, we’re going to be good. I think our business is changing so fast nowadays that the key indicators that were good for the past 20 years may not be the same things that are going to control what success is for us moving forward.

It’s about being able to evaluate and collect all this data, to use the right data, and to use it effectively: to support the decisions we feel we know are right, to support the guy that has the experience, or to supplement or augment the person that maybe doesn’t have the experience they need.

That’s really where I see things moving forward in relation to technology and how we use it to be more successful in the work we’re building, whether it’s an estimate, managing a budget, or putting together a schedule. That’s what I’m seeing as far as the future and the direction of where we’re going with all this. I don’t know, Jordan, if you had any thoughts or anything additional there?

 

Jordan Brooks:

No. That’s perfect, Aaron. I agree with everything you said there. It touches on what we talked about earlier. We don’t want to pretend like we’re the only ones addressing bias. In many articles, journals, conferences I’ve been to, even in other webinars, the sweet spot everyone seems to agree on is the same: you use that data, you use some of that machine learning, that augmented intelligence.

Obviously, there’s bias even in that, when you’re collecting that data. Then you bring in that human subject matter expert that’s going to be able to look at the data and tell you whether it’s right or not. Should we be using that data? The mixture of those is really what you want when an organization or a project team is trying to manage that bias.

I think no matter what comes in the future, like you said, Aaron, that’s still going to be prevalent throughout the industry for a long time. All right. We did get some good questions. I read through some of those as Aaron was speaking. I think we’ll have time here to address those, not a whole lot of time, but at least some.

First, I’m going to end with this. Here’s that survey I was talking about. You can scan the QR code below there, go in, and please give us your feedback. We would love to hear it: good, bad, indifferent. Again, if you have any feedback for us specifically or questions on it, we will be at that AACE conference in June in Chicago. Please come see us there if you have any further questions we don’t touch on.

I’ve got one I did want to address immediately, Aaron, and then maybe I’ll let you pick off one that you saw. I saw one on here: how does one prevent subjective analysis with data, when one can use and find data to strengthen their inherent bias?

I’ve seen that question a couple times. Like we mentioned earlier, obviously data … Oh, excuse me. Lost the screen there. Are you seeing me, Aaron?

 

Aaron Cohen:

It’s black. You’re still there.

 

Jordan Brooks:

Let me hit Stop real quick and re-share. There we go. Sorry about that. Are you seeing it now, Aaron?

 

Aaron Cohen:

I don’t believe so.

 

Jordan Brooks:

There we go.

 

Aaron Cohen:

There we go.

 

Jordan Brooks:

Okay. Sorry about that. Technical difficulties. With that data, the subjective analysis, and the inherent bias that comes from that data, being able to use that data to come to your own biased conclusions, again, the amount of data becomes important there. Getting more data from different sources is vitally important.

I would say that’s going to help avoid these situations. Also, the number of participants, the human subject matter experts who are giving you feedback, feeds into that. If you’ve got one person using that data to draw you toward their bias and what they want the decision to be, that’s where the other subject matter experts or other people giving feedback step in. You can look at what feedback they’re giving, and hopefully that’ll draw you away from some of that bias and help you manage it.

Again, we’ve mentioned this multiple times: there’s no way of getting rid of all bias in project controls or all bias in human nature. It’s going to be there. It’s all about managing it. I saw a lot of questions that I think circle around that same thing. You’re never going to totally get rid of bias; you’ve just got to learn how to manage it best.

We gave you some ways today to hopefully spur some thought on how you could manage that bias, the best ways we’ve seen it done in the industry today. Aaron, have you got one on there that you wanted to hit?

 

Aaron Cohen:

Yeah. I’m looking. There’s a lot of good ones.

 

Jordan Brooks:

There’s a lot of good ones.

 

Aaron Cohen:

Maybe I’ll hit this one: is there bias in using more data, i.e., more data, more potential ways to gain the data to get to what people want? I love that one because I think it’s spot on. The more you have, the more you can use that information to paint the picture the way you want the outcome to go.

That is in and of itself a form of bias: trying to drive toward the outcome you find most desirable. But again, I think the intent or the objective is to not do that unknowingly, to use the data to support the decisions, and to be open, if you do see things that are contrary, to treating that as an area for exploration, to try and understand what the differences are.

I’ve always believed throughout my career that estimating’s not terribly hard. It sometimes might not be accurate when you don’t understand all of the variables that impact your production and drive your costs. That in and of itself is where the risk lies: understanding what those variables are.

The more data you have that can indicate and point to things that might be going out of variance in some way, the more opportunity you have to understand what those variables are that you need to monitor and use to control your work.

Yeah. I totally agree there’s also that other downside risk: if somebody wants to use it for nefarious purposes, they can. You’re going to have more ammunition, more information to paint that picture that you want.

 

Jordan Brooks:

Absolutely. Yeah. I saw another one here that I actually like. You mentioned machine learning. Do you have any advice on removing bias, or alleviating concerns about bias, given the somewhat black-box nature of machine learning and its ability to generate or extrapolate from the bias of the source data and the modelers of the data?

Amazing question right there. We can absolutely address that one. One way of doing that that I’ve seen done in the industry today: organizations will have a large amount of data, let’s say in a data warehouse, that they want to bring in and use for that machine learning, that augmented intelligence. They’ll hire someone from the outside.

A consultant comes in, looks through the data, mines that data, sees what’s available, and then, without any bias from the organization that hired them, makes decisions on what’s good, what’s bad, what should be included, and what shouldn’t.

You don’t have to do that with one consultant; you can have multiple consultants do it. But that’s one way we gave in the presentation to address that black-box nature of machine learning that was asked about in the question and get some of that bias managed by an outside source.

It’s not always the best answer, but it is one answer. I like that one, and that’s why I went to it, because I’ve seen it done before. I did want to address that one. Aaron, any other one you’re seeing on there? I’m scrolling through real quick.

 

Aaron Cohen:

I like this topic too, just as a general discussion here. One question: the concern with optimism bias is generally not the project team but rather the political proponents. How does one deal with something outside the project team’s sphere of influence?

Man, I’ll tell you, a lot of times I’ve been given a job and given the parameters: here’s your budget and here’s when you’re expected to finish it. I didn’t have a hand in deriving any of that, but I had to deal with it. I think we all have that to some degree.

While it’s not great, I think the best thing that can happen is you get out in front of it early. Again, with more data and more information, you have more ability to justify a, “Hey, there could be an issue with this budget or this approach or this schedule or whatever it is we’re trying to do.”

It’s not just “I don’t like it, so it’s no good”; there’s going to be more information to support your decision, to look back at some of the analytics you’ve got from past work, from other similar jobs, that allow you to support decisions and maybe correct some of those things early enough.

I know it’s not always possible, depending on your environment, to have that level of influence. But generally speaking, I’d say at least raising the issue is doing more than saying, “Well, I guess I’ve got to deal with it and there’s nothing that can be done.”

But yeah, certainly outside influence having an impact doesn’t just affect us as far as managing bias; it’s something we have to contend with throughout the entire project in most cases.

 

Jordan Brooks:

Absolutely. There is another one in here I do want to address, because I don’t want the attendees to get the wrong impression. The question: feedback sought individually might be unbiased but may not be reliable, given non-SMEs giving feedback.

For example, an engineering manager giving an uncertainty range for labor productivity, which only construction personnel are the SMEs for. Group discussions do have their benefits as well. Absolutely agreed. I want to be clear on this one: the feedback stage we talked about is recommended to be at least somewhat individualistic.

You can absolutely do it in a group setting. But by doing it individually, you remove that leadership bias, that loudest voice in the room, that groupthink. The risk workshop step we talked about at the end is absolutely done in a group setting. You’ve gathered your feedback in that previous step.

You come into that risk workshop stage as a group and talk through what feedback is useful and what isn’t. Obviously, that’s when you bring the SMEs into the room and ask, “Is this good feedback or not?” In some programs we’ve seen today, you can mute certain feedback if it’s not useful, if it’s coming from a non-SME and it’s out of left field. You can mute those, and that will remove that feedback, those uncertainties, from impacting the risk simulations.

Again, you’re adding some bias in by doing that, but you’re also removing bad-quality data on the front end. There’s a little give and take there. We absolutely recommend it. And a group setting is absolutely useful; we’re not saying you shouldn’t do it. You should absolutely have group settings as well.

But some feedback is best gathered individually, especially if a group setting would end up muting some of that feedback. Great questions on these. Aaron, did you see any other ones you want to call out here? I think we’ve got a little bit of time.

 

Aaron Cohen:

Yeah. I like that one too. I’ll grab this last one here: you mentioned company values as a kind of positive bias; can you elaborate a little bit on when biases are positive and when they’re negative? Yeah, company values. I mean, I think as a company you’ve got a safety culture, you’ve got a quality culture.

Maybe there’s just a bias that if you’re going to do something right, do something safe, it’s going to take you longer and potentially cost more. I think those can be types of bias based simply on the fact that we’re doing this work and we’re going to do it in a certain way. But I think you can also have a negative bias there.

Not even just company culture and values; it could just be the project team. “Oh, we’ve got this guy on the job, and we’re going to have a lot of change orders on this one.” Those types of things can influence the way you go about scheduling, estimating, and setting up or progressing your control budget.

Those are the types of things we want to try to manage and not have influence those planning activities. But they do. Again, it’s more about awareness, and, to the extent that you can manage it, doing so with the information to justify, “No, that’s not really the case.” We think that every time Joe’s on a job, it goes bad.

But look here: these are the best few jobs that Joe’s done, and he actually does pretty well on these types of jobs, and we didn’t realize that. We’ve got to keep him on this side of town instead of that side of town because he doesn’t have as far a drive to get to work. That’s the type of information you can glean from some of these metrics as they start popping out at you, and that’s how you can start managing some of that bias.

 

Jordan Brooks:

All right. I think we covered the majority of the questions, which were great questions. We appreciate all the feedback and the ammo we got there to back up some of these points. Again, thank you for joining us today, and thank you, AACE, for giving us the chance to talk to everyone on this webinar.

Again, if you have any questions for us, we will also be at that AACE conference in June, in Chicago. Come see us if you have any questions on this webinar or anything else. Again, thank you all for joining us. We appreciate the feedback and the participation.
