In an industry steeped in tradition, we are on the cusp of an exciting, AI-driven transformation. Join us for part two of this provocative webinar series as we examine AI’s potential within a cautious construction landscape. Explore how current research is reshaping perceptions, propelling AI from scepticism to strategic advantage. See how advanced data integration fuels AI’s capabilities and unlocks the doors to precise scope, cost, and schedule management.
InEight’s Rob Bryant returns as moderator for part two. He will be joined by Ajoy Bhattacharya of Microsoft, Dr. Ali Khaloo of Aren, and Professor Eduard Hovy of Melbourne Connect as they discuss real-world success stories on leveraging AI’s power.
In this webinar, you’ll learn:
- How current research is altering traditional perceptions and paving the way for AI’s acceptance in construction.
- How advanced data integration empowers AI and propels it toward a revolutionary new level of project management.
- Steps to harness AI’s capabilities and foster growth within a traditionally sceptical construction sector.
Please note that this is a replay of a recorded discussion, but Rob and Professor Hovy will join at the webinar’s conclusion for a live question-and-answer session.
Transcript
Rob Bryant:
Well, welcome, everybody. Thank you for joining us on what is part two of a special series of InEight insight webinars looking at the topic of AI. And today, we are going to be exploring the reality and the power of artificial intelligence in construction. There is a lot to talk about. It’s a hot topic. It’s one that’s been looming for decades as we will get into, but plenty for us to get into in the next hour.
There’s a few housekeeping bits and pieces that we will get to in just a moment. First of all, introducing myself. So I’m Rob Bryant. I’m the executive vice president for InEight in Asia Pacific. It’s my pleasure to work with a number of customers in this part of the world. I also have the opportunity to get to meet a number of different partners and strategic partners that we work with in different capacities as it applies to how we get out to market and what we learn and what we bring into the solutions that we have today.
Just to touch on a little bit of housekeeping before I get into the main event. We have on the screen for you a number of different ways that you can navigate. Obviously, you’re all muted, but we do want this to be an interactive session by way of the questions that you put through for us. So please feel free at any point to message the host through the chat feature for any troubleshooting, but importantly, ask some questions and put those in through the Q&A feature. You’ll find that on the screen there. Please do put those in. We’ll see if we can take those as we work through this session. At the end, if we have some really great questions, and I’m sure there will be, then we’ll make sure we come back to you with those if we don’t have a chance. But I’m going to do everything I can to make sure we answer those for you.
All right. Let’s get into some introductions so you know who you have talking. But before we do that, just one thing just to give it a context. So the bigger picture for InEight. What we are doing is really trying to challenge how the industry approaches project controls. I guess that’s the reason that we’re having these conversations is because we want to challenge how we think and how all of our clients think about how well they can apply data to make decisions. So with that said, I’m going to provide some introductions for you to our panel today.
All right. We have with us a very esteemed group of individuals who bring different perspectives that are going to help us explore this topic in full. We have with us Ajoy Bhattacharya who is from Microsoft. He is working currently in a number of different areas. He’s got a very rich background in terms of his experience in the industry and things that he’s brought into his role that he will explain to you in just a moment.
We also have Dr. Ali Khaloo, who is running a very enterprising business that is on the cusp of innovation, and we’ll hear more about that as well. And also Dr. Eduard Hovy, who is someone I’m pleased to say I’ve got to know over the last year or so, working at Melbourne University and the director of Melbourne Connect, which he calls home here in Melbourne. So first of all, Ajoy, I’d like to introduce you, and if you could give us an introduction to your background and the perspective you bring.
Ajoy Bhattacharya:
Yes, hi. Good afternoon or good morning, depending on where you are. So yes, Ajoy Bhattacharya with Microsoft. I am a senior technology strategist and industry advisor. I come from industry. I worked at a general contractor, namely Suffolk Construction, prior to joining Microsoft about a year ago. So I have a lot of experience in how to bring technologies to construction and operations, and, more importantly, adoption.
Rob Bryant:
Thanks, Ajoy. Looking forward to hearing more about your perspectives there. Ali, tell us a little bit about yourself and the business that you operate as well.
Ali Khaloo:
Absolutely, Rob. Thank you so much for having me here. So in terms of who I am and what we are working on: as you mentioned, my name is Ali, and I’m the CEO and co-founder of a venture-backed startup called Aren. We are based in New York City. We started almost four years ago, and we are a company playing in the field of infrastructure tech and construction tech, if you will. And basically, the technology that we’re bringing into the market is something that we worked on for the past, I would say, 10 to 12 years, in terms of different IPs that I created, mostly at the university level while I was a PhD student and later a postdoc working in structural engineering. A mix of structural engineering and AI, I think that’s the best way to put it. So I always wear two hats at the same time.
And in terms of the company, we all know that with infrastructure globally, we’re not doing well. It is crumbling, and that’s almost everywhere. And these assets are pretty old. They’re past their design life. We are spending money, but at the same time, we’re wasting a lot of money. So the goal of the company is to create this software-as-a-service civil infrastructure management platform to help both the asset owners and the engineering firms that manage heavy civil infrastructure, bridges, dams, et cetera, et cetera, and we’re using a unique combination of AI and civil engineering to do so. In a nutshell, we’re building a digital health record of these assets, and the whole goal is to minimize the risk of failure and optimize the spending. And currently, we’re working with different clients globally.
Rob Bryant:
Fantastic. Looking forward to hearing more about that as you respond to some of our questions. Earlier, I know in our lead-up, you had some really interesting examples, so thank you.
Ali Khaloo:
Absolutely.
Rob Bryant:
Professor Hovy, welcome. If you’d like to introduce yourself and just give the audience a bit of a background on your experience in AI and your role today.
Eduard Hovy:
Yeah. Hello, everybody. My name is Eduard Hovy. I’m a computer scientist. My research is on machine learning and natural language processing in AI. I used to be at Carnegie Mellon University doing that. And then I went to DARPA for two years because I was kind of bored with just the academic game. I wanted to see how you can take academic stuff and make it real. And after DARPA, I came to Melbourne to head Melbourne Connect, which is a kind of institute in which you have engineers and computer scientists and things on the one hand, and you have a bunch of companies including Rob’s company InEight on the other hand. My job is to try to bridge those two, to listen to the problems in company land, to go to academia land, and say, “Hey, guys. Let’s see if we can find something that can be useful perhaps that can help.”
Rob Bryant:
Fantastic. And we’re pleased to say that we’ve been engaged in doing that, and it’s been a great journey so far, so thank you. All right. Well, I’ve got an opening question for you to get us started, and I think one of the key things we need to do is really bust open some of the myths and the misconceptions that exist around AI. There’s a lot of chat about AI, and that’s no pun intended. But I wanted to start by asking each of you, based on your current use of AI, what do you feel has been the biggest misconception of how it can be used in construction? Because there’s some skepticism. There’s also some sort of silver bullet syndrome around some of this too. So starting with Ali, given the business that you are operating, what’s been your experience of that in terms of misconceptions?
Ali Khaloo:
That’s a very good question. Also, let’s keep in mind that the construction industry as a whole, it is not tech savvy. It’s famous for not adopting technology fast. But even putting that all aside, one of the things that I keep seeing in the industry is, for a good amount of time, it was always about, “Oh, this AI, there are going to be solutions powered by AI that’s going to come to market. It’s going to happen at some point.” But everybody thought it’s going to happen, I don’t know, 10 years from now. Nobody was like, “Oh, this thing can actually solve some of our pain points today.”
One of the things that happened in the past year, with ChatGPT and people starting to use it for different things, like writing emails and so on, it was like, “Oh, this thing is very powerful. I can use it on a daily basis.” And now you’re seeing… Actually, there are some companies out there in the construction business that are, for lack of a better word, freaking out, in the sense that, “Oh, we should do something. We should adopt this technology. We should start to think about how we can change what it is we are providing to the business as a whole.” So the biggest issue so far was, “Oh, this technology is going to come, but it’s going to come later in the future.” And I think now we are seeing more and more people talking about it, thinking about using it, and willing to test it. And yes, it is still a learning curve, but now that willingness starts to be there.
Rob Bryant:
Yep. Yeah, I think there is an awareness, so people are just starting to see how it can really be used. Yeah, I think it’s a good point.
Ali Khaloo:
It can be used today. I think that’s the main thing. It’s not like it’s super futuristic. It’s actually right here right now. We can use it.
Rob Bryant:
Yes, yes, yes, yes. No, that makes sense. When you get out to market, what are you seeing in terms of people’s perhaps misconceptions of how much it can do? Is there a misconception there as well?
Ali Khaloo:
Yes and no. Yes, in the sense that sometimes people are overestimating and sometimes underestimating. So sometimes it’s like, “Oh, it should work a hundred percent of the time. No issue whatsoever.” No false positive, no false negative, stuff of that nature. No hallucination when it’s talking about generative AI. So sometimes it’s that, “Oh, it should work a hundred percent.” Which you’re like, “Hey, even human beings are not performing a hundred percent of the time.” So that’s one thing. And the other piece is sometimes it’s underestimation, which is like, “I don’t think it can do this.” And it’s like, “Oh, it can actually. It’s been doing that for the past 10 years in other industries and now you’re bringing that to construction.” So that is what you’re seeing in terms of underestimations and overestimation in different parts of the industry.
Rob Bryant:
Okay, so a lot of learning still to happen. Ajoy, what’s your perspective on this? What have you experienced? Because you come from, as you mentioned, the construction industry, in general construction, and now, you’re on the technology front. What’s been your experience?
Ajoy Bhattacharya:
Yes. So, firstly, AI has been around since 1956. So the thing that’s really important here is that the world is abuzz about generative AI, and that’s a very important distinction. What I’m seeing is a lot of CIOs that I’m engaged with ask, “Can AI tell me what’s going to happen on the job tomorrow?” And that’s when the red flags kind of go up in my head and I’m like, “Hold on, let’s talk about what it is. What it is, what it’s not, just as importantly, and what it can do.” Because there are tools out there that can take your data, parse it, and do things like regressions and correlations, to tell you the correlation of something happening, of an output happening, of an event happening. But AI in itself, while it does predict the next word in LLM format, is not necessarily predictive.
That’s a big misconception. People ask, “Can you tell me when the next safety incident is going to happen on a job site?” That tells me that we need to sit down and level set and really explain what it is, what’s the power of AI, especially generative AI, and why the world is abuzz about it.
Rob Bryant:
Yeah. Okay. Ed, I know you have been working in AI for a good few years. You’ve seen its evolution. So, Eduard, have you experienced… What are you seeing in terms of that misconception? Thinking about the construction industry, and I know we were talking about it just before, so what’s your take on this?
Eduard Hovy:
I think-
Ajoy Bhattacharya:
I think from a-
Rob Bryant:
Sorry. Sorry, Ajoy.
Ajoy Bhattacharya:
Oh, sorry. Oh, go ahead.
Rob Bryant:
Sorry. That’s all right. I’ll come back to you. I know there’s a wealth of things there. But I just would say, yeah. I’ll grab Ed, and then we’ll come back around.
Ajoy Bhattacharya:
Absolutely. Sorry.
Rob Bryant:
That’s okay.
Eduard Hovy:
I remember AI had a buzz of hype, and then there was what they call the AI winter in the ’90s, where people grew disenchanted with it and said, “Hey, this thing isn’t really working. It’s not good for me.” And then it sort of flattened out. And now we’re in another hype cycle. And of course, AI people, responsible AI people, are a little afraid that the world is going to turn negative again and go into another AI winter, because it is of course the case that AI, even with all the hype that we have today, doesn’t do all the magic that you read in the newspapers and so on. So there’s a lot of disillusionment sometimes, when people think, “Oh, this new thing, because it’s English in, English out, I can just tell it anything, and because it can help me plan my parties and tell me a recipe, and tell me how to plan my tour in Egypt or whatever, it’s now going to solve my problems in construction or anywhere else.”
Of course, it doesn’t. It’s just a language model. It’s just a bunch of patterns. Modern generative AI is a bunch of patterns that it assembles when you give the prompt; it pulls the right patterns together and gives you some output that looks reasonably coherent, but may not have a lot of content. But that doesn’t mean that it isn’t able to do things for you. So as Ali was saying, and as Ajoy also mentioned, old AI, pre-generative AI, has quite a lot of bespoke, tailored tools, like cost prediction or looking at process flows and trying to find optimizations and things like that. That’s there. That works. That always worked.
And so, in sort of very specific areas, very bespoke areas, AI has been helping for a long time and it has been helping other industries perhaps more than construction, but that’s just because construction people sometimes are a little conservative in adopting things. This new capability, nobody really knows today what it can do, and how much it can do. And so the danger is that people jump on it with too much excitement and get disappointed. The other danger is that people don’t do anything and then they just lose out on the opportunities. That’s what I’m seeing.
Rob Bryant:
Yeah. Wow. It really is a case of waking up to what’s possible, as much as realizing what the limitations are at the same time. So a lot of education is the thing that I’m hearing.
First question I’d like to put out to our audience at this point as well because we’d like to get your take on this as we work through. So the first of our poll questions. We want to ask everybody, “Has your company started to use AI to support operations?” So have a look at that question. Please vote on the poll, and we’ll come back and see what everyone thinks as we get through a little bit more from our panel.
All right, so moving along. Next question I’d like to ask you all that’s related to this. How can advanced data integration enable AI and unlock greater efficiency in things such as cost, scope, and schedule on capital projects? These are the burning topics that the industry is challenged with. So I’m keen to explore the critical topics that will help advance the industry in terms of its profitability, its productivity, all of the things that are challenging us today. Is AI part of that answer? Is there an application for AI to advance the industry and to address some of the problems that we’re experiencing globally? And I know this is particularly true in North America as much as it is in Australia, around resource management, scheduling accuracy, budgeting accuracy, real-world examples that you might have. Ed, I’d like to start with you on that one. Just thinking about those topics that we’ve talked about in terms of base points of data and augmented retrieval, things that might be of relevance in these questions.
Eduard Hovy:
I know several projects that I can point to where data has been useful and has actually helped. So there’s a study that was done with about 71 different jobs, from households to larger jobs, to do electrification, to bring power into the building while it’s being constructed, and it compared a regression to a neural network just to say, “Well, how much is this job going to cost me?” So you put a bunch of data in there and you say, “Okay, what are the parameters? What’s the square footage? How much wall space do I have? How much light do I need in each room? How many rooms are there, et cetera, et cetera.” Then you predict the cost, and after the job, you look at how accurate you were. And neural networks turned out to be actually a lot more accurate than regression, according to this study. That was something that I just happened to pick up recently.
There are quite a lot of this kind of analysis using data to do predictions, whether it is for scheduling or for cost prediction or for personnel usage management, et cetera, supply chain. That exists, and a lot of that can be used today. So now, you come to the modern era. When you come to generative AI, it’s so easy to just throw a bunch of data into something like ChatGPT, and now with the newest ChatGPT, the one that OpenAI announced recently, like two days ago, you can tailor your own machine. You get their machine and you put your own data on top and you have a very big window, 300 pages of data you can put in, and this thing can eat up numbers and do stuff with numbers too. Now you think, “Oh, I’m just going to throw my numbers in and it’s going to work.” Well, it doesn’t do…
ChatGPT, generative AI doesn’t do data analysis of the traditional kind. So there’s no guarantee it’s going to give you the right answer, so don’t use it wrongly. We come back to what you said before. You must know what it can do so that you use it properly. But if you use the right tool as everybody in construction knows, you can do your job. So there are examples.
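The regression-versus-neural-network comparison Ed describes can be sketched in a few lines. This is an illustrative toy, not the study he cites: the job parameters, the cost formula, and the data are all synthetic, and which model does better depends entirely on the data.

```python
# Toy comparison of linear regression vs. a small neural network for job-cost
# prediction on synthetic data (hypothetical features and cost formula).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 400
# Hypothetical job parameters: square footage, wall area, room count, lighting load.
X = rng.uniform([50, 100, 1, 5], [5000, 8000, 40, 200], size=(n, 4))
# Assumed cost (in $k) with a nonlinear interaction term plus noise.
y = 0.02 * X[:, 0] + 0.005 * X[:, 1] + 0.0005 * X[:, 0] * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit both models; the scaler matters for the neural network's convergence.
linear = make_pipeline(StandardScaler(), LinearRegression()).fit(X_tr, y_tr)
neural = make_pipeline(StandardScaler(),
                       MLPRegressor(hidden_layer_sizes=(32, 32),
                                    max_iter=5000, random_state=0)).fit(X_tr, y_tr)

# Score both on held-out jobs, i.e. "after the job, look at how accurate you were."
print("linear MAE:", mean_absolute_error(y_te, linear.predict(X_te)))
print("neural MAE:", mean_absolute_error(y_te, neural.predict(X_te)))
```

The interaction term is what gives a nonlinear model its opening: a plain regression cannot represent "cost per square foot rises with room count" without hand-built features.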
Rob Bryant:
Yeah. Okay. That’s interesting. Let’s see what you’ve got to say about this, Ajoy, because I know you’ve seen it. Again, you moved into the role you’re in today because of some of the frustrations you’d seen in the construction sector. So tell us about what you are seeing here in terms of what can be achieved to address these principal issues of the industry.
Ajoy Bhattacharya:
Sure. And so, I’ll start with, the data is critical. Your data engineering, your data science, your data architecture, your data structures, all of that is going to be critical to AI. At the end of the day, what ChatGPT is, is a large language model. What’s important that differentiates us is how you apply your data, which is the context and the prompt, right?
Rob Bryant:
Mm-hmm.
Ajoy Bhattacharya:
Together, that’s what forms the answer or the response that you get back. However, what I’m seeing also is folks think that this is the final answer. No, it’s not. It’s a fine-tuning model. And the advice that I usually give people is, “You need to proofread it. It’ll get you there. It’s supposed to be more efficient at getting you there. It’s supposed to help you. But if you take it as gospel, it may not be the best approach. So proofread everything and then go forward.” But the data is critical, and as Ali mentioned, this industry is a little behind in adopting technology. And so one of the things you do see with a lot of people in the construction world is data quality issues, right?
Rob Bryant:
Yes.
Ajoy Bhattacharya:
Things that are entered differently in different systems, right?
Rob Bryant:
Yes.
Ajoy Bhattacharya:
A client being called one thing in one system and another thing in another system. So data quality is very critical for you to have better answers. And so that underlying data system, it’s a bunch of practices that come within data that’s going to be very critical for folks to look at.
Rob Bryant:
There’s an interesting point when you talk about the data, because I think one of the issues that we hear about in the construction industry, perhaps more than others, is that data is often considered the IP of the company. Construction companies might consider it IP because they’re used to considering data in the conventional sense of structured personnel and confidential financial data. How much of an issue are we seeing in that perception, that mindset, when we are talking about utilizing data to help develop these models? And Ajoy, I might stick with you, and I can see Ed has got something to say, and we’ll come to you as well, Ali. But Ajoy, what’s your thought on that?
Ajoy Bhattacharya:
Yeah, so what I’m seeing is historical data on the projects, on things that you’ve done, workflows, collecting that data is critical. It’s taking historical information and continuing to fine-tune that information. It’s learning from that, right?
Rob Bryant:
Mm-hmm.
Ajoy Bhattacharya:
At the end of the day, at least from a Microsoft perspective, the way we deliver Azure OpenAI, you’re not fine-tuning the main learning model. You’re fine-tuning your own model that’s in your tenant. And this is something that really differentiates Microsoft and how we deliver. It’s your data. But that is very critical and there are… I’m seeing improvements almost on a daily basis on this front, both within projects that I’m doing with clients, as well as with what’s coming out with the LLMs.
Rob Bryant:
Gotcha. Okay. Ali, keen to get your take on this. So again, this application, what can be done but also on that topic of data as well, if that’s something that you are experiencing.
Ali Khaloo:
Absolutely. So I think Ajoy put it very nicely: the main part of the whole issue is data. I mean data structure, data availability, and I would even take it a little bit further. We are sitting on the operations and maintenance side of things, so we are not touching things while the construction is happening, and that side is getting better and better in terms of digitization and documentation. We have these virtual tours of the job site. So there are so many different things that have happened in the past five to 10 years. But when you’re looking at the operations and maintenance side of things, and then think of a Brooklyn Bridge of the world, something that has been around for a long, long time, the amount of information that you have on that asset is not that much. Basically, you have paper copies and you have PDFs from people going to the asset, any bridge or any building, with five to 10 photos and handwritten notes, and that’s about it.
And that is the data that goes into, okay, I’m a portfolio manager, I’m a city, and now I have to think about how I can deploy my money across the thousands of bridges or thousands of buildings that I own and operate. So if the data that comes in is so poor, so subjective, and so inconsistent, you do have a problem. You should also think of it from that perspective. We can have the best models, and I’m talking about the architectures of different neural nets and so on, but first you have to have good data, quality data, consistent data throughout the industry, and that does not exist today. So that is a challenge, and as an industry, we should really think about it.
And there’s another piece that you were mentioning about data being IP. For a long time, everybody was like, “Yes, [inaudible 00:26:30] there.” Looking at the construction firms and the consulting engineering firms, they’re like, “Oh, whatever I’ve done on that project, that’s the IP. I cannot share it.” As an AI company, we have seen that throughout our life; each time, the question comes up: “Okay, what are you going to do with my data? I’m sharing my data, thousands of images or field notes or things like that. What is going to happen with that data?” I think the number one thing is, as a company working in this industry, you have to be responsible. You cannot use the data and just have it publicly available. Even if you want to use it for building something or retraining your models, to make them, let’s say, smarter, to make the model see more and more data, you have to be upfront with the client. You have to share that that is…
You’re going to do that, but there are two things that you have to make crystal clear. Number one, if I’m going to use the data, I’m going to anonymize it. It’s not traceable, so it cannot be traced back to whoever provided it, and nobody can ever work out which project it came from.
And the second thing, which is very important for everything where there’s training involved, and I’m talking about AI solutions, is the way I always frame it for the client: “Think of it the way that the consulting companies work, management consultants like the McKinseys of the world and the Bains of the world.” So at the same time, they can work with different…
They get involved, let’s say, with a company in the aerospace industry. They’re working with American Airlines, as an example. They do that job and they learn a bunch of things about that industry. And now they go to a second client, Delta Airlines, and work with them. American cannot tell them, “Oh, you know what? Everything that you learned, just forget about it, and now start from scratch and learn about this industry.” Of course, everything private stays within that particular contract, but the knowledge that you learn, you take to the next job, and over the years you actually become better and better as a consultant. So that’s the same example that we use for the AI that we’re building.
You’re like, “Hey, this project that we’re working on with you, it’s based on millions of data points that we used in the past to train our models, and we are going to keep training them. So you are getting the benefit of all of those past projects, and moving forward, if you stick with us for a longer period of time, this thing is going to keep getting better.” So think of it from that perspective, and not, “Oh, they’re going to take my data and put it on the internet and everybody can Google it and find it.”
Rob Bryant:
I like that. I like that idea. I mean we are always very familiar with that notion of confidentiality in terms of the specifics. But you’re right. I mean consultants, if they’re working well are learning the structures, the patterns, the sequences, the things that they’ve seen occur, and then are being able to apply that to another scenario. That’s exactly what we’re talking about with the data. So Ed, what’s your take on this, when it comes to this issue of data ownership and the use of data to create models for learning and then being able to apply it to perhaps more controlled data sets or the challenge that occurs when you’re trying to build a broad base of reference for patterns and machine learning?
Eduard Hovy:
I think what Ali said is exactly right, and I think we don’t yet have the proof. That is to say, if I bring in my company’s data and I put that into the prompt and I run my chat thing and I get my answer, it may be perfect for me, but I want to know that my company’s private data doesn’t leak out through OpenAI into some other companies when they do their investigations. I don’t mind if the system learns its generalizations and it learns to be better at its task. It helps me. It helps the other guy. I’m fine. But I don’t want my private specific information to leak out. So everybody is asking, “How do I guarantee that my specific information doesn’t leak out with my generic patterns? And so lessons and expertise, that’s fine. I’m willing to share that.”
There’s no proof today that you can guarantee that a little piece of specific fact you put in doesn’t leak out. OpenAI and others give you a little box you can switch and say, “Don’t use my stuff for training,” but you don’t know how much you can trust them. So in response, people are building their own ring-fenced versions. And OpenAI, two days ago, announced that you can build your own sort of training data on top of the thing. There’s also something called retrieval-augmented generation, where you keep your data on the side. You put your prompt in, a little piece goes off to the side, like running Google, pulls out the relevant information from your own private database, stuffs that into the prompt, automatically makes the prompt longer, and then you go through ChatGPT, and out you get kind of an amalgam: what ChatGPT produces is being enriched and sort of recalibrated or reranked using your private… So that sort of thing is being used. That’s a very good way of keeping your own stuff private.
But I expect that in the near future, we are going to have hundreds to thousands of smaller ChatGPT generative language models or generative foundation models, which will include images and number reasoning capability. We’ll have lots of them and we’ll all be able to buy one even down to the individual personal level probably 15 years from now. But certainly, at the company level, it’s a question of how much you’re willing to pay and at what point does it become just a service as opposed to having you have to have your own engineer build the thing for you. We’re not quite there yet, but it’s coming.
Rob Bryant:
Okay, interesting. It sounds like there may be a level of skepticism that is part of this reluctance in adoption, around understanding how much people are giving away. Maybe there’s a bit of a risk-and-reward understanding to be gained here: to know that if you’re willing to share your data, then you could be the recipient of a more grounded and more thorough baseline model that is going to help you make better decisions in the future, rather than being isolated to just what you know and what you contain in your smaller world of data in your business on its own. Is that part of it, do you think, Ed?
Eduard Hovy:
That is definitely part of it. The evidence is, if you understand how these things work, that the chance of your own particular data leaking out when you give your own specific stuff, going so clearly into this general model that somebody else can recognize it, is essentially nil, because any particular fact gets distributed representations all over the place, and you can’t pull out two, three, four, five-word sequences that belong to any particular guy, even if they’re unusual word combinations. So to all these authors who run around screaming, “Oh, you stole my copyright yesterday,” you say, “Please, guy. Show me your six words which are unique to you. Pull those things out of ChatGPT, just do it.” They can’t do it. You can’t do… But saying that this is how it works and this is the experience doesn’t mean it’s proven. That’s the problem.
Rob Bryant:
Yes, yes, yeah. And it becomes more abstract when you’re talking about project data and costs and material quantities and so forth. Yeah, I can-
Eduard Hovy:
Can I add one thing? Because it’s-
Rob Bryant:
Yeah, go ahead.
Eduard Hovy:
… relevant to me. Whether you live in text land, numbers land, or images land, the picture is slightly different. If you’re in text land, you’re worried about regulations, about HR, about reports, about compliance, and there a language model is a language model. It lives in text land, so it has a really rich collection, and anything you put in is going to be dispersed. If you live in numbers land, this thing isn’t a numbers machine, not yet. Large language models are not large numbers models. So you can put numbers in, and they’ve been trained to do some things with numbers, but they’re not as good and there’s not as much there. Anything you put in may retain its particular individuality. You may be able to pull it out later.
If you live in image land, in DALL-E and that sort of space, where these companies are now trying to put images and text and everything together in one big foundation model, there it’s sort of unclear to me whether your particular image… if you’re Picasso and you put in your Picasso painting, this thing knows that Picasso painting. You’d probably be able to draw out that sort of picture again. You’ve seen all the images of the Mona Lisa with a mustache. So this thing can do it. You say, “Do me the raw Mona Lisa,” and it pulls up the Mona Lisa. If you say, “Do me some book,” or, “Do me Regulation 15 from the government,” it cannot do Regulation 15 from the government. It doesn’t know that. But it does know the Mona Lisa. So with the different kinds of data you put in, there are different capabilities and different degrees of privacy.
Rob Bryant:
Okay, thank you. There’s a couple more key things we want to get to over the course of the next 20 minutes. But just to come back to our audience poll and the first result from that, on the question, “Has your company started to use AI to support operations?”, most appear to be researching that topic. So I guess what we can take from that is that organizations are aware. They’re curious, but not that many are actually applying it. Piloting, we have 20%. Actively using AI, just another 13%. But that’s interesting. I might just ask our panel: Ajoy, what do you make of that poll result? Anything that you read into there?
Ajoy Bhattacharya:
Yeah. I mean, I concur with this. A lot of folks are still not sure whether they want to dive into it yet. Let somebody else dive first; I’ll follow after I see the results, kind of thing. There have been a bunch of pilots. I’ve got a whole bunch of customers doing pilots, which I bunch in with researching, because pilots typically tend to be less risky, right?
Rob Bryant:
Yeah.
Ajoy Bhattacharya:
This is maybe a little bit more than pure research, but it’s still research. They’re not looking at… If anything, I would say it this way: they’re looking at the results to see if they should take this to production.
Rob Bryant:
Yes, yes. Interesting.
Ajoy Bhattacharya:
This absolutely makes sense. And “not at all” typically tends to be the folks who are more risk-averse, who say, “Let somebody else figure it out. Let me see. I’ll be the second mover.”
Rob Bryant:
Yeah, yeah. Which is understandable given the topic and the discussion that we’re just having. Okay. Well, we’ve got another question for the audience. I’m going to ask that in just a moment, because I want to keep some inputs coming from everybody. But I’ve also got another question for our panel, so why don’t we do this: we’ll move everything in parallel, because I want to get as much as we can into this discussion. So I’m going to ask the second poll question fresh off the back of that one. Given those results, and appreciating that only a few people are starting to use AI in their operations, with a number exploring and researching it, perhaps this question can apply to those areas that you are piloting or investigating, or where you believe the most benefit from AI in construction is coming.
It may be what you’ve seen in your business, but also what you’ve seen out in the broader world, where you’re expecting to see that benefit. So please, to all of you listening and watching, and I see some people starting to put their opinions in now, put those in, and they’ll help us form a bit of an understanding of just what the reality is for our audience as well.
Back to our panel, a question for you that I think will help everyone on the webinar. If you are an organization that is starting to consider how, or you’re already using AI as part of the strategy in your business, what are some of the steps that you think organizations should take now to be AI-ready? Because I know one of the biggest issues for the construction industry, probably no different from many others, is that what you’re doing now is what you’re going to be able to leverage in terms of benefits in years to come. So being prepared as you get into a project, when you start to gather data that might run for a couple of years, is important in terms of how you structure it. I’ll stop there and just open that question up to our panel. Ajoy, what’s your take on this? Because you’re seeing some of this, I know, through the work you’re doing at Microsoft with Azure. What’s your take on what companies can do to prepare themselves to better utilize AI?
Ajoy Bhattacharya:
Yeah, I think there are a few things that I’ll just speak to really quickly. One, I usually start with the executives and the line-of-business executives. Let’s get them on board, because they need to understand what they’re getting themselves into. They need to understand, at least from a Microsoft perspective, which AI they should be going with; they’re making a bet on AI, and on why Microsoft, so understanding that is important. But from an organizational standpoint, it’s typically about aligning on who owns this practice, because it could be considered technology innovation and almost a line-of-business application.
I have done a lot of town hall-style demos, really watch-and-learn style, where you show people demos of Copilot and Azure OpenAI. But you also have to think about things like what defines success. One of the things that we look at at Microsoft is not just the number of users that are onboarded, but how well they are using it. So executives have to think about how well people are using it, and about the KPIs that will drive that. It’s about user experience, not necessarily the number of users; you could have the entire company on it, but if nobody’s using it, you’re not really getting the value. Part of that could be seeing how quickly tasks are getting done, the easy tasks. And other things, like crowdsourcing ideas from your company, really get people engaged in the whole process, thinking about long-term support and developing a roadmap around it.
Rob Bryant:
Really getting into… making that part of the culture of your business, and having everyone think about it and be engaged, that’s the message.
Ajoy Bhattacharya:
Absolutely. And one other thing, and I think this is critical. We’ve all grown up learning how to answer questions in school. We now have to train our employees to learn how to ask the right question. That’s critical; the prompt is key.
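Ajoy’s point that “the prompt is key” can be made concrete with a small sketch: the same request asked vaguely versus asked with a role, grounding context, and an explicit output format. The wording and the report text here are illustrative assumptions, not any particular company’s prompt.

```python
# Two ways to ask the same question of a language model.
# A vague prompt leaves the model guessing about scope and format.
vague = "Tell me about the delays."

# A structured prompt states the role, the source material, and the
# exact output shape the user wants back.
weekly_report = "Steel delivery slipped 10 days due to port congestion."

structured = (
    "You are a construction project analyst.\n"
    "Using the weekly report below, list each schedule delay, "
    "its cause, and its impact in days, as a three-column table.\n\n"
    f"Weekly report:\n{weekly_report}"
)
```

Training people to write the second kind of prompt, rather than the first, is the skill shift being described.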
Rob Bryant:
Yep. The prompt is key. I like that. Ali, what’s your experience of this? As you’ve gone into organizations, what’s some of the advice that you end up giving your clients and prospects in terms of how they should prepare themselves?
Ali Khaloo:
Absolutely. So of course, one of the key things is defining what success is going to look like from the get-go. For example, somebody reaches out to us, and it’s, “There’s a hype, and we found out about you through X, Y, or Z, and now we’re talking. But tell me about the problem. I know the problem that we’re solving as a company, but tell me about your problem.” I want to hear from them: this is the problem, I think this is going to be the solution, and if it can help me do this thing 30% better, it’s a success. So I want to hear that from them. And sometimes they don’t have that answer. They don’t know what the KPI is, what the ROI is that they’re looking for. So I think that is one of the key things.
And one of the things that unfortunately I’ve been seeing in our industry, and it’s getting better, so there’s a silver lining there, is that most of the time you’re looking at the innovation team, the CIOs, let’s bucket the whole thing as the innovation team. They’re not that powerful in the company. So they’re piloting a hundred different solutions, and maybe just one or two wind up getting used by the company. They’re even having difficulties bringing their project managers into the conversation; the managers say, “Oh, I’m too busy. I don’t want to hear about this and that.” So bringing this innovation initiative closer to the operation of the business is important. And having more and more people with a lot of influence involved, I think that’s where you’re really going to see things moving.
Looking at our experience, for example, the most success that we’ve had was where somebody more on the financial side of things wanted to optimize certain things in terms of costs, or somebody more on the technical side wanted to look to technology to save them something, the efficiency game, or even new revenue. That’s where we really see things moving forward.
If it’s just talking with a CIO for the sake of talking about it, “Oh, it’s AI. Oh, that’s cool,” nothing really happens. So again, making it closer to the core and bringing more influential people into the conversation is definitely helpful. And of course, as Ajoy mentioned, educating people. These days, it’s hard to say that anyone has never used any AI in their life; you’re using it on a daily basis in some shape or form. But if they’ve never dealt with it in terms of programming it, or researching it at school, and things like that, they need to learn. So providing different webinars, different internal white papers for the company, for people to actually engage with this on a monthly or quarterly basis, I think that’s a step forward for sure.
Rob Bryant:
Ed, what’s your experience of this in terms of, again, how people should prepare, how organizations should prepare themselves to leverage AI in decision-making?
Eduard Hovy:
I think what both Ali and Ajoy said makes a lot of sense to me. At the top level, you have to do a generic education: what is this capability? But then you have to drill down very quickly. I like your list on the screen there, which says project planning is different from cost management, which is different from reporting, which is different from risk mitigation. I don’t know where you put compliance, but probably that’s also around risk mitigation. Each of these is a different kind of skill, and each of these can use AI, even generative AI, differently, some more and some less. So I think after generally educating people and saying, “It’s not magic. It’s not going to eat your job. It may help you if you use it properly. Now, let’s go and see in your particular enterprise where the areas are where you have a need,” you find an early adopter who’s willing to play and explore this thing, who has the willingness and the time, and then you can bring the AI capability in, maybe hire a consultant or an expert, and actually do something and try.
Since this generative AI is so new, you can’t just go out and buy a thing that’s going to help you with risk mitigation, or with education and training; it doesn’t quite exist yet. But there’s some experience building up to say, “When you are in the education and training space, or the HR space, or the cost management space, here are the kinds of things that you tend to do. Here are example prompts you tend to write. Here are the kinds of capabilities you tend to get. Here are the kinds of things that cannot be done today.” I think that’s the best way to deploy, to move from just looking around to actually trying. I think the big message is, “Try. Don’t be afraid. Do it and see what you get.”
Rob Bryant:
Experiment and learn.
Eduard Hovy:
Experiment.
Rob Bryant:
Fantastic. Yeah, I think that’s clear. Conscious that we’ve got a few questions coming in, I want to make sure we have some time for those from our audience; there are some good questions being asked. There are also the responses on this last poll question that we asked. So let’s have a look and see what people are saying in terms of where they’re seeing the most benefit from AI. Project planning, reporting and analytics, engineering and design: some very clear winners, and some that haven’t prompted any response. But certainly, project planning and reporting and analytics are coming out on top, and I guess that makes good sense. Engineering and design, I can see some applications in there too. Reporting and analytics is, I guess, where we might expect to see AI working at its best. So good to get that result.
We might talk a little bit more about this topic of data preparation, risk and reward, and understanding the nature of the problem you’re trying to solve. I think that’s probably one of the common issues. Perhaps we keep that rolling, and Ed, I might put that back to you. I’m sure you’ve seen this plenty of times in your work, particularly as you’re bridging, as you said, academia and real-world problems. A lot of the time, there will be issues that people want to explore because they’re of interest, versus issues they want to explore because they have a real need to solve a problem in the industry. When it comes to AI and construction, how often are you seeing examples of people applying it to problems that are going to make a difference? Things that are real issues, as opposed to just points of curiosity. What’s your take on that?
Eduard Hovy:
I think the other two guys might be better able to answer that.
Rob Bryant:
Okay.
Eduard Hovy:
Yeah.
Rob Bryant:
All right. Okay. Happy to throw it across. So, Ali, you’re making a business out of some of this, so perhaps, where do you see that question? Where do you see the answer?
Ali Khaloo:
In terms of the data preparations?
Rob Bryant:
I think probably more in terms of what the real… Are organizations tackling real problems with AI, and what are the real problems that can be tackled with AI, versus the points of curiosity that people might have? So perhaps problem-solving versus the broader benchmarking, perhaps.
Ali Khaloo:
Yeah. And also I would add: the solution that you have, is it a must-have or is it a nice-to-have? Is it really something that is going to change something today?
Rob Bryant:
Is it a needle changer, a needle mover for the business and the industry?
Ali Khaloo:
Yeah, yeah, yeah. So I mean, the thing is, the issues in the industry are so clear and known that there’s no way to hide that there are certain inefficiencies happening, a lack of digitization, a lack of good data, and I’m talking about data in general. So it’s less about the problem being unknown. It has always been, “Okay, how are you solving this? Is it another hype? Is it something too difficult to use, that I can never start learning it and using it in my day-to-day?” So I sometimes blame people on the tech side of things for building solutions that are really hard for the industry to swallow. It is important that the solution is easy to use and really fits a known problem in the market. And part of that is also listening to the market.
Just as you’re talking to different people, have your eyes and ears wide open to think about what they’re doing today and how we can augment that process. I think that’s one of the key things. You don’t want to go there and say, “You know what? I’m going to have a machine that’s going to change everything for you, and instead of having 1,000 employees, you’re going to be down to a hundred employees, and that’s about it.” In an industry which is hard to move by nature, those kinds of things cannot happen.
With the solutions that you’re providing to the problems that are known to the industry, make sure that for now you’re augmenting the process, little by little, baby steps. Make sure that the adoption happens, and make sure that you have very clear ROIs: “Okay, what is the return on investment going to look like for these guys?” That’s the key thing. Start with something super, super tangible, build on top of that, and then get to something more futuristic, like, “I’m going to write a prompt and it’s going to design a new, super-sophisticated building for me.” We’re going to get there, but let’s take baby steps here.
Rob Bryant:
Yeah, walk before you run in that respect. So we’ve got some good questions coming in, and I’ve got one I want to pose to you, Ajoy, in just a moment, from our audience. I’m going to pose the final poll question as well, because I think it’s important to get this one out to our audience too: the greatest challenges that you are seeing in implementing AI. Because I get the sense that there’s a lot of finding their way through the maze in terms of how to get things addressed, particularly when we saw the answer to that question around who’s adopting AI currently. So, a question to our audience: “As you are implementing, or considering implementing, AI in your organization, what are the challenges?” Have a think about that, and we’ll come back to it in just a couple of minutes.
Ajoy, a question from our audience; we’re going to move to some of the Q&A that we’ve received this morning. One question from Frank that I think is interesting: is the size of company that can benefit from adopting AI limited to the large organizations who can see a sense of return, or can the benefit be felt across the industry? And I know, Ali, you’ll have an opinion on this too. So, Ajoy, what are you seeing from your perspective in terms of who can benefit?
Ajoy Bhattacharya:
I am actually seeing a vast range of client sizes. I see the flaw in the question. I’m seeing large organizations as well as SMBs, small to medium businesses, use it. At the end of the day, the way we deliver AI, Azure OpenAI, we are delivering a large language model, and it’s the same thing that we deliver to the large organizations as to the small ones. So from that perspective, it’s not different. The question really becomes: what use case are you going after? How much information do you want to feed it for fine-tuning? At the end of the day, you are looking for a more efficient outcome, a more efficient way of doing things.
And so, I won’t lie, there is a cost to it. There is going to be a cost, because you are going to have to have some infrastructure and things in place. But that said, there is no limitation on who can use it. I think the main question is: at what stage does it start giving you the benefit? If you feed it just one document, it’s going to read out that one document and give you everything it knows from that one document. Is that really the best answer? When you have a collection of documents on a particular topic, that’s when you start getting better answers.
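Ajoy’s point about feeding a collection rather than a single document can be sketched as the data-preparation step behind fine-tuning: many question-and-answer pairs drawn from many topical documents, packaged in a common JSONL prompt/completion shape. The field names and example content here are assumptions for illustration, not any specific vendor’s required format.

```python
import json

# Each pair is distilled from a company document; the more topical
# documents behind the collection, the better the tuned answers.
examples = [
    {"prompt": "What is the standard curing time for slab pours?",
     "completion": "Per our QA manual, 7 days of moist curing."},
    {"prompt": "Who approves change orders above $50k?",
     "completion": "The regional project director."},
]

# One JSON object per line is the usual shape for fine-tuning uploads.
jsonl = "\n".join(json.dumps(ex) for ex in examples)
```

A model tuned on pairs from a single document can only echo that document; a broad collection is what gives it coverage of the topic.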
Rob Bryant:
Okay. Ali, what’s your thought? What’s your experience? Large businesses, small businesses, who benefits?
Ali Khaloo:
I’m with Ajoy. Looking at who can use it, and at the same time get the most benefit out of adopting AI, it can be all of the above: small to medium-sized companies, and the largest companies out there. So I don’t see any limitation on that front. One thing I’ve seen more on the adoption side, though, the speed of adoption, or at least trying something quick and even failing: what we are seeing, again in our experience, is that it’s more the medium-sized companies, which have more resources than the small ones and at the same time are not as complicated as the big ones. In our case, they have been faster in terms of adoption. The conversation is happening.
Rob Bryant:
A bit more agile.
Ali Khaloo:
Yeah, it’s more agile; very soon you can get in front of the C-suite, and they all agree on it, sign it, and move forward. Versus some of the small ones: as you mentioned, there’s an infrastructure that you have to have in place, and things like that. There’s an investment that you should make, and sometimes they don’t have it, or it’s not a priority for them. And with the large ones, again, just navigating them is sometimes laborious. You talk with this team and that team, and they’re in different parts of the world. It gets super complicated. So again, in terms of who can use it and the benefit, it’s across the board. In terms of adoption speed, I’m seeing more medium to smaller companies than the large-scale ones, the ones that are looking at it are a little bit-
Rob Bryant:
Thank you.
Ali Khaloo:
… still smaller.
Rob Bryant:
Thanks, Ali.
Eduard Hovy:
Would you say… Sorry, can I ask a question? Wouldn’t you say it depends a little bit on what the functionality is, what the task is? That is, if you ask, “How much does the machine know already without any specific training, and how much does the user, the operator, already know? How much can he be helped immediately?” So if you take an FAQ, somebody sitting on the phone answering customers’ questions, you’ve got to train this guy a lot. But if you quickly train your language model with the rules and the capabilities of your product, then you have the same guy, but you don’t have to train him so much. He sits answering the phone, and as soon as a question comes in, the LLM is doing all it can and it helps him. It gives him the answers. There’s not a hell of a lot of extra work you have to do.
You can save on the training of this guy, and then you just put him on the phone and he can run. I’m seeing that happen quite a lot. And then, when you look at regulation compliance, taking a description of what you’ve done and comparing it with the rules and regulations and the laws and so on, you’re also beginning to see that. But on the other end of the scale, when you have somebody with a complicated numerical problem, where all kinds of extra expertise are needed, where an engineer really has to sit and do something, then there’s a lot more work you have to do to get the power out of the language model to really help this guy. So isn’t it the case that you have to look task by task and just see where the thing is ready for use, with the right people and the right knowledge?
Rob Bryant:
I think it’s got to be a yes to that, Ed. And unfortunately, we’re right up on time. There’s evidently plenty of material that we can come back and revisit with another perspective. So just to bring things to a wrap as we are on the hour: looking at the responses from our audience, to get a sense of what people are feeling out in the real world, it looks fairly balanced, although 54% say a lack of experience, infrastructure, and process is what’s presenting the greatest challenge to them in implementing AI. I think that’s a topic all of its own that we can get into; it will have to be at another time, but hopefully we’ve provided some answers to that question and that challenge through the last hour. We need to wrap it here. So look, thank you so much for everything that you’ve contributed in the preparation and through the last hour.
Thank you, Professor Hovy. Thank you, Ajoy. Thank you, Ali. Really appreciate your time. Audience, thank you so much for all of your questions and your interactions. There is a survey on the console; please just take a moment to answer it. If you’d like to find out more, you can visit InEight.com for more information. Recordings of this webinar will be available if you want to share them, as will the first in this series. So thank you again. I look forward to talking to you all in the future. But I’ll wrap by saying, again, thank you to our panel for your contribution to making this such an exciting and interesting discussion today.
Ajoy Bhattacharya:
Thank you.
Rob Bryant:
Thanks, everyone.
Ali Khaloo:
Thank you so much.
Eduard Hovy:
Thank you.
Rob Bryant:
Thank you.
Eduard Hovy:
Bye-Bye.
Ali Khaloo:
Bye.
Ajoy Bhattacharya:
Bye.
Rob Bryant:
Bye-bye.