Nathalie Tidman, The In-House Lawyer and Legal Business: Welcome everyone. We have a fantastic panel of extremely talented and insightful people here this evening.
As a starter for ten, what do you see as the real opportunities and benefits for in-house lawyers using generative AI in your day-to-day dealings?
Helen Dooley, AIB: In-house lawyers add value to the business, but we are an overhead. I am scared by GenAI, but I see it as a huge opportunity for in-house lawyers. I am sure law firms can utilise this for the low-value, high-volume work to free up lawyers, who tend to be an expensive resource, to do the strategic value-add. It is about harnessing the opportunity.
David Hackett, Addleshaw Goddard Ireland: We are only at the beginning of this.
The most logical use of generative AI is around those quasi-legal tasks that land on our desks and take up a lot of time. We can see that the technology can be really revolutionary for that, and the hope is that we can deploy it successfully and that it will free up lawyers’ time from some of those tasks. We can let the AI do that, and do it well, so we can concentrate on the type of advice that we are trained to give, become more involved in analysing the results that come through the AI tools, and input on strategic matters that really add value to the business.
Cíara Garahy, Salesforce: If you even look at a small use case, if you are trying to hire someone into your team and the HR person says, ‘Can you write a job description?’ ChatGPT can write that. There is a huge range of opportunity, not only on the legal aspect, but especially for in-house counsel. A portion of our job is contracts, but a larger portion is actually dealing with the business, writing communications, marketing and talking to people. AI is a huge opportunity within that and in driving efficiencies. For the things that come across your desk that are not actual strategic legal work, it is game-changing.
Nathalie Tidman: Is GenAI just a way of saving time and being more efficient, or are you seeing this as a real tool for actually making strategic decisions?
Cíara Garahy: It depends on which tool you are using and how you are using it. Are you using it for an NDA review that you had an intern doing? Are you using it to analyse a huge set of data that came in and you need to get a response for your board in time? GenAI is a broad piece. It is drilling in and finding the piece that is useful for you. It is absolutely strategic and useful on all levels.
Helen Dooley: AIB has done a few acquisitions in the past couple of years. We have bought a couple of companies, and we have bought some loan books. You could use GenAI, for example, to accelerate your due diligence. Months can be spent on due diligence, and all the while you are spending money and using valuable resources, whether business analysts or internal and external lawyers, to get an answer on something like, ‘Is this feasible? Are there any problems there? Is this a company that is going to fit with our strategy?’ To short-circuit all that brings huge business benefits.
David Hackett: To get the best use out of AI now and in the future, you are really going to get benefits if you stand back a little bit, look at your business, your role, the industry you are in, the type of tasks that you do, and really figure out what the use cases are. The possibilities are endless, so we should not be constrained by just saying we are going to use it for what occurs to us now.
Time spent strategically thinking about how we are going to deploy AI in our businesses is really going to be valuable, and it is going to be time well spent.
Nathalie Tidman: Have there been any ‘wow’ moments where you have thought AI is really driving change in quite a short space of time? Any experiences of actually having really good results from using it?
Cíara Garahy: Salesforce uses GenAI products, but they are not the big game-changer for us. The big game-changer is that Salesforce sells GenAI products; Einstein GPT is the big play at the moment. So we are looking at it from the other side of the house as well, at the customer’s concerns. When you see how it can change our customers’ day-to-day, and what can actually be done with the GenAI product we are selling, then you are like, ‘Oh my gosh, this is huge’.
If you take something like an online retailer, they can apply it everywhere, from buying the fabric to the person completing the purchase. For example, if you buy a knit jumper every winter, they know you are buying a knit jumper, but now they can also create a marketing piece around it and provide you with a sales piece, all fed through the product suite they are already using. That is when you go, ‘Wow, look at it go’.
Helen Dooley: Working for a regulated financial services entity means that we are some way off. Banks, and certainly Irish banks in the context of the Irish regulatory framework, are going to be slow to get ahead here. We use a lot of AI today. For example, we have automated credit decisioning, but that is just based on, ‘If these facts are true, then the person can be approved for a personal loan up to £10,000’.
We looked at some research, which I think was from China. China obviously has a huge population, and for many years it has been sharing its data. The research analysed recoveries on loans that went bad. If somebody took out a loan and had the Chinese equivalent of a Hotmail account, rather than a more established email account, and a pay-as-you-go mobile phone, they had a higher default rate. That was proven time and time again across many datasets.
The Irish banks are particularly hampered with holding more capital because of our history through the financial crisis. Can machine learning help us with our lending decisions? We also have to think about the ethics of that. Are people being excluded from basic credit? If the machine has identified that the loan will go bad, it does not necessarily mean they should not get the loan in the first place. There are interesting concepts to consider.
David Hackett: In terms of ‘wow’ moments, it is just seeing the practical application of how the technology can work, and the ability to use plain English with it to get it to analyse data and documents with an accurate result. The narrower AI tools have been successful, and there is still a place for those. I am thinking of something like Kira, which is used a lot in litigation and discovery. With those, the way things needed to be phrased, and the inputs needed to drag out the information, had to be quite specific.
This is completely different. You can almost talk to it in regular, plain English and ask it a question. It might clarify what you are looking for, but then it goes off and delivers the results. Something I think we are all going to have to get over is the ability to trust the AI and rely on what it is doing.
Nathalie Tidman: There are clearly risks associated with GenAI, such as data privacy. What do you think are the most salient points around the challenges of using it?
Cíara Garahy: There are a few massive challenges. One is obviously the data piece and, going back to the techie side of it, that applies whether you are using open source or closed source. If you are feeding data into an open-source tool and it is coming out on the far side, whose data is being used? How is it being safely protected? Who is the processor? Who is the controller?
You also have to think of IP and who owns the output, and you have to back-to-back your agreement. For an in-house lawyer, realistically, unless we are working at OpenAI, none of us are building this ourselves; we are relying on a third-party product. You need to back-to-back your agreements because of your IP, your data and ethics. If you are in a regulated industry, can you trust what comes out of it? AI is very good at predicting from its dataset, but it is not very good at the nuances, so there are huge risks, and you have to look at them all individually for your own unique business and how it is being used.
When you are providing services to clients that effectively use AI technology as part of those services, are your clients aware of that? You need to be transparent so that they understand what is going on. That may lead to more questions, but again, that comes back to the contractual framework you have in place, both for your receipt of the technology and for service delivery.
In terms of the in-house environment, it is going to be a huge pressure. The businesses will want to use these things and then, again, your role as in-house counsel is to say, ‘How do we deliver that for the business but protect the business in the best possible way?’ You are not going to be able to find a perfect solution to that. It is finding the best workable solution because the business will want to use these tools and will ask how you can allow that to happen.
Helen Dooley: In-house legal teams spend a lot of their time enabling the business, but also keeping the business from walking into glass walls. I see a point coming very quickly where various business units will see the value these tools can deliver, but then we are faced with the issues: if it is open software, we cannot put our clients’ data into it. Are the algorithms fair? Are there any biases in there? There is huge learning for us here.
Again, as a regulated financial services entity, we will be very slow on the uptake, because the risks in the near term are going to outweigh the benefits, because we do not know enough.
Nathalie Tidman: Some of us – and I am including myself in this – are Luddites. Have you been in the position where somebody has resisted the use of technology like ChatGPT? What are the barriers to rolling it out across the business and making people buy into it?
David Hackett: What I would say to people is this is not an optional extra in your set of tools as a lawyer. It is not something where somebody is coming along trying to sell you a piece of technology and you say, ‘Maybe I would use that’, or ‘That might be useful’, or ‘It might be a little bit of help in my job’. This is a complete change in the entire business environment, in the legal environment, in lots of different environments, so you do need to get on board with it, because I do not think that opting out and saying, ‘I am not going to use this’ is an available option.
As much as possible, you need to embrace it, see what it can do and be open to that, because you see all these headlines, buzzwords and slogans going around. When that review came out saying 44% of legal jobs would be replaced by AI, that was a clickbait headline; it is not so much that AI is going to replace lawyers. Another line I heard, which I thought was quite accurate, was that lawyers who use AI will replace lawyers who do not.
Helen Dooley: AIB will not be at the forefront of this. When ChatGPT was getting traction, as a precaution AIB disabled the screenshot element on people’s work phones. I presume that was in case we took a screenshot of customer data and put it into ChatGPT. We are behind the curve.
Nathalie Tidman: If you had a junior member of the team using GenAI, you clearly could not take what they produced as read, or as being the finished article. Are there any risk mitigation processes that you are developing or using already in your businesses?
Cíara Garahy: I feel like you have to review everything. ChatGPT is probably quicker than a trainee, but is it more accurate? You still have to review the work regardless. You would not send something out blindly, and most people would not do that anyway with the legal products that are for sale and in use. I was listening to a podcast and it said, ‘Do not be afraid that the robots are coming. If a robot is coming, close the door’. They have not been trained to open doors.
Nathalie Tidman: David, from your perspective, there is the client piece and not putting client data into GenAI to come up with a result. How do you manage that?
David Hackett: It is a moving target, and we are at the very beginning of this journey, so we are developing those policies and frameworks. There are some basic ones which say: be careful with what data you put in, and make sure you have the relevant permissions and that you are entitled to do that. Approach with caution. That said, it should not be a runaway train, but you do not want to be overly restrictive either; that may limit what you use the technology for, and it may not be helpful. These are common-sense things, like taking a step back and looking at what you are using the AI for. If it is to draft an email to somebody, that is absolutely fine; it can do that and do it very well.
On the other hand, if it is litigation involving commercially sensitive or valuable data, then you do want to think, ‘Should we be feeding that into an AI technology?’ and specifically, ‘What is the agreement? How are we entitled to use it? Who gets access to this data once it goes in? What protections do we have in terms of contractual remedies?’
Helen Dooley: Operating in a regulated industry sometimes means we are slower to adopt new technology. For example, it took us a long time to be comfortable with having our data in the cloud. Part of the reason is that half the people in this country have a bank account with us. We hold your sensitive personal data, and we know what you spend your money on, so there is a huge trust element.
We have sensitive personal data that would be catastrophic in the wrong hands or if used wrongly. Banks, certainly in Ireland, are still a long way off from having high trust levels. However, we need to accelerate our thinking around AI, because I see the benefits. If we go at the glacial pace at which we sometimes approach other new technologies or innovations, we will be left behind.
Nathalie Tidman: One of the risks at the moment is that GenAI is not regulated, so is that a self-regulating thing that each business has to do itself? Do you think it should be regulated by an external body?
David Hackett: There are different types of regulation. Are we talking about legal regulation or general regulation of AI? On general regulation of AI, does it need to be regulated? Definitely. OpenAI itself has said it would welcome regulation. It may be trying to get ahead of the curve, and it is a good soundbite. Undoubtedly, this is going to be a massive area and, frankly, how do you regulate it? It is tricky. From a European perspective, we have the AI Act coming down the tracks, and there has been great progress on that. There is talk of it being adopted by the end of the year; that is very ambitious. In terms of what it sets out and the type of issues it is trying to grapple with, it is going to be difficult to regulate AI effectively. I suspect you will always be chasing the genie a little bit, trying to get it back in the bottle.
Looking at the example of GDPR, we are five years in with that now and it was trumpeted as, ‘It is going to fix everything on data protection, making sure individuals’ rights are protected and that organisations are regulated’. There is no doubt it has achieved some of that, but it has not functioned in the way that people were expecting. Although data protection is a huge area, it is very small in comparison to regulation of AI. Regulation is needed, but there is a balance needed as well. If you are overly prescriptive on the regulatory side, it may stifle innovation.
There is a trust question here as well: for people to feel comfortable using it, and comfortable with their data being used by it, they are going to want to see a level of regulation and, certainly in the legal regulation sphere, additional controls and restrictions on lawyers using it. In essence, lawyers need to be able to stand over whatever results are generated by AI. That is what we are paid to do: give advice that clients and our businesses can rely on. That is fundamental. Whether or not that is set out in regulation, we need to be able to do it. As a regulation piece, it is going to be huge.
Nathalie Tidman: Are you already training people internally on how to use AI to mitigate risk?
Cíara Garahy: Absolutely. You have to. It is like any new piece of technology: you have to take it, look at it, learn it and adopt it. Everyone needs to learn a new piece of technology when it comes in.
First, the lawyers have to upskill. We can then put the guardrails in place and then push it out and also educate our customers, because they are trying to buy this piece of technology. It is a massive education piece for everyone.
David Hackett: At the moment it is about encouraging use in the correct environments and with the correct datasets. Getting people to actually use it is the biggest thing right now, because a surprising number of people have not used it yet, so they are talking about it in the abstract.
Nathalie Tidman: What impact is GenAI going to have on the value for money question and on pricing?
Helen Dooley: Clients are going to expect to pay for outputs, not time input. Even aside from AI, the old billable hour for law firms is questionable today. The alternative legal service providers, such as the Big Four accountancy firms, are going to embrace AI rather than come at it from the traditional billable hour, and they are going to force the law firms to take a different approach.
David Hackett: The billable hour is a concept, it is there, but there are a lot of other factors that flow into what you charge for the work that you do. Certainly, my own experience is clients are not shy about raising the issue of fees. The majority of clients are fair, and they realise the firm needs to make a profit on that, but equally they do not want to be paying an exorbitant price, or paying for something that they feel is your learning and you are going to be able to use in the future. That is a reasonable conversation to have, and you need to be able to have it.
Some of these tools are very expensive, so there is going to be an investment by law firms in licences to deploy them throughout the organisation. But conversations around fees have, for a long time, been moving more and more in the direction of ‘What value are you delivering for us?’ Even quite mundane tasks can be of value to a client, and they are very happy to pay for them. This is going to refocus the conversation on the value of the advice you provide. Say you are using AI to provide your advice more quickly, for example in a competitive process where there are a number of bidders for a company: you can use AI to dig in, see where the issues in the company are and how they will affect the price, and very quickly get to the point of ‘This is what it is worth to the client. We are willing to offer this price to get the deal done’. If you can deliver that quickly, it is of huge value to the client.
Cíara Garahy: The other side of it is if we are in-house, we then also have to show our value. If we are getting the AI tools in, then what is the point in us being there? If I do ten NDAs a day as my day job and an AI tool can do it, then what is justifying my salary? Across the board we need to be careful about the value-add on this piece.
Nathalie Tidman: What are your takeaways on the future of GenAI?
Cíara Garahy: It is here, so get used to it. It is not as scary as you think it is and you are able to do it.
Helen Dooley: The more curious we are, the more benefit we are going to get from it. AI is in a lot of the things we do today, and so we are trying to harness the value it will ultimately bring.
David Hackett: Embrace it because it is happening. There is no getting away from it so better to be on the train than see the train pulling away from the station and you are not on it. Realise that this is a revolution and that this is a good time to be involved. We are not coming to it when everyone else is already way ahead and they know what they are doing.
Do not be scared by it. AI is another tool that we as lawyers will use to do our job, but we direct what it does: we give it the data to analyse, we ask it the questions, it gives us results and we interpret them. Ultimately, we have the final say.
- Nathalie Tidman Editor, The In-House Lawyer and Legal Business
- Helen Dooley Group GC, AIB
- Cíara Garahy In-house counsel, Salesforce
- David Hackett Head of IP/IT and data protection, Addleshaw Goddard Ireland