
Generative AI is coming for the lawyers


Muda69


https://arstechnica.com/information-technology/2023/02/generative-ai-is-coming-for-the-lawyers/?comments=1&comments-page=1

Quote

David Wakeling, head of London-based law firm Allen & Overy's markets innovation group, first came across law-focused generative AI tool Harvey in September 2022. He approached OpenAI, the system’s developer, to run a small experiment. A handful of his firm’s lawyers would use the system to answer simple questions about the law, draft documents, and take first passes at messages to clients.

The trial started small, Wakeling says, but soon ballooned. Around 3,500 workers across the company’s 43 offices ended up using the tool, asking it around 40,000 queries in total. The law firm has now entered into a partnership to use the AI tool more widely across the company, though Wakeling declined to say how much the agreement was worth. According to Harvey, one in four at Allen & Overy’s team of lawyers now uses the AI platform every day, with 80 percent using it once a month or more. Other large law firms are starting to adopt the platform too, the company says.

The rise of AI and its potential to disrupt the legal industry has been forecast multiple times before. But the rise of the latest wave of generative AI tools, with ChatGPT at its forefront, has those within the industry more convinced than ever.

“I think it is the beginning of a paradigm shift,” says Wakeling. “I think this technology is very suitable for the legal industry.”

Generative AI is having a cultural and commercial moment, being touted as the future of search, sparking legal disputes over copyright, and causing panic in schools and universities.
The technology, which uses large datasets to learn to generate pictures or text that appear natural, could be a good fit for the legal industry, which relies heavily on standardized documents and precedents.

“Legal applications such as contract, conveyancing, or license generation are actually a relatively safe area in which to employ ChatGPT and its cousins,” says Lilian Edwards, professor of law, innovation, and society at Newcastle University. “Automated legal document generation has been a growth area for decades, even in rule-based tech days, because law firms can draw on large amounts of highly standardized templates and precedent banks to scaffold document generation, making the results far more predictable than with most free text outputs.”
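Edwards' point about rule-based drafting can be sketched in a few lines. The example below is a hypothetical illustration using Python's `string.Template`; the clause wording and field names are invented, and a real precedent bank would be far larger and maintained by a firm's knowledge team.

```python
from string import Template

# Hypothetical precedent clause; the fields are placeholders invented
# for illustration, not drawn from any real template bank.
NDA_CLAUSE = Template(
    "This Agreement is made on $date between $party_a and $party_b. "
    "The Receiving Party shall keep the Confidential Information "
    "secret for a period of $term years."
)

def draft_clause(template: Template, **fields: str) -> str:
    """Fill a precedent template. substitute() raises KeyError if a
    field is missing -- the kind of predictability rule-based
    drafting relies on, in contrast to free-text generation."""
    return template.substitute(fields)

clause = draft_clause(
    NDA_CLAUSE,
    date="1 March 2023",
    party_a="Acme Ltd",
    party_b="Example LLP",
    term="5",
)
print(clause)
```

The contrast with generative AI is the failure mode: a template either fills completely or fails loudly, whereas a language model will happily produce a fluent clause with a fabricated term in it.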

But the problems with current generations of generative AI have already started to show. Most significantly, their tendency to confidently make things up—or “hallucinate.” That is problematic enough in search, but in the law, the difference between success and failure can be serious, and costly.

Over email, Gabriel Pereyra, Harvey’s founder and CEO, says that the AI has a number of systems in place to prevent and detect hallucinations. “Our systems are finetuned for legal use cases on massive legal datasets, which greatly reduces hallucinations compared to existing systems,” he says.
Even so, Harvey has gotten things wrong, says Wakeling—which is why Allen & Overy has a careful risk management program around the technology.

“We’ve got to provide the highest level of professional services,” Wakeling says. “We can’t have hallucinations contaminating legal advice.” Users who log in to Allen & Overy’s Harvey portal are confronted by a list of rules for using the tool. The most important, to Wakeling’s mind? “You must validate everything coming out of the system. You have to check everything.”

Wakeling has been particularly impressed with Harvey’s prowess at translation. It’s strong at mainstream law, but struggles on specific niches, where it’s more prone to hallucination. “We know the limits, and people have been extremely well informed on the risk of hallucination,” he says. “Within the firm, we’ve gone to great lengths with a big training program.”

Other lawyers who spoke to WIRED were cautiously optimistic about the use of AI in their practice.

“It is certainly very interesting and definitely indicative of some of the fantastic innovation that is taking place within the legal industry,” says Sian Ashton, client transformation partner at law firm TLT. “However, this is definitely a tool in its infancy and I wonder if it is really doing much more than provide precedent documents which are already available in the business or from subscription services.”

AI is likely to remain used for entry-level work, says Daniel Sereduick, a data protection lawyer based in Paris, France. “Legal document drafting can be a very labor-intensive task that AI seems to be able to grasp quite well. Contracts, policies, and other legal documents tend to be normative, so AI's capabilities in gathering and synthesizing information can do a lot of heavy lifting.”

But, as Allen & Overy has found, the output from an AI platform is going to need careful review, he says. “Part of practicing law is about understanding your client’s particular circumstances, so the output will rarely be optimal.”

Sereduick says that while the outputs from legal AI will need careful monitoring, the inputs could be equally challenging to manage. “Data submitted into an AI may become part of the data model and/or training data, and this would very likely violate the confidentiality obligations to clients and individuals’ data protection and privacy rights,” he says.
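One common mitigation for the input problem Sereduick describes is redacting identifiers before any text leaves the firm. The sketch below is a minimal illustration only: the two regex patterns are simplistic placeholders, and real redaction would rely on a proper PII-detection tool plus legal review, not a handful of regexes.

```python
import re

# Simplistic placeholder patterns for illustration; production
# redaction would use a dedicated PII-detection library.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with labelled placeholders before
    the text is submitted to any external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +44 20 7946 0958."))
# → Contact Jane at [EMAIL] or [PHONE].
```

Redaction addresses confidentiality going in; it does nothing about whether the provider retains or trains on what it receives, which is why contractual terms on processing (discussed below) matter as much as the technical step.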

This is particularly an issue in Europe, where the use of this kind of AI might breach the principles of the European Union’s General Data Protection Regulation (GDPR), which governs how much data about individuals can be collected and processed by companies.

“Can you lawfully use a piece of software built on that foundation [of mass data scraping]? In my opinion, this is an open question,” says data protection expert Robert Bateman.

Law firms would likely need a firm legal basis under the GDPR to feed any personal data about clients they control into a generative AI tool like Harvey, and contracts in place covering the processing of that data by third parties operating the AI tools, Bateman says.

Wakeling says that Allen & Overy is not using personal data for its deployment of Harvey, and wouldn’t do so unless it could be convinced that any data would be ring-fenced and protected from any other use. Deciding on when that requirement was met would be a case for the company’s information security department. “We are being extremely careful about client data,” Wakeling says. “At the moment we’re using it as a non-personal data, non-client data system to save time on research or drafting, or preparing a plan for slides—that kind of stuff.”

International law is already toughening up when it comes to feeding generative AI tools with personal data. Across Europe, the EU’s AI Act is looking to more stringently regulate the use of artificial intelligence. In early February, Italy’s Data Protection Agency stepped in to prevent generative AI chatbot Replika from using the personal data of its users.

But Wakeling believes that Allen & Overy can make use of AI while keeping client data safe and secure—all the while improving the way the company works. “It’s going to make some real material difference to productivity and efficiency,” he says. Small tasks that would otherwise take valuable minutes out of a lawyer’s day can now be outsourced to AI. “If you aggregate that over the 3,500 lawyers who have got access to it now, that’s a lot,” he says. “Even if it’s not complete disruption, it’s impressive.”

So how much will law firms' billable hours actually be reduced by AI?  Something tells me "not much".  Either that, or they will just raise their hourly billable rate to compensate.  So an easier job for the lawyers, no real monetary benefit for the clients.

 
