Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week: the stories that you, our readers, found most interesting and informative. Enjoy.
It would seem that in the AI world, Apple is giving itself a failing grade. Their internal efforts apparently are not as good as the competition's, prompting them to look at others. Talks with Anthropic apparently failed when Anthropic got greedy and demanded too much money from Apple. Talks with Google apparently have Google training a custom version of Gemini to augment Siri. OpenAI's GPT already acts as an augmentation to some aspects of Siri. Apparently Apple is having trouble keeping its AI staff as insane salary offers and poaching reduce headcount. One interesting aspect is that should Apple go with Gemini, it won't be running locally on the phone; it will be in Apple's cloud. Read more at Silicon: Apple 'Discusses' Using Google's Gemini To Power Siri
I think I'm going to somewhat disagree with Stephen Embry in his latest post on AI and the billable hour. I am a firm believer that bad or lazy lawyering is the reason we see so many articles about hallucinations before the court and why Damien Charlotin's database now contains over 300 instances of hallucination events worldwide. Stephen seems to think that a recent survey by DeepL backs up his opinion that "to merely sanctimoniously say those lawyers who don't check cites are just reprobates and that there is no excuse ignores some of the real life pressures on lawyers and legal professionals." Hopefully I'm not being sanctimonious when I say the hourly pressure has always been there. "It Better Be Billable. And Collectible" is a mantra I've heard ever since I entered the legal market, decades ago. You might be able to blame AI for exacerbating it, but it certainly didn't create it. Stephen writes, "Rather than just blaming the lawyers, we need to address the issues across the board and make sure there is a recognition and accountability both among senior lawyers and, as I pointed out in my previous article, clients who demand the use of AI but refuse to bill for things like robust cite checking." The senior lawyers are lawyers, and the GCs of the clients are lawyers. Their failure to address the issue, recognize it or be accountable for it is, to me, also bad or lazy lawyering. You can't do AI and straight-up billable hours. They are incompatible. Maybe Stephen and I are on the same page after all. It's a good post and one worth making time for. Read more at TechLaw Crossroads: Billable Hour Demands, Shadow Use of AI and Law Reality: It's a Hot Mess
Those in the KM department were typically the first to bring AI tools to bear in the arena of knowledge management. But what if you don't have a dedicated KM department, or you haven't been as successful as you had hoped? Bruce Boyes might be able to help you. Bruce references a new study addressing the technical, organizational and ethical challenges of integrating AI and KM. He also addresses the challenges of knowledge creation, storage, sharing and application. You'll want to read more at REAL KM: What are the key challenges in implementing AI in KM, and how can they be addressed?
I was today years old when I learned there was a mathematical formula for trust. Trust equals the sum of credibility, reliability and intimacy divided by self-orientation. According to Daniel Sokol it comes from a book, The Trusted Advisor, by Maister, Green and Galford. Daniel writes his post oriented toward lawyers, but trust is something we all deal with and wish to have. He writes, "The Trust Equation is a helpful tool to reflect on how trustworthy we are, and how we might improve. What can we change to be more credible, reliable, emotionally approachable and show that we care deeply for the client? Even small adjustments, such as acknowledging a client's frustration or sending a follow-up e-mail, can make a big difference." For a further explanation and definition of the variables, be sure to read more at The Law Society Gazette: How to be trusted
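Written out as an equation (the variable letters here are my shorthand, not necessarily the book's):

    T = (C + R + I) / S

where T is trustworthiness, C is credibility, R is reliability, I is intimacy and S is self-orientation. Note that self-orientation sits alone in the denominator: the more the conversation is about you rather than the client, the more it divides down everything else you bring.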
What would we do without Father Microsoft making decisions for us that we clearly aren't expected to be able to make on our own? "Microsoft says that Word for Windows will soon enable autosave and automatically save all new documents to the cloud by default," and better still, "Microsoft will also roll out this functionality to Excel for Windows and PowerPoint for Windows users later this year." I am sure that for some people this is a great default setting. I am equally sure that for many others it is a disastrous one. Having the option is great. Choosing it for me by default is not. Read more at BLEEPINGCOMPUTER: Microsoft Word will save your files to the cloud by default
With everyone scrambling to make sense of old workflows in the age of artificial intelligence, I thought this post by Dana Maor, Patrick Guggenberger and Alina Holzer would be helpful. They note, "Creating end-to-end processes that drive value is an area where many organizations fall short. Tasks and roles remain function-based, managers' days are stuffed with briefings that don't reflect priorities or performance management, and data lives in silos and spreadsheets, preventing scalable efficiencies." The fix? "The solution sounds simple: finding the most effective and efficient way to get from point A to point B with results that reflect the organization's strategic goals." I agree it sounds simple, but it is not so simple in practice. Having just recently returned from ILTACON, I've talked to many people about how they are folding AI into their work, and frankly some of it sounded complex - overly complex. If you take anything away from this, simplicity is the key: "Simplifying and optimizing processes across an organization allows managers and employees alike to invest their time and resources more effectively to create sustainable value." Set aside some additional time, and I'm recommending a different sort of KISS as the background music, as you read more at McKinsey & Company: Want to break the productivity ceiling? Rethink the way work gets done
Kudos to Roy Strom, as I believe he is the first to write about the costs of generative AI in legal. I have been bemoaning the fact that every post that's come through my PinHawk feeds discussing the cost of AI focuses on the "benefits" and completely ignores the "costs." With that kind of math, the answer is always skewed. He writes, "Firms spend $50 to $350 per attorney each month for AI tools, Greg Lambert, chief knowledge services officer at Jackson Walker, estimates. AI adds another 30% in cost to existing tools, he said. So firms may be spending as much as 0.5% of their revenue on AI, he said, stressing it is difficult to know the precise amount." It seems many "deals" big law has worked out with legal AI vendors are about to come to an end: "A chief innovation officer at a top 50 firm who was not authorized to discuss its spending publicly said his firm was still in the testing and ramp-up phases for numerous AI-related products. When those periods end, the firm is likely to see its costs rise significantly." Another quote I found interesting: "The AI gold rush is on for legal technology vendors. The question for law firms is if-or when-the new technology will break the bank." Comment and pass this post around so we'll see more of the same. And again, kudos to Roy for taking on what no one else in the market has seemingly been willing to do. Read more at Bloomberg Law: Here Is How Big Law Is Preparing to Pay for the AI Onslaught
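If you want a rough sense of how a per-attorney subscription turns into a revenue percentage, here is a minimal back-of-envelope sketch. The revenue-per-lawyer figure is my own hypothetical assumption for illustration; it does not come from Roy's article.

```python
# Back-of-envelope check on the quoted per-attorney AI spend.
# The revenue-per-lawyer figure is a hypothetical assumption for
# illustration only; it is not a number from the article.
monthly_ai_cost_per_attorney = 350        # top of the quoted $50-$350 range
assumed_revenue_per_lawyer = 1_000_000    # hypothetical annual revenue per lawyer

annual_ai_cost = monthly_ai_cost_per_attorney * 12   # $4,200 per attorney per year
share_of_revenue = annual_ai_cost / assumed_revenue_per_lawyer

print(f"AI spend as a share of revenue: {share_of_revenue:.2%}")
# Prints roughly 0.42% - in the neighborhood of the article's "as much as 0.5%"
```

Under those assumptions, the quoted "as much as 0.5% of revenue" figure checks out as plausible arithmetic rather than hype.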
Have you ever considered that your question is wrong, that it's your questions where the mistakes originate? In their article this morning, Aaron De Smet and Tim Koller ask you to consider this in great depth. Marketing mistakes and space shuttle explosions are the examples they provide of how questions, not answers, can be the problem. They write, "The framing effect is a well-documented cognitive bias in which people with identical information make different decisions based on how the information is presented." And they conclude, "One way to compensate for the framing effect is to 'follow the thread' - that is, reverse engineer decisions by starting with the desired outcome, tracing back through the actions and decisions that led to it, and identifying the right question to ask." I'm pretty sure asking if you should read on is a mistake. You definitely will want to read more at McKinsey Quarterly: Bias Busters: When the question-not the answer-is the mistake
While I'm not typically a betting man, I would bet a healthy sum that Ryan McKeen's post will provoke more than a few responses from big AI in legal. Here are some of my favorite turns of phrase from Ryan: "...under the hood, they're using the exact same language models you can access directly. CoCounsel runs on GPT-4. Harvey uses Claude. Lexis+ AI Copilot is powered by Anthropic's models. They're essentially charging you a 10x to 50x markup to put a legal wrapper around technology you can use yourself." "Meanwhile, the performance gap keeps widening. When I use Claude directly, I get responses in seconds. When I use CoCounsel, I wait. And wait. The interface feels clunky, like software designed by committee in 2019. The prompting is rigid. The outputs are formulaic." You can definitely say Ryan is provocative, but can you say he's wrong? Be sure to read more at JD SUPRA: Why Legal AI Products Are Getting Lapped by $20 Chatbots
On my first pass review I misread Eric Dodson Greenberg's title and thought he was talking about off-brand, or knock-off, AI tools. As I came back for a second pass, I saw he was talking about use cases and describing them as "off label." To me, off-label is a term from the prescription drug world, where a drug is used for a purpose other than the one it was approved for. He talks about AI contract review tools being used to help review political advertisements for defamation risks. Eric throws out some pretty amazing stats for AI's use but comes home with this very realistic view: "Despite these wins, not every product is a home run. Sometimes technology that dazzled in demos proved cumbersome and inefficient to implement. We've acquired tools that are undeniably powerful but required levels of training or upfront resource investments that, as a practical matter, prevented us from fully realizing the anticipated efficiency gains. This reveals a critical gap in legal technology adoption. The 'some assembly required' element can be sufficiently taxing that the drag it puts on implementation almost negates the value of the product." Eric has now added off-label and 'some assembly required' to my AI vocabulary. Read more at Bloomberg Law: 'Off-Label' AI Uses Can Expand In-House Team's Reach Beyond Legal
A major goal of every technology project is user adoption. Matheus Bertelli warns us that adoption is not always good and that all adoptions are not created equal. He's got a global study ("more than 32,000 workers from 47 countries") showing that more than half of employees "intentionally use AI at work - with a third using it weekly or daily." That all sounds good until he dispenses these facts: "However, almost half the employees we surveyed who use AI say they have done so in ways that could be considered inappropriate (47%) and even more (63%) have seen other employees using AI inappropriately." So the good news is that people are using AI tools, but the bad news is they aren't using them appropriately. Concerned? You had better toss Stephen Abram a hat tip and read more at The Conversation: Major survey finds most people use AI regularly at work - but almost half admit to doing so inappropriately
I thought it useful to point out, in the wake of Ryan McKeen's post favoring general AI tools, that while they might be faster and cheaper, they are not perfect. AI safeguards are a slippery slope. At what point do you shut things down or notify a human response team? This post notes that "OpenAI eased these content safeguards in February following user complaints about overly restrictive ChatGPT moderation that prevented the discussion of topics like sex and violence in some contexts." Depending on the types of issues lawyers are dealing with, they could trigger all kinds of alarms inside of AI. In this case a teenage boy died by suicide after "ChatGPT provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family." Some 377 messages got flagged for "self-harm content" but nothing was done. In his post, Benj Edwards has a powerful subtitle, "There's no one home: The illusion of understanding," that I think is worthy of deep thought. For today's generation of AI, there is no one home. There is no understanding. Will we get there? I don't know - maybe. But those facts need to be considered whether you're doing legal research and drafting or seeking help for physical or mental issues. If you're researching things that you are a subject matter expert on, then review the output and make sure the information is legit. If you are not a subject matter expert on the information you're asking about, turn to a human expert to validate what AI is telling you. Read more at ars technica: OpenAI admits ChatGPT safeguards fail during extended conversations
Rob Ameerun talks with Epona's CEO Marcel Lang for his post today. I'm not focusing so much on the post, which reads more like an ad for Epona, but rather on the title and the idea that the "traditional DMS" will disappear because of artificial intelligence. "Traditional" is a key word here (and can most likely be translated as meaning NetDocuments and iManage) because clearly the folks at Epona don't think of themselves as traditional. I don't believe they are going away. This is not the first time that the threat of disappearance has been made. I think it was around the introduction of Windows 10 that this prediction was first made. (If anyone has a better timeline, please email me!) In that iteration, it was the operating system that was going to cause the DMS's demise: a separate DMS wouldn't be needed because Microsoft would build the DMS into the Windows OS. Of course that never happened. For those of you not aware, Epona has its origins in the code from a tool called MatterCenter that was developed by Microsoft's legal department. I will stipulate that there are some slick pieces to Epona, and its integrations with Microsoft are pretty cool. Marcel asks, "Why cling to metadata in a proprietary database when Microsoft gives you a global platform with security, compliance, and AI built in? The future is about finding and using information intelligently, not dumping it into buckets." Classifying data is always going to be a key element, and if we are talking about users not interacting with a traditional DMS in the same way they have in the past, then I'm in full agreement. But I don't see it going away because of AI. Limited time and space prevent me from launching into all the potential issues, but one key item is that Microsoft has already rejected the DMS concept. (If someone wants to set up a panel discussion on the subject - I'm in!) If Microsoft were to buy Epona (or NetDocuments or iManage) and truly invest, then this story might have a different outcome. Read more at Legal IT Professionals: "The Traditional DMS Will Disappear - the Future Is AI-Driven"