Welcome to the peer-powered, weekend summary edition of the Law Technology Digest! Using PinHawk's patented advanced big data analytics, this summary contains the best of the best from the week: the stories that you, our readers, found most interesting and informative. Enjoy.
First let me say that the signs of suicide are not always clear. If you suspect someone is not in a good place, reach out with an encouraging word. If you experience any feelings like this, I encourage you to reach out to loved ones and discuss it. If you need to, reach out to me and we'll talk. For this post, I have to say that I am with the Raine family's lawyer, Jay Edelson, in calling this response by OpenAI "disturbing." According to OpenAI's filing, "A full reading of his chat history shows that his death, while devastating, was not caused by ChatGPT." Caused? Probably not. Contributed? Highly likely. I think anyone contemplating such an act is besieged by many, many devils and torments. Think about others in your life and have a safe weekend as you read more at ars technica: OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
I grew up in the age of First Lady Nancy Reagan's "Just Say No" marketing campaign against drug use. I don't know whether that helped me in my career as a CIO in saying no to partners, but I suppose it didn't hurt. Some people might find it shocking for me to say this, but it is part of my job to say no to partners, and saying no is quite easy! Navigating how to say it gracefully and politically is where it can be challenging. In my first law firm, it took me a while to figure out the right way to say no to the partner in charge of marketing, but after I learned how, he and I got along excellently. In one firm where I was brought in, IT was running ragged, juggling competing projects and failing to complete them. One of the key things I had to teach the IT Director was how to say no. In his post today, Stephen Abram shares his memes on saying no. If you have trouble saying no, be sure to check them out at Stephen's Lighthouse: Saying NO: My memes collection
Two of my favorite eDiscovery people, Brett Burney and Doug Austin, are talking about AI in the courtroom. Now, they are not suggesting you submit fake evidence generated by AI, but they are warning you to be on the lookout for someone else who might. It's bad enough when lawyers are too lazy to check their AI-augmented submissions to the court for hallucinations, but to deliberately submit deepfake videos and images is disgusting. Watch more at JD SUPRA: Key Discovery Points: If You're Planning to Submit GenAI Deepfake Evidence, Make Sure It's Believable
Mirko Zorz interviews Graham McMillan, CTO at Redgate Software, and the conversation "discusses AI, security, and the future of enterprise oversight." I find something very disturbing about the idea that the AI incidents to date haven't been bad enough to provoke change and maturity. Maybe we all need to be disturbed more, and maybe Graham is correct: is what we need a good old, massive meltdown to get the change we need? Read more at HELPNETSECURITY: How an AI meltdown could reset enterprise expectations
In my career as a CIO I've had to plan for growth and expansion, mergers and more. But I've never had to double my serving capacity every six months, aiming at growth by a factor of 1,000 in a 4-5 year window! That is what Google feels it has to do to meet demand for its AI services. And while that would be challenging all by itself, there are some conditions that go with that intense scaling: "Google needs to be able to deliver this increase in capability, compute, and storage networking 'for essentially the same cost and increasingly, the same power, the same energy level.'" Wow! Talk about challenges. Read more at ars technica: Google tells employees it must double capacity every 6 months to meet AI demand
I've got a double Bob highlight for you this morning. Thomson Reuters vs ROSS Intelligence is back in the news and Bob Ambrogi is all over it! In his first post he reviews the 85-page redacted brief filed last week by Kirkland & Ellis on behalf of TR in the 3rd U.S. Circuit Court of Appeals. "Copying protectable expression to create a competing substitute isn't innovation: it's theft. This basic principle is as true in the AI context as it is in any other," gives you the basic context. In his second post on the same subject, Bob puts on his tin foil hat and dives deeper into the redactions. Sadly, it seems like the good folks at Kirkland did their redactions properly and the information is truly hidden. I want to propose that Bob take his analysis of the redactions one step further and create a Mad Libs: TROSS edition. He could enter the brief in MS Forms, ask for input for each redaction, and share it with us all. What a hoot that would be! We could even get various AIs to play by submitting the brief to them and asking them to fill in the blanks. I'm game, are you? But seriously, be sure to read more at LawSites: Thomson Reuters Tells Appeals Court: ROSS's Copying Was 'Theft, Not Innovation' and Thomson Reuters v ROSS: What Might Those Tantalizing Redactions in TR's Brief Conceal?
In reading Adi Gaskell's post I thought I might see if the struggles that students have using AI responsibly might explain why some lawyers have the same struggles. Imagine my surprise when the study by the University of Wollongong involved law students! Adi writes, "Even when law students were given guidelines and instruction, many misused the technology in ways that raise serious concerns." Of the study, Adi writes, "The results were mixed. Some students used AI well: to sharpen arguments, summarize complex ideas, and improve clarity. But many ignored the rules. Some cited academic sources that did not exist, while others misrepresented real papers by claiming they said things they did not. This problem, known as AI 'hallucination,' is well documented, but its effect on legal education is particularly troubling." That matches the real world to a T! "A key problem is 'verification drift.' Students may start out checking AI-generated content, knowing it can be unreliable. But as they get used to its polished and confident tone, they begin to trust it more than they should. Over time, verification seems less necessary, and errors go unnoticed." Lazy students and lazy lawyers. In his summary he writes, "AI can be a valuable tool, but only if students learn to use it critically. Without stronger safeguards, the next generation of lawyers may enter the profession with a dangerous habit: treating plausible-sounding nonsense as fact." Sadly, we're already there. Read more at REAL KM: Why students struggle to use AI responsibly
I have to admit I was today years old when I first heard the term rubber duck debugging. Elliot Varoy gives a great explanation of the origin and the methodology. "The gist of it is that one should obtain a rubber duck, and use it when your code isn't working - and you don't know why. Explain to the duck what your code is supposed to do, and then 'go into detail and explain things line by line'. Soon, the moment of revelation strikes: you realise, as you speak aloud, that what you meant to do and what you actually did are two very different things." It works on issues other than code as well! I'd recommend you send this article to your boss BEFORE you start talking to rubber ducks. It will save a lot of explaining if they walk in on you. Read more at REAL KM: Stuck on a problem? Talking to a rubber duck might unlock the solution
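To make the methodology concrete, here is a toy sketch of my own (the function and the bug are hypothetical, not from the article): narrating each line to the duck is what surfaces the gap between what you meant and what you wrote.

```python
def average(values):
    """Return the arithmetic mean of a non-empty list of numbers."""
    total = 0
    for v in values:
        total += v          # duck: "I add up every value..."
    # duck: "...then I divide by how many values there are."
    # An earlier draft divided by len(values) - 1. Saying the intent
    # aloud ("how many there are") is exactly what exposed the bug.
    return total / len(values)

print(average([2, 4, 6]))  # 4.0
```

The point isn't the duck, of course; it's that explaining the code line by line forces you to re-read what it actually does rather than what you remember intending.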
I have to admit that there are some days when I think the only people excited about AI are the ones trying to sell it to me. So I was keenly interested in this post by Deborah Lovich, Stephan Meier and Chenault Taylor. The crux of the survey is that while 76% of top executives THINK their employees are enthusiastic about AI, only 31% actually are! That is a huge gap. It would seem to prove the old adage about why we have two ears and one mouth: to listen twice as much as we talk. I think it's worth your time to read this post. As the authors write, "The future of AI isn't just about smarter machines, it's about more human organizations. Employee-centric organizations have powerful processes that don't just make work better; they make AI work better, too." Read more at Harvard Business Review: Leaders Assume Employees Are Excited About AI. They're Wrong.
I love good quotes and Dan Rockwell has a bunch of good ones in this post. I think my favorite is one that comes from a graphic he used and not one of the seven he listed. Cicero apparently said, "Gratitude is not only the greatest of virtues, but the parent of all the others." Take a moment and read more at LEADERSHIP FREAK: 7 Gratitude Quotes That Will Change You