- 'Quotes And Leadership Lessons From From The World Of John Wick: Ballerina', , June 9, 2025
'
I haven't seen From The World Of John Wick: Ballerina yet, but based on the leadership lessons Joseph Lalonde draws from it, I want to see it even more. My favorite of his lessons is "Look for ways to create new things." Apparently in the movie, Eve finds a handgun and tapes a knife to the grip. I'm not sure I can take that specific lesson, but Joseph's more generalized "create new things" is one I can embrace. Read more at Joseph Lalonde: Quotes And Leadership Lessons From From The World Of John Wick: Ballerina
'
- 'Great Conversations Break the Rules', , June 10, 2025
'
My first thought on reading this post was that I didn't know there were rules to conversations. But as I thought about it, I suppose there is one rule: "Don't be a jerk." According to Dan Rockwell's mom, the rules are "Don't interrupt. Let people finish. Pause. Make space." I guess my mom simplified things for me. Dan says you have to break the rules. According to him, you're supposed to say, "Wait - did you say...?" "What are you not saying?" "You're kidding." and "Have you lost your mind?" To find the context of those questions, you need to read more at Leadership Freak: Great Conversations Break the Rules
'
- 'AI Hallucination Cases: A Compiled List: Artificial Intelligence Trends', , June 10, 2025
'
Double hat tips to Judge Andrew Peck, who shared with Doug Austin, who shared with us this list of AI hallucination cases compiled by French lawyer Damien Charlotin. Doug opens his post by asking, "How many AI hallucination cases do you think there are?" My guess was in the 60s, but even that is too low. The Judge brought the site to Doug's attention on Thursday, and 11 new cases have been added in the few days since! Bookmark Damien Charlotin's site after you read more at eDiscovery Today: AI Hallucination Cases: A Compiled List: Artificial Intelligence Trends
'
- 'Preparing for tomorrow's agentic workforce', , June 13, 2025
'
There are lots of interesting things in this post. Lareina Yee's comment about user experience and small language models was intriguing: "I've had conversations with large businesses that are enthusiastic about huge LLMs but say the secret sauce is in the user experience [UX] with small models that only tap into their internal data." The deeper I dive into generative AI, the more promise I see in those small language models. I was also fascinated by McKinsey's calculations on the cost of generative AI: "...roughly $5 trillion investment needed over the next five years to build all the data center infrastructure, including buildings, software, cooling systems, and energy plants, to power AI's voracious appetite." For more interesting and thought-provoking commentary, read or listen to more at McKinsey Digital: Preparing for tomorrow's agentic workforce
'
- 'US air traffic control still runs on Windows 95 and floppy disks', , June 10, 2025
'
I'm not sure I want to fly again after reading this post. I knew parts of the government were backwards and behind when it came to tech, but I never realized things were this bad. Benj Edwards writes that "most air traffic control towers and facilities across the US" are still using "paper strips to track aircraft movements and transfer data between systems using floppy disks, while their computers run Microsoft's Windows 95 operating system, which launched in 1995." The inference is that this describes the majority of our air traffic control facilities, but the only figure in the post is that "agency officials say 51 of the FAA's 138 systems are unsustainable due to outdated functionality and a lack of spare parts," which is not necessarily the Windows 95 and floppies side of things. Regardless, it might make you take a new look at flying. Read more at ars technica: US air traffic control still runs on Windows 95 and floppy disks
'
- 'AI chatbots tell users what they want to hear, and that's problematic', , June 13, 2025
'
Is it your fault that the AI chatbot answer you just received is just what you wanted to hear? And might it be reinforcing your own bad choices? The big players seem to think so. "OpenAI, Google DeepMind, and Anthropic are all working on reining in sycophantic behavior by their generative AI products that offer over-flattering responses to users." Matthew Nour, a psychiatrist and researcher in neuroscience and AI at Oxford University, said, "You think you are talking to an objective confidant or guide, but actually what you are looking into is some kind of distorted mirror - that mirrors back your own beliefs." We have enough human-led echo chambers; the last thing we need is artificial ones. Read more at ars technica: AI chatbots tell users what they want to hear, and that's problematic
'
- 'Report on the Malicious Uses of AI', , June 9, 2025
'
OpenAI submitted an eye-opening 46-page report, "Disrupting malicious uses of AI: June 2025." The executive summary includes, "AI investigations are an evolving discipline. Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses." We continue on in our world of reactions. Hopefully our reaction time continues to improve as well. Read more at Schneier on Security: Report on the Malicious Uses of AI
'
- 'Harnessing Gen AI in law: six key takeaways', , June 12, 2025
'
Ben Edwards (not to be confused with Benj Edwards above) writes about the report produced jointly by The Global Legal Post and LexisNexis. The report took input from "senior lawyers and executives from leading firms in Spain, Germany, Italy, Portugal and Belgium" on the subject of the impact of generative AI. Ben says these are the six key takeaways from the report:
1 - There is no one-size-fits-all approach to AI integration
2 - Skills needs will change
3 - Firms must rethink training
4 - Processes can be completely reimagined
5 - Client expectations will shift
6 - Value-based billing will become more prevalent
I think I agree with all of these takeaways. The training issue is a classic one. Sally Gonzalez and I spoke to this back in 2018 at the first ALT cntrl-alt-del: Networking Rebooted conference. Gobble up the routine and lower-level work and you suddenly have huge gaps in the path between junior associate and senior partner. The EU doesn't have to deal with ABA Opinion 512, but I think for genAI workflows we will see a marked deterioration of the billable hour here in the US in favor of value-based billing. Be sure to read more at The Global Legal Post: Harnessing Gen AI in law: six key takeaways
'
- 'OpenAI Appeals User Data Preservation Order In NYT Lawsuit', , June 9, 2025
'
If you aren't aware, Judge Stein ordered that "OpenAI had to preserve and segregate all output log data after the NYT asked for the data to be preserved." Now if that isn't enough of a sticky wicket for OpenAI users, Sam Altman thinks that "talking to an AI should be like talking to a lawyer or a doctor." He later amended that to say "spousal privilege may be a better analogy." That makes things even stickier. Based on some of the AI stories I have read, perhaps the best analogy would be that of a confessional?! Read or listen to more at Silicon: OpenAI Appeals User Data Preservation Order In NYT Lawsuit
'
- '86% of all LLM usage is driven by ChatGPT', , June 11, 2025
'
I knew ChatGPT was the leading LLM, but I wasn't aware of how far out in front of the others it is. New Relic reports it in the lead, "making up over 86% of all tokens processed." The article notes that "Developers and enterprises are shifting to OpenAI's latest models, such as GPT-4o and GPT-4o mini, even when more affordable alternatives are available." With such a lead, the Aesop fable of the hare and the tortoise immediately comes to mind. Is OpenAI the hare? Will they take a nap halfway through, and will one of the remaining vendors end up the tortoise? Read more at HELPNETSECURITY: 86% of all LLM usage is driven by ChatGPT
'
- 'New Apple study challenges whether AI models truly "reason" through problems', , June 12, 2025
'
As you work and play with generative AI tools and see the genuine advancements, it's easy to forget that they don't really "think." Benj Edwards has a pair of posts on this subject. First up is an Apple study with the industrial-strength title of "The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity." The researchers pitted the tools against some classic puzzles and made some interesting observations on the testing. They noted, "...today's tests only care if the model gets the right answer to math or coding problems that may already be in its training data-they don't examine whether the model actually reasoned its way to that answer or simply pattern-matched from examples it had seen before." In the second post he talks about the words we use and what they mean. I find this interesting because I've seen client surveys that confused "AI" and "generative AI." I've tried to use "confabulations" instead of "hallucinations," but it's an uphill battle. Benj writes, "It's easy for laypeople to be thrown off by the anthropomorphic claims of 'reasoning' in AI models. In this case, as with the borrowed anthropomorphic term 'hallucinations,' 'reasoning' has become a term of art in the AI industry that basically means 'devoting more compute time to solving a problem.' It does not necessarily mean the AI models systematically apply logic or possess the ability to construct solutions to truly novel problems." I appreciate that Ars Technica applies the term "simulated reasoning" (SR) to describe these models. I will attempt to do better myself. Say what you mean and mean what you say as you read more at ars technica:
New Apple study challenges whether AI models truly "reason" through problems
With the launch of o3-pro, let's talk about what AI "reasoning" actually does
'
- 'OpenAI signs surprise deal with Google Cloud despite fierce AI rivalry', , June 11, 2025
'
It seems that Microsoft's loss is Google's gain. "OpenAI has struck a deal to use Google's cloud computing infrastructure for AI despite the two companies' fierce competition in the space, reports Reuters. The agreement, finalized in May after months of negotiations, marks a shift in OpenAI's strategy to diversify its computing resources beyond Microsoft Azure, which had been its exclusive cloud provider until January." Read more at ars technica: OpenAI signs surprise deal with Google Cloud despite fierce AI rivalry
'
- 'Amazon Signs Data Centre Nuclear Power Deal', , June 13, 2025
'
It's somewhat mischievous to say Amazon is becoming a nuclear power, but they have inked a nuclear deal with Talen Energy. Technically it's nuclear-powered data centers, not weapons. Technically it's not a new nuclear deal; it's an expansion of an existing one. Stay reactive (not radioactive!) as you read more at Silicon: Amazon Signs Data Centre Nuclear Power Deal
'