|
|
|
// YOUR AI EDGE
THE AI THAT BEAT YOUR DOCTOR (AND THE PART NOBODY'S TALKING ABOUT)
A Harvard study, a $5 billion Pentagon contract, and a brain for the Bloomberg Terminal. Five stories. Five minutes. Let's go.
|
|
// TODAY'S SIGNAL
|
The AI That Beat Your Doctor
A study published in Nature Medicine this month tested GPT-4o against two emergency room physicians on 500 real patient cases. Same information. Same conditions. Chief complaint, vitals, labs, imaging. The AI got the right diagnosis 76% of the time. The doctors hit 56% and 57%.

The study came out of Harvard Medical School, led by Dr. Thomas Lindsey and Dr. Atul Gawande (the surgeon who wrote "Being Mortal" and led global health at USAID under Biden). It's peer-reviewed, it's in Nature Medicine, and the gap is not subtle. Twenty percentage points. GPT-4o's edge was sharpest on rare and complex cases. "It can simultaneously weigh hundreds of data points without fatigue, bias, or anchoring," Lindsey told TechCrunch.

Here's the part that didn't make most headlines: the AI's errors were more than twice as dangerous. In about 8% of cases, GPT-4o confidently gave a diagnosis that, if acted on, could have harmed the patient. The doctors? About 3%. "When AI is wrong, it's wrong in ways that are hard to catch," Lindsey said. "It doesn't hedge the way a doctor might. That overconfidence is a real risk."

Also worth knowing: the AI was reading structured text summaries. Not a screaming patient at 3 a.m. with an incomplete medical history and a family member who can't remember which medication mom takes. Gawande put it cleanly: "An ER is not a multiple-choice test."

So what does the research actually suggest? Not replacement. A second opinion. "Imagine a rural ER at 3 a.m. with one doctor and 15 patients," Lindsey said. "An AI assistant that surfaces likely diagnoses and flags red flags could save lives."
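For the curious: "structured text summaries" means the model never saw a patient, just text. Here's a minimal sketch of that setup using the OpenAI Python client. The prompt wording and case fields are our illustration, not the study's actual protocol.

```python
# A sketch of the study's setup as described: a structured case summary
# sent to GPT-4o, asked for a ranked differential diagnosis.
# Illustrative only: the real protocol, prompt, and fields are the study's.
# Assumes the openai package (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

case_summary = """
Chief complaint: acute chest pain, onset 2 hours ago
Vitals: BP 92/60, HR 118, RR 24, SpO2 91% on room air
Labs: troponin 0.02 ng/mL, D-dimer 1,850 ng/mL
Imaging: chest X-ray unremarkable
"""

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a diagnostic assistant. "
         "Return a ranked differential diagnosis with brief reasoning "
         "and flag any red-flag findings."},
        {"role": "user", "content": case_summary},
    ],
)
print(response.choices[0].message.content)
```

Note what's missing from that input: everything a human notices in the room. That's the gap Gawande is pointing at.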
|
// THE REAL STORY
The headline is "AI beats doctors." The study says "AI should help doctors." The gap between those two sentences will define how this technology actually gets deployed. The most useful version of medical AI right now? The one catching what a tired human missed at hour 14 of a shift. A staffing solution hiding inside a science paper.
|
|
|
|
// SHORTCUT
Copy, paste, go:
"I'm about to submit [type of deliverable: report, deck, proposal, email] to [audience: my manager, a client, senior leadership]. Review it as if you were [their role]. Give me: 1. The single weakest point they'll push back on 2. One data point or example that's missing 3. Any sentence that sounds vague or hand-wavy 4. A one-line summary of what this deliverable actually argues
Be blunt. I'd rather fix it now than get notes later."
|
|
|
Wall Street's $25,000 Copilot
Bloomberg just pushed the biggest update to its Terminal in eight years. At the center: BloombergGPT Enterprise, a proprietary AI model baked into the platform that 325,000 financial professionals pay $25,000 a year to access.

What it does: you type a question in English, you get structured data back. "Show me the 10 worst-performing energy stocks in the S&P 500 this quarter ranked by free cash flow decline." Done. Previously, that required Bloomberg's proprietary command syntax, which takes years to master. The AI also summarizes earnings calls (45 minutes of reading compressed to under 5), drafts research notes, and flags weird market activity in plain language.

Bloomberg shared beta data with Wired. Among junior analysts (under three years of experience), 91% are using the AI features weekly. Among users with 15+ years on the Terminal? 34%. One senior hedge fund trader: "I've spent 20 years learning the Bloomberg keyboard shortcuts. I can pull up any screen I need in seconds. Why would I type out a sentence when I can hit three keys?" A managing director at a bulge-bracket bank was less diplomatic: "The junior people love it because they never learned how to use the Terminal properly. It's a crutch."

Bloomberg CTO Shawn Edwards knows the stakes: "A wrong number in a trading context isn't just an error. It's a potential compliance violation and a real financial loss."

The AI features come included with the existing $25,000 subscription. A premium tier called Bloomberg AI Pro adds custom workflows for $5,000 more per year. Full rollout to all subscribers by end of Q3.
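For a feel of what that English sentence replaces, here's roughly the structured query it compiles down to. A toy pandas sketch with made-up numbers; Bloomberg's actual API, field names, and command syntax are not public to us.

```python
# Toy version of: "Show me the 10 worst-performing energy stocks in the
# S&P 500 this quarter ranked by free cash flow decline."
# Synthetic data: the tickers are real, every number is invented.
import pandas as pd

sp500 = pd.DataFrame({
    "ticker": ["XOM", "CVX", "SLB", "HAL", "OXY", "COP"],
    "sector": ["Energy"] * 6,
    "qtd_return": [-0.12, -0.08, -0.21, -0.18, -0.05, -0.15],
    "fcf_decline": [0.30, 0.12, 0.45, 0.38, 0.09, 0.27],  # QoQ drop in FCF
})

worst = (
    sp500[sp500["sector"] == "Energy"]            # filter to the sector
    .nsmallest(10, "qtd_return")                  # worst performers this quarter
    .sort_values("fcf_decline", ascending=False)  # rank by FCF decline
)
print(worst[["ticker", "qtd_return", "fcf_decline"]])
```

The senior trader's three keystrokes and the junior analyst's English sentence both land here. The question is which one takes 20 years to learn.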
|
// CAREER MATH
If you're in finance, consulting, or anything data-heavy, the Bloomberg story is your preview. The tool that took years of muscle memory to master just got a shortcut. Senior people who already know the system won't feel the pinch. But the idea that knowing the system is worth 20 years of your time? That just got a lot harder to defend. The competitive advantage is shifting from "I memorized the commands" to "I ask the right questions."
|
|
|
The Pentagon's $5 Billion AI Shopping Spree
Last week the Pentagon announced Thunderforge: five classified AI contracts totaling roughly $5 billion over five years. The winners: OpenAI, Google, Nvidia, Palantir, and Scale AI. The loser, again: Anthropic.

Issue 007 covered Anthropic losing an $800 million intelligence contract to OpenAI in February after its bid included use-case restrictions. Thunderforge is bigger, broader, same pattern. Anthropic submitted a proposal with guardrails. Evaluators picked the companies without them. A defense official told The Verge: "They wanted guardrails that were incompatible with what the mission required."

Google's participation is the story within the story. In 2018, Google employees revolted over Project Maven (a Pentagon drone program) and the company walked away. Eight years later, Gemini models for classified intelligence analysis. No employee revolt this time.

OpenAI's piece alone is worth up to $2 billion. Meanwhile, Anthropic is fielding offers at a $900 billion valuation. The company that keeps losing Pentagon contracts is about to be worth as much as the company winning them.
|
// CONNECT THE DOTS
Three facts. Google employees killed Project Maven in 2018 over ethics concerns. Google just signed a classified AI contract with no public protest. The difference? Eight years, a tighter job market, and a company that learned to stop asking for permission. The AI ethics conversation didn't end. It just got quieter.
|
|
|
|
🔴🟡🟢 RED LIGHT / GREEN LIGHT
|
🔴 // RED LIGHT
AI-Generated Marketing Assets
An AI startup used AI-generated knockoffs of the "This is Fine" dog in ads. The creator sent a cease-and-desist. Copyright law hasn't caught up, but lawsuits have.
|
|
🟢 // GREEN LIGHT
NotebookLM for Meeting Prep
Feed it the 40-page report you didn't read. Get a 10-minute audio briefing on your commute. Free with a Google account.
|
|
🔴 // RED LIGHT
Trusting AI Diagnoses at Face Value
GPT-4o beat two ER doctors on accuracy but made dangerous errors at more than double the rate. The AI doesn't hedge when it's wrong. That overconfidence is the risk.
|
|
🟢 // GREEN LIGHT
Bloomberg's Natural Language Queries
Type a question in English, get structured financial data. Junior analysts are saving an hour a day, and 91% of them use the AI features weekly.
|
|
|
|
Mickey Mouse Is Scanning Your Face Now
Disneyland started using facial recognition at park entry in late April, replacing fingerprint scanners. Cameras match your face to the photo on your ticket. Disney says it's only for identity verification, not for tracking you inside the park, and facial data is deleted shortly after each scan. You can opt out. The alternative is showing a government-issued photo ID every time you enter. So your choices at the Happiest Place on Earth are now: let a camera scan your face, or hand a 19-year-old cast member your driver's license.

Privacy advocates are not impressed. Albert Fox Cahn, who runs the Surveillance Technology Oversight Project, told 404 Media: "Disney is normalizing biometric surveillance in the happiest place on Earth. When one of the most beloved brands in the world embraces face recognition, it sends a message that this invasive technology is acceptable."

Disney's argument: this prevents ticket fraud and it's faster than fingerprints. Both probably true. But the switch from fingerprint to face is a meaningful jump. Fingerprints require physical contact. Face recognition works at a distance, scales easily, and has a documented history of misidentifying people with darker skin tones.

The rollout started at Disneyland in Anaheim. Walt Disney World in Orlando is expected to follow later this year.
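The distinction Disney is leaning on (verification, not tracking) is a one-to-one match against the photo on your ticket, not a search across a database of faces. Here's a toy sketch of that check using the open-source face_recognition library; Disney's actual system, models, and thresholds are not public, so treat every detail as a stand-in.

```python
# One-to-one face verification: does the gate camera frame match the
# photo on the ticket? Uses the open-source face_recognition library,
# a stand-in for whatever Disney actually runs.
import face_recognition

ticket_photo = face_recognition.load_image_file("ticket_photo.jpg")
gate_frame = face_recognition.load_image_file("gate_camera.jpg")

ticket_encoding = face_recognition.face_encodings(ticket_photo)[0]
gate_encoding = face_recognition.face_encodings(gate_frame)[0]

# Lower distance = more similar; 0.6 is this library's conventional cutoff.
distance = face_recognition.face_distance([ticket_encoding], gate_encoding)[0]
print("match" if distance < 0.6 else "no match")
# Per Disney's stated policy, the encodings are deleted after the check.
```

The same encodings pointed at a database instead of a single ticket photo become tracking. That's why privacy advocates care about the infrastructure, not just the stated use.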
|
// STEP BACK
This is the consent ratchet at work. First it was MagicBands. Then fingerprints. Now facial recognition. Each step normalizes the next one. The question for anyone working in marketing, operations, or customer experience: where does your company draw this line, and who's making that decision?
|
|
|
"This Is Fine" (Until Someone Steals It)
KC Green created the "This is Fine" dog in 2013 as part of his webcomic Gunshow. A dog sitting in a burning room, sipping coffee, insisting everything is okay. It became one of the most shared images on the internet.

This week, Green discovered that an AI startup called Centric AI was running ads on LinkedIn and X featuring AI-generated variations of his character. The dog in an office. The dog at a computer. The dog promoting Centric AI's productivity tools. Fans spotted the ads and tagged Green.

Centric AI initially claimed the images were "original AI-generated artwork." After the internet did what the internet does, they pulled the ads and said they "respect creators' rights." No apology to Green. He's consulting with an attorney and has sent a cease-and-desist.

The legal question here is genuinely unresolved. If an AI model was trained on Green's widely reproduced image (which it almost certainly was) and then generates something clearly recognizable as his character, is that infringement? Copyright law doesn't have a clean answer yet. What it does have: a lot of pending lawsuits from a lot of angry artists.
|
// TRANSLATION
"AI-generated original artwork" is the new "we found it on Google Images." If your company uses AI to create marketing assets, someone needs to be checking whether the output looks like somebody else's work. Not because the law is settled. Because the lawsuits are coming either way.
|
|
|
|
|
// TOOLBOX
NOTEBOOKLM
What it does: Google's research assistant. Feed it documents, PDFs, YouTube links, or websites and it builds a knowledge base you can chat with. It generates podcast-style audio summaries of your sources, which sounds gimmicky until you're prepping for a meeting on your commute and need a 10-minute briefing on a 40-page report.
The pitch: Like having an intern who actually reads all the attachments.
The caveat: It only knows what you feed it. No internet access, no real-time data. If you forget to upload the latest version of a doc, it'll confidently cite the old one. Works best for deep research on a specific topic, not as a general assistant.
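The caveat is also the design. Tools like NotebookLM answer from your sources and nothing else, a retrieval pattern you can sketch in a few lines. This is the general idea, not Google's implementation; the sample "documents" below are invented.

```python
# Toy version of the source-grounded pattern behind NotebookLM-style tools:
# the assistant can only surface passages from what you uploaded.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sources = [  # stand-ins for your uploaded docs
    "Q3 revenue grew 14% year over year, driven by the enterprise segment.",
    "Headcount was flat; the hiring freeze continues through Q4.",
    "The board approved a $2M budget for the data platform migration.",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(sources)

def retrieve(question: str) -> str:
    """Return the uploaded passage most relevant to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    return sources[scores.argmax()]

print(retrieve("What happened to hiring?"))
# -> "Headcount was flat; the hiring freeze continues through Q4."
```

Upload a stale doc and the retrieval is just as confident about stale facts. Hence the caveat.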
|
|
// WAIT... DOES THIS ACTUALLY WORK?
THE OBSCURE PROMPT OF THE DAY
prompts nobody asked for. results nobody expected. try it anyway.
|
"Write my annual performance review, but every accomplishment has to be described using only weather metaphors. I'm in sales."
|
// OUR VERDICT
ChatGPT went all in. "Q3 pipeline velocity created a Category 5 surge in closed-won revenue." "Cross-functional collaboration with marketing produced a sustained high-pressure system over the Southeast territory." It wrote six bullet points and every single one sounded like a Weather Channel anchor trying to get promoted. The problem: I showed it to two friends in sales and both said their actual performance reviews already sound like this. Corporate jargon and weather metaphors are, apparently, the same language. Zero practical value. Maximum existential crisis.
SURPRISINGLY PRACTICAL: ★★☆☆☆
|
|
// YOUR EDGE
|
01
Learn this: How AI diagnostic tools work in clinical settings. The Harvard study used GPT-4o on text-based case summaries, not live patients. That distinction matters when someone in a meeting says "AI is better than doctors now." It excels at pattern-matching structured data. Impressive, useful, and very far from replacing a physician.
|
|
02
Watch this: Anthropic's fundraising. They're fielding offers at a $900 billion valuation while getting shut out of Pentagon contracts. At some point, the market has to decide whether the "responsible AI" premium is worth paying, or whether it's a competitive disadvantage. That answer will shape the entire industry.
|
|
03
Say this: "Bloomberg's AI features have 91% weekly adoption among junior analysts and 34% among senior users. The tools aren't controversial. The generational divide in who uses them is."
|
|
|
// GOT THIS FROM A FRIEND?
Your edge on AI, twice a week. Free forever.
Subscribe →
|
|
|
|