Everyone’s talking about Muck Rack’s 2025 State of Journalism report. It’s a doozy. But too many takeaways stop at the surface. “Don’t be overly promotional.” “Pitch within the reporter’s beat.” “Keep it short.” All true. All timeless. But if you work in crisis communications or anywhere near the intersection of trust, media, and AI, those are just table stakes.

The real story is what the report says about disinformation and AI’s double-edged role in modern journalism. Here’s where every in-house and agency team should be paying the closest attention:

🧨 The Risk Landscape: What Journalists Are Actually Worried About

🚨 Disinformation is the #1 concern
Over 1 in 3 journalists named it their top professional challenge, more than funding, job security, or online harassment.

🤖 AI is everywhere and largely unregulated
77% of journalists use tools like ChatGPT and AI transcription, but most work in newsrooms with no AI policies or editorial guidelines.

🤔 Audience trust is cracking
Journalists are keenly aware of public skepticism, especially when it comes to AI-generated content on complex topics like public safety, politics, or science.

‼️ Deepfakes and manipulated media are on the rise
As I discussed yesterday in the AI PR Nightmares series, the tools to fabricate reality are here. And most organizations aren’t ready.

🛡️ What Smart Comms Teams Should Do Next

1. Label AI content before someone else exposes it
→ Add “AI-assisted” disclosures to public-facing materials, even if you start with internal drafts. Transparency builds resilience.

2. Don’t outsource final judgment to a tool
→ Use AI to draft or summarize, but ensure every high-stakes message, especially in a crisis, is reviewed by a human with context and authority.

3. Get serious about deepfake detection
→ If your org handles audio or video from public figures, execs, or customers, implement deepfake scanning. Better to screen than go viral for the wrong reasons.

4. Set up disinfo early-warning systems
→ Combine AI-powered media monitoring with human review to track false narratives before they go wide (a minimal sketch follows this post).

5. Build your AI & disinfo playbook now
→ Don’t wait for legal or IT to set policy. Comms should lead here. A one-pager with do’s, don’ts, and red-flag escalation rules goes a long way.

6. Train everyone who touches messaging
→ Even if you have a great media team, everyone in your org needs a baseline understanding of how disinfo spreads and how AI can help or hurt your credibility.

TL;DR: AI and misinformation aren’t future threats. They’re already shaping how journalists vet sources, evaluate pitches, and report stories. If your communications team isn’t prepared to manage that reality (during a crisis or otherwise), you’re operating with a blind spot.

If you’re working on these challenges, or trying to, drop me a line if I can help.
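Point 4 above describes a concrete loop: automated monitoring surfaces candidate narratives, and a human decides what to escalate. Here is a minimal Python sketch of that triage step, assuming a mention feed from whatever monitoring tool you already use; the WATCHLIST narratives, keywords, threshold, and Mention type are hypothetical placeholders, not a vetted disinformation taxonomy.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical narratives a comms team has pre-identified as red flags.
# Keywords here are illustrative placeholders only.
WATCHLIST = {
    "fabricated-recall": ["recall", "contaminated", "pulled from shelves"],
    "fake-exec-quote": ["leaked memo", "ceo admitted", "internal email"],
}

ESCALATION_THRESHOLD = 3  # mentions before a human reviewer is looped in

@dataclass
class Mention:
    source: str  # outlet, account, or platform the mention came from
    text: str

def match_narratives(text: str) -> list[str]:
    """Return the watchlist narratives whose keywords appear in the text."""
    lowered = text.lower()
    return [name for name, keywords in WATCHLIST.items()
            if any(kw in lowered for kw in keywords)]

def triage(mentions: list[Mention]) -> None:
    """Count narrative hits and flag any that cross the escalation bar."""
    hits = Counter()
    for mention in mentions:
        for narrative in match_narratives(mention.text):
            hits[narrative] += 1
    for narrative, count in hits.items():
        if count >= ESCALATION_THRESHOLD:
            # In production this would page an on-call comms lead, not
            # print. The tool counts and flags; the human decides.
            print(f"ESCALATE: '{narrative}' seen in {count} mentions")

if __name__ == "__main__":
    feed = [
        Mention("forum", "A leaked memo shows the CEO admitted the flaw"),
        Mention("blog", "Sources say an internal email confirms it"),
        Mention("social", "That leaked memo is circulating widely"),
    ]
    triage(feed)
```

The design point is the boundary: the script only aggregates signals, while escalation judgment stays with a person who has context and authority, per point 2 above.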
How News Organizations Can Adapt to AI
Summary
As artificial intelligence (AI) becomes more integrated into newsrooms, media organizations are faced with the challenge of adapting to new technologies while maintaining trust and transparency with their audiences. From combating misinformation to implementing AI ethically, the industry is navigating profound changes in how news is created, verified, and consumed.
- Prioritize transparency: Clearly label AI-generated or AI-assisted content and provide detailed insights into how stories are produced to build audience trust.
- Establish clear guidelines: Develop and implement policies to govern AI usage, ensuring ethical standards and human oversight in content creation and curation.
- Embrace audience engagement: Actively communicate with readers about your journalistic processes and address their concerns to reinforce credibility in an AI-driven media landscape.
Last month, the NYT made a huge AI policy shift that will transform how we pitch journalists. After years of fighting AI companies in court, they’ve finally embraced AI tools in their newsroom, and the implications for PR pros are immediate.

What’s interesting is where the NYT drew the line in the sand:
✔ Writing headlines, drafting interview questions, suggesting edits, creating social copy
✖ Drafting articles, bypassing paywalls, using copyrighted materials without permission

In other words, they’re trying to balance technological advancement with traditional journalistic values.

Here’s what this means for PR pros:
1. Journalists might use AI to initially screen pitches.
2. Coverage will hopefully move faster as AI helps journalists with research and promotional copy.
3. The fundamentals of good storytelling become more important as AI handles routine tasks.

Three ways we’re adapting our media pitches:
1. Front- and end-loading key information: LLMs tend to weight tokens in a U shape, meaning words that come first and last matter more than those in the middle. We’re experimenting with restructuring our pitches accordingly, leading with the big “so what” and ending with “why it matters.”
2. Using clear industry categorization: not that we haven’t done this before, but we’re being more intentional about explicitly stating which beat/topic the pitch fits into and ensuring it aligns with the journalist’s beat. We don’t want an AI classifier discarding our pitches before they’re even considered.
3. Including datapoints in standardized formats: LLMs are not yet very good at parsing complex PDFs, so we’re experimenting with including data/statistics in more AI-friendly formats such as spreadsheets and CSVs (see the sketch after this post).

To my PR friends: Would love to know your thoughts on this move and how you’re adapting 👇
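On point 3, here is a minimal sketch of what “AI-friendly” datapoints can look like in practice, using Python’s standard csv module. The metrics, figures, column names, and file name are illustrative placeholders, not real survey results.

```python
import csv

# Hypothetical datapoints that would otherwise be buried in a pitch PDF.
# Flat, labeled rows are easy for both humans and LLM parsers to read.
DATAPOINTS = [
    {"metric": "customers reporting issue X", "value": "41%",
     "source": "example internal survey, 2025"},
    {"metric": "year-over-year growth", "value": "12%",
     "source": "example annual report, 2025"},
]

def write_pitch_data(path: str = "pitch_datapoints.csv") -> None:
    """Write pitch datapoints as a plain CSV with an explicit header row."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["metric", "value", "source"])
        writer.writeheader()
        writer.writerows(DATAPOINTS)

if __name__ == "__main__":
    write_pitch_data()
```

The same idea applies to any tabular attachment: one header row, one fact per row, and an explicit source column so a journalist (or their AI screener) can trace every number.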
The news industry is about to undergo our version of the horsemeat scandal, thanks to generative AI.

For those who aren’t familiar, this happened in Europe in the 2010s. Meat sold as beef had partly come from horses. Consumers started looking for food that was traceable as a result, as they always do in the aftermath of revelations like this.

Traceability is about to become much more important for news organizations. Audiences are going to be less and less certain about the provenance of what they’re consuming, as generative AI will shortly allow anyone to replicate the visual grammar of news content, and publishers themselves may not always be transparent about how their material is generated.

I think there are four things the news industry can do now to get ahead of this:
1) The NYT uses “enhanced bylines” that summarize the newsgathering process and the expertise of the authors. These should become the norm (a hedged sketch of a machine-readable byline follows this post).
2) News organizations should consider what published editorial standards should look like, to anticipate the needs of consumers who have more questions than ever about how the material has been gathered.
3) Journalists should start documenting their process more within their reports, and being more transparent about how they are made. The how is going to become as important as the what.
4) Both brands and individual journalists should build opportunities into their routines to engage with the audience on questions like these.

There was little point in doing this when there was a closed circle of publishers with high costs of entry. Stories about journalistic process were boring. But more and more, people are going to want to know exactly how the sausage gets made.
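One way to make an enhanced byline traceable by machines as well as readers is to embed it as structured metadata alongside the article. This is a hedged sketch, not the NYT’s actual implementation: it assumes schema.org’s NewsArticle vocabulary (verify the exact property names against schema.org before relying on them), and every name and value below is a hypothetical example.

```python
import json

def enhanced_byline(headline: str, author: str, expertise: str,
                    process_note: str) -> str:
    """Serialize who reported a story, and how, as embeddable JSON-LD."""
    metadata = {
        "@context": "https://schema.org",
        "@type": "NewsArticle",
        "headline": headline,
        "author": {
            "@type": "Person",
            "name": author,
            "description": expertise,  # the byline's expertise summary
        },
        # schema.org's "backstory" property is intended for background on
        # how an article came together; an assumption worth double-checking.
        "backstory": process_note,
    }
    return json.dumps(metadata, indent=2)

if __name__ == "__main__":
    print(enhanced_byline(
        headline="Example story",
        author="A. Reporter",
        expertise="Covers food-supply regulation; ten years on the beat",
        process_note="Based on on-the-record interviews and court filings.",
    ))
```

Publishing the “how” in a structured slot like this gives aggregators and audiences the same provenance signal the visible byline gives readers.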