"I don’t understand why publishers are signing licensing deals with generative AI companies that will likely steal their traffic. OpenAI pays you a multi-million dollar licensing fee, they publish (potentially inaccurate) summaries of your journalists’ work in their results (with attribution links, of course), users skim those results, don’t bother with going to your site, and continue with their day. It feels like a broken internet. From a user standpoint, using ChatGPT as a fact finder for current news seems efficient, especially if it can summarize current events. Getting a breakdown of the US inflation situation, for example, is useful. But there’s no guarantee that the provided summaries are accurate (LLMs match patterns, they aren’t search engines), and in the longer term, if users come to rely on generative AI companies instead of the underlying media sources, the system stops working. ChatGPT needs articles from publications to generate accurate answers, those publications need paying users to operate, and many of those users may opt out of their subscriptions (or, for free sites, ad revenue may plummet from loss of traffic) if they can get their answers from ChatGPT. When too many readers stop visiting media sites, the system breaks down. For publishers who focus on long-form commentary over breaking news, these deals make even less sense. The Atlantic, which just a week ago published an excellent op-ed from The Information’s Jessica Lessin on the dangers of partnering with AI platforms, announced its own deal with OpenAI a few days ago, further empowering the company’s summarization machine. Generative AI has, in many instances, been a fun and useful tool, but I just don’t see how it can integrate within the media ecosystem without breaking it along the way. And, beyond that, an internet driven by large language model summarizations just feels bleak, no? Do we seriously want a one-dimensional internet where we rely on a central aggregator for everything?" My thoughts on how Gen AI is making the internet worse: https://lnkd.in/e-sYNY3z
AI's Influence on News Consumption
Summary
Artificial Intelligence (AI) is changing the way we consume news, with tools like generative AI and personalized news apps reshaping user behavior and the media industry. While AI can provide quick summaries and tailored news feeds, these shifts raise concerns about misinformation, declining traffic for publishers, and the erosion of traditional journalism.
- Support trustworthy journalism: Subscribe to credible news outlets and promote accurate reporting to combat the spread of AI-generated misinformation.
- Verify what you read: Cross-check information from multiple sources and use tools like reverse image searches to identify AI-generated content.
- Understand AI’s role: Be mindful that AI-generated summaries or news feeds may not always provide accurate information and could miss important context or nuances.
-
AI Overviews are eating clicks. According to new Pew Research data, Google users are significantly less likely to click on any search result, even the cited sources, when an AI-generated summary appears.
➽ Just 8% of users clicked a traditional link when AI Overviews were shown (vs. 15% without).
➽ Only 1% clicked a link inside the AI summary itself.
➽ Over a quarter of users ended their session right after seeing the AI Overview.
That's a major shift in user behavior, and a clear signal that publishers, marketers, and SEO professionals must rethink how they show up in this new AI-first SERP.
More key takeaways:
➽ AI summaries are triggered most often by longer, question-based, full-sentence queries.
➽ The most commonly cited sources? Wikipedia, YouTube, and Reddit, with government sites also getting more visibility in AI results.
➽ News websites accounted for just 5% of links in AI summaries.
The future of search isn't just about ranking anymore. It's about being cited, summarized, and trusted by AI.
Here's the full Pew report: https://pewrsr.ch/4lIqbsM
-
There's something about Artifact, Kevin Systrom's AI-driven news app, that I think could be really healthy for the media business. If you're unaware, Artifact is a mobile app that applies AI to the consumption side of news. It's supposed to learn from your behavior in the app to adjust your feed and bring you stories you're likely to be interested in, like TikTok's "For You" feed, but for news stories. Here's what caught my attention during Systrom's TechCrunch Disrupt chat:
📈 Interest vs. Value: Clicking on a story signals interest, but not necessarily value, and Artifact is designed not to overindex on either. The media often confuses interest and value, focusing on click-based metrics rather than the reader's true takeaway.
🌟 A Shift in Incentives: In the digital media world, it's all about clicks leading to ad revenue. But that doesn't account for the depth and impact of content. The traditional solution is to change your business model: paywalls, commerce content, or data businesses. These can be effective but require significant effort and investment, so many publishers don't do it.
🤖 AI-Driven Solution: Could Artifact pave the way for a healthier kind of ad-supported digital media? Imagine AI rewarding valuable content and curbing clickbait (see the sketch after this post). An AI could theoretically do this more efficiently than manual curation.
🚫 Saying Goodbye to Clickbait: Artifact even rewrites clickbait headlines for clarity. This could be a game-changer in discouraging low-value content.
👥 Reader Demand: The million-dollar question: do readers want this? Big media players should closely monitor Artifact's AI integration for insights into their own platforms.
And yes, AI helped me format this post for LinkedIn (though the raw material was 100% human! 🙂) Tossing the YT vid in the comments, and let me know what you think! 👇📣 #DigitalMedia #AI #FutureOfMedia
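The "interest vs. value" idea lends itself to a concrete illustration. Below is a minimal, hypothetical sketch, not Artifact's actual algorithm: all signal names and weights are invented for illustration, showing how a ranker might blend click signals with value signals such as dwell time and completion rate:

```python
# Hypothetical sketch: a ranking score that rewards "value" (dwell time,
# completion) over raw clicks. All weights and signals are illustrative
# assumptions, not Artifact's real system.
from dataclasses import dataclass

@dataclass
class StorySignals:
    click_rate: float         # fraction of impressions clicked (0..1)
    avg_dwell_seconds: float  # time readers spend on the story
    completion_rate: float    # fraction who read to the end (0..1)

def value_score(s: StorySignals) -> float:
    """Blend interest (clicks) with value (dwell, completion).

    Clicks alone can be gamed by clickbait, so they get a small weight;
    dwell time is capped so outliers don't dominate.
    """
    dwell = min(s.avg_dwell_seconds / 180.0, 1.0)  # cap at 3 minutes
    return 0.2 * s.click_rate + 0.4 * dwell + 0.4 * s.completion_rate

# A clickbait story: many clicks, little actual reading.
clickbait = StorySignals(click_rate=0.30, avg_dwell_seconds=15, completion_rate=0.05)
# A substantive story: fewer clicks, deep engagement.
substantive = StorySignals(click_rate=0.10, avg_dwell_seconds=240, completion_rate=0.60)

print(f"clickbait:   {value_score(clickbait):.3f}")    # ~0.113
print(f"substantive: {value_score(substantive):.3f}")  # ~0.660
```

With these illustrative weights, the substantive story outranks the clickbait even though the clickbait gets three times the clicks; the broader point is that whatever a feed optimizes for is what creators will produce.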
-
The AI Slop: When Even Disasters Become Clickbait
We used to worry about fake news. Now we have something worse: fake everything. My daughter reminded me (and I needed it) that not everything you see can be trusted. We know that, yet we forget it. Here are a few examples:
The Texas floods were horrific. Real people, real loss. Then came the fabricated stories: LSU football coach Brian Kelly did not rescue 165 people in the Texas floods, and the LSU Tigers football team did NOT donate $50 million to support flood victims.
Other examples: Peyton Manning didn't change the life of a poor 13-year-old girl named Nia by paying for her to attend math camp, leading her to win a national math award seven years later. King Charles did not cry at a French state dinner on July 8 while speaking about Catherine, Princess of Wales.
What shows up on social media? An AI-generated soup of "breaking news" and "inspiring stories" so synthetic it feels like tragedy has been run through a content blender.
This is the new game:
→ Social media algorithms aren't just personal, they're hyper-personal.
→ AI slop makers pump out infinite content on every niche topic you can imagine (and plenty you can't).
→ Disasters, wars, celebrity scandals, nothing is off-limits. Everything is an "engagement opportunity."
Here's the dark side we're ignoring:
→ Democracy at risk: When fake content floods our feeds faster than fact-checkers can verify it, public trust erodes.
→ Emotional manipulation: AI-generated content exploits our fears and biases with scientific precision.
→ Truth becomes optional: Real journalism gets buried under an avalanche of synthetic content.
But we're not helpless. Here's what actually works:
→ Support legitimate news sources with your wallet, not just your clicks.
→ If a story pulls your heartstrings, ask why. AI-generated stories go to extremes: they mimic emotion but lack nuance and contradiction, while a real human story usually includes complexity, flaws, or unexpected details.
→ Cross-reference facts. Google the names, places, and events mentioned in the text. If you can't find independent confirmation of key details, it is likely an AI-fabricated story.
→ Teach others to spot AI-generated content (hint: it's too perfect, too engaging, too fast). Do a reverse image search: drop photos into Google Images or TinEye to see where else they appear. If the same image shows up in different contexts, or has no real origin, it is likely AI-generated.
We're sliding into a world where AI-generated chaos is faster, cheaper, and stickier than real reporting. And people? They just keep scrolling, buying in, and getting caught up without realizing it.
As I said, I needed that reminder, because I didn't realize it either. This is only going to get more believable. In the age of AI, not every heartwarming story is real. If it sounds too good to be true, it probably is. So be skeptical and be smart. I am now.
Share in the comments a time you believed something that wasn't real. Save 💾 ➞ React 👍 ➞ Repost ♻️
-
Misinformation is not new. Neither is the fact that Artificial Intelligence can be used to generate misinformation. Think about the deepfake images of Donald Trump resisting arrest. But now that ChatGPT and similar tools have become more accessible, they are being used to create fake news websites, and the number of such websites is rising exponentially. A WSJ article reported that "In early May, the news site rating company NewsGuard found 49 fake news websites that were using AI to generate content. By the end of June, the tally had hit 277." There is an incentive for people to create such websites: it's a way to make money "through Google's online advertising network." Google does not automatically block AI-generated content, but it aims to "surface high-quality information from reliable sources, and not information that contradicts well-established consensus on important topics." That does not mean it can catch every fake news website.
We can all fall for misinformation, but we can also take some steps to avoid it:
✅ Get your news and information from reliable sources, not random news websites. The Wall Street Journal, BBC, NPR, and CNN are examples. They have fact-checkers and editors who verify stories before publishing them.
✅ Twitter, Facebook, Reddit, and other social media sites are not news outlets. Unless the source is someone you know and trust, don't take it at face value. Verify it yourself.
✅ Don't use generative AI tools as a substitute for a search engine. They are not always accurate or truthful. Sometimes they make up things that sound plausible but are not true. These are called hallucinations, and they happen more often than you think. See this post for an example I ran into: https://lnkd.in/d46fYZ8q
Did you run into examples of misinformation? How do you deal with them? #artificialIntelligence #technology #fakenews
________________________________________
➡️ I am Talila Millman, a fractional CTO and management advisor. I help CEOs and their C-suites grow profit and scale through an optimal product portfolio and an operating system for product management and engineering excellence.
📘 My book TRIUMPH: A Guide for Transformational Leadership in Uncertain Times is to be published in 2024.
🔔 Follow me and ring the bell on my profile to get notified of new posts: https://lnkd.in/dT6uGbtk
-
LIFE BY ALGORITHM | In the past year, it has become painfully evident that concerns over misinformation, AI-generated content, and algorithmic bias are not just speculative; they have materialized before us. The artist formerly known as Twitter, along with legacy media, promised to democratize information but has instead fueled a chaotic narrative. The result is a deafening mix of voices in which the rise of the unqualified and the erosion of journalistic integrity accelerate by the day. It underscores a broader trend that threatens democracy's very foundation: the elements of Truth and Trust.
Contrary to popular belief, democracy's core isn't found in any particular amendment but in these deeper fundamental principles. They are the bedrock of the social contract between citizens, and when they are compromised, we begin to see the fraying fabric of our communal bonds.
Once the bastion of truth, the news is now trapped in a web of clickbait, where the algorithmic reward function of engagement, maximizing "what is most interesting to you," prioritizes sensationalism. But here's the psychological problem: the human brain, evolutionarily primed for novelty, gravitates toward dramatic content. That is a scary concoction in the age of information overload. We're not just passively consuming content; we are being conditioned to a distorted version of reality.
Consider Elon Musk's frequent, often simplistic takes on complex topics. For someone who should know more about AI than most, he doesn't seem to know much about it at all. This isn't just an excuse to attack Musk, but rather a critique of how we assign weight to the opinions of those in power, often irrespective of actual expertise. Political views aside, and whether you see him as supervillain or superhero, X's now clearly stated reward function makes it more like E! News than the promised replacement for legacy news. It's not okay, and it betrays much about the mechanisms under the proverbial hood.
However, there's a silver lining: we have control. The systems are designed to feed us more of what we desire, so we hold the power, especially if we act as our best selves. The next groundbreaking innovation will be a platform that offers users algorithmic choice, empowering them to decide what they wish to "maximize" (a rough sketch of that idea follows below). All said, we cannot allow our platforms (or ourselves) to be ruled by the joys of clicks over truth and trust... because in the interplay of democracy, information, and society, these twin pillars are non-negotiable.
#technology #future #artificialintelligence #socialmedia
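To make "algorithmic choice" concrete, here is a minimal, hypothetical sketch of a feed ranker in which the user, not the platform, selects the objective to maximize. The objective names, weights, and field names are assumptions for illustration, not any real platform's API:

```python
# Hypothetical sketch of "algorithmic choice": the user picks what the
# feed should maximize. All objectives and weights are illustrative.
from typing import Callable

Post = dict  # e.g. {"id": ..., "engagement": 0.9, "source_reliability": 0.2}

OBJECTIVES: dict[str, Callable[[Post], float]] = {
    # Today's default: maximize predicted engagement.
    "engagement": lambda p: p["engagement"],
    # A user-chosen alternative: weight source reliability over engagement.
    "trust": lambda p: 0.7 * p["source_reliability"] + 0.3 * p["engagement"],
}

def rank_feed(posts: list[Post], objective: str) -> list[Post]:
    """Order the feed by whichever objective the user selected."""
    return sorted(posts, key=OBJECTIVES[objective], reverse=True)

posts = [
    {"id": "viral-rumor", "engagement": 0.95, "source_reliability": 0.10},
    {"id": "wire-report", "engagement": 0.40, "source_reliability": 0.90},
]
print([p["id"] for p in rank_feed(posts, "engagement")])  # ['viral-rumor', 'wire-report']
print([p["id"] for p in rank_feed(posts, "trust")])       # ['wire-report', 'viral-rumor']
```

The same inventory of posts yields opposite feeds under the two objectives, which is the post's point: the reward function, not the content pool, determines what we see.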
-
Today let's talk about "Internet-time" citizen journalists: individuals who report, capture, and share news and information about events as they unfold, using real-time online platforms like social media, live blogs, and streaming services. What we are seeing with Generative AI is the rise of the "Probabilistic Web":
Algorithmic curation and personalization: As algorithms dictate what we see, static content becomes less visible. Dynamic, adaptive creators who understand and navigate these algorithms thrive.
Ephemeral information and misinformation: Falsehoods and half-truths spread quickly. Trustworthy creators who fact-check and provide context in real time become vital sources of truth.
Constant churn of trends and topics: News cycles shrink and attention spans diminish. Creators who can identify and respond to emerging trends rapidly stay relevant.
Hallucinatory experiences:
Blurring lines between reality and simulation: Deepfakes, AR/VR, and AI-generated content create immersive experiences that challenge our understanding of truth. Creators who can navigate these realms while maintaining trust and grounding audiences become essential guides.
Rise of alternative narratives and personalized realities: Echo chambers and confirmation bias lead to individualized versions of "truth." Creators who can bridge these divides and foster critical thinking become crucial for promoting understanding and empathy.
Gen AI hallucinations, and why they occur (a toy illustration of the last cause follows below):
Limited training data: If an AI model is trained on incomplete or inaccurate data, it may generate outputs that reflect those same biases or errors.
Algorithmic biases: The algorithms used to develop AI models can themselves be biased, leading to unfair or discriminatory outputs.
Overfitting: When an AI model is too closely aligned with its training data, it may struggle to generalize to new situations and hallucinate when faced with unfamiliar scenarios.
Living on Internet time:
Continuous creation and adaptation: Rigid archives won't suffice. Creators who can adjust their content flow, formats, and styles in real time stay ahead of the curve.
Conclusion: The power of citizen journalists on "Internet time" lies in their immediacy, ubiquity, authenticity, and potential for transparency and inclusivity. However, it's crucial to weigh the challenges of accuracy, ethics, and sustainability to ensure their contributions are valuable and responsible. Time will tell!
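As a toy illustration of the overfitting point above: a model with too many free parameters can match its training points almost exactly yet fail badly between them, loosely analogous to a model producing confident outputs untethered from the underlying signal. The data, noise level, and polynomial degrees below are illustrative assumptions:

```python
# Minimal overfitting sketch: fit polynomials of two degrees to a few
# noisy samples of a sine curve and compare training vs. test error.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 8)  # noisy samples

x_test = np.linspace(0, 1, 100)
y_true = np.sin(2 * np.pi * x_test)  # the underlying signal

for degree in (3, 7):
    coeffs = np.polyfit(x_train, y_train, degree)  # least-squares fit
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_true) ** 2)
    print(f"degree {degree}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")

# The degree-7 fit passes almost exactly through every training point
# (near-zero train error) but typically swings wildly between them, so
# its test error is much worse than the degree-3 fit's.
```

The model that "memorized" its training data is the one that behaves worst on inputs it hasn't seen, which is the intuition behind overfitting as one driver of hallucination.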