Effects of Algorithmic Systems on News Reporting

Summary

Algorithmic systems are reshaping news reporting in profound ways. From influencing information accuracy to altering how audiences access news, these technologies present both opportunities and challenges for the industry.

  • Focus on real-time adaptability: Develop or integrate tools that help newsrooms process and verify live data to avoid inaccuracies and ensure timely reporting.
  • Address misinformation risks: Establish rigorous editorial benchmarks and automated fact-check systems to maintain trust and credibility in AI-generated content (see the sketch after this list).
  • Protect intellectual property: Advocate for clear AI training data policies to safeguard original content from unauthorized use by generative AI tools.
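
To make the second recommendation concrete, here is a minimal, hypothetical sketch of an automated fact-check that cross-references the dollar figures in an AI-drafted sentence against an official record. The record key, the regex, and the pass/fail rule are illustrative assumptions, not a description of any newsroom's production system.

```python
# Hypothetical automated fact-check: verify the numeric claims in an
# LLM-drafted sentence against an official record before publication.

import re

# Source of truth: a figure taken from (hypothetical) official records.
OFFICIAL_RECORD = {"parks_budget_2024": 1_250_000}


def extract_dollar_figures(text: str) -> list[int]:
    """Pull dollar amounts like '$1,250,000' out of generated text."""
    return [int(m.replace(",", "")) for m in re.findall(r"\$([\d,]+)", text)]


def passes_fact_check(draft: str, record_key: str) -> bool:
    """Route the draft to human review unless its figure matches the record."""
    expected = OFFICIAL_RECORD[record_key]
    return expected in extract_dollar_figures(draft)


draft = "The parks budget rose to $1,250,000 this year."
print(passes_fact_check(draft, "parks_budget_2024"))  # True: figure matches
```
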
  • Francesco Marconi, AppliedXL (6,794 followers)

    LLMs, when used alone, cannot reliably be deployed in journalism, especially for real-time information generation. Here are the key issues and the ways to address them:

    1. Inability to adapt to new information: LLMs excel at processing existing language data but struggle with “innovative thinking” and real-time adaptation, both of which are crucial in news reporting. Because they are trained on pre-existing datasets, they cannot dynamically update their knowledge post-training. For instance, when mining local government data, LLMs might overlook recent policy changes or budget updates. The solution involves developing real-time event detection systems that monitor and analyze local government records, such as council meeting minutes or budget reports. Such systems use what is called an ‘editorial algorithm’ to identify noteworthy changes in the data based on criteria defined by journalists.

    2. Lack of guaranteed accuracy: LLMs cannot ensure the accuracy of their output; their responses are based on patterns in training data and lack a mechanism for verifying factual correctness. Continuing the example above, an LLM might write an inaccurate analysis of a significant policy change detected by an editorial algorithm. To address this, we can develop domain-specific models trained to understand a particular coverage area (like a beat reporter). Any analysis produced by an LLM should be subjected to automated fact-checking against quantifiable editorial benchmarks using reinforcement learning from AI feedback (RLAIF). These benchmarks involve cross-referencing with official records, verifying historical accuracy, and ensuring alignment with journalistic standards. This method, known as ‘editorial AI’, makes the AI follow journalistic guidelines to maintain the integrity and accuracy of news content derived from complex data.
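
    The ‘editorial algorithm’ described above can be pictured as a change-detection loop over successive snapshots of public records. Below is a minimal sketch in Python; the record structure, field names, and the 10% threshold are illustrative assumptions, not AppliedXL's actual design.

```python
# Hypothetical 'editorial algorithm': scan successive snapshots of
# local-government budget records and flag changes that meet
# journalist-defined newsworthiness criteria.

from dataclasses import dataclass


@dataclass
class BudgetRecord:
    department: str
    amount: float  # allocated budget in dollars


# Journalist-defined criterion (assumed): flag moves of 10% or more.
RELATIVE_CHANGE_THRESHOLD = 0.10


def flag_noteworthy_changes(
    previous: dict[str, BudgetRecord],
    current: dict[str, BudgetRecord],
) -> list[str]:
    """Compare two snapshots and return human-readable alerts."""
    alerts = []
    for dept, record in current.items():
        old = previous.get(dept)
        if old is None:
            # A brand-new line item is newsworthy by definition.
            alerts.append(f"NEW: {dept} budget created at ${record.amount:,.0f}")
            continue
        if old.amount == 0:
            continue  # avoid division by zero on empty prior allocations
        change = (record.amount - old.amount) / old.amount
        if abs(change) >= RELATIVE_CHANGE_THRESHOLD:
            alerts.append(
                f"CHANGE: {dept} moved {change:+.1%} "
                f"(${old.amount:,.0f} -> ${record.amount:,.0f})"
            )
    return alerts


if __name__ == "__main__":
    last_month = {
        "Parks": BudgetRecord("Parks", 1_000_000),
        "Police": BudgetRecord("Police", 5_000_000),
    }
    this_month = {
        "Parks": BudgetRecord("Parks", 1_250_000),    # +25%: flagged
        "Police": BudgetRecord("Police", 5_050_000),  # +1%: ignored
        "Housing": BudgetRecord("Housing", 400_000),  # new: flagged
    }
    for alert in flag_noteworthy_changes(last_month, this_month):
        print(alert)
```

    In a fuller pipeline, alerts like these would feed an LLM that drafts the analysis, and that draft would then be fact-checked against the same official records before publication, per the ‘editorial AI’ step the post describes.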

  • Michael Millenson, President, Health Quality Advisors LLC (3,930 followers)

    At a time of splintered information sources and banged-up media budgets, traditional investigative reporting is showing it's still relevant in health care. A journalistic exposé of an errant artificial intelligence-based algorithm has spurred a class action suit against UnitedHealth Group, operator of the nation’s largest #medicareadvantage plan and the biggest health insurer in America. The suit, spurred by a series of STAT investigative stories, accuses UnitedHealth's Optum subsidiary of deploying an algorithm with a known high error rate in order to deny costly rehabilitation care to seriously ill patients. (United says the lawsuit “is without merit.” The Centers for Medicare & Medicaid Services is looking into some of the questions the STAT reporting raised, and a Senate hearing in May was based on the STAT and ProPublica reporting on insurer claims denials. So far, no comment from AARP, which has a licensing agreement with UnitedHealth worth hundreds of millions of dollars annually.)

    Meanwhile, The New York Times and KFF Health News are investigating “the financial and emotional toll of providing and paying for long-term care” in a joint series entitled “Dying Broke.” And The Washington Post, addressing the question "Why are so many Americans dying early?", has begun examining the roles played by politics, stress, and chronic illness.

    Recently, the two STAT reporters investigating the alleged algorithm abuse, Casey Ross and Bob Herman, described their work on a webcast moderated by their editor, Lison Joseph. As someone who regularly did investigative reporting back in the (prehistoric) pre-Internet and pre-smartphone era, I was struck by how some basic journalistic principles have remained constant. In this post, I follow their description of the investigation and also examine the strengths and limitations of investigative reporting as a tool for bringing #accountability to health care. (Note also the overuse of stents documented in a study by the Lown Institute.) The reporting about an algorithm is particularly important as our society moves into a murky future of often-unconstrained #artificialintelligence. Shining the spotlight of investigative journalism on the decisions made by machines may yet become as commonplace as doing the same for decisions made by people.

    #longtermcare #claimsdenial #transparency #MedAI #healthcareAI #algorithms #postacutecare #nursinghomes #rehabilitation #healthcarejournalism USC Center for Health Journalism American Medical Writers Association (AMWA) Charles Ornstein Jordan Rau Lisa Simpson Brian Klepper Jane Sarasohn-Kahn Dov Michaeli M.D, Ph.D Patricia Salber, MD, MBA Margaret (Maggi) Cary, MD MBA MPH PCC Ceci Connolly Susan Dentzer Steven Findlay Nancy VanHuyck Chockley National Institute for Health Care Management (NIHCM) Cheryl Clark Society of Professional Journalists Fred Schulte Judith Graham Blackford Middleton, MD, MPH, MSc AMIA (American Medical Informatics Association) Patricia Kelmar

  • Swapneel Mehta, Founder, Postdoc at BU and MIT | Rebuilding Digital Trust | Google, Mozilla, Wikimedia awardee (7,511 followers)

    These are still early days for #GenAI, and as an AI researcher I'm fascinated by its potential. As with any other technology, however, it is important to be mindful of the societal implications, especially given the disproportionate emphasis on its "long-term benefits," which belies the costs of short-term harms, including the accelerated erosion of trust on the internet.

    Partnering with newsrooms over the past two years has been an eye-opener about the daily struggles publishers faced even before #LLMs and diffusion models completely changed the dynamics of content generation. GenAI presents a wonderful opportunity to reduce time-to-publication and the information-gathering bottleneck, among other operational improvements to the media machinery. However, unforeseen failures are also emerging as a result of trying to take "AI shortcuts," so to speak.

    This might be a cynical take, but I think we could do with some more 'real talk' about the challenges GenAI poses for media organizations, especially centered on #trust in #journalism and news over a longer horizon. So I am very grateful to Julius Endert and the DW Akademie team for letting me share my thoughts and some critical opinions about the state of online information and how GenAI has affected it. https://lnkd.in/eBfCTWUN

  • Willow Bay, Dean, USC Annenberg School for Communication and Journalism (16,850 followers)

    "As a society, we need to analyze the harms created by generative AI. When statistical hallucinations invent facts, chatbots misattribute authorship, or computational summaries bungle analyses, they produce dangerously wrong language that has all the confidence of a seemingly neutral, computational certainty. These errors are not just rare and idiosyncratic curiosities of misinformation; their real and imagined existence makes people see media as unstable, unreliable, and untrusted. Society’s information sources—and ability to gauge reality—are destabilized."   This is just one of many critical insights from Professor Mike Ananny, who is faculty co-director of USC's newly established Center on Generative AI & Society. Mike continues to unpack the complex issues surrounding AI and the ways it is reshaping the journalism industry and you'll want to be sure to check out his most recent piece, "To Reckon with Generative AI, Make It a Public Problem," in the National Academies' Issues in Science & Technology. https://lnkd.in/dcrzy7SS Plus don't miss Mike on the "Synthetic Media: AI and Journalism" episode of NYU's Engelberg Center Live! podcast https://lnkd.in/dWj3u6cB and "Press Freedom Means Controlling the Language of AI," which he coauthored with Jake Karr for Harvard's Nieman Lab https://lnkd.in/dUZpxZSQ

  • Andreas Welsch, Top 10 Agentic AI Advisor | Author: “AI Leadership Handbook” | LinkedIn Learning Instructor | Thought Leader | Keynote Speaker (33,233 followers)

    3 reasons to follow 'The New York Times vs OpenAI':

    1) Intellectual property protection. The Times claims that OpenAI has unlawfully used its articles to train its models. As a counterpoint, the AP and Axel Springer have entered licensing deals with OpenAI in recent months. My take: IP lawsuits are not new for generative AI providers, but we'll likely see more of them, and one of them will set a precedent.

    2) Revenue loss. ChatGPT and others pull data from The Times' website or recite what they have previously learned from their training data. The result: fewer web visits, and thus fewer subscriptions and less advertising revenue. My take: this is an interesting one, with far-reaching impact beyond The Times.

    3) Hallucinations. Generative AI tools such as ChatGPT or Bing generate factual inaccuracies that are attributed to The Times, which risks damaging the brand. My take: whether or to what extent this is enforceable remains to be seen, but it also goes beyond the media industry.

    Where it gets interesting: "Besides seeking to protect intellectual property, the lawsuit by The Times casts ChatGPT and other A.I. systems as potential competitors in the news business. When chatbots are asked about current events or other newsworthy topics, they can generate answers that rely on journalism by The Times." ChatGPT, Bing, or Bard experiences are becoming a dominant way to search the web, and their versatility beyond "just" reporting the news is a clear threat to established business models. Publishers have been fighting declining print readership for years, and few have successfully transitioned to an online world. So:

    - Will the media industry have to adapt once again?
    - Is Search Engine Optimization (SEO) dead?
    - Does whoever owns the UI layer (e.g., ChatGPT) become the gatekeeper to information?

    What's your take?

    PS: If you thought your data doesn't matter for generative AI, think again. 💰 #ArtificialIntelligence #MachineLearning #GenerativeAI #DigitalTransformation #IntelligenceBriefing
