CSR Trends in the Tech Industry

  • Peter Slattery, PhD (Influencer)

    MIT AI Risk Initiative | MIT FutureTech

    64,210 followers

    This position paper challenges the outdated narrative that ethics slows innovation. Instead, it argues that ethical AI is smarter AI: more profitable, scalable, and future-ready. AI ethics is a strategic advantage, one that can boost ROI, build public trust, and future-proof innovation. Key takeaways include:

    1. Ethical AI = High ROI: organizations that adopt AI ethics audits report double the return compared to those that don't.
    2. The Ethics Return Engine (ERE): a proposed framework to measure the financial, human, and strategic value of ethics.
    3. Real-world proof: Mastercard's scalable AI governance and Boeing's ethical failures show why governance matters.
    4. The cost of inaction is rising: with global regulation (EU AI Act, etc.) tightening, ethical inaction is now a risk.
    5. Ethics unlocks innovation: the myth that governance limits creativity is busted; ethical frameworks enable scale.

    Whether you're a policymaker, C-suite executive, data scientist, or investor, this paper is your blueprint for aligning purpose and profit in the age of intelligent machines. Read the full paper: https://lnkd.in/eKesXBc6 Co-authored by Marisa Zalabak, Balaji Dhamodharan, Bill Lesieur, Olga Magnusson, Shannon Kennedy, Sundar Krishnan and The Digital Economist.

  • Rich Lesser (Influencer)

    Global Chair at Boston Consulting Group (BCG)

    187,585 followers

    I'm excited to share the launch of "Bold Measures to Close the Climate Action Gap," the latest report from Boston Consulting Group (BCG) and the World Economic Forum Alliance of CEO Climate Leaders. https://lnkd.in/e8MCFKAm

    We see businesses doing more to tackle climate change, but collectively, the world is moving far too slowly. This new report focuses on opportunities for companies and governments to translate their individual actions into more substantial global progress. The bottom line is that our individual efforts must be geared more toward driving systemic change. The report highlights five ways for companies to do this:

    1. Accelerate supplier decarbonization. In many companies, suppliers' emissions are 3x to 8x their own Scope 1 and 2. Cutting the first 50% of many products' supply chain emissions can be achieved with an end-price impact under 1%.
    2. Enable customers to make greener choices. Product redesign, circularity, and reducing customers' energy consumption can substantially lower the emissions footprint of many products.
    3. Drive change with peers in your sector, especially in supply chain "pinch points": ten or fewer players control more than 40% of many key markets, and clearer product labeling is another great area of opportunity.
    4. Engage in cross-industry partnerships, especially large-scale buying groups, to mobilize capital and accelerate the development and scaling of advanced technologies.
    5. Advocate for and support bolder policies. First, make sure you and your lobbying partners are not harming climate progress in your government engagements. Then look for opportunities to go further and be an effective partner to governments, encouraging bold and pragmatic changes in incentives, policies, and reporting.

    The report is filled with real-life examples of what companies are doing today in each of these areas. Thanks to Pim Valdre and Pedro G Gomez Pensado from WEF and my colleagues Dr. Patrick Herhold, Jens Burchardt, Cornelius Pieper, Edmond Rhys Jones, Trine Filtenborg de Nully, Galaad Préau and Natalia Mrówczyńska for leading the work on this important report. And to my Alliance co-chairs, Jesper Brodin, Christian Mumenthaler, Ester Baiget, and Feike Sijbesma for your continued leadership.

  • Dr. Saleh ASHRM

    Ph.D. in Accounting | Sustainability & ESG & CSR | Financial Risk & Data Analytics | Peer Reviewer @Elsevier | LinkedIn Creator | @Schobot AI | iMBA Mini | SPSS | R | 58× Featured LinkedIn News & Bizpreneurme ME & Daman

    9,158 followers

    How can the Internet of Things help us make a dent in climate change?

    Imagine this: a factory running 24/7, producing goods to meet high demand. It's a large operation, with countless machines consuming energy, water, and other resources. Managers know they need to cut back on waste, but tracking every detail, every kilowatt, every litre, seems overwhelming. This is where the Internet of Things (IoT) can step in, offering a new way for businesses to truly understand their operations.

    IoT isn't just another tech trend; it's already shaping how industries tackle real-world problems. By 2025, IoT is projected to generate up to $11.1 trillion a year in economic impact, and it's bringing sustainable solutions to the forefront. One of IoT's biggest strengths lies in gathering data that was previously hidden or hard to access. For instance, sensors placed on factory machines can monitor energy use in real time, identifying where resources are wasted and where efficiency can improve. Imagine how much waste could be prevented if organizations pinpointed these details and adjusted operations accordingly.

    The impact of IoT is already visible in areas like smart buildings, where connected devices adjust lighting, heating, and cooling to use just the right amount of energy based on occupancy and external conditions. Or in agriculture, where sensors help farmers manage water use precisely, ensuring crops get what they need without overusing precious resources. These examples show that IoT doesn't just save money; it has the potential to create lasting change by reducing our environmental footprint.

    As IoT continues to grow, it aligns closely with global goals, like the UN's 2030 Sustainable Development Agenda. It offers a way to approach sustainability with data-backed actions rather than good intentions alone. However, for IoT to reach its full potential in sustainability, companies and industries need to recognize it as a tool for positive change, not just operational efficiency.

    So, how can we make the most of IoT for a greener future? It starts with seeing these devices not just as gadgets but as partners in the mission for a sustainable world.
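    The real-time monitoring idea described above can be sketched in a few lines: given per-machine power readings, flag the machines drawing noticeably more than their expected baseline. This is a minimal illustration, not a real IoT pipeline; the machine names, readings, and 10% tolerance are all hypothetical.

```python
# Illustrative sketch: flag machines whose average power draw exceeds
# their expected baseline by more than a tolerance. All data is made up.

def flag_wasteful_machines(readings_kw, baselines_kw, tolerance=0.10):
    """Return ids of machines whose mean reading exceeds baseline * (1 + tolerance)."""
    flagged = []
    for machine_id, readings in readings_kw.items():
        avg = sum(readings) / len(readings)
        if avg > baselines_kw[machine_id] * (1 + tolerance):
            flagged.append(machine_id)
    return flagged

readings = {
    "press_1": [41.0, 43.5, 44.2],   # running hot vs. its 38 kW baseline
    "press_2": [37.8, 38.1, 37.9],   # within expected range
}
baselines = {"press_1": 38.0, "press_2": 38.0}

print(flag_wasteful_machines(readings, baselines))  # → ['press_1']
```

    In a real deployment the readings would stream from sensors and the baselines would be learned from historical data, but the core idea (compare observed consumption against an expected profile and surface the outliers) is the same.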

  • David Linthicum

    Top 10 Global Cloud & AI Influencer | Enterprise Tech Innovator | Strategic Board & Advisory Member | Trusted Technology Strategy Advisor | 5x Bestselling Author, Educator & Speaker

    190,540 followers

    The Illusion of Green: Are Tech Giants Truly Committed to Sustainability?

    In recent years, large enterprise technology players have increasingly vocalized their commitment to sustainability, crafting extensive press releases and touting high ESG (Environmental, Social, and Governance) scores. Yet, despite their fervent declarations, a critical examination often reveals a stark contrast between their stated intentions and their actions. This discrepancy raises important questions about whether these companies are genuinely dedicated to sustainable practices or merely engaging in greenwashing to enhance their public image.

    One core example of this disconnect is some corporations' energy usage. While they may proudly announce investments in renewable energy projects, their overall carbon footprint often paints a different picture. Reports have highlighted instances where tech giants still rely heavily on non-renewable energy to power data centers, which are among the largest consumers of electricity globally. These data centers contribute significantly to emissions, undermining any lip service paid to sustainability.

    Moreover, the lifecycle management of electronic devices is another area where rhetoric falls short of reality. Promises to promote a circular economy through recycling and sustainable manufacturing processes often lack tangible results. The rapid turnover of electronic goods continues, spurred by the pursuit of profit and planned obsolescence, leading to increased e-waste and resource depletion.

    The public statements made by these enterprises frequently emphasize transparent supply chains, yet numerous audits and investigations reveal persistent environmental and human rights abuses in their supply networks. These findings suggest a gap between corporate pledges and the actual oversight and enforcement of sustainability standards.

    To be fair, some companies are making genuine strides toward sustainability, albeit incrementally. However, the broader trend of proclaiming commitment without substantial action is increasingly evident. For the industry to achieve true sustainability, it will take more than high ESG scores; it demands a fundamental shift in operational practices and corporate culture. As enterprises navigate this complex terrain, stakeholders, investors, and consumers must scrutinize actions over words. Only through consistent accountability and transparency can we discern authentic dedication from strategic posturing in the realm of sustainability.

    What are your thoughts on this? Am I being too hard on them? #Sustainability #FakeSustainability #EnterpriseIT

  • Heena Purohit

    Director, AI Startups @ Microsoft | Top AI Voice | Keynote Speaker | Helping Technology Leaders Navigate AI Innovation | EB1A “Einstein Visa” Recipient

    21,638 followers

    For AI leaders and teams trying to get buy-in to increase investment in Responsible AI, this is an excellent resource 👇 This paper does a great job reframing AI ethics not as a constraint or compliance burden, but as a value driver and strategic asset, and then provides a blueprint to turn ethics into ROI. Key takeaways include:

    1/ Ethical AI = High ROI. Companies that conduct AI ethics audits report twice the ROI compared to those that don't.

    2/ Measuring ROI for Responsible AI. The paper proposes the "Ethics Return Engine", which measures value across:
    - Direct: risk mitigation, operational efficiency, revenue.
    - Indirect: trust, brand, talent attraction.
    - Strategic: innovation, market leadership.

    3/ There's a price for things going wrong. Using examples from Boeing and Deutsche Bank, they show how neglecting AI ethics can cause both financial and reputational damage.

    4/ The intention-action gap. Only 20% of executives report that their AI ethics practices actually align with their stated principles. With global and local regulation (e.g. the EU AI Act), inaction is now a risk.

    5/ Responsible AI unlocks innovation. Trust, societal impact, and environmental responsibility help open doors to new markets and customer segments.

    Read the paper: https://lnkd.in/eb7mH9Re Great job, Marisa Zalabak, Balaji Dhamodharan, Bill Lesieur, Olga Magnusson and team! #ResponsibleAI #innovation #EthicalAI #EnterpriseAI

  • Mario Hernandez

    Helping nonprofits secure corporate partnerships and long-term funding through relationship-first strategy | International Keynote Speaker | Investor | Husband & Father | 2 Exits |

    53,997 followers

    Corporate philanthropy is broken. Not because companies don't care, but because they're still treating purpose like an ad campaign: something to market instead of something to live.

    The old model? Cause marketing: slap a nonprofit logo on your product and hope people feel good buying it. The new model? Cause integration: design your product so it solves a real problem, by default. This shift is subtle, but seismic. Cause marketing says: "Buy one, we'll donate one." Cause integration says: "Our product is the solution." Think:

    • Patagonia makes clothing that resists fast fashion.
    • Who Gives A Crap funds sanitation with every toilet paper roll.
    • Tony's Chocolonely builds ethical supply chains into every bar.

    These companies don't just support causes. They are the cause. Four ways to make this shift inside your company:

    1. Start with your product, not your press release. If your company disappeared tomorrow, what problem would go unsolved?
    2. Audit your friction points. What harm is baked into your business model? Remove it at the root.
    3. Align profit and purpose. Your margins should grow when your mission does. That's the integration test.
    4. Make customers part of the engine. Don't just tell them what you're doing; show them what they're changing by buying from you.

    This isn't about feel-good branding. It's about future-proofing your business. Because tomorrow's customer doesn't just want to know what you believe; they want to know what you build. And if your product isn't solving something real, they'll find one that does.

    And that's where nonprofits come in. Nonprofits hold the expertise, the trust, and the on-the-ground solutions that businesses can't replicate alone. Partnering with them isn't charity, it's strategy. The companies that win in the next decade won't just integrate causes into their products. They'll integrate nonprofits into their business models. With purpose and impact, Mario

  • Tom Reichert

    Global CEO, ERM

    12,551 followers

    This week, I had the pleasure of joining Sunya Norman, Salesforce's SVP of ESG Strategy & Engagement, at Dreamforce to discuss the imperative of decarbonization in the age of AI.

    We discussed the technology sector's role in decarbonization (for example, video conferencing instead of travel, ESG reporting and decarbonization platforms, and AI aimed at making operations less resource-intensive). At the same time, the sector is a major contributor to global emissions (2-3% according to the UN), with Scope 3 emissions a major part of this footprint.

    AI power demand is projected to surge 550% by 2026, from 8 TWh in 2024 to 52 TWh, before rising another 1,150% to 652 TWh by 2030, roughly 8,000% growth from the 2024 level (Forbes). A focus on decarbonization is a must.

    The tech sector should focus on:
    - improving efficiency within their operations
    - accelerating the use of renewable energy and energy storage
    - developing new materials. This includes replacing certain industrial gases, such as high-GWP (global warming potential) fluorinated gases, commonly used in manufacturing processes like cooling and chip production.

    Accurate greenhouse gas (GHG) accounting, especially for indirect emissions (Scope 3), is crucial to understanding the sources of emissions and ensuring effective, targeted actions.

    In partnership with Salesforce's Net Zero Cloud, ERM is helping businesses achieve these goals by combining our decarbonization expertise with Salesforce's technology to provide actionable insights for reducing emissions and meeting net-zero targets. #decarbonization #AI #ESG
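    The growth figures quoted above are internally consistent; a quick arithmetic check using the post's 8, 52, and 652 TWh numbers:

```python
# Verify the percentage-growth claims for the cited AI power-demand figures.
def pct_growth(start, end):
    """Percentage increase from start to end."""
    return (end - start) / start * 100

twh_2024, twh_2026, twh_2030 = 8, 52, 652

print(round(pct_growth(twh_2024, twh_2026)))  # 550  (8 -> 52 TWh by 2026)
print(round(pct_growth(twh_2026, twh_2030)))  # 1154 (52 -> 652 TWh, ~1,150%)
print(round(pct_growth(twh_2024, twh_2030)))  # 8050 (8 -> 652 TWh, ~8,000%)
```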

  • Siddharth Rao

    Global CIO | Board Member | Digital Transformation & AI Strategist | Scaling $1B+ Enterprise & Healthcare Tech | C-Suite Award Winner & Speaker

    10,612 followers

    The Ethical Implications of Enterprise AI: What Every Board Should Consider

    "We need to pause this deployment immediately." Our ethics review identified a potentially disastrous blind spot 48 hours before a major AI launch. The system had been developed with technical excellence but without addressing critical ethical dimensions that created material business risk. After a decade guiding AI implementations and serving on technology oversight committees, I've observed that ethical considerations remain the most systematically underestimated dimension of enterprise AI strategy, and increasingly, the most consequential from a governance perspective.

    The Governance Imperative: Boards traditionally approach technology oversight through risk and compliance frameworks. But AI ethics transcends these models, creating unprecedented governance challenges at the intersection of business strategy, societal impact, and competitive advantage.

    Algorithmic Accountability: Beyond explainability, boards must ensure mechanisms exist to identify and address bias, establish appropriate human oversight, and maintain meaningful control over algorithmic decision systems. One healthcare organization established a quarterly "algorithmic audit" reviewed by the board's technology committee, revealing critical intervention points and preventing regulatory exposure.

    Data Sovereignty: As AI systems become more complex, data governance becomes inseparable from ethical governance. Leading boards establish clear principles around data provenance, consent frameworks, and value distribution that go beyond compliance to create a sustainable competitive advantage.

    Stakeholder Impact Modeling: Sophisticated boards require systematic analysis of how AI systems affect all stakeholders: employees, customers, communities, and shareholders. This holistic view prevents costly blind spots and creates opportunities for market differentiation.

    The Strategy-Ethics Convergence: Organizations that treat ethics as separate from strategy inevitably underperform. When one financial services firm integrated ethical considerations directly into its AI development process, it not only mitigated risks but discovered entirely new market opportunities its competitors missed.

    Disclaimer: The views expressed are my personal insights and don't represent those of my current or past employers or related entities. Examples drawn from my experience have been anonymized and generalized to protect confidential information.
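    The "algorithmic audit" mentioned above can take many forms; one minimal, commonly used check is comparing selection rates across demographic groups. The sketch below is illustrative only: the group labels, decision data, and the 0.8 threshold (the "four-fifths rule" from US employment-selection guidelines) are assumptions, not details from the post.

```python
# Minimal sketch of one audit metric: selection-rate disparity across groups.
# Group labels, outcomes, and the 0.8 threshold are illustrative assumptions.

def selection_rates(outcomes_by_group):
    """Fraction of positive decisions (1s) per group."""
    return {g: sum(o) / len(o) for g, o in outcomes_by_group.items()}

def disparate_impact_ratio(outcomes_by_group):
    """Lowest group selection rate divided by the highest (1.0 = parity)."""
    rates = selection_rates(outcomes_by_group)
    return min(rates.values()) / max(rates.values())

decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}

ratio = disparate_impact_ratio(decisions)
print(round(ratio, 2))  # 0.5
print(ratio >= 0.8)     # False: this system would be flagged for review
```

    A real audit would look at many more dimensions (error rates per group, calibration, drift over time), but even a single tracked ratio gives a board-level committee something concrete to review each quarter.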

  • Helen Yu

    CEO @Tigon Advisory Corp. | Host of CXO Spice | Board Director |Top 50 Women in Tech | AI, Cybersecurity, FinTech, Insurance, Industry40, Growth Acceleration

    107,162 followers

    Can AI be a catalyst for environmental stewardship, or is it a hurdle on the path to "net zero"?

    Google's greenhouse gas emissions surged by 48% over five years due to the expansion of AI infrastructure, prompting scrutiny of its 2030 sustainability goal. (https://lnkd.in/gs9y33Mr) The energy crisis exacerbates this challenge, with analysts warning that AI could double US electricity demand growth by 2026, surpassing current supply capacities. Businesses are urged to navigate the intricate relationship between AI and sustainability. Here are four pivotal considerations, in my opinion:

    ✅ AI's Environmental Paradox: How can companies innovate in AI technology while ensuring energy-efficient systems that align with sustainability goals?
    ✅ Renewable Energy Integration: How can AI uncover alternative strategies to help achieve "net zero" smoothly?
    ✅ Ethical AI Deployment: How can companies develop frameworks prioritizing environmental ethics to ensure their strategies align with sustainability commitments?
    ✅ AI's Role in Climate Action: How can companies leverage AI's capabilities for positive environmental outcomes, such as climate pattern prediction and resource allocation optimization?

    These considerations prompt reflection on the complex interplay between AI, corporate ethics, and the pursuit of a sustainable future. What's your take? Please comment. #AI #Sustainability #NetZero #ClimateChange

    To stay current with the latest trends and insights on #technology and #innovation, subscribe to the #CXOSpice newsletter (https://lnkd.in/gy2RJ9xg) or the #CXOSpice YouTube channel (https://lnkd.in/g8AU4AWb), and tune in for the upcoming episode with Aible on "Generative AI Innovation for Enterprise Agility".

  • Pradeep Sanyal

    Enterprise AI Strategy | Experienced CIO & CTO | Chief AI Officer (Advisory)

    18,989 followers

    The headlines this week have been chilling: a former tech executive killed his mother and himself after months of conversations with ChatGPT that reinforced his paranoid delusions. Just a week earlier, the family of 16-year-old Adam Raine sued OpenAI, alleging ChatGPT coached their son on suicide methods rather than directing him to help. These aren't just isolated tragedies. They're urgent wake-up calls about the intersection of AI and mental health that we in tech can no longer ignore.

    As someone who's spent years building digital products, I'm forced to confront an uncomfortable truth: the tools we create with the best intentions can become dangerous amplifiers for those in crisis. Mental health experts warn that chatbots can reinforce delusions in vulnerable individuals, yet we've deployed these systems at massive scale with insufficient safeguards.

    The Connecticut case is particularly haunting because it shows how AI can become a trusted confidant for someone spiraling into mental illness. The victim nicknamed ChatGPT "Bobby" and enabled its memory feature to build on previous conspiracy conversations. When someone is losing touch with reality, an AI that validates their fears instead of grounding them becomes not just unhelpful, it becomes dangerous.

    For teenagers already navigating identity, social pressures, and emotional turbulence, these risks are amplified. Young people often turn to technology for support when they feel they can't talk to adults. If that technology lacks proper crisis intervention protocols, we're failing our most vulnerable users.

    This isn't about stifling innovation. It's about building responsibility into our systems from day one. We need:
    • Robust mental health screening in AI interactions
    • Mandatory crisis intervention protocols that prioritize human connection
    • Transparency about AI limitations in emotional support
    • Industry-wide standards for detecting and responding to users in distress

    The tragic irony? These cases involve a tech executive and a student, people who should have been equipped to understand AI's limitations. If they were vulnerable to these risks, imagine the broader population.

    Every algorithm we ship, every feature we launch, every interaction we enable carries the weight of human consequence. We have a moral obligation to consider not just what our technology can do, but what it should do when someone is hurting. The question isn't whether AI will continue advancing; it will. The question is whether we'll advance our responsibility alongside it.

    What safeguards do you think should be mandatory for AI systems that engage in personal conversations? How do we balance innovation with protection?
