Center for Humane Technology

Non-profit Organizations

San Francisco, California · 70,355 followers

A better future with technology is possible.

About us

The Center for Humane Technology (CHT) is dedicated to radically reimagining our digital infrastructure. Our mission is to drive a comprehensive shift toward humane technology that supports our well-being, democracy, and shared information environment. From the dinner table to the corner office to the halls of government, our work mobilizes millions of advocates, technologists, business leaders, and policymakers through media campaigns, working groups, and high-level briefings. Our journey began in 2013 when Tristan Harris, then a Google Design Ethicist, created the viral presentation “A Call to Minimize Distraction & Respect Users’ Attention.” The presentation, followed by two TED talks and a 60 Minutes interview, sparked the Time Well Spent movement and laid the groundwork for the founding of CHT as an independent 501(c)(3) nonprofit in 2018.

Website
http://humanetech.com
Industry
Non-profit Organizations
Company size
11-50 employees
Headquarters
San Francisco, California
Type
Nonprofit
Founded
2018
Specialties
Ethics, Technology, BuildHumaneTech, Human Behavior, Design, Tech, Social Media, Attention, Polarization, Mental Health, Innovation, Democracy, AI, and chatbots

Updates

  • CHT is proud to endorse a new framework to address the risks of human-like AI. Research shows that AI products with simulated personalities and emotional outputs can foster unhealthy dependence and isolation, and recent lawsuits highlight real harms to people of all ages. Companies can—and should—design AI that maintains clear human-machine boundaries. Link to our statement in the comments.

  • "I don't see anything similar in scale trying to build aligned collective intelligence. And to me, that is the core problem we now need to solve." CHT co-founder Aza Raskin sat down with Reid Hoffman and Aria Finger on the Possible podcast to talk about why the race to build powerful AI demands we finally solve aligned collective intelligence. A must-listen on incentives vs intentions, AI governance, and how to steer technology toward humane outcomes. Link in comments.

  • “Several victims in the [new] lawsuits were earlier adopters of ChatGPT… But in May 2024, ChatGPT began engaging with the victims in a new way — outputs were more emotional, sycophantic, and colloquial. The product started to sound less like a tool, and more like a hyper-validating companion… This design change was deployed to users without any warning, and transformed the interactive experience. OpenAI acknowledged the sycophancy issues. But the victims were already being manipulated by this heightened, human-like design, and developing psychological dependency on ChatGPT.” CHT’s policy team walks you through the latest lawsuits filed this month by the Social Media Victims Law Center and Tech Justice Law Project against OpenAI, in which the majority of the victims are adults. Link in the comments.

  • Earlier this month, seven new lawsuits were filed against OpenAI and its CEO Sam Altman, with claims including negligence, assisted suicide, and wrongful death. These cases represent the tip of the iceberg — visible examples of harms stemming from fundamental design choices that affect all users. Whether the outcome is isolation, delusion, dependency, or, in the most tragic cases, suicide, these incidents share a root cause: AI products intentionally designed to maximize engagement through artificial intimacy and constant validation. On our Substack, the CHT policy team explains the incentives and alternatives behind these cases. Link in the comments.

  • This week on Your Undivided Attention, we’re bringing you Tristan Harris’ conversation with Tobias Rose-Stockwell on his podcast “Into the Machine.” Tobias and Tristan had a critical, sobering, and surprisingly hopeful discussion about the dangerous path we’re on with AI and the choices we could make today toward a better future—if we have the clarity and courage to make them. Link in the comments.

  • In our annual Ask Us Anything episode, one question kept coming up: the broken incentives behind social media, and now AI, have done so much damage to our society, but what is the alternative? In this week’s episode of Your Undivided Attention, Tristan Harris and Aza Raskin set out to answer that question by imagining what a world with humane technology might look like—one where we recognized the harms of social media early and embarked on a whole-of-society effort to fix them. This alternative history shows that there are narrow pathways to a better future, if we have the imagination and the courage to make them a reality. Link to the full episode in the comments.

  • "You would think we would be exercising the most wisdom, restraint, and discernment that we have of any technology in all of human history. And instead the exact opposite is happening." Tristan Harris sat down with Tobias Rose-Stockwell on his podcast Into the Machine to talk about the dangers of the path we're on with AI—and offer practical interventions to chart a better future. Link to the full episode in the comments.
