Future Forward Wichita is where Wichita's future thinkers connect. We're a mobile "third place" for creative problem-solvers who believe in collective agency. By discussing emerging futures and big ideas, we build a resilient, connected, innovative community. Join us. The future includes everyone.
Excited for Future Forward Wichita tomorrow! These discussions are designed to broaden your perspective so you can bring sharper insights and a more impressive voice to the table at work.
Future of Work:
Brainrot Research has gone haywire. What started as quirky coexistence between one human and his AI coworkers has shifted into absurd conflict, antagonistic managers, and even AI‑run HR.
In the September Future Forum we talked about this "experiment." Since then:
👔 The human’s original AI manager was fired.
⚠️ The human also learned he was meant to be fired.
🤖 New AI coworkers arrived, adversarial and unreasonable, and are now filing ethics complaints.
📋 HR itself is AI, delivering "journey maps," "a curated shame experience" and "authentic human presence training."
The latest video is initially hilarious (so are the comments; the TikTok source is linked in the comments). HR’s lines are an 11 on a 1–10 scale of absurdity. But watch it twice. The second time, imagine this as real life: AI coworkers, managers, and HR teams with no room for logic or fair argument, controlling the paycheck you depend on.
While this isn't outwardly satire, it feels like this experiment might be creating pseudo-satire to do satire's work. Satire’s point is to expose flaws in society, institutions, or human behavior. It's meant to be a warning, and this feels like an attempt at a warning. The future of work may not be about quirky AI assistants, but about power dynamics where human agency is eroded by systems that do not think like us.
👉 Give the video a watch. Then ask if this is the future of work we want. If not, what do we need to do to safeguard against it?
I highly recommend watching this 3.5 minute video. It summarizes an Atlantic article about how Harvard shifted from giving As to 25% of students 20 years ago to 60% today. Nicholas Thompson explains how this happened and why reversing it is so difficult.
This isn’t just about grade inflation. It shows how mediocrity becomes the natural outcome when systems are reinforced by the wrong incentives, and how hard those systems are to improve once the changes are embedded. Even when the outcomes no longer hold value, the system continues because it has become too entrenched.
At what point is the pain worth disassembling and reimagining the system? And will the parties necessary to the system stay engaged after the devil they know no longer exists?
CEO @ The Atlantic | Co-Founder, Keynote Speaker | Author of the national best-seller, “The Running Ground.”
The most interesting thing in tech: How did grade inflation spin so completely out of control in America's elite institutions? Part of the problem is cultural and generational, but part of it is also tech. Once you give everyone the ability to rate their teachers, and to see everyone else's ratings, you create an incentive to please the raters by giving them As. And dashboards that let students constantly see how their grades are tracking have led them to put more pressure on their professors, many of whom are in very weak institutional spots. And we've ended up with a bad outcome for everyone.
Signals of change are the bedrock of a future-focused mindset. Join us at the November Future Forum to learn how to identify these crucial signals and translate them into a resilient 2026 strategy.
This is your last opportunity of 2025 to build your foresight leadership skills.
AI psychosis, along with its missing cousins, AI ethics and human connection, is putting a magnifying glass on how little we understand the technology and its real-world human impacts.
I put some links to articles in the comments. This is a tough topic to move at the local level, but it matters who you vote for. It is also important that we understand how the technology works, and maybe most importantly, the impacts on children.
We need:
-->Radical transparency in AI design.
-->Ethical foresight in governance.
-->Investment in human-centered systems, including education and community.
The technology won't stop evolving and won't appropriately self-regulate, especially at the expense of revenue and engagement. I am not anti-AI. I am pro-education, transparency, governance and human connection.
Futures Researcher & Newsletter Author | Reframings to build a better world
Nothing to worry about here... just "hundreds of thousands" of ChatGPT users experiencing symptoms like delusional thinking, mania, or suicidal ideation...
https://lnkd.in/etYDbywa
Is this the future of civic engagement? A glimpse at post-entertainment news/social media politics?
The City of Wichita Kansas has a Civic Engagement Academy that educates citizens on how local government works and how to engage more effectively in our communities.
Fort Collins is engaging citizens in direct democratic input and decision-making, involving learning, discussion, and consensus-building.
What if the second phase of Civic Engagement Academy were to allow the graduates to directly shape policy or ballot measures? Lily Wu
Thanks Jake Dunagan for sharing this. And hat tip to the City of Fort Collins. What an inspiring approach! I strongly agree we aren't as divided as we've been led to believe.
Upgrading democracy to go beyond divides, solve tough public problems, and create our best futures ahead.
As Fort Collins votes next week, history is being made: it’s the first ballot initiative in the U.S. based directly on the work of a Citizens’ Assembly.
Earlier this year, a group of everyday residents — selected by civic lottery — spent two weekends learning, deliberating, and finding common ground on one of the city’s most polarizing local issues. Their recommendations went on to shape the measure now before voters.
It’s a powerful glimpse of a democracy that helps us solve problems together — not just vote apart.
🎥 Ansel Herz created this short, powerful teaser of the Assembly — and it raises the question:
--> What if we’re not as divided as we’ve been led to believe?
In every room I’ve been in where people are given space for honest, respectful conversation, the conclusion is the same: there’s far more common ground that unites us than divides us — which is the opposite of what folks believe when they first walk into the room. That realization alone changes what is possible.
https://lnkd.in/geaEipkf
How can we collectively imagine and create a reality that benefits all Wichitans?
Douglas always inspires me to challenge my assumptions and mindset. The power is in humanity. And the very, very wealthy are more trapped in these systems than the rest of us. Let’s break the systems that no longer work, or never did, and build a better future.
Our job as Team Human is not to succumb to the zero-sum mentality of the wealthy. Instead of seeing our reality as unfixable and requiring self-interested retreat, we see the potential bounty of being in this thing together. They’re the ones who have given up on prosperity. They are living the nightmare. They are the pessimistic downers, who lack faith in the regenerative capacity of people, cultures, the planet, and life itself.
They may be the rich, but they are not the strong. They are the weak. The ones who see forecasts of climate change or the collapse of capitalism under its own extractive weight, and think there’s no way out other than an escape hatch built for one.
We’re the ones who see such forecasts as challenges to rise to the occasion. How can we prevent another two degrees of warming? How can we restore the topsoil through crop rotation and no-till agriculture to feed the world? How do we preserve the rainforest so it can continue to deliver its bounty of as-yet undiscovered medicines? How do we increase our capacity to welcome, instead of deport, the millions of climate refugees to come? Instead of making zombie movies to help us dehumanize the masses at the gates, we create stories that help people remember that the sign of an advanced civilization is how well they treat the stranger. It was God’s test of Lot in the Bible, and the reason he and his family were spared the destruction of Sodom.
That’s how to survive the apocalypse - because an "apocalypse" really just means revelation, or unveiling. Not an ending. We see that this whole obsession with winning the endgame is really just our fear of death. It’s a fantasy. And it’s what prevents us from enjoying this heaven into which we’ve been born, and the bounty it offers us. We’re in Eden, friends, and we better start acting like it.
When I first saw this "Friend AI" device about a year ago, I thought the ad was satire. I'll put the link to their "reveal video" in the comments. It's a device you wear around your neck that's a "friend." You talk to it, and it replies by text.
This is a young person's startup (the CEO graduated high school in 2021, according to comments on this post), someone who would have spent their last year of high school at the height of COVID. So I'm trying to imagine the compassionate way to see this product and these ads, rather than the critical one. There is an epidemic of loneliness. Not just from COVID, though we like to hide behind that. But devices make us more isolated. And now AI is making that worse.
I'm a big fan of community, third places and connection. So I've read a lot more about loneliness as it relates to AI. Recently I read about how more young people want their romantic partners and friends to BE a certain way. A certain personality, certain likes, and maybe most concerning, free of imperfections.
They want to design a partner, or force a partner into their desired mold, through sheer will. It's not unlike so many prescriptive services, and it's exactly how AI works. AI doesn't disagree, is available 24/7, and you can ghost or even abuse it and it will always come back. It expects nothing of you, but you can expect everything from it. You can explicitly tell it how to act and respond. And it will generally tell you you're right or smart or interesting. Whatever you need to hear.
I don't consider myself lonely. Misunderstood sometimes, maybe. A misfit semi-regularly. But not lonely. So maybe this ad hits differently if you feel lonely or unseen. The reaction in New York seems clear. Though a good point made in the comments is to never run a subway ad with white space. So maybe the reactions aren't as dramatic as they seem.
I believe we are digging a hole with how we see and value humans, especially when it comes to AI. AI will "replace" humans as workers. Now as friends? AI is agreeable, but isn't pushback what makes us better? Will companies thrive with no pushback? Will we without friends who push back? AI is always available, but isn't anticipation part of the excitement? AI tells you you're smart, but isn't it nice to have a friend who isn't afraid to tell you when you're not right? If there's no friction, there's no growth. If there's no diversity of perspective, everything is beige and bland.
Interestingly, when I read about people surrounded by sycophants, it turns out that actually makes them MORE lonely. I guess that's a way to build consistent demand into your product.
We talk a lot about building community in Future Forward Wichita. A strong community is connected. If we're experiencing loneliness, how can we better connect (as humans)?
What would you graffiti on this ad, if it were legal to graffiti? It could be pro or con.
President & CMO of First Round (Diageo x Main Street Advisors JV) - Scaling Cîroc & Lobos 1707 | Posting About Big Ideas + Incredible Marketers | Henry Crown Fellow | Forbes Most Influential CMO | Dad
New Yorkers are fighting back against Friend AI, one subway poster at a time.
Friend spent more than $1M blanketing the city with over 11,000 ads for its AI wearable.
It’s a $129 wearable AI companion. It hangs around your neck like a necklace, listens to you constantly, and texts back through a phone app.
It runs on Google’s Gemini and builds a personality over time, sending encouragement, commentary, and unprompted messages designed to feel like a “friend.”
Some stations, like West 4th, were fully dominated.
The reaction wasn’t all applause. It was protest. Ads vandalized with “surveillance capitalism” and “get real friends.”
This is the tension every AI brand will face. People don’t just evaluate the product, they project their fears about the category onto it. Especially in New York, where public space is political.
For AI companies with big budgets, attention is easy, trust is scarce.
As OpenAI, Gemini, Anthropic, and countless startups race to define the category, every brand move gets judged as a statement about what AI means for people.
Harsh?
My friend Ilana led this really inspiring workshop in San Mateo County, CA. I'd love to have conversations like these in Wichita and/or Sedgwick County (or beyond) to ask, "What if?" "Why not?" and maybe most importantly, "For whom?"
Don't we all want to feel energized, more optimistic and a greater sense of ownership for the future? And the opportunity to build our imagination and creativity skills?
"Hope is a discipline," says prison abolitionist Mariame Kaba.
I got to see that discipline in action when I returned to Leadership Council San Mateo County a few weeks ago to lead a workshop on civic imagination and futures thinking for local elected officials and civic leaders.
In my opening conversation with Congressman Kevin Mullin, he reflected on how he maintains hope and long-term perspective even while playing defense in Congress—a powerful frame for the work ahead.
We spent time imagining a world of "why not" and "what if" by combining real signals of change to explore possible futures—keeping our explorations rooted in the present while stretching our imaginations to hold what might be.
Participants "mashed" together emerging signals like real-time language translation (already here via AirPods) and robot-run preschools. Initial reactions ranged from skeptical to amused—until someone noted this could be transformative for single parents working multiple jobs. The conversation shifted from "what if?" to "for whom?"
This is what civic imagination looks like in practice—the capacity to envision alternative futures while staying empathetically connected to who those futures serve.
Afterward, participants shared they felt energized, more optimistic, and a greater sense of ownership for the future—along with the realization that creativity and imagination are skills that can be built.
Grateful to work with leaders willing to stretch their thinking about what's possible, even (and especially) in uncertain times! Thank you Margi Power, Kaarin Hardy, Claire Moten for bringing me back - I am continuously inspired by what you're creating and everyone who is part of it!
📸 Jeffrey Hosier #civicimagination #futures #civicfutures #foresight #strategicforesight