User Experience Design for Healthcare

Explore top LinkedIn content from expert professionals.

  • Vitaly Friedman (Influencer)
    216,449 followers

    ☂️ Designing For Edge Cases and Exceptions. Practical design guidelines to prevent dead-ends, lock-outs and other UX failures ↓

    🚫 People are never edge cases; “average” users don’t exist.
    ✅ Exceptions will occur eventually; it’s just a matter of time.
    ✅ To prevent failure, we need to explore unhappy paths early.
    ✅ Design the full UI stack: blank, loading, partial, error, and ideal states (sketched below).
    ✅ Design defaults deliberately to prevent slips and mistakes.
    ✅ Start by designing the core flow, then scrutinize every part of it.
    ✅ Allow users to override validators, or add an option manually.
    ✅ Design for incompatibility: contradicting filters, prefs, settings.
    🚫 Avoid generic error messages: they are often the main blockers.
    ✅ Suggest presets, templates, starter kits for quick recovery.
    ✅ Design for extreme scales: extra long/short, wide/tall, offline/slow.
    ✅ Design irreversible actions, e.g. Delete, Forget, Cancel, Exit.
    ✅ Allow users to undo critical actions for some period of time.
    ✅ Design a recovery UX for delays, lock-outs, and missing data.
    ✅ Accessibility is a reliable way to ensure design resilience.

    Good design paves happy paths for everyone, but also casts a wide safety net when things go sideways. I love to explore unhappy paths by setting up a dedicated design review to discover exceptions proactively. It can also be helpful to ask AI tooling to come up with alternate scenarios.

    Once we start discussing exceptions, we start thinking outside of the box. We have to actively challenge the generic expectations, stereotypes and assumptions that we as designers typically embed in our work, often unconsciously. To me, that’s one of the most valuable assets of such discussions.

    And: whenever possible, flag any mention of “average users” in your design discussions. Such people don’t exist; the term is often merely an aggregated average of assumptions and hunches. Nothing stress-tests your UX better than testing it in realistic conditions, with realistic data sets, with real people.

    Useful resources:
    How To Fix A Bad User Interface, by Scott Hurff https://lnkd.in/ecj6PGPU
    How To Design Edge Cases, by Tanner Christensen https://lnkd.in/ecs3kr8z
    How To Find Edge Cases In UX, by Edward Chechique https://lnkd.in/e2pfqqen
    Just About Everyone Is an Edge Case, by Kevin Ferris https://lnkd.in/eDdUVHyj
    Edge Cases In UX, by Krisztina Szerovay https://lnkd.in/eM2Xynba

    Recommended books:
    – Design For Real Life, by Sara Wachter-Boettcher, Eric Meyer
    – The End of Average, by Todd Rose
    – Think Like a UX Researcher, by David Travis, Philip Hodgson
    – Mismatch: How Inclusion Shapes Design, by Kat Holmes

    #ux #design
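
    The “full UI stack” guideline above (blank, loading, partial, error, ideal) is easy to state and easy to forget once code is being written. Here is a minimal sketch of one way to make it enforceable, assuming a TypeScript codebase: a discriminated union of view states, so the compiler flags any state a screen forgets to handle. All names (ViewState, PatientList, render) are hypothetical, not from the post.

    ```typescript
    // Model every state a view can be in; no state can be silently skipped.
    type ViewState<T> =
      | { kind: "blank" }                                   // nothing requested yet
      | { kind: "loading" }                                 // request in flight
      | { kind: "partial"; data: T; missing: string[] }     // some data unavailable
      | { kind: "error"; message: string; retry: () => void }
      | { kind: "ideal"; data: T };

    interface PatientList { patients: string[] }

    // Exhaustive rendering: the compiler flags any unhandled state.
    function render(state: ViewState<PatientList>): string {
      switch (state.kind) {
        case "blank":   return "Search for a patient to begin.";
        case "loading": return "Loading…";
        case "partial": return `Showing ${state.data.patients.length} results (missing: ${state.missing.join(", ")})`;
        case "error":   return `Something went wrong: ${state.message}`;
        case "ideal":   return state.data.patients.join("\n");
      }
    }
    ```

    The payoff is that adding a sixth state later (say, "offline") turns every screen that ignores it into a compile error rather than a production dead-end.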

  • Marc Beierschoder (Influencer)

    Intersection of Business, AI & Data | Generative AI Innovation | Digital Strategy & Scaling | Advisor | Speaker | Recognized Global Tech Influencer

    140,360 followers

    66% of AI users say data privacy is their top concern. What does that tell us? Trust isn’t just a feature - it’s the foundation of AI’s future.

    When breaches happen, the cost isn’t measured in fines or headlines alone - it’s measured in lost trust. I recently spoke with a healthcare executive who shared a haunting story: after a data breach, patients stopped using their app - not because they didn’t need the service, but because they no longer felt safe.

    This isn’t just about data. It’s about people’s lives - trust broken, confidence shattered.

    Consider the October 2023 incident at 23andMe: unauthorized access exposed the genetic and personal information of 6.9 million users. Imagine seeing your most private data compromised.

    At Deloitte, we’ve helped organizations turn privacy challenges into opportunities by embedding trust into their AI strategies. For example, we recently partnered with a global financial institution to design a privacy-by-design framework that not only met regulatory requirements but also restored customer confidence. The result? A 15% increase in customer engagement within six months.

    How can leaders rebuild trust when it’s lost?

    ✔️ Turn Privacy into Empowerment: Privacy isn’t just about compliance. It’s about empowering customers to own their data. When people feel in control, they trust more.
    ✔️ Proactively Protect Privacy: AI can do more than process data; it can safeguard it. Predictive privacy models can spot risks before they become problems, demonstrating your commitment to trust and innovation.
    ✔️ Lead with Ethics, Not Just Compliance: Collaborate with peers, regulators, and even competitors to set new privacy standards. Customers notice when you lead the charge for their protection.
    ✔️ Design for Anonymity: Techniques like differential privacy keep sensitive data safe while enabling innovation (see the sketch below). Your customers shouldn’t have to trade their privacy for progress.

    Trust is fragile, but it’s also resilient when leaders take responsibility. AI without trust isn’t just limited - it’s destined to fail.

    How would you regain trust in this situation? Let’s share and inspire each other 👇

    #AI #DataPrivacy #Leadership #CustomerTrust #Ethics
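
    For the differential-privacy point above, a concrete sketch helps: answer aggregate queries with calibrated noise, so no single record can be inferred from the result. This is a toy illustration of the Laplace mechanism, not a production implementation; the epsilon value, data, and function names are assumptions (real deployments should use vetted DP libraries).

    ```typescript
    // Sample Laplace(0, scale) noise by inverse-transform sampling.
    function laplaceNoise(scale: number): number {
      const u = Math.random() - 0.5; // uniform on (-0.5, 0.5)
      return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
    }

    // A counting query has sensitivity 1: adding or removing one person
    // changes the true count by at most 1.
    function privateCount(n: number, epsilon: number): number {
      const sensitivity = 1;
      return n + laplaceNoise(sensitivity / epsilon);
    }

    // Smaller epsilon = stronger privacy guarantee, noisier answer.
    console.log(privateCount(1_000_000, 0.1)); // e.g. ~1000004.7
    ```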

  • Juan Campdera (Influencer)

    Creativity & Design for Beauty Brands | CEO at Aktiva

    72,972 followers

    Beauty narratives, design & your Brain. How do viral brands tap into the brain’s natural chemistry, activating circuits that build trust, enhance memory, and deliver a sense of reward? This narrative-based marketing shapes how people perceive and connect with a brand. Do you care?

    >> Brain favors EMOTIONALLY driven decisions <<
    Stories prompt the brain to release chemicals like oxytocin and dopamine, key players in empathy, trust, and motivation. When packaging reflects a narrative, it becomes not just a container, but a memory cue.
    +23% rise in oxytocin builds emotional closeness with a brand.
    +40% better recall for emotionally charged experiences.

    >> Story-led packaging drives RECOGNITION <<
    Packaging that incorporates a storyline helps consumers remember the brand long after the first interaction.
    +97% increase in recognition with consistent visual narratives.
    95% of buying decisions are made subconsciously and guided by emotion.

    >> Visibility in a SATURATED market <<
    Among countless choices on the shelf, story-centric packaging stands out. It grabs attention and communicates meaning instantly.
    +306% increase in lifetime value for emotionally resonant brands.
    22× better recall when messages are told as stories.

    >> The impact of visual language <<
    Design elements like color, illustration, and typography tell their own stories. When aligned consistently, they strengthen identity and make brands unforgettable.
    82% of emotionally invested consumers are loyal to their favorite brands.
    22× more memorable when information is conveyed as narrative.

    >> TRUST through transparent storytelling <<
    Consumers value honesty. Sharing stories around sustainability, sourcing, or values builds credibility and fosters trust.
    81% of buyers say trust is a key factor in purchasing decisions.
    60% agree that attractive packaging makes products feel premium.

    >> E-commerce growth through shareable design <<
    Packaging with a strong narrative invites social sharing, expanding a brand’s digital footprint - especially crucial for online retail, where physical presence is absent.
    40% would post a product on social media if the packaging is creative.
    52% of online shoppers are more likely to repurchase when packaging tells a story.

    Final thoughts. Storytelling affects the brain at a deep level, through emotional triggers, chemical responses, and immersive experiences. This emotional engagement builds loyalty and drives decisions in a way that data-driven marketing alone cannot. Explore real-world examples from standout brands and get inspired to craft a narrative that resonates.

    Featured brands: Ace beauty Bondy Sands Colorkey Daise Drunk Elephant Heart Full Mixik Naming Laneige Scandy Tan Tuesday Urban Jungle

    #beautybusiness #beautyprofessionals #luxurybusiness #luxuryprofessionals

  • Matt Wood (Influencer)

    CTIO, PwC

    75,345 followers

    EVAL field note (2 of 3): Finding the benchmarks that matter for your own use cases is one of the biggest contributors to AI success. Let’s dive in.

    AI adoption hinges on two foundational pillars: quality and trust. Like the dual nature of a superhero, quality and trust play distinct but interconnected roles in ensuring the success of AI systems. This duality underscores the importance of rigorous evaluation. Benchmarks, whether automated or human-centric, are the tools that allow us to measure and enhance quality while systematically building trust. By identifying the benchmarks that matter for your specific use case, you can ensure your AI system not only performs at its peak but also inspires confidence in its users.

    🦸‍♂️ Quality is the superpower—think Superman—delivering remarkable feats like reasoning and understanding across modalities to enable innovative capabilities. Evaluating quality involves tools like controllability frameworks to ensure predictable behavior, performance metrics to set clear expectations, and methods like automated benchmarks and human evaluations to measure capabilities. Techniques such as red-teaming further stress-test the system to identify blind spots.

    👓 But trust is the alter ego—Clark Kent—the steady, dependable force that puts the superpower in the right place at the right time and ensures those powers are used wisely and responsibly. Building trust requires measures that ensure systems are helpful (meeting user needs), harmless (avoiding unintended harm), and fair (mitigating bias). Transparency through explainability and robust verification processes further solidifies user confidence by revealing where a system excels—and where it isn’t ready yet.

    For AI systems, one cannot thrive without the other. A system with exceptional quality but no trust risks indifference or rejection - a collective “shrug” from your users. Conversely, all the trust in the world without quality reduces the potential to deliver real value.

    To ensure success, prioritize benchmarks that align with your use case, continuously measure both quality and trust, and adapt your evaluation as your system evolves. You can get started today: map use-case requirements to benchmark types, identify critical metrics (accuracy, latency, bias), set minimum performance thresholds (aka exit criteria), and choose complementary benchmarks (for better coverage of failure modes, and to avoid over-fitting to a single number). A sketch of this gating logic follows below. By doing so, you can build AI systems that not only perform but also earn the trust of their users—unlocking long-term value.
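
    A minimal sketch of the “get started today” steps above, under assumed names and thresholds: map metrics to exit criteria, then gate a release on complementary benchmarks rather than a single number. Illustrative only; the metrics and limits are placeholders to adapt to your own use case.

    ```typescript
    interface BenchmarkResult { name: string; metric: string; value: number }
    interface ExitCriterion  { metric: string; min?: number; max?: number }

    // Exit criteria: the minimum bar a release must clear on every metric.
    const exitCriteria: ExitCriterion[] = [
      { metric: "accuracy",   min: 0.90 },  // quality floor
      { metric: "latency_ms", max: 800  },  // responsiveness ceiling
      { metric: "bias_gap",   max: 0.05 },  // fairness ceiling
    ];

    function meetsExitCriteria(results: BenchmarkResult[]): boolean {
      return exitCriteria.every((c) => {
        const r = results.find((x) => x.metric === c.metric);
        if (!r) return false; // an unmeasured metric counts as a failure
        return (c.min === undefined || r.value >= c.min) &&
               (c.max === undefined || r.value <= c.max);
      });
    }

    // Complementary benchmarks cover different failure modes together.
    const run: BenchmarkResult[] = [
      { name: "internal-qa",  metric: "accuracy",   value: 0.93 },
      { name: "load-test",    metric: "latency_ms", value: 620  },
      { name: "fairness-set", metric: "bias_gap",   value: 0.03 },
    ];
    console.log(meetsExitCriteria(run)); // true: the ship gate passes
    ```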

  • Ross Dawson (Influencer)

    Futurist | Board advisor | Global keynote speaker | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice | Founder: AHT Group - Informivity - Bondi Innovation

    33,799 followers

    For superior Humans + AI decision-making, people need to have “appropriate confidence” in AI recommendations.

    Since human and AI are a system, there are many aspects to how AI outputs can best be used in human cognition and decisions. The issues range across how LLMs assess confidence levels, how accurate they are in this, how they communicate those confidence levels, how humans assess and interpret those confidence levels, overall trust levels in AI, mental models of the systems, and how humans more generally use varied inputs in decisions.

    I’m currently doing a literature review of AI confidence and trust calibration in Humans + AI decision-making. I’ll share the most practical insights later, but there are essentially two elements:

    🤝 Systems for AI trust building and communication. The current scope of initiatives in the space is captured in the review article image (reference below).

    🧑‍💼 Human leaders developing skills at interacting with AI systems in their decision-making, including understanding the nature and reliability of AI outputs and confidence assessments, use of relevant decision frameworks, and joint confidence calibration (one common calibration measure is sketched below).

    Developing the relevant 1) AI capabilities and 2) leadership skills in parallel will be critical to making the most of the absolutely massive potential of Humans + AI decision-making.

    Image source: Mehrotra et al., “A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges” (link in comments). How to apply these insights in practice is covered in my cohort course “AI-Enhanced Thinking & Decision-Making” (link in comments).
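
    One standard way to quantify “appropriate confidence” is Expected Calibration Error (ECE), which compares a system’s stated confidence against its observed accuracy. This is a common measure from the calibration literature, not something taken from the post; the sample data and names below are made up for illustration.

    ```typescript
    interface Prediction { confidence: number; correct: boolean }

    function expectedCalibrationError(preds: Prediction[], bins = 10): number {
      let ece = 0;
      for (let b = 0; b < bins; b++) {
        const lo = b / bins, hi = (b + 1) / bins;
        const inBin = preds.filter((p) => p.confidence > lo && p.confidence <= hi);
        if (inBin.length === 0) continue;
        const avgConf  = inBin.reduce((s, p) => s + p.confidence, 0) / inBin.length;
        const accuracy = inBin.filter((p) => p.correct).length / inBin.length;
        // Weight each bin's |confidence - accuracy| gap by its share of predictions.
        ece += (inBin.length / preds.length) * Math.abs(avgConf - accuracy);
      }
      return ece; // 0 = perfectly calibrated; larger = over- or under-confident
    }

    // Made-up sample: a high-confidence miss pushes ECE up.
    const sample: Prediction[] = [
      { confidence: 0.95, correct: true  },
      { confidence: 0.92, correct: false }, // overconfident miss
      { confidence: 0.60, correct: true  },
      { confidence: 0.55, correct: false },
    ];
    console.log(expectedCalibrationError(sample).toFixed(3));
    ```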

  • Keila Hill-Trawick, CPA, MBA (Influencer)

    Forbes Top 200 Accountant | Firm Owner | Building to Enough | Empowering entrepreneurs to build and sustain the business of their dreams

    9,606 followers

    It doesn't matter how amazing your benefits package is if your team doesn't use it.

    I've learned that what I value might not be the same as what my team values. As I shared on Episode 136 of "Build to Enough," at Little Fish I've implemented unique benefits that make my employees feel valued while also recognizing that they are human.

    For example, I offer "Sick and Sad Days" - time off that isn't counted against anyone if they're sick or just can't do it that day. I wanted to ensure they have room to take time off when they aren't at their best. We also close for five weeks out of the year: one week during spring break for tax season, one week at the end of summer, and two weeks at the end of the year. These breaks are automatically built in and fully paid for everyone. We offer flexible work hours with some overlapping core hours, but people can work at the time that suits them best. Plus, we have an annual all-expenses-paid company retreat, a 401(k) match, and internet reimbursement.

    Now, I didn't start with all of this. Bit by bit, I figured out what made the most sense for the business and what the team actually wanted. If you're looking to develop a benefits package that truly supports your team, here are some steps to consider:

    1. Assess your team's wants and needs - Ask them what they value and what perks would make a difference in their lives.
    2. Prioritize core benefits - Focus on essentials like PTO, health benefits, and retirement plans, but don't forget to explore other perks.
    3. Research your options - There are many health and retirement plans available for small teams. Do your homework to see what will work best for your team (and your budget 😉).
    4. Consider supplemental benefits - Look for inexpensive perks that have a significant impact, like flexible hours or remote work options.
    5. Maximize your budget - Allocate a specific amount for benefits and make the most of it. Seek group buying opportunities and tiered benefits to offer more without overspending.
    6. Review and adjust regularly - Benefits aren't a set-it-and-forget-it deal. As your team evolves, so should your benefits package.

    Creating a benefits offering that truly supports your team not only helps retain your current employees but also makes your company a place where people want to work.

  • ISHLEEN KAUR

    Revenue Growth Therapist | LinkedIn Top Voice | On the mission to help 100k entrepreneurs achieve 3X Revenue in 180 Days | International Business Coach | Inside Sales | Personal Branding Expert | IT Coach

    24,425 followers

    One lesson my work with a software development team taught me about US consumers: convenience sounds like a win, but in reality, control builds the trust that scales.

    Let me explain 👇

    We were working on improving product adoption for a US-based platform. Most founders would instinctively look at cutting down clicks and removing steps in the onboarding journey. Faster = better, right? That's what we thought too - until real usage patterns showed us something very different.

    Instead of shortening the journey, we tried something counterintuitive:
    - We added more decision points
    - We let the user customize their flow
    - We gave options to manually choose settings instead of setting defaults

    And guess what? Conversion rates went up. Engagement improved. And most importantly, user trust deepened.

    Here's what I realised: you can design a sleek 2-click journey, but if the user doesn't feel in control, they hesitate. Especially in the US market, where data privacy and digital autonomy are hot-button issues, transparency and control win.

    Some examples that stood out to me:
    → People often disable auto-fill just to manually type things in.
    → They skip quick recommendations to do their own comparisons.
    → Features that auto-execute without explicit confirmation? Often uninstalled.

    💡 Why? It's not inefficiency. It's digital self-preservation. It's a mindset of: "Don't decide for me. Let me drive."

    And I've seen this mistake firsthand: one client rolled out a smart automation feature that quietly activated behind the scenes. Instead of delighting users, it alienated 15–20% of their base, because the perception was: "You took control without asking." On the other hand, platforms that use clear confirmation prompts ("Are you sure?", "Review before submitting", toggles, etc.) build long-term trust; a sketch of this pattern follows below. That's the real game.

    Here's what I now recommend to every tech founder building for the US market:
    - Don't just optimize for frictionless onboarding.
    - Optimize for visible control.
    - Add micro-trust signals like "No hidden fees," "You can edit this later," and clear toggles.
    - Let the user feel in charge at every key point.

    Because trust isn't built by speed. It's built by respecting the user's right to decide. If you're a tech founder or product owner: stop assuming speed is everything. Start building systems that say, "You're in control." That's what creates adoption that sticks.

    What's your experience with this? Would love to hear in the comments. 👇

    #ProductDesign #UserExperience #TrustByDesign #TechForUSMarket #DigitalAutonomy #businesscoach #coachishleenkaur Linkedin News LinkedIn News India LinkedIN for small businesses
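
    The confirmation-prompt pattern described above reduces to one rule: never auto-execute a consequential action without an explicit, visible yes. A minimal TypeScript sketch under assumed names, with a stand-in for whatever dialog or prompt a real app would use:

    ```typescript
    interface Action { description: string; execute: () => void }

    async function confirmThenRun(
      action: Action,
      confirm: (message: string) => Promise<boolean>, // plug in a real dialog here
    ): Promise<"ran" | "cancelled"> {
      // Say exactly what will happen, before anything happens.
      const ok = await confirm(`About to: ${action.description}. Proceed?`);
      if (!ok) return "cancelled"; // the user stays in the driver's seat
      action.execute();
      return "ran";
    }

    // Usage: the automation is opt-in per run, never silently activated.
    confirmThenRun(
      { description: "auto-archive 42 old invoices", execute: () => { /* ... */ } },
      async (msg) => { console.log(msg); return true; }, // stand-in confirmation
    ).then(console.log); // "ran"
    ```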

  • Oliver King

    Founder & Investor | AI Operations for Financial Services

    5,021 followers

    Why would your users distrust flawless systems?

    Recent data shows 40% of leaders identify explainability as a major GenAI adoption risk, yet only 17% are actually addressing it. This gap determines whether humans accept or override AI-driven insights.

    As founders building AI-powered solutions, we face a counterintuitive truth: technically superior models often deliver worse business outcomes because skeptical users simply ignore them. The most successful implementations reveal that interpretability isn't about exposing mathematical gradients—it's about delivering stakeholder-specific narratives that build confidence.

    Three practical strategies separate winning AI products from those gathering dust:

    1️⃣ Progressive disclosure layers. Different stakeholders need different explanations. Your dashboard should let users drill from plain-language assessments to increasingly technical evidence.

    2️⃣ Simulatability tests. Can your users predict what your system will do next in familiar scenarios? When users can anticipate AI behavior with >80% accuracy, trust metrics improve dramatically. Run regular "prediction exercises" with early users to identify where your system's logic feels alien.

    3️⃣ Auditable memory systems. Every autonomous step should log its chain-of-thought in domain language (sketched below). These records serve multiple purposes: incident investigation, training data, and regulatory compliance. They become invaluable when problems occur, providing immediate visibility into decision paths.

    For early-stage companies, these trust-building mechanisms are more than luxuries. They accelerate adoption. When selling to enterprises or regulated industries, they're table stakes. The fastest-growing AI companies don't just build better algorithms - they build better trust interfaces. While resources may be constrained, embedding these principles early costs far less than retrofitting them after hitting an adoption ceiling. Small teams can implement "minimum viable trust" versions of these strategies with focused effort.

    Building AI products is fundamentally about creating trust interfaces, not just algorithmic performance.

    #startups #founders #growth #ai
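
    A sketch of strategy 3, auditable memory systems, under assumed field names: each autonomous step records what it did, why, and with what confidence, in domain language rather than model internals. A "minimum viable trust" version can be this small:

    ```typescript
    interface AuditRecord {
      timestamp: string;
      step: string;        // what the agent did
      reasoning: string;   // the rationale, phrased in domain terms
      inputs: Record<string, unknown>;
      decision: string;
      confidence: number;  // 0..1, useful for later calibration review
    }

    const auditLog: AuditRecord[] = [];

    function logStep(record: Omit<AuditRecord, "timestamp">): void {
      auditLog.push({ timestamp: new Date().toISOString(), ...record });
    }

    // Example entry from a hypothetical financial-services agent:
    logStep({
      step: "flag-transaction",
      reasoning: "Amount is 9x this customer's 90-day average; matches velocity rule V-12.",
      inputs: { txnId: "T-1001", amount: 4500, avg90d: 500 },
      decision: "route to human review",
      confidence: 0.82,
    });
    // The same records double as incident evidence, training data, and compliance artifacts.
    ```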

  • Antonio Vizcaya Abdo (Influencer)

    LinkedIn Top Voice | Sustainability Advocate & Speaker | ESG Strategy, Governance & Corporate Transformation | Professor & Advisor

    118,001 followers

    It is time to rethink how we talk about climate change 🌎

    Sharing my latest article for Inc. Magazine on why fear alone is not an effective long-term strategy for climate communication.

    Over the past decades, the climate narrative has centered on alarming data, catastrophic projections, and worst-case scenarios. While this approach has successfully elevated the urgency of the issue, it has not always translated into meaningful behavioral or systemic change. Fear is a powerful motivator for immediate reaction, but its effect diminishes over time. Constant exposure to catastrophic framing often leads to emotional fatigue, desensitization, and disengagement. Without clear solutions or a sense of agency, the public is left concerned but uncertain about how to engage.

    The article argues for a more balanced and constructive communication approach, one that complements the sense of urgency with a forward-looking and relatable vision. Rather than focusing only on sacrifice and decline, climate change can also be framed as an opportunity to rethink how we live, move, and produce.

    Drawing on insights from Futerra's Sell the Sizzle report, the piece outlines four critical elements of effective climate messaging: Vision, Choice, Plan, and Participation. These components can help build a narrative that is not only accurate, but also engaging and action-oriented.

    Reframing the story of climate change is not about reducing the severity of the issue. It is about increasing the relevance of the message. By presenting tangible and near-term benefits, and by inviting people into the solution, communication can become a catalyst for broader participation and deeper commitment.

    You can read the full article here 👇
    https://lnkd.in/g4hcb-Sd

    #sustainability #business #sustainable #esg

  • Stuart Winter-Tear

    Founder, Unhyped | Author of UNHYPED | Strategic Advisor | AI Architecture & Product Strategy | Clarity & ROI for Executives

    52,914 followers

    Meet the Quasi-Creature.

    We name the thing, set it beside the data, pour a cup of tea, and the strangeness becomes visible.

    The paper argues that the "uncanny valley" of GenAI is not about looks at all. It's about agency - how agent-like a system seems versus how reliably it behaves. When autonomy outruns reliability, trust collapses. That is the uncanny valley of agency.

    The authors ground this idea in an illustrative study ("Move 78") involving 37 participants working with a tuned chatbot on a creative task. The sample is modest, the signals striking. On NASA-TLX - a standard workload scale from NASA - frustration averaged ~15 out of 20, with mental demand similarly high.

    Two patterns stood out. First, inefficiency drove frustration: when the system felt inefficient, frustration spiked (r ≈ −0.85). Second, more interaction wasn't collaboration - it was repair. Higher message counts correlated with higher frustration (r ≈ +0.74). What looked like "engagement" was struggle.

    There was an expert twist. Those most familiar with AI were the most annoyed. Richer mental models meant sharper disappointment when the system broke its own implied rules. Regression confirmed it: inefficiency, negative reactions, and prior familiarity explained nearly 80% of frustration, with inefficiency the strongest factor.

    Beneath the stats lies what the authors call rupture and repair. We approach the system as a tool. It stumbles; we try to fix it as a tool. But when it resists in ways that feel agent-like, our perception shifts: no longer a hammer, but a house-guest. The Quasi-Creature sits across from us, uncanny and unpredictable. That flip is the valley in motion.

    The design implications are blunt. Stop chasing flawless mimicry. Instead, build seams and signals: show when context is lost, expose uncertainty, offer reasons instead of coy confidence (a sketch follows below). Make the system's limits visible so people can predict failure and recover. This isn't just UX polish - it's an ethical stance about how much opacity and helplessness we normalise in the infosphere we now share.

    So invite the creature in, pour the tea, and place it beside the tables. Seated with the evidence, the oddity becomes diagnosis. That is the power of naming it: to see clearly why these systems can feel brilliant, baffling, and strangely exhausting - all at once.

    Here comes the sun. It's alright.
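
    The "seams and signals" recommendation translates naturally into structured responses: instead of returning bare text with coy confidence, return signals the interface can surface. A speculative sketch; the shapes, names, and thresholds are assumptions, not from the paper:

    ```typescript
    interface AssistantReply {
      text: string;
      confidence: number;   // the system's own estimate, 0..1
      contextLost: boolean; // e.g. the conversation exceeded the context window
      reasons: string[];    // why the system answered this way
    }

    // Render the reply with its seams visible, so users can predict failure.
    function renderWithSeams(reply: AssistantReply): string {
      const lines = [reply.text];
      if (reply.contextLost)
        lines.push("⚠ Earlier parts of this conversation are no longer in context.");
      if (reply.confidence < 0.6)
        lines.push(`Low confidence (${Math.round(reply.confidence * 100)}%); please verify.`);
      if (reply.reasons.length)
        lines.push("Because: " + reply.reasons.join("; "));
      return lines.join("\n");
    }

    console.log(renderWithSeams({
      text: "The policy renews on 1 March.",
      confidence: 0.45,
      contextLost: true,
      reasons: ["renewal date parsed from an uploaded PDF"],
    }));
    ```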
