AR faces a challenge that most developers never encounter… reality refuses to stay still. A storefront changes its display weekly. A stadium transforms from a baseball field into a concert venue overnight. Lighting shifts from harsh noon sun to soft evening glow. Crowds ebb and flow, creating constantly changing occlusion patterns.
Traditional computer vision expects consistency: it matches predefined patterns against stable environments. But real spaces are fluid, dynamic, and unpredictable. We learned this early in our journey while testing at major venues. Out-of-the-box visual positioning systems (VPS) often perform well in static, controlled conditions but struggle when confronted with even minor real-world changes, like a sudden holiday decoration or an unexpected weather shift.
This forced us to rethink AR from the ground up. Instead of trying to catalog every possible variation, we built systems that understand the fundamental structure of spaces and recognize when a change is meaningful versus superficial. Today, our AR experiences maintain accuracy even as environments evolve. They adapt to seasonal transitions, temporary changes, and dynamic lighting without requiring constant updates.
Perfect isn't real. Real is what matters.
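As a rough illustration of the general idea (a minimal sketch of one common approach, not this team's actual system), a localizer can ignore superficial change by matching only against map features anchored to stable structure. The class names and data here are hypothetical stand-ins:

```python
# Sketch: separate meaningful from superficial change by filtering out
# map features that sit on transient content before matching.
# (Illustration of the general technique, not the team's actual system.)
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical semantic classes assigned to each stored map feature.
STABLE = {"building", "signpost", "ground"}      # structure of the space
TRANSIENT = {"person", "display", "decoration"}  # expected to change

classes = np.array(rng.choice(sorted(STABLE | TRANSIENT), size=500))
descriptors = rng.normal(size=(500, 32))  # stand-in feature descriptors

# Keep only features anchored to stable structure for localization.
stable_mask = np.isin(classes, sorted(STABLE))
map_descriptors = descriptors[stable_mask]

query = rng.normal(size=(1, 32))
# Nearest-neighbour match against the stable subset only, so a new
# holiday decoration cannot pull the pose estimate off course.
dists = np.linalg.norm(map_descriptors - query, axis=1)
print("best stable-feature match index:", int(np.argmin(dists)))
```

The design choice this encodes: superficial change lives on transient classes, so excluding them at match time keeps localization stable without re-mapping the venue.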
User Experience Design for Wearables
Explore top LinkedIn content from expert professionals.
-
Excited to share research from our lab at Georgia Tech, in collaboration with researchers at Google and the University of Toronto! As we get closer to monocular AR or HUD glasses that actually look like everyday eyewear, a key HCI question is where to position a monocular HUD so it’s comfortable to consume content—without being distracting or interruptive, especially with proactive AI. This review paper brings together decades of research, including recent work from our lab post-Google Glass—especially by my team over the past four years. We synthesize findings across performance, comfort, interruptions, and social perception, including design recommendations in Section 10. Read the preprint: https://lnkd.in/emjNqYqU Please share with any teams building in this space! Big thanks to all my co-authors: Ethan Kimmel, Katherine Huang, Tyler Kwok, Yukun Song, Sofia Vempala, Blue (Georgianna) Lin, Ozan Cakmakci, and Thad Starner #AR #HUD #AugmentedReality #Wearables #HCI #UX #ARglasses #HumanFactors #Hypernova Meta Google Snap Inc. Samsung Electronics Qualcomm Vuzix Corporation Even Realities XREAL Lumus Ltd. Applied Materials Dispelix DigiLens Inc. Avegant JBD
-
I've been noticing a lot of unconstructive reviews of the Apple Vision Pro. I wanted to take a different approach by offering a few suggestions that I think would improve the overall user experience.
1. Rotating windows on the Vision Pro feels awkward: you have to pinch and drag the Window Bar while tilting your head. What if there were a better interaction? Introducing the Rotate Bar: pinch and drag the Rotate Bar while moving your hand to rotate a window.
2. I spend too much time organizing my workspace on the Vision Pro, and when I change locations, I have to do it all over again. What if you could save location-based workspaces? In this mock, a workspace automatically populates at its saved location.
3. Sometimes it's awkward to capture experiences from the POV cameras on the Vision Pro. What if you could capture AR scenes from other angles? Here's a mock that uses an iPhone to capture the same scene that's on the Vision Pro.
4. I love Vision Pro's Desktop Mirroring, but it's overwhelming to navigate inside a window with mouse/keyboard and everything else with pinch/gaze. What if your mouse could also control system-level UI? Here's a mock-up of using a mouse to adjust a window's position and scale.
5. The Vision Pro is difficult to use on the go: windows stay anchored to the ground rather than following you as you walk around. What if you could pin any window to the screen? Gaze at the Window Bar and double-tap to pin a window in screen space.
-
Your Wearable Says You're Stressed. But Are You?
Data without context isn't insight; it's noise. 🧠⌚
I've used both ŌURA and POLAR to guide my recovery and training. During intense cycles, whether it's preparing for a big push at work or stacking multiple workouts, those stress and recovery scores become part of my decision-making. When they drop into the red, I slow down. Rest. Recover. Sometimes, I worry.
But a new study just made me rethink what I'm really reacting to:
📊 781 students. 3 months. 352 self-reports per participant. Each wore a Garmin Vivosmart 4 while logging real-time ratings of their stress, tiredness, and sleep quality. The researchers then compared the wearable data with how people actually felt.
The results?
🟩 Sleep: solid alignment between devices and self-perception.
🟨 Tiredness: the connection was weak.
🟥 Stress: virtually no correlation between what the wearable reported and what users felt emotionally.
That "high stress" reading? Might be caffeine. Might be a tough interval session. Might even be joy. But it's not always psychological stress.
So, should we abandon our rings, bands, and watches? No. But we should stop outsourcing self-awareness to sensors. Wearables do sleep well. But they struggle with nuance.
🌀 Stress is contextual. 😴 Tiredness is experiential. 💬 Emotion is not electrical conductivity.
This research, part of the WARN-D project funded by the European Research Council, reminds us: wearables aren't replacements for self-reflection. They're amplifiers.
Here's where it gets bigger:
📈 Wearables are exploding. There are now 1.4 billion wearable devices globally. By 2030, the digital health market is expected to exceed $800 billion. More people than ever are basing real decisions (how to train, how to eat, even how to live) on what their device tells them. And that's not necessarily a bad thing. But without context?
⚠️ It can lead to overcorrection.
⚠️ To false alarms.
⚠️ Even to anxiety and obsessive behavior masked as "optimization."
We need to be careful that in the pursuit of data-driven health, we don't lose touch with body-driven wisdom.
✅ Use the data, but ask better questions.
🧠 Pair the signal with the story.
📲 Remember: a red alert doesn't always mean red zone.
If you're in fitness, wellness, or health tech, this is your moment. We can play a crucial role by helping people bridge the gap between raw biometrics and real behavior.
I'm not tossing my Oura or Polar. But I am evolving how I listen to them. Wearables are here to stay. Now it's up to us to make the data meaningful.
📖 Read the study here: https://lnkd.in/ejsVjKdj
#Wearables #Oura #Polar #WHOOP #FitnessTech #DigitalHealth #Stress #Recovery #MentalHealth #Longevity #DataScience #Coaching #WellnessIndustry #Leadership
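For the analytically inclined, here is a minimal sketch of the kind of comparison such a study implies: per-person rank correlation between device-reported and self-reported stress. The data here is simulated and the analysis is my illustration, not the WARN-D team's actual method:

```python
# Illustrative sketch (not the WARN-D analysis): compare device-reported
# stress with self-reported stress, per participant, on simulated data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_participants, n_reports = 781, 352

correlations = []
for _ in range(n_participants):
    # Hypothetical device stress scores (0-100), e.g. HRV-derived.
    device = rng.uniform(0, 100, n_reports)
    # Felt stress only weakly tied to the device signal here, mimicking
    # the "virtually no correlation" finding for stress.
    felt = 0.05 * device + rng.normal(50, 20, n_reports)
    rho, _ = spearmanr(device, felt)
    correlations.append(rho)

print(f"median within-person correlation: {np.median(correlations):.2f}")
```

A median near zero, as this simulation produces, is exactly the pattern that should make you pair the signal with the story before acting on a red alert.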
-
🩺 “We’re saving lives, but we might be losing control of patient trust.” A healthcare startup founder shared this during an intense product deep-dive a few weeks ago.
Their innovation was reshaping digital health:
- Real-time patient diagnostics powered by wearable sensors + AI
- Smart alerts that could predict complications before they arise
- Rapid adoption across elder-care homes and clinic networks
But behind their excitement was a growing concern: “We’re handling sensitive health data. HIPAA is just the beginning. What if we scale fast and misstep on privacy? We can’t afford a breach, not of data, and certainly not of trust.”
That’s where ID-PRIVACY® stepped in, not just as a compliance checkbox, but as a strategic enabler of responsible scale. Here’s what we enabled together:
- Precision Data Mapping: real-time visibility into who touches what, when, and where it flows
- AI-driven PIA (Privacy Impact Assessment): automated, adaptive, deeply contextual
- Consent Intelligence: patient-first controls, multilingual clarity, frictionless experience
- Synthetic Data Labs: safe AI model training with no real identities exposed
Within weeks, worry gave way to confidence: “I can walk into any hospital boardroom now and say we’re privacy-proof by design.”
That’s Precision Privacy in action: not a defense, but a growth foundation. Not overhead, but a trust multiplier. This healthtech startup didn’t just meet regulatory standards; they earned credibility. And in today’s trust economy, that’s the real currency of leadership.
Let’s architect more such stories. Let’s make privacy the launchpad for tomorrow’s healthcare breakthroughs.
ID-PRIVACY® | AI-Powered. Human-Centric. Future-Ready. Data Safeguard Inc. | Data Safeguard India
#HealthcareInnovation #DataPrivacy #TrustByDesign #DigitalHealth #PrivacyEngineering #HIPAACompliance #HumanCentricAI #DataSafeguard #CXOLeadership #EthicalAI
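To make the synthetic-data idea concrete, here is a toy sketch of fabricating plausible-but-fake patient records for model training. Everything here (field names, ranges, the SYN- ID scheme) is hypothetical and is not ID-PRIVACY's actual implementation:

```python
# Toy sketch of the synthetic-data idea: train and test on records that
# mimic the shape of real patient data without containing real identities.
# (Illustrative only; not ID-PRIVACY's product.)
import random

random.seed(7)

def synthetic_patient(i: int) -> dict:
    """Fabricate one plausible-but-fake patient record."""
    return {
        "patient_id": f"SYN-{i:06d}",           # synthetic ID, no real MRN
        "age": random.randint(65, 95),          # elder-care population
        "resting_hr": random.gauss(72, 9),      # bpm
        "spo2": min(100.0, random.gauss(96, 2)),  # %
        "fall_risk": random.choice(["low", "medium", "high"]),
    }

cohort = [synthetic_patient(i) for i in range(1000)]
print(cohort[0])
```

The point is structural: a model pipeline can be built and validated end to end on records like these before any real, consented data ever enters the system.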
-
⛔️ Beware of tunnel vision.
🤦🏽♂️ 15 years ago, I made this mistake... and I hope students working on exoskeletons, prosthetics, and other wearable assistive tech can learn from it rather than repeat it themselves.
💥 The mistake:
⚙️ I designed wearable assistive devices solely to improve specific biomechanical performance metrics as much as possible. I was hyper-focused on a few movement tasks and proof-of-concept lab testing. Success was showing something worked (benefited users biomechanically) in the lab, plus an academic publication.
🦄 But this often led to device concepts and knowledge generation that had limited or no hope of translating into societal impact. And I was not thinking deeply enough about actual end-users, their lives, and their needs.
🎯 What I learned (and what I do now):
💡 Now, I design wearable assistive tech to improve key performance metrics while being as practical as possible. My design mantra is: practical + effective. And I try to be brutally honest with myself about the practicality or impracticality of new concepts.
⚖️ I'm focused on user needs and user experience, in addition to biomechanics, and intentional about design trade-offs (not just trying to optimize biomechanical aspects).
🤔 I also spend a lot more time thinking proactively about how any new tech will (or will not) fit into a user's daily life.
🤷🏾♂️ Success is figuring out whether a practical solution (or one I anticipate could be practical in the future) can be effective and useful. And if not, then why? Failing is sometimes the greatest success, because we gain insight into the real limiting factors (which often aren't the biomechanics), and this motivates future research directions.
🚀 Over the last decade, this has been a major shift in my own R&D thinking and approach. I hope students are exposed to this way of thinking early in their careers, to help maximize the chances that their biomechanical science and assistive tech will have a positive impact on the research field and society.
#exoskeletons #exosuits #prosthetics #wearabletech
-
In a world where data privacy concerns are no longer theoretical but existential, we're reimagining the very foundations of emotion AI. At the heart of our mission is a simple yet radical idea: user sovereignty must never be the price of innovation. Further to that: privacy can become the innovation.
This was front and centre in our recent strategic discussions, where we explored how to architect systems that honour individual autonomy while still advancing the science of emotion. For us, trust isn't a slogan; it's an infrastructure. We're designing frameworks where users remain in full control of their emotional data. This isn't just about alleviating surveillance fears; it's about pioneering a future where emotional sovereignty is a human right.
As we build towards this vision, two technologies stand out as essential pillars: federated learning and a blockchain-based biometric two-factor authentication (2FA) layer.
Federated learning allows emotional models to be trained on-device, so raw data never leaves the user's wearable. Only the insights, never the source, are shared with our central systems. This still positions inTruth data as some of the most valuable data in the world (a true insight into global demographics), without our users' data being exploited by data brokers and misaligned shareholders. It's a paradigm shift in how we think about AI: decentralised, privacy-preserving, and deeply ethical.
The blockchain biometric 2FA layer adds another level of protection. By using biometrics as a key, we ensure that only the individual can unlock and authorise access to their emotional records. Together, these layers redefine the relationship between people and their digital selves.
We've already achieved what many thought impossible: deriving valence and arousal insights from beat-to-beat interval (BBI) data. This is very much just the beginning. The next frontier is building a scalable infrastructure that supports real-time emotional intelligence without ever compromising user autonomy. Think an AWS for the emotion data layer in every industry.
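For readers unfamiliar with federated learning, here is a minimal federated-averaging (FedAvg) sketch of the on-device training pattern described above. The model, features, and fleet here are simulated stand-ins, not inTruth's implementation:

```python
# Minimal FedAvg sketch: each wearable fits a model locally on its own
# BBI-derived windows; only weight updates, never raw data, reach the
# server. (My illustration, not inTruth's implementation.)
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, features, labels, lr=0.01, epochs=5):
    """One device's on-device training: linear model, squared loss."""
    w = weights.copy()
    for _ in range(epochs):
        preds = features @ w
        grad = features.T @ (preds - labels) / len(labels)
        w -= lr * grad
    return w  # only these weights leave the device

# Simulated fleet: 10 devices, each holding private (features, labels).
n_devices, n_samples, n_features = 10, 200, 4
devices = [
    (rng.normal(size=(n_samples, n_features)),  # BBI-derived features
     rng.normal(size=n_samples))                # e.g. arousal labels
    for _ in range(n_devices)
]

global_w = np.zeros(n_features)
for _ in range(20):
    # Each device trains locally; the server sees only returned weights.
    local_ws = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_ws, axis=0)  # FedAvg aggregation

print("aggregated weights:", np.round(global_w, 3))
```

The privacy property falls out of the data flow: the server aggregates parameters, so raw heart-rate windows never cross the device boundary.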
-
On wearable devices: many people feel they promise control by providing metrics on your sleep, your heart rate, your recovery, etc., but they stop short of helping you actually feel better. That's because between the data and you feeling better is action. But what action?
In this New York Times piece by Madison Malone Kircher, we see what can happen when tools like ŌURA deliver rich data but leave users struggling with unintended psychological consequences.
We're getting really good at tracking, but the future of health isn't in more numbers; it's in the translation that bridges the information-implementation gap. (Hello, AI health coaching!) What if your ring didn't just tell you that you slept poorly? What if it coached you to reframe the numbers, work through the stress, shift your schedule…
A wearable should help you connect with and care for your body, not panic and obsess over the numbers. For that, we need to design in empathy + expertise equally. We also need an understanding of who our end users are. The work is underway, and we're making progress, but there's still work to do.
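To make the "translation layer" concrete, here is a deliberately tiny, rule-based sketch of turning raw metrics into a supportive action. The field names and thresholds are hypothetical, and a real AI coach would be far richer; the point is only the shape of the data-to-action step:

```python
# Toy sketch of the information-implementation translation layer:
# map device numbers to an empathetic, actionable suggestion.
# (Hypothetical rules; a production coach would be far richer.)
from dataclasses import dataclass

@dataclass
class NightSummary:
    sleep_score: int       # 0-100, device-reported
    resting_hr_delta: int  # bpm above the user's personal baseline

def coach(night: NightSummary) -> str:
    """Translate numbers into one supportive next step."""
    if night.sleep_score < 60 and night.resting_hr_delta > 5:
        return ("Rough night, and your body is working harder than usual. "
                "Consider a lighter day and an earlier wind-down tonight.")
    if night.sleep_score < 60:
        return ("One poor night won't undo your progress. "
                "Try keeping caffeine to before noon today.")
    return "Solid recovery. Good day to take on the harder session."

print(coach(NightSummary(sleep_score=54, resting_hr_delta=8)))
```

Even this toy version shows the design principle: the output reframes the number and proposes a behavior, rather than handing the user a red score to panic over.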