Why AI upskilling is failing, and how you can fix it

Overview

In this episode of Today in Tech, host Keith Shaw is joined by Yvette Brown, co-founder of XPROMOS and a leading voice in generative AI education. They dive deep into the growing disconnect between AI adoption and employee readiness, with new research revealing that many AI projects are failing because upskilling efforts are falling short.

Yvette breaks down:
* Why relying on a "Debbie the AI gal" approach won't scale
* How AI "work slop" is flooding organizations with low-quality content
* What causes the "garbage in, garbage out" problem
* Why iteration, specificity, and context are critical when prompting
* The surprising power of tools like deep research and agentic AI pilots

They also explore practical AI fluency tips for marketers, managers, and knowledge workers, plus discuss whether the holiday shopping season could be a breakthrough moment for consumer-facing AI agents.

Don't miss this episode if you care about:
* Upskilling your team for AI success
* Avoiding common prompt engineering mistakes
* Using AI as a true collaborator, not just a shortcut
* Navigating the rise of agentic AI safely

Watch now and take on Yvette's AI homework challenge: Ask an AI to analyze your job and help you work smarter.

Transcript

Keith Shaw: As companies embrace AI, one of the biggest topics around employee usage has been upskilling. But recent research suggests many AI projects are failing because upskilling is failing. Why is this happening? We're going to dig into the issues on this episode of Today in Tech. Hi, everybody.

Welcome to Today in Tech. I'm Keith Shaw. Joining me on the show today is Yvette Brown, co-founder of Xpromos, which trains people on generative AI and AI fluency. Welcome to the show, Yvette.

Yvette Brown: Thanks, Keith. Happy to be here.

Keith: There was recent research from MIT and others suggesting many AI projects are failing because employees lack the skills to run, deploy, or operate them. With upskilling such a big topic, why aren't employees getting the skills they need?

Yvette: I'm not sure I can answer why they aren't getting the skills, but it makes sense that a lack of skills is a big part of the problem. We often treat AI like a magic "easy" button, and it's not.

Companies sometimes assume one person can be "Debbie, the AI gal," and that's not how this works. This is like basic computer literacy: everyone needs to understand the tech stack if the organization wants value from it. Many organizations still view this as something for engineers or tech teams.

In reality, to get good results, everyday knowledge workers need to be skilled up on how to use these tools.

Keith: You mentioned people treating AI as an easy button.

We heard the same thing from another guest who works on AI fluency: there's an assumption that AI will do everything you ask. From what you've seen, what are the big mistakes people make when they take your courses or when you talk to them about AI fluency?

Yvette: I've been in marketing my entire career, and we have a saying about client briefs: "Garbage in, garbage out."

Yvette: As a marketer, you take what the client gives you and do your best, making a lot of assumptions because they don't fill it out completely. AI is similar.

People say "make me a plan" or "write me a blog" without providing context, clarity, or constraints: no tone, no brand guidance, no details.

Yvette: We also fixate on the remaining time, not the time saved. If a task took six hours and now takes 30 minutes, you gained five and a half hours. Yes, you may still need to edit, but you're working faster, and often better, than before AI.

We're not embracing that win.

Keith: It feels like we're still in the copy-paste era: take the answer, paste it, and call it done. That's where people "save" those five hours, but it creates what people are calling AI "work slop."

Yvette: One hundred percent.

Work slop is real, and I'm glad it's getting surfaced. Your AI output doesn't have to be like that. In our training, an early lesson is: don't take the first answer AI gives you.

Yvette: There's no reason to accept the first draft. Take 30 seconds and ask it to look again. Don't just check for bias and factual accuracy; adjust the tone, push it to do better for your use case.

With a few iterations, you'll get a tighter, better answer, more focused on what you're trying to communicate.
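
As a rough illustration of the "don't take the first answer" habit, here is a minimal sketch using the OpenAI Python client. The model name, prompts, and critique wording are placeholder assumptions for the example, not anything prescribed in the conversation: the script asks for a first draft, then sends a second pass requesting a fact check, a tone adjustment, and a tighter rewrite.

```python
# Minimal sketch: draft once, then iterate instead of accepting the first answer.
# Assumes the OpenAI Python SDK and an API key in OPENAI_API_KEY; model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(messages):
    # Send the running conversation and return the assistant's reply text.
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

history = [{"role": "user", "content": (
    "Act as a B2B content strategist. Draft a 150-word LinkedIn post announcing our "
    "Q3 webinar on AI upskilling for mid-size marketing teams. Tone: practical, no hype."
)}]
draft = ask(history)
history.append({"role": "assistant", "content": draft})

# Second pass: ask the model to look again rather than shipping the first draft.
history.append({"role": "user", "content": (
    "Review your draft before I use it: check the claims for accuracy, flag anything that "
    "sounds biased or generic, tighten the wording, and adjust the tone for a skeptical "
    "CMO audience. Return only the revised post."
)})
revision = ask(history)
print(revision)
```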

Yvette: Part of the problem goes back to "garbage in, garbage out." People don't always know what they want. As Brian Tracy says, "Fuzzy goals get you fuzzy results."

Keith: Is there a difference between asking very specific questions versus general ones? When I do image generation, if I start with a huge, very specific prompt, the model sometimes loses details. I'll start generic and work toward specific. In your experience, is that better?

Yvette: There's nuance and a kind of hierarchy. It's good to start with "act as an expert in X," and the more specific, the better. Instead of "act as a medical expert," say "act as the leading thoracic surgeon at Cedars-Sinai," for example.

That puts the LLM in the right headspace to answer. Hierarchical framing helps it retain details because, yes, it can lose its way.

Yvette: One of my side hustles is prompt engineering for a couple of the big LLMs. A way to stump models is simply by changing the order of requests: they get it right once, then get confused the second time.

Keith: Are many people still in the question-and-answer mindset, treating AI like an advanced search engine?

Yvette: Absolutely. Sometimes that's fine, if you want a simple fact with a few parameters.

But for deeper work or multiple perspectives, you need more detail and a feedback loop: have a conversation.

Keith: Two tips you just gave: (1) after it answers, ask it to try again, especially for writing or production tasks; (2) ask the question a second way and explore beyond simple Q&A.

Yvette: Exactly. We joke it's like an international spice market: never take the first offer.

Push back and you'll get better output.

Keith: Anecdotally, I think a lot of people pick one tool (ChatGPT, Gemini, or Claude) and stick with it. Is there value in exploring multiple tools, or is one enough?

Yvette: It depends.

When you're learning and building AI fluency, one model is fine. It "knows" your tone and history, so it's easy to reference prior work.

Yvette: OpenAI has a huge market share, but there are many models. A friend of mine has hundreds of tabs open just for testing.

Once you're fluent, you can compare models more effectively because you know what to look for. Like testing fishing rods, you don't know what you don't know at first. Start with one, then expand.

Yvette: Also, we don't know where the economics of the tech giants land. If you build deep workflows in one model and it disappears or drastically changes pricing, you're exposed. In business, hedge your bets. Once you have a foundation, create Plan Bs and test alternatives to protect yourself.

Keith: On upskilling: are your clients mostly individuals trying to improve themselves, or companies investing in teams?

Yvette: Mostly companies, especially mid-size organizations, which are more nimble than some enterprises.

Yvette: Individuals are interested too. We sometimes run cohorts through trade associations or networking groups. The advantage of a single-org cohort is collaboration: 10 people from the same company share learnings, which benefits the organization.

Keith: I come from a writing background, so I rarely use AI to write emails; I can nail that task. Others struggle and rely on AI, and that's where a lot of "work slop" shows up. Any recommendations for using AI for email so it doesn't look like AI?

Yvette: I also have a writing background. I usually write a first draft myself, then ask AI: "Here's what I'm trying to do. How can this be better?" It suggests improvements, I select a few, and iterate.

Yvette: For better emails, be clear about your audience and objective. If you need to say, "This is a bad idea" in a corporate, non-hurtful way, tell the model that. Also, ask AI what signals make text sound AI-generated; it can list common giveaways and help filter them out.

We even built a small GPT to help version thought-leadership pieces for different verticals: not to write them, but to customize language and then strip out the "AI-ish" tells.

Keith: I once had ChatGPT "improve" a LinkedIn article I'd written, and I hated the result; it didn't capture my tone. I'll use AI for outlines or key points, but I can usually write a better email myself. The people generating work slop often seem like buzzword lovers.

Yvette: If you give nuanced direction (target audience, desired reaction, outcomes), it can help. Also, not everyone writes naturally well. For people who can articulate intent but struggle with tone, AI can be useful.

Yvette: I had a client early on who said she wasn't very empathetic. A friend had a mishap, and she wanted to send a thoughtful note. AI helped her express what she felt.

How you use AI varies by person; its purpose is to amplify your skills and fill gaps, not be everything to everyone.

Keith: It's interesting because AI often isn't great at empathy or humor, which is why people say managers won't be replaced by AI.

Yvette: Right. Short messages can still be guided. And for humor... I once worked on a project asking AI to write jokes. A year and a half ago, they were... not good.

Keith: Let's talk about under-appreciated capabilities beyond "write this" or "analyze that" or "draw a picture."

Yvette: One underutilized use is problem-solving through expert lenses. Traditionally, you'd read a book (say, Russell Brunson on funnels) and then adapt it. LLMs contain a lot of this world knowledge.

You can pose a business problem, even a small one like "we're not getting enough top-of-funnel leads," and ask AI to convene a panel of experts and present frameworks.

Yvette: You can then say, "Given my business context, combine these perspectives into a plan," drill it into a 90-day roadmap, then weekly tasks and checklists. You can simulate an entire strategy session in a couple of hours.

Keith: My concern with long LLM sessions is the "rabbit hole" effect: like YouTube, you drift from your original intent. I worry about hallucinations or just losing the plot.

Yvette: Fair. Some models (ChatGPT especially) suggest next steps aggressively. I often ignore those and keep the thread focused.

Yes, sometimes you go deeper than you planned; step away, then come back and extract what matters.

Yvette: Another underused capability is solving one-off life tasks. I had a 40-year-old sprinkler system I couldn't remember how to test. I snapped a photo and asked AI. It gave me a 30-second step list that worked immediately.

Same with rarely used software tasks: "How do I add a page in Elementor?" You can save 30 minutes, and a lot of frustration.

Yvette: When GPT-5 launched, Sam Altman mentioned only about 7% of users had tried Deep Research. That blew my mind; it's so good. Try it for anything where you want sourced depth: competitors, a potential revenue stream, a neighborhood, a vacation plan.

It asks a few clarifying questions, works for a bit, and returns a thorough report with links, usually with fewer hallucinated citations than an ad-hoc chat answer.

Keith: Maybe the name "Deep Research" scares people; it sounds academic, or "for professors."

Yvette: You're not wrong. Sometimes engineers name things, and, well... marketing doesn't always get a say in the label.

Keith: We're moving into agentic AI. Are we going to repeat the same mistakes from early generative AI? Can you safely use agents without basic fluency?

Yvette: In my opinion, no. If you aren't AI-fluent, agents are much riskier than chat models.

The opportunity for things to go south is larger.

Yvette: It's early days for agents. Playing with models in a contained way is one thing. But when you connect agents to databases and sensitive information, what could go wrong? A lot. Beyond model failure, you have cybersecurity risks.

Keith: Spoken like an IT security pro. People fear connecting agents to secrets. You don't want someone casually unleashing the KFC recipe.

Yvette: Exactly.

Keith: For someone looking to upskill on agents, what should they do? Keep building general fluency first?

Yvette: Yes. One path is learning to create an AI pilot.

Treat it like product development: iterate, track versions, shadow the agent's output before going live, define pass/fail criteria, and run test batches. Then schedule periodic reviews to ensure it hasn't drifted or started hallucinating.
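
One way to make that product-development framing concrete is a small evaluation harness around the pilot: a test batch, explicit pass/fail checks, and a pass-rate threshold you rerun at each scheduled review to catch drift. The sketch below is a generic illustration, not a method described in the episode; run_agent is a hypothetical stand-in for whatever agent you are piloting, and the test cases and threshold are made-up examples.

```python
# Minimal sketch of a pilot evaluation harness: shadow the agent against known cases,
# apply pass/fail criteria, and gate go-live (or a periodic review) on a pass-rate threshold.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TestCase:
    name: str
    prompt: str                      # input the agent would receive in production
    passes: Callable[[str], bool]    # pass/fail criterion applied to the agent's output

def run_agent(prompt: str) -> str:
    # Hypothetical stand-in for the agent or workflow being piloted; replace with the real call.
    return f"[stub response to: {prompt}] Source: internal CRM export."

TEST_BATCH = [
    TestCase("cites a source", "Summarize last quarter's lead-gen results.",
             lambda out: "http" in out or "Source:" in out),
    TestCase("stays on brand", "Draft a reply to a churn-risk customer.",
             lambda out: "guarantee" not in out.lower()),   # example guardrail
    TestCase("no raw PII", "List the top five accounts by renewal date.",
             lambda out: "@" not in out),                   # crude check for leaked emails
]

PASS_RATE_REQUIRED = 0.9  # made-up go-live threshold

def review(batch=TEST_BATCH) -> bool:
    # Run the whole test batch, print each result, and compare against the threshold.
    results = []
    for case in batch:
        ok = case.passes(run_agent(case.prompt))
        results.append(ok)
        print(f"{'PASS' if ok else 'FAIL'}  {case.name}")
    rate = sum(results) / len(results)
    print(f"Pass rate: {rate:.0%} (required {PASS_RATE_REQUIRED:.0%})")
    return rate >= PASS_RATE_REQUIRED  # rerun at each scheduled review to catch drift

if __name__ == "__main__":
    review()
```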

Yvette: We imagine knowledge workers managing multiple agentic workflows, updating them as models change, rather than doing every deliverable manually.

Keith: Have you seen learners go from clueless to confident by the end of your program?

Yvette: Definitely.

People arrive at different levels; early on, some think, "Yeah, I already do that." As we progress, they lean in. By the end, most say, "I didn't know what I didn't know."

Yvette: Marketers, for example, learn to build an AI pilot and produce a Pilot Handbook, stepping toward a Center of Excellence. Many say, "This is great, but I don't want to build it; at least now I know what to ask the tech team."

Keith: Low-code/no-code and "citizen developers" worry me. I once asked for a pizza-night scheduling system; it returned code and server steps, and I was out.

Yvette: And that's fine. AI is great for a concept or MVP you can show clients: better concept boards than we used to hand-sketch.

Yvette: As a marketer, I know interactive lead magnets outperform static PDFs. I built a 10-question interactive quiz with a progress bar and outcomes in a couple of hours, something that used to take roughly 20 hours with developers. I'm not a coder, but it was just enough HTML to ship.

Keith: Give viewers an "AI homework assignment" to improve fluency, something practical beyond "take a course."

Yvette: First, show yourself that AI already knows your job.

Paste your LinkedIn profile or describe your role ("I'm X at a mid-size Y-type company") and ask it to list your likely deliverables and how it can help you do each better. Then ask it to prioritize those deliverables and estimate time savings.

People are often shocked by how accurately it understands roles. Second, run one Deep Research on anything relevant to your life (college options, a neighborhood, a market landscape) just to experience the depth.

Keith: We're entering the holiday shopping season. Many people are using AI to help with shopping. Could this be the consumer "killer app" or an agentic push?

Yvette: Maybe. Starting with gift lists based on what you know about people is a great use case.

Will full in-chat purchasing (e.g., Walmart integrations) become the thing? Not sure.

Yvette: Even if people learn one use (like shopping), they may not realize everything AI can do, like new phone features most of us never use.

Keith: Exactly.

I'm often a few iPhone generations behind, so I miss features for years. I worry the same happens with AI: people using only 10% of what's possible.

Yvette: I agree. That's why we're out here, like Johnny Appleseed, helping people understand the breadth of capability.

AI can give you time back and improve quality so you can focus on higher-value work. It enhances the human experience, whether that's more time with prospects in sales or better creative outputs.

Yvette: If you have a question about anything in life, ask AI and see where the conversation goes; you'll be surprised.

Keith: And we got a Johnny Appleseed reference!

I live near where he grew up, Leominster, Massachusetts, home of a big Johnny Appleseed festival, so I'm thrilled. Yvette, thanks for the tips and insights.

Yvette: Thanks for having me.

Keith: That's all the time we have for this week's episode.

Be sure to like the video, subscribe to the channel, and add your thoughts below if you're watching on YouTube. Join us every week for new episodes of Today in Tech. I'm Keith Shaw; thanks for watching.