
Planet Python

Last update: November 23, 2025 09:43 PM UTC

November 22, 2025


Daniel Roy Greenfeld

TIL: Default code block languages for mkdocs

When using MkDocs with the Material theme, you can set a default language for code blocks in your mkdocs.yml configuration file. This is particularly useful for code examples that may not have explicit language tags.

markdown_extensions:
  - pymdownx.highlight:
      default_lang: python

You can see what this looks like in practice with Air's API reference for forms here: feldroy.github.io/air/api/forms/. With this configuration, any code block without a specified language defaults to Python syntax highlighting, making documentation clearer and more consistent.

November 22, 2025 12:08 PM UTC


Brett Cannon

Should I rewrite the Python Launcher for Unix in Python?

I want to be upfront that this blog post is for me to write down some thoughts that I have on the idea of rewriting the Python Launcher for Unix from Rust to pure Python. This blog post is not meant to explicitly be educational or enlightening for others, but I figured if I was going to write this down I might as well just toss it online in case someone happens to find it interesting. Anyway, with that caveat out of the way...

I started working on the Python Launcher for Unix in May 2018. At the time I used it as my Rust starter project, and I figured distributing it would be easiest as a single binary, since if I wrote it in Python, how do you bootstrap yourself into launching Python with Python? But in the intervening 7.5 years, a few things have happened:

All of this has come together for me to realize now is the time to reevaluate whether I want to stick with Rust or pivot to using pure Python.

Performance

The first question I need to answer for myself is whether performance is good enough to switch. My hypothesis is that the Python Launcher for Unix is mostly I/O-bound (specifically around file system access), and so using Python wouldn't be a hindrance. To test this, I re-implemented enough of the Python Launcher for Unix in pure Python to make py --version work:
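(Brett's actual 72-line implementation isn't reproduced in this feed. As a rough, hypothetical sketch only, the core of such a re-implementation — scan PATH for pythonX.Y executables and exec the newest one — might look something like this:)

import os
import re
import sys

def find_pythons() -> dict[tuple[int, int], str]:
    # Map (major, minor) -> path for every pythonX.Y found on PATH.
    found: dict[tuple[int, int], str] = {}
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        try:
            names = os.listdir(directory)
        except OSError:
            continue
        for name in names:
            match = re.fullmatch(r"python(\d+)\.(\d+)", name)
            if match:
                version = (int(match[1]), int(match[2]))
                # setdefault keeps the first hit, preserving PATH precedence.
                found.setdefault(version, os.path.join(directory, name))
    return found

def main() -> None:
    pythons = find_pythons()
    newest = pythons[max(pythons)]  # raises KeyError if no Python was found
    # Replace this process with the chosen interpreter.
    os.execv(newest, [newest, *sys.argv[1:]])

if __name__ == "__main__":
    main()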

It only took 72 lines, so it was a quick hack. I compared the Rust version to the Python version on my machine running Fedora 43 by running hyperfine "py --version". If I give Rust an optimistic number by picking its average lower bound and Python a handicap by picking its average upper bound, we get:

So 11x slower for Python. But when the absolute performance is fast enough to let you run the Python Launcher for Unix over 30 times a second, does it actually matter? And you're not about to run the Python Launcher for Unix in some tight loop or even in production (as it's a developer tool), so I don't think that worst-case performance number (on my machine) makes performance a concern in my decision.

Distribution

Right now, you can get the Python Launcher for Unix via:

  1. crates.io
  2. GitHub Releases as tarballs of a single binary, manpage, license file, readme, and Fish shell completions
  3. Various package managers (e.g. Homebrew, Fedora, and Nix)

If I rewrote the Python Launcher for Unix in Python, could I get equivalent distribution channels? Substituting crates.io for PyPI makes that one easy. The various package managers also know how to package Python applications already, so they would take care of the bootstrapping problem of getting Python onto your machine to run the Python Launcher for Unix.

So that leaves what I distribute myself via GitHub Releases. After lamenting on Mastodon that I wished there was an easy, turn-key solution for taking pure Python code and bundling it with a prebuilt Python binary, the conversation made me realize that Briefcase should actually get me what I'm after.

Add in the fact that I'm working towards prebuilt binaries for python.org and it wouldn't even necessarily be an impediment if the Python Launcher for Unix were ever to be distributed via python.org as well. I could imagine some shell script to download Python and then use it to run a Python script to get the Python Launcher for Unix installed on one's machine (if relative paths for shebangs were relative to the script being executed then I could see just shipping an internal copy of Python with the Python Launcher for Unix, but a quick search online suggests such relative paths are relative to the working directory). So I don't see using Python as being a detriment to distribution.

Maximizing the impact of my time

I am a dad to a toddler. That means my spare time is negligible and restricted to nap time (which is shrinking) or the evening (and I can't code past 21:00, else I have really wonky dreams or simply can't fall asleep because my brain won't shut off). I know I should eventually get some spare time back, but according to other parents that's currently measured in years, so this time restriction on working on this fun project is not about to improve in the near to mid future.

This has led me, as of late, to look at how best to use my spare time. I could continue to grow my Rust experience while solving problems, or I could lean into my Python experience and solve more problems in the same amount of time. This matters if I decide that increasing the functionality of the Python Launcher for Unix is more fun for me than getting more Rust experience at this point in my life.

And if I think the feature set is what matters most, then doing it in Python has a greater chance of attracting external contributions from the Python Launcher for Unix's user base. Compare that to now, where there have been 11 human contributors over the project's entire lifetime.

Conclusion?

So have I talked myself into rewriting the Python Launcher for Unix into Python?

November 22, 2025 12:18 AM UTC


Bruno Ponne / Coding The Past

Data Science Quiz For Humanities

Test your skills with this interactive data science quiz covering statistics, Python, R, and data analysis.

The quiz's 15 questions:

  1. Which of the following best describes a z-score?
  2. What is the main advantage of using tidy data principles in R?
  3. In Python, which library is most commonly used for data manipulation?
  4. Which metric is best for evaluating a classification model on imbalanced data?
  5. In a linear regression, what does R² represent?
  6. In historical or humanities datasets, which challenge occurs most frequently?
  7. What does the groupby() function do in pandas?
  8. What is the primary purpose of cross-validation?
  9. Feature engineering refers to:
  10. Which visualization is most appropriate for the distribution of a continuous variable?
  11. A z-score of +2.5 means:
  12. Which is an advantage of using R for statistical analysis?
  13. Normalization in data preprocessing means:
  14. Why may historical datasets be biased?
  15. Which Python function can compute a z-score?

November 22, 2025 12:00 AM UTC


Stéphane Wirtel

Claude Code: How an AI Assistant Saved Me Days of Development

TL;DR

After a week of intensive use of Claude Code during PyCon Ireland and on my personal projects, I am completely blown away by the productivity gains. The tool let me automatically migrate the Python Ireland website from Django 5.0 to 5.2 and Wagtail 6.2 to 7.2, develop a tool for converting scanned books in 5 minutes, and generate complete documentation in a few minutes. Unlike Cursor or Windsurf, Claude Code integrates everywhere (PyCharm, VS Code, Zed, Neovim, the command line), which makes it a real game changer for professional developers.

November 22, 2025 12:00 AM UTC


Armin Ronacher

LLM APIs are a Synchronization Problem

The more I work with large language models through provider-exposed APIs, the more I feel like we have built ourselves into quite an unfortunate API surface area. It might not actually be the right abstraction for what’s happening under the hood. The way I like to think about this problem now is that it’s actually a distributed state synchronization problem.

At its core, a large language model takes text, tokenizes it into numbers, and feeds those tokens through a stack of matrix multiplications and attention layers on the GPU. Using a large set of fixed weights, it produces activations and predicts the next token. If it weren't for temperature (randomization), you could think of it as having the potential to be a much more deterministic system, at least in principle.

As far as the core model is concerned, there’s no magical distinction between “user text” and “assistant text”—everything is just tokens. The only difference comes from special tokens and formatting that encode roles (system, user, assistant, tool), injected into the stream via the prompt template. You can look at the system prompt templates on Ollama for the different models to get an idea.
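To make that concrete, here is a toy sketch of what such a prompt template might do. The marker tokens below are invented for illustration; real templates are model-specific:

MESSAGES = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is a KV cache?"},
]

def render(messages):
    # Fold all roles into one flat stream using special marker tokens;
    # the model itself only ever sees tokens, never a "message" object.
    parts = [f"<|{m['role']}|>{m['content']}<|end|>" for m in messages]
    parts.append("<|assistant|>")  # cue the model to reply as the assistant
    return "".join(parts)

print(render(MESSAGES))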

The Basic Agent State

Let’s ignore for a second which APIs already exist and just think about what usually happens in an agentic system. If I were to have my LLM run locally on the same machine, there is still state to be maintained, but that state is very local to me. You’d maintain the conversation history as tokens in RAM, and the model would keep a derived “working state” on the GPU — mainly the attention key/value cache built from those tokens. The weights themselves stay fixed; what changes per step are the activations and the KV cache.

One further clarification: when I talk about state I don’t just mean the visible token history because the model also carries an internal working state that isn’t captured by simply re-sending tokens. In other words: you can replay the tokens and regain the text content, but you won’t restore the exact derived state the model had built.

From a mental-model perspective, caching means “remember the computation you already did for a given prefix so you don’t have to redo it.” Internally, that usually means storing the attention KV cache for those prefix tokens on the server and letting you reuse it, not literally handing you raw GPU state.
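As a toy illustration of that "remember the prefix work" idea (the real thing stores GPU-side KV tensors, not strings, and lives server-side):

import hashlib

_PREFIX_CACHE: dict[str, str] = {}  # prefix hash -> derived "state"

def prefill(prefix_tokens: list[int]) -> str:
    key = hashlib.sha256(repr(prefix_tokens).encode()).hexdigest()
    if key in _PREFIX_CACHE:
        return _PREFIX_CACHE[key]  # cache hit: skip recomputing the prefix
    state = f"kv-state-{key[:12]}"  # placeholder for expensive GPU prefill
    _PREFIX_CACHE[key] = state
    return state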

There are probably some subtleties to this that I’m missing, but I think this is a pretty good model to think about it.

The Completion API

The moment you’re working with completion-style APIs such as OpenAI’s or Anthropic’s, abstractions are put in place that make things a little different from this very simple system. The first difference is that you’re not actually sending raw tokens around. The way the GPU looks at the conversation history and the way you look at it are on fundamentally different levels of abstraction. While you could count and manipulate tokens on one side of the equation, extra tokens are being injected into the stream that you can’t see. Some of those tokens come from converting the JSON message representation into the underlying input tokens fed into the machine. But you also have things like tool definitions, which are injected into the conversation in proprietary ways. Then there’s out-of-band information such as cache points.

And beyond that, there are tokens you will never see. For instance, with reasoning models you often don’t see any real reasoning tokens, because some LLM providers try to hide as much as possible so that you can’t retrain your own models with their reasoning state. On the other hand, they might give you some other informational text so that you have something to show to the user. Model providers also love to hide search results and how those results were injected into the token stream. Instead, you only get an encrypted blob back that you need to send back to continue the conversation. All of a sudden, you need to take some information on your side and funnel it back to the server so that state can be reconciled on either end.

In completion-style APIs, each new turn requires resending the entire prompt history. The size of each individual request grows linearly with the number of turns, but the cumulative amount of data sent over a long conversation grows quadratically because each linear-sized history is retransmitted at every step. This is one of the reasons long chat sessions feel increasingly expensive. On the server, the model’s attention cost over that sequence also grows quadratically in sequence length, which is why caching starts to matter.
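A quick back-of-the-envelope sketch makes the shape of that growth visible (the per-turn token count here is made up):

TOKENS_PER_TURN = 200  # assumed average tokens added per turn

history = 0      # size of one request (grows linearly with turns)
total_sent = 0   # cumulative tokens sent (grows quadratically)
for turn in range(1, 51):
    history += TOKENS_PER_TURN  # the conversation grows...
    total_sent += history       # ...and the whole history is resent
    if turn % 10 == 0:
        print(f"turn {turn}: request {history} tokens, cumulative {total_sent}")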

The Responses API

One of the ways OpenAI tried to address this problem was to introduce the Responses API, which maintains the conversational history on the server (at least in the version with the saved state flag). But now you’re in a bizarre situation where you’re fully dealing with state synchronization: there’s hidden state on the server and state on your side, but the API gives you very limited synchronization capabilities. To this point, it remains unclear to me how long you can actually continue that conversation. It’s also unclear what happens if there is state divergence or corruption. I’ve seen the Responses API get stuck in ways where I couldn’t recover it. It’s also unclear what happens if there’s a network partition, or if one side got the state update but the other didn’t. The Responses API with saved state is quite a bit harder to use, at least as it’s currently exposed.

Obviously, for OpenAI it’s great because it allows them to hide more behind-the-scenes state that would otherwise have to be funneled through with every conversation message.

State Sync API

Regardless of whether you’re using a completion-style API or the Responses API, the provider always has to inject additional context behind the scenes—prompt templates, role markers, system/tool definitions, sometimes even provider-side tool outputs—that never appears in your visible message list. Different providers handle this hidden context in different ways, and there’s no common standard for how it’s represented or synchronized. The underlying reality is much simpler than the message-based abstractions make it look: if you run an open-weights model yourself, you can drive it directly with token sequences and design APIs that are far cleaner than the JSON-message interfaces we’ve standardized around. The complexity gets even worse when you go through intermediaries like OpenRouter or SDKs like the Vercel AI SDK, which try to mask provider-specific differences but can’t fully unify the hidden state each provider maintains. In practice, the hardest part of unifying LLM APIs isn’t the user-visible messages—it’s that each provider manages its own partially hidden state in incompatible ways.

It really comes down to how you pass this hidden state around in one form or another. I understand that from a model provider’s perspective, it’s nice to be able to hide things from the user. But synchronizing hidden state is tricky, and none of these APIs have been built with that mindset, as far as I can tell. Maybe it’s time to start thinking about what a state synchronization API would look like, rather than a message-based API.

The more I work with these agents, the more I feel like I don’t actually need a unified message API. The core idea of it being message-based in its current form is itself an abstraction that might not survive the passage of time.

Learn From Local First?

There’s a whole ecosystem that has dealt with this kind of mess before: the local-first movement. Those folks spent a decade figuring out how to synchronize distributed state across clients and servers that don’t trust each other, drop offline, fork, merge, and heal. Peer-to-peer sync and conflict-free replicated storage engines exist because “shared state but with gaps and divergence” is a hard problem that nobody could solve with naive message passing. Their architectures explicitly separate canonical state, derived state, and transport mechanics — exactly the kind of separation missing from most LLM APIs today.

Some of those ideas map surprisingly well to models: KV caches resemble derived state that could be checkpointed and resumed; prompt history is effectively an append-only log that could be synced incrementally instead of resent wholesale; provider-side invisible context behaves like a replicated document with hidden fields.

At the same time though, if the remote state gets wiped because the remote site doesn’t want to hold it for that long, we would want to be in a situation where we can replay it entirely from scratch — which for instance the Responses API today does not allow.

Future Unified APIs

There’s been plenty of talk about unifying message-based APIs, especially in the wake of MCP (Model Context Protocol). But if we ever standardize anything, it should start from how these models actually behave, not from the surface conventions we’ve inherited. A good standard would acknowledge hidden state, synchronization boundaries, replay semantics, and failure modes — because those are real issues. There is always the risk that we rush to formalize the current abstractions and lock in their weaknesses and faults. I don’t know what the right abstraction looks like, but I’m increasingly doubtful that the status-quo solutions are the right fit.

November 22, 2025 12:00 AM UTC

November 21, 2025


Trey Hunner

Python Morsels Lifetime Access Sale

If you code in Python regularly, you’re already learning new things every day.

You hit a wall, or something breaks. Then you search around, spend some hours on Stack Overflow, and eventually, you figure it out.

But this kind of learning is unstructured. It’s reactive, instead of intentional.

You fix the problem at hand, but the underlying gaps in your knowledge remain unaddressed.

A more structured way to improve your Python skills

Python Morsels gives you a structured, hands-on way to improve your Python skills through weekly practice:

Python Morsels is a subscription service because I’m adding new learning resources almost every week.

But through December 1st, you can get lifetime access for a one-time payment.

How it works

When you sign up for Python Morsels, you’ll choose your current Python skill level, from novice to advanced.

Based on your skill level, each Monday I’ll send you a personalized routine with:

Think of Python Morsels as a gym for your Python skills: you come in for quick training sessions, put in the reps, and make a little more progress each time.

All these resources are accessible to you forever, and with lifetime access you’ll never pay another subscription fee.

What Python Morsels includes

Python Morsels has grown a lot over the past 8 years. Currently, Python Morsels has:

I’ll be sending you personalized recommendations every week, but you can use these resources however they fit your routine: as learning guides, hands-on practice sessions, quick cheatsheets, long-term reference material, or quick Python workouts.

In addition to this, Python Morsels also gives you access to:

Because Python Morsels runs as an active subscription service, I’m always adding new screencasts, new exercises, and updated material on a weekly or monthly cycle. I also keep everything up-to-date with each new Python release, incorporating newly added features and retiring end-of-life’d Python versions.

Lock in lifetime access

Python Morsels usually costs $240/year but you can get lifetime access through December 1st for a one-time payment. I’ve only offered lifetime access once before in 8 years.

If you’ve been on the fence about subscribing to Python Morsels or want to invest in building a daily learning habit, this is a good time to do it.

If you have questions about the sale, please comment below or email me.

November 21, 2025 10:42 PM UTC


Tryton News

Security Release for issue #14366

Cédric Krier has found that trytond does not enforce access rights for data export (since version 6.0).

Impact

CVSS v3.0 Base Score: 6.5

Workaround

There is no workaround.

Resolution

All affected users should upgrade trytond to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

1 post - 1 participant

Read full topic

November 21, 2025 03:00 PM UTC

Security Release for issue #14363

Abdulfatah Abdillahi has found that sao does not escape completion values. The content of a completion is generally the record name, which may be edited in many ways depending on the model. The content may include some JavaScript, which is then executed in the same context as sao, giving access to sensitive data such as the session.

Impact

CVSS v3.0 Base Score: 7.3

Workaround

There is no general workaround.

Resolution

All affected users should upgrade sao to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

1 post - 1 participant

Read full topic

November 21, 2025 03:00 PM UTC

Security Release for issue #14364

Mahdi Afshar has found that trytond does not enforce access rights for the route of the HTML editor (since version 6.0).

Impact

CVSS v3.0 Base Score: 7.1

Workaround

A possible workaround is to block access to the HTML editor.

Resolution

All affected users should upgrade trytond to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

1 post - 1 participant

Read full topic

November 21, 2025 03:00 PM UTC

Security Release for issue #14354

Mahdi Afshar and Abdulfatah Abdillahi have found that trytond sends the trace-back to the clients for unexpected errors. This trace-back may leak information about the server setup.

Impact

CVSS v3.0 Base Score: 4.3

Workaround

A possible workaround is to configure an error handler which would remove the trace-back from the response.

Resolution

All affected users should upgrade trytond to the latest version.

Affected versions per series:

Non affected versions per series:

Reference

Concerns?

Any security concerns should be reported on the bug-tracker at https://bugs.tryton.org/ with the confidential checkbox checked.

2 posts - 2 participants

Read full topic

November 21, 2025 03:00 PM UTC


Django Weblog

DSF member of the month - Akio Ogasahara

For November 2025, we welcome Akio Ogasahara as our DSF member of the month! ⭐

Akio is a technical writer and systems engineer. He contributed to the Japanese translation for many years. He has been a DSF member since June 2025. You can learn more about Akio by visiting Akio's X account and his GitHub Profile.

Let’s spend some time getting to know Akio better!

Can you tell us a little about yourself (hobbies, education, etc.)

I was born in 1986 in Rochester, Minnesota, to Japanese parents, and I’ve lived in Japan since I was one. I’ve been fascinated by machines for as long as I can remember. I hold a master’s degree in mechanical engineering. I’ve worked as a technical writer and a software PM, and I’m currently in QA at a Japanese manufacturer.

I'm curious, where does your nickname “libratech” come from?

I often used “Libra” as a handle because the symbol of Libra—a balanced scale—reflects a value I care deeply about: fairness in judgment. I combined that with “tech,” from “tech writer,” to create “libratech.”

How did you start using Django?

Over ten years ago, I joined a hands-on workshop using a Raspberry Pi to visualize sensor data, and we built the dashboard with Django. That was my first real experience.

What other framework do you know and if there is anything you would like to have in Django if you had magical powers?

I’ve used Flask and FastAPI. If I could wish for anything, I’d love “one-click” deployment that turns a Django project into an ultra-lightweight app running on Cloudflare Workers.

What projects are you working on now?

As a QA engineer, I’m building Pandas pipelines for quality-data cleansing and creating BI dashboards.

What are you learning about these days?

I’m studying for two Japanese certifications: the Database Specialist exam and the Quality Control Examination (QC Kentei).

Which Django libraries are your favorite (core or 3rd party)?

Django admin, without question. In real operations, websites aren’t run only by programmers—most teams eventually need CRM-like capabilities. Django admin maps beautifully to that practical reality.

What are the top three things in Django that you like?

You have contributed a lot to the Japanese documentation. What made you start contributing to the Japanese translation in the first place?

I went through several joint surgeries and suddenly had a lot of time. I’d always wanted to contribute to open source, but I knew my coding skills weren’t my strongest asset. I did, however, have years of experience writing manuals—so translation felt like a meaningful way to help.

Do you have any advice for people who could be hesitant to contribute to translation of Django documentation?

Translation has fewer strict rules than code contributions, and you can start simply by creating a Transifex account. If a passage feels unclear, improve it! And if you have questions, the Django-ja translation team is happy to help on our Discord.

I know you have some interest in AI as a technical writer, do you have an idea on how Django could evolve with AI?

Today’s AI is excellent at working with existing code—spotting N+1 queries or refactoring SQL without changing behavior. But code written entirely by AI often has weak security. That’s why solid unit tests and Django’s strong security guardrails will remain essential: they let us harness AI’s creativity safely.

Django is celebrating its 20th anniversary, do you have a nice story to share?

The surgeries were tough, but they led me to documentation translation, which reconnected me with both English and Django. I’m grateful for that path.

What are your hobbies or what do you do when you’re not working?

Outside of computers, I enjoy playing drums in a band and watching musicals and stage plays! 🎵

Is there anything else you’d like to say?

If you ever visit Japan, of course sushi and ramen are great—but don’t miss the sweets and ice creams you can find at local supermarkets and convenience stores! They’re inexpensive, come in countless varieties, and I’m sure you’ll discover a new favorite! 🍦


Thank you for doing the interview, Akio!

November 21, 2025 01:00 PM UTC


Real Python

The Real Python Podcast – Episode #275: Building a FastAPI Application & Exploring Python Concurrency

What are the steps to get started building a FastAPI application? What are the different types of concurrency available in Python? Christopher Trudeau is back on the show this week, bringing another batch of PyCoder's Weekly articles and projects.



November 21, 2025 12:00 PM UTC


Armin Ronacher

Agent Design Is Still Hard

I felt like it might be a good time to write about some new things I’ve learned. Most of this is going to be about building agents, with a little bit about using agentic coding tools.

TL;DR: Building agents is still messy. SDK abstractions break once you hit real tool use. Caching works better when you manage it yourself, but differs between models. Reinforcement ends up doing more heavy lifting than expected, and failures need strict isolation to avoid derailing the loop. Shared state via a file-system-like layer is an important building block. Output tooling is surprisingly tricky, and model choice still depends on the task.

Which Agent SDK To Target?

When you build your own agent, you have the choice of targeting an underlying SDK like the OpenAI SDK or the Anthropic SDK, or you can go with a higher-level abstraction such as the Vercel AI SDK or Pydantic AI. The choice we made a while back was to adopt the Vercel AI SDK but only the provider abstractions, and to basically drive the agent loop ourselves. At this point we would not make that choice again. There is absolutely nothing wrong with the Vercel AI SDK, but when you are trying to build an agent, two things happen that we originally didn’t anticipate:

The first is that the differences between models are significant enough that you will need to build your own agent abstraction. We have not found any of the solutions from these SDKs that build the right abstraction for an agent. I think this is partly because, despite the basic agent design being just a loop, there are subtle differences based on the tools you provide. These differences affect how easy or hard it is to find the right abstraction (cache control, different requirements for reinforcement, tool prompts, provider-side tools, etc.). Because the right abstraction is not yet clear, using the original SDKs from the dedicated platforms keeps you fully in control. With some of these higher-level SDKs you have to build on top of their existing abstractions, which might not be the ones you actually want in the end.

We also found it incredibly challenging to work with the Vercel SDK when it comes to dealing with provider-side tools. The attempted unification of messaging formats doesn’t quite work. For instance, the web search tool from Anthropic routinely destroys the message history with the Vercel SDK, and we haven’t yet fully figured out the cause. Also, in Anthropic’s case, cache management is much easier when targeting their SDK directly instead of the Vercel one. The error messages when you get things wrong are much clearer.

This might change, but right now we would probably not use an abstraction when building an agent, at least until things have settled down a bit. The benefits do not yet outweigh the costs for us.

Someone else might have figured it out. If you’re reading this and think I’m wrong, please drop me a mail. I want to learn.

Caching Lessons

The different platforms have very different approaches to caching. A lot has been said about this already, but Anthropic makes you pay for caching. It makes you manage cache points explicitly, and this really changes the way you interact with it from an agent engineering level. I initially found the manual management pretty dumb. Why doesn’t the platform do this for me? But I’ve fully come around and now vastly prefer explicit cache management. It makes costs and cache utilization much more predictable.

Explicit caching allows you to do certain things that are much harder otherwise. For instance, you can split off a conversation and have it run in two different directions simultaneously. You also have the opportunity to do context editing. The optimal strategy here is unclear, but you clearly have a lot more control, and I really like having that control. It also makes it much easier to understand the cost of the underlying agent. You can assume much more about how well your cache will be utilized, whereas with other platforms we found it to be hit and miss.

The way we do caching in the agent with Anthropic is pretty straightforward. One cache point is after the system prompt. Two cache points are placed at the beginning of the conversation, where the last one moves up with the tail of the conversation. And then there is some optimization along the way that you can do.
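As a sketch of what that placement can look like against the Anthropic SDK (the model name and prompt text are placeholders, not production code):

import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are an agent that ...",
            # Cache point 1: right after the (static) system prompt.
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "Task description ...",
                    # Cache point 2: near the start of the conversation;
                    # the trailing cache point moves up as the tail grows.
                    "cache_control": {"type": "ephemeral"},
                },
            ],
        },
    ],
)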

Because the system prompt and the tool selection now have to be mostly static, we feed a dynamic message later to provide information such as the current time. Otherwise, this would trash the cache. We also leverage reinforcement during the loop much more.

Reinforcement In The Agent Loop

Every time the agent runs a tool you have the opportunity to not just return data that the tool produces, but also to feed more information back into the loop. For instance, you can remind the agent about the overall objective and the status of individual tasks. You can also provide hints about how the tool call might succeed when a tool fails. Another use of reinforcement is to inform the system about state changes that happened in the background. If you have an agent that uses parallel processing, you can inject information after every tool call when that state changed and when it is relevant for completing the task.
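A sketch of that pattern (the state keys and message shapes here are assumptions, not any specific SDK’s format):

def run_tool_with_reinforcement(tool, args, state, history):
    result = tool(**args)
    history.append({"role": "tool", "content": result})
    # Inject a reinforcement message right after the tool result.
    history.append({
        "role": "user",  # synthetic; never shown to the human
        "content": (
            f"Reminder: the overall objective is {state['objective']}. "
            f"Task status: {state['task_status']}."
        ),
    })
    return result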

Sometimes it’s enough for the agent to self-reinforce. In Claude Code, for instance, the todo write tool is a self-reinforcement tool. All it does is take the list of tasks the agent thinks it should do and echo back what came in; it really doesn’t do anything else. But that is enough to drive the agent forward better than if the task and subtasks were only given at the beginning of the context and too much has happened in the meantime.
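Reduced to code, such an echo tool is almost embarrassingly small (a sketch in that spirit, not Claude Code’s actual implementation):

def todo_write(tasks: list[str]) -> str:
    # Echoing the plan back is the entire trick: it restates the tasks
    # late in the context, where the model weights them more heavily.
    return "\n".join(f"- [ ] {task}" for task in tasks)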

We also use reinforcements to inform the system if the environment changed during execution in a way that’s problematic for the agent. For instance, if our agent fails and retries from a certain step forward but the recovery operates off broken data, we inject a message informing it that it might want to back off a couple of steps and redo an earlier step.

Isolate Failures

If you expect a lot of failures during code execution, there is an opportunity to hide those failures from the context. This can happen in two ways. One is to run tasks that might require iteration individually. You would run them in a subagent until they succeed and only report back the success, plus maybe a brief summary of approaches that did not work. It is helpful for an agent to learn about what did not work in a subtask because it can then feed that information into the next task to hopefully steer away from those failures.

The second option doesn’t exist in all agents or foundation models, but with Anthropic you can do context editing. So far we haven’t had a lot of success with context editing, but we believe it’s an interesting thing we would love to explore more. We would also love to learn if people have success with it. What is interesting about context editing is that you should be able to preserve tokens for further down the iteration loop. You can take out of the context certain failures that didn’t drive towards successful completion of the loop, but only negatively affected certain attempts during execution. But as with the point I made earlier: it is also useful for the agent to understand what didn’t work, but maybe it doesn’t require the full state and full output of all the failures.

Unfortunately, context editing will automatically invalidate caches. There is really no way around it. So it can be unclear when the trade-off of doing that compensates for the extra cost of trashing the cache.

Sub Agents / Sub Inference

As I mentioned a couple of times on this blog already, most of our agents are based on code execution and code generation. That really requires a common place for the agent to store data. Our choice is a file system—in our case a virtual file system—but that requires different tools to access it. This is particularly important if you have something like a subagent or subinference.

You should try to build an agent that doesn’t have dead ends. A dead end is where a task can only continue executing within the sub-tool that you built. For instance, you might build a tool that generates an image, but is only able to feed that image back into one more tool. That’s a problem because you might then want to put those images into a zip archive using the code execution tool. So there needs to be a system that allows the image generation tool to write the image to the same place where the code execution tool can read it. In essence, that’s a file system.

Obviously it has to go the other way around too. You might want to use the code execution tool to unpack a zip archive and then go back to inference to describe all the images so that the next step can go back to code execution and so forth. The file system is the mechanism that we use for that. But it does require tools to be built in a way that they can take file paths to the virtual file system to work with.

So basically an ExecuteCode tool would have access to the same file system as the RunInference tool which could take a path to a file on that same virtual file system.
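A minimal sketch of that shared-file-system idea (all names are invented; a real implementation would need directories, metadata, and sandboxing):

class VirtualFS:
    def __init__(self) -> None:
        self._files: dict[str, bytes] = {}

    def write(self, path: str, data: bytes) -> str:
        self._files[path] = data
        return path  # hand the path on to the next tool

    def read(self, path: str) -> bytes:
        return self._files[path]

# An image tool writes where the code-execution tool can later read:
#   path = vfs.write("/out/cat.png", png_bytes)
#   execute_code(vfs, f"zip images.zip {path}")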

The Use Of An Output Tool

One interesting thing about how we structured our agent is that it does not represent a chat session. It will eventually communicate something to the user or the outside world, but all the messages that it sends in between are usually not revealed. The question is: how does it create that message? We have one tool which is the output tool. The agent uses it explicitly to communicate to the human. We then use a prompt to instruct it when to use that tool. In our case the output tool sends an email.

But that turns out to pose a few other challenges. One is that it’s surprisingly hard to steer the wording and tone of that output tool compared to just using the main agent loop’s text output as the mechanism to talk to the user. I cannot say why this is, but I think it’s probably related to how these models are trained.

One attempt that didn’t work well was to have the output tool run another quick LLM like Gemini 2.5 Flash to adjust the tone to our preference. But this increases latency and actually reduces the quality of the output. In part, I think the model just doesn’t word things correctly and the subtool doesn’t have sufficient context. Providing more slices of the main agentic context into the subtool makes it expensive and also didn’t fully solve the problem. It also sometimes reveals information in the final output that we didn’t want to be there, like the steps that led to the end result.

Another problem with an output tool is that sometimes the agent just doesn’t call it. One of the ways we force this is by remembering whether the output tool was called. If the loop ends without the output tool, we inject a reinforcement message to encourage it to use the output tool.
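In pseudocode-ish Python, that enforcement looks something like this (loop_step and inject_message stand in for real agent hooks):

def run_agent(loop_step, inject_message, max_steps=50):
    """loop_step() -> (done, used_output_tool); both are assumed hooks."""
    output_sent = False
    for _ in range(max_steps):
        done, used_output_tool = loop_step()
        output_sent = output_sent or used_output_tool
        if not done:
            continue
        if output_sent:
            return
        # The loop finished silently: remind it to use the output tool
        # and let it keep going instead of stopping here.
        inject_message(
            "You finished without communicating a result. "
            "Call the output tool to notify the user."
        )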

Model Choice

Overall our choices for models haven’t dramatically changed so far. I think Haiku and Sonnet are still the best tool callers available, so they make for excellent choices in the agent loop. They are also somewhat transparent with regards to what the RL looks like. The other obvious choices are the Gemini models. We so far haven’t found a ton of success with the GPT family of models for the main loop.

For the individual sub-tools, which in part might also require inference, our current choice is Gemini 2.5 if you need to summarize large documents or work with PDFs and things like that. That is also a pretty good model for extracting information from images, in particular because the Sonnet family of models likes to run into a safety filter which can be annoying.

There’s also probably the very obvious realization that token cost alone doesn’t define how expensive an agent is. A better tool caller will do the job in fewer tokens. There are some models available today that are cheaper than Sonnet, but they are not necessarily cheaper in a loop.

But all things considered, not that much has changed in the last couple of weeks.

Testing and Evals

We find testing and evals to be the hardest problem here. This is not entirely surprising, but the agentic nature makes it even harder. Unlike prompts, you cannot just do the evals in some external system because there’s too much you need to feed into it. This means you want to do evals based on observability data or instrumenting your actual test runs. So far none of the solutions we have tried have convinced us that they found the right approach here. Unfortunately, I have to report that at the moment we haven’t found something that really makes us happy. I hope we’re going to find a solution for this because it is becoming an increasingly frustrating aspect of building an agent.

Coding Agent Updates

As for my experience with coding agents, not really all that much has changed. The main new development is that I’m trialing Amp more. In case you’re curious why: it’s not that it’s objectively a better agent than what I’m using, but I really quite like the way they’re thinking about agents from what they’re posting. The interactions of the different subagents, like the Oracle, with the main loop are beautifully done, and not many other harnesses do this today. It’s also a good way for me to validate how different agent designs work. Amp, similar to Claude Code, really feels like a product built by people who also use their own tool. I do not feel every other agent in the industry does this.

Quick Stuff I Read And Found

That’s just a random assortment of things that I feel might also be worth sharing:

November 21, 2025 12:00 AM UTC

November 20, 2025


Brett Cannon

The varying strictness of TypedDict

I was writing some code where I was using httpx.get() and its params parameter. I decided to use a TypedDict for the dictionary I was passing as the argument since it was for a REST API, where the potential keys were fully known. I then ran Pyrefly over my code and got an unexpected error about how "object" is not a subtype of "str". I had no object in my TypedDict, so I didn't understand what was going on. I tried Pyright and it also failed. I then tried ty and it passed! What?! I know ty takes a less strict approach to typing to support a more gradual approach, so I figured there was a strict typing thing I was doing wrong. I did some digging and I found out that a new feature of TypedDict solves the issue for me, and so I figured I would share what I learned.

Starting in Python 3.15 (and in typing-extensions today), there are two dimensions to how TypedDict treats keys and their existence. The first dimension is whether the specified keys in a TypedDict are all required or not (controlled by the total argument, or by Required and NotRequired on a per-key basis). This represents whether every key specified in your TypedDict must be in the dictionary or not. So if you have a TypedDict of:

import typing_extensions

class OptionalOpen(typing_extensions.TypedDict, total=False):
    spam: str

it means the "spam" key is optional. To make it required you just set total=True or spam: Required[str]:

class RequiredOpen(typing_extensions.TypedDict, total=True):
    spam: str

This concept has been around since Python 3.8 when TypedDict was introduced, with Required and NotRequired added in Python 3.11.

But starting in Python 3.15, a second dimension has been introduced that affects whether the TypedDict is closed. By default, a dictionary that is typed to a TypedDict can have any optional keys that it wants. So with either of our example TypedDicts above, you could have any number of extra keys, each with any value. So what is a type checker to do if you reference some key that isn't defined by the TypedDict? Since the arbitrary keys are legal, you assume the "worst": that the value for the key is object, as that's the base class of everything.

So, let's say you have a function that takes a Mapping of str keys and str values:

import collections.abc

def func(data: collections.abc.Mapping[str, str]) -> None:
    print(data["spam"])

It turns out that if you try to pass in a dictionary that is typed to either of our TypedDict examples you get a type failure like this (this is from Pyright):

/home/brett/py/typeddict_typing.py
  /home/brett/py/typeddict_typing.py:26:6 - error: Argument of type "OptionalOpen" cannot be assigned to parameter "data" of type "Mapping[str, str]" in function "func"
    "OptionalOpen" is not assignable to "Mapping[str, str]"
      Type parameter "_VT_co@Mapping" is covariant, but "object" is not a subtype of "str"
        "object" is not assignable to "str" (reportArgumentType)

This happens because Mapping[str, str] only accepts values of str, but with our TypedDict there is the possibility of some unspecified key having a value of object. As such, e.g. Pyright complains that you can't use an object where str is expected, since you can't substitute anything that inherits from object for a str (that's what the variance bit is all about in that error message).

So how do you solve this? You say the TypedDict cannot have any keys that are not specified; it's closed via the closed argument introduced in PEP 728 (currently there are no docs for this in Python 3.15 even though it's implemented):

class OptionalClosed(typing_extensions.TypedDict, total=False, closed=True):
    spam: str

With that argument you tell the type checkers that unless a key is specified in the TypedDict, the key isn't allowed to exist. That means our example TypedDict will only ever have keys that have a str value since we only have one possible key and its type is str. As such, that makes it a Mapping[str, str] since the only key it can ever have has a value type of str.

Another way to make this work is with the extra_items parameter that also came from PEP 728. What that parameter lets you do is specify the value type for any keys that are not defined by the TypedDict:

class RequiredExtras(typing_extensions.TypedDict, extra_items=str):
    spam: str

So now any dictionary that is typed to this TypedDict will be presumed to have str as the value type for any key that isn't spam. That then means our TypedDict supports the Mapping[str, str] type, as the only defined key has a str value and we have said any other key will have a value type of str.
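To tie it together, here's a small usage sketch (with hypothetical values) showing that a dictionary with extra string-valued keys now type checks against func() from earlier:

data: RequiredExtras = {"spam": "eggs", "lovely": "ham"}
func(data)  # OK: every possible key is known to map to a str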

November 20, 2025 09:18 PM UTC

November 19, 2025


Django Weblog

Twenty years of Django releases

On November 16th 2005, Django co-creator Adrian Holovaty announced the first ever Django release, Django 0.90. Twenty years later, here we are today shipping the first release candidate of Django 6.0 🚀.

Since we’re celebrating Django’s 20th birthday this year, here are a few release-related numbers that represent Django’s history:

This is what decades’ worth of a stable framework looks like. Expect more gradual improvements and bug fixes over the next twenty years’ worth of releases. And if you like this kind of data, check out the State of Django 2025 report by JetBrains, with lots of statistics on our ecosystem (and there’s a Get PyCharm Pro with 30% Off & Support Django offer).


Support Django

If you or your employer counts on Django’s 20 years of stability, consider whether you can support the project via donations to our non-profit Django Software Foundation.

Once you’ve done it, post with #DjangoBirthday and tag us on Mastodon / on Bluesky / on X / on LinkedIn so we can say thank you!


Of our US $300,000.00 goal for 2025, as of November 19th, 2025, we are at:

  • 58.7% funded
  • $176,098.60 donated

Donate to support Django

November 19, 2025 03:27 PM UTC


Real Python

Build a Python MCP Client to Test Servers From Your Terminal

Building an MCP client in Python can be a good option when you’re coding MCP servers and want a quick way to test them. In this step-by-step project, you’ll build a minimal MCP client for the command line. It’ll be able to connect to an MCP server through the standard input/output (stdio) transport, list the server’s capabilities, and use the server’s tools to feed an AI-powered chat.

By the end of this tutorial, you’ll understand that:

  • You can build an MCP client app for the command line using the MCP Python SDK and argparse.
  • You can list a server’s capabilities by calling .list_tools(), .list_prompts(), and .list_resources() on a ClientSession instance, as sketched after this list.
  • You can use the OpenAI Python SDK to integrate MCP tool responses into an AI-powered chat session.
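As a rough sketch (assuming the mcp package’s stdio transport helpers and a hypothetical server script at server.py), capability discovery can look like this:

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical server launch command; any stdio MCP server works here.
server = StdioServerParameters(command="python", args=["server.py"])

async def main() -> None:
    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])

asyncio.run(main())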

Next, you’ll move through setup, client implementation, capability discovery, chat handling, and packaging to test MCP servers from your terminal.

Prerequisites

To get the most out of this coding project, you should have some previous knowledge of how to manage a Python project with uv. You should also know the basics of working with the asyncio and argparse libraries from the standard library.

To satisfy these knowledge requirements, you can take a look at the following resources:

Familiarity with OpenAI’s Python API, openai, will also be helpful because you’ll use this library to power the chat functionality of your MCP client. You’ll also use the Model Context Protocol (MCP) Python SDK.

Don’t worry if you don’t have all of the prerequisite knowledge before starting this tutorial—that’s completely okay! You’ll learn through the process of getting your hands dirty as you build the project. If you get stuck, then take some time to review the resources linked above. Then, get back to the code.

You’ll also need an MCP server to try your client as you build it. Don’t worry if you don’t have one available—you can use the server provided in step 2.

In this tutorial, you won’t get into the details of creating MCP servers. To learn more about this topic, check out the Python MCP Server: Connect LLMs to Your Data tutorial. Finally, you can download the project’s source code and related files by clicking the link below.

Get Your Code: Click here to download the free sample code you’ll use to build a Python MCP client to test servers from your terminal.

Take the Quiz: Test your knowledge with our interactive “Build a Python MCP Client to Test Servers From Your Terminal” quiz. You’ll receive a score upon completion to help you track your learning progress.

Step 1: Set Up the Project and the Environment

To manage your MCP client project, you’ll use uv, a command-line tool for Python project management. If you don’t have this tool on your current system, then it’s worth checking out the Managing Python Projects With uv: An All-in-One Solution tutorial.

Note: If you prefer not to use uv, then you can use a combination of alternative tools such as pyenv, venv, pip, or poetry.

Once you have uv or another tool set up, go ahead and open a terminal window. Then, move to a directory where you typically store your projects. From there, run the following commands to scaffold and initialize a new mcp-client/ project:

Shell
$ uv init mcp-client
$ cd mcp-client/
$ uv add mcp openai

The first command creates a new Python project in an mcp-client/ directory. The resulting directory will have the following structure:

mcp-client/
├── .git/
├── .gitignore
├── .python-version
├── README.md
├── main.py
└── pyproject.toml

First, you have the .git/ directory and the .gitignore file, which will help you version-control the project.

The .python-version file contains the default Python version for the current project, which is the version of the interpreter you’re currently using. It tells uv which Python version to use when creating a dedicated virtual environment for the project.

Next, you have an empty README.md file that you can use to provide basic documentation for your project. The main.py file provides a Python script that you can optionally use as the project’s entry point. You won’t use this file in this tutorial, so feel free to remove it.

Finally, you have the pyproject.toml file, which you’ll use to prepare your project for building and distribution.

Read the full article at https://realpython.com/python-mcp-client/ »



November 19, 2025 02:00 PM UTC


PyCharm

At JetBrains, we love seeing the developer community grow and thrive. That’s why we support open-source projects that make a real difference — the ones that help developers learn, build, and create better software together. We’re proud to back open-source maintainers with free licenses and to contribute to initiatives that strengthen the ecosystem and the people behind it.

In this post, we highlight five open‑source projects from different ecosystems, written in established languages like Python and JavaScript or fast‑growing ones like Rust. Different as they are, each shares the same goal: elevating the developer experience. Together, they show how the right tools boost productivity and make workflows more enjoyable.

Ratatui

Born as the community-driven successor to the discontinued tui-rs library, Ratatui brings elegance to terminal UIs. It’s modular, ergonomic, and designed to help developers build interactive dashboards, widgets, and even embedded interfaces that go beyond the terminal.

JetBrains IDEs help me focus on the code rather than the tooling. They’re self-contained, so I don’t need to configure much to get started – they just work. With powerful code highlighting, automatic fixes, refactorings, and structural search, I can easily jump around the codebase and make edits.

— Orhun Parmaksız, Ratatui Core Maintainer

The upcoming 0.30.0 release focuses on modularity, splitting the main crate into smaller, independently usable packages. This change simplifies maintenance and makes it easier to use widgets in other contexts. And with new no_std support, Ratatui is expanding to power a wide range of use cases beyond the terminal.

Django

If Ratatui brings usability to the terminal, Django brings it to the web. Originally created in 2003 to meet both fast-paced newsroom deadlines and the demands of experienced developers, Django remains the go-to framework for “perfectionists with deadlines”. It eliminates repetitive tasks, enforces clean, pragmatic design, and provides built-in solutions for security, scalability, and database management – helping developers write less code and achieve more.

JetBrains IDEs, especially PyCharm, boost productivity with built-in Django support – including project templates, automatic settings detection, and model-to-database migrations – as well as integrated debugging and testing tools that simplify finding and fixing issues. The version control integration also makes it easier for contributors to refine and polish their work.

— Sarah Boyce, Django Fellow

Backed by a thriving global community, Django’s roadmap includes composite primary key support, built-in CSP integration, and a focus on making Django accessible by default. Every eight-month release delivers incremental improvements while maintaining backward compatibility – clear proof that long-term stability and innovation can coexist.

JHipster

Both Django and JHipster help developers move fast, but they take different paths. JHipster began as the “anti-mullet stack” – serious in the back, party in the front – created to help developers quickly bootstrap full-stack applications with Spring on the backend and Angular.js on the frontend. Today, it’s still one of the most comprehensive open-source generators, offering a complete full-stack solution with built-in security, performance, and best practices.

JHipster has always been about great productivity and great tooling, so naturally, we’ve always been IntelliJ IDEA fans – we even have our own JHipster IntelliJ IDEA plugin! What I love most is the clean UI, the performance, and all the plugins that make my life so much easier. I use Maven and Docker support all the time, and they’re both absolutely top-notch.

— Julien Dubois, JHipster Creator

The project is now split into two teams – JHipster Classic, which focuses on the original full-stack generator written in JavaScript, and JHipster Lite, which develops a modernized, DDD-oriented version written in Java and targeted primarily at the backend. This structure allows the community to experiment more freely and attract new contributors.

As AI-assisted generation evolves, JHipster’s mission remains the same: empowering developers with the latest cutting-edge technology and a true full-stack approach.

Biome

Once the structure is in place, consistency becomes the next challenge. That’s where Biome, a modern, all-in-one toolchain for maintaining web projects, comes in. It supports every major web language and maintains a consistent experience between the CLI and the editor. The goal of its creators was simple: make a tool that can handle everything from development to production, with fewer dependencies, less setup time, faster CI runs, and clear, helpful diagnostics.

I’m a long-term user of JetBrains IDEs! RustRover has greatly improved since launch – its debugging features and new JavaScript module mean I can maintain all Biome projects, even our Astro-based website, in a single IDE. It’s great that JetBrains really listens to users and their feedback.

— Emanuele Stoppa, Biome Creator

Biome’s roadmap includes adding Markdown support, type inference, .d.ts file generation, JSDoc support, and embedded-language support. As a community-led project, Biome welcomes contributions of all kinds – every bit of help makes a difference.

Vuestic UI

When it’s time to polish the frontend, Vuestic UI takes over. This open-source project focuses on accessibility, theming, and a delightful developer experience. Built for Vue 3, it offers a flexible, easy-to-use component library that scales effortlessly from quick prototypes to enterprise-grade dashboards.

The right development environment makes a huge difference when building complex open-source tools like Vuestic UI and Vuestic Admin. Our team relies on JetBrains IDEs every day for their best-in-class refactoring tools that let us make bold changes with confidence, fast and reliable code navigation, and rock-solid performance. Most of what we need works right out of the box – no extra plugins or setup required. For us, JetBrains isn’t just a preference – it’s a productivity multiplier.

— Maxim Kobetz, Senior Vue.js Developer

After 12 years in frontend development, WebStorm – along with IntelliJ IDEA and PyCharm – has always been my trusted toolkit. Even now, when I’m not coding every day, I know I can rely on WebStorm for quick tweaks – every update feels smooth and never disrupts my workflow. It’s intuitive, beautiful, and just works the way I expect it to. I know switching IDEs is always a time sink, but with JetBrains, it’s absolutely worth it – you’ll never want to switch again.

— Anastasiia Zvenigorodskaia, Community Manager at Vuestic UI & Vuestic Admin

These projects showcase a common truth: Great developer experience happens when tools get out of your way. With JetBrains IDEs enhancing everything from code navigation to collaboration, these teams turn ideas into usable, elegant tools.

Explore these projects, contribute if you can, or start your own! RustRover, WebStorm, and PyCharm are free for open-source development and ready to help you code, collaborate, and contribute.

November 19, 2025 01:40 PM UTC


Django Weblog

Django 6.0 release candidate 1 released

Django 6.0 release candidate 1 is now available. It represents the final opportunity for you to try out a mosaic of modern tools and thoughtful design before Django 6.0 is released.

The release candidate stage marks the string freeze and the call for translators to submit translations. Provided no major bugs are discovered that can't be solved in the next two weeks, Django 6.0 will be released on or around December 3. Any delays will be communicated on the Django forum.

Please use this opportunity to help find and fix bugs (which should be reported to the issue tracker). You can grab a copy of the release candidate package from our downloads page or on PyPI.
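For example, in a fresh virtual environment, pip's --pre flag will pick up the release candidate:

python -m pip install --upgrade --pre Django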

The PGP key ID used for this release is Natalia Bidart: 2EE82A8D9470983E

November 19, 2025 12:00 PM UTC


Real Python

Quiz: Build a Python MCP Client to Test Servers From Your Terminal

In this quiz, you’ll test your understanding of how to Build a Python MCP Client to Test Servers From Your Terminal.

By working through this quiz, you’ll revisit how to add a minimal chat interface, create an AI handler to power the chat, handle runtime errors, and update the entry point to run the chat from the command line.

You will confirm when to initialize the AI handler and how to surface clear error messages to users. For a guided review, see the linked tutorial.



November 19, 2025 12:00 PM UTC


Django Weblog

Going build-free with native JavaScript modules

For the last decade and more, we've been bundling CSS and JavaScript files. These build tools allowed us to utilize new browser capabilities in CSS and JS while still supporting older browsers. They also helped with client-side network performance, minifying content and combining files into one large bundle to reduce the number of network round trips. We've gone through a lot of build tool iterations in the process: from Grunt (2012) to Gulp (2013) to Webpack (2014) to Parcel (2017) to esbuild (2020) and Vite (2020).

And with modern browser technologies there is less need for these build tools.

These build processes are complex, particularly for beginners to Django. The tools and associated best practices move quickly. There is a lot to learn and you need to understand how to utilize them with your Django project. You can build a workflow that stores the build results in your static folder, but there is no core Django support for a build pipeline, so this largely requires selecting from a number of third party packages and integrating them into your project.

The benefit this complexity adds is no longer as clear-cut, especially for beginners. There are still advantages to build tools, but you can create professional results without having to use or learn any build processes.

Build-free JavaScript tutorial

To demonstrate modern capabilities, let's expand Django’s polls tutorial with some newer JavaScript. We’ll use modern JS modules and we won’t require a build system.

To give us a reason to need JS, let's add a new requirement to the polls: allowing our users to add their own suggestions instead of only being able to vote on the existing options. We update our form to have a new option under the selection code:

or add your own <input type="text" name="choice_text" maxlength="200" />

Now our users can add their own options to polls if the existing ones don't fit. We can update our voting view to handle this new option, with more validation.
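Here's a minimal sketch of what that view might look like, assuming the tutorial's Question and Choice models (the exact validation may differ from the original post):

from django.db.models import F
from django.http import HttpResponseRedirect
from django.shortcuts import get_object_or_404, render
from django.urls import reverse

from .models import Question


def vote(request, question_id):
    question = get_object_or_404(Question, pk=question_id)
    choice_id = request.POST.get("choice")
    choice_text = request.POST.get("choice_text", "").strip()

    # Reject submissions that both pick a choice and suggest one, or do neither.
    if bool(choice_id) == bool(choice_text):
        return render(
            request,
            "polls/detail.html",
            {
                "question": question,
                "error_message": "Please select a choice or add your own (but not both).",
            },
        )

    if choice_text:
        # Create the user-suggested choice on the fly.
        choice = question.choice_set.create(choice_text=choice_text)
    else:
        choice = get_object_or_404(question.choice_set, pk=choice_id)

    choice.votes = F("votes") + 1
    choice.save()
    return HttpResponseRedirect(reverse("polls:results", args=(question.id,)))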

With our logic getting more complex, it would be nicer if some JavaScript handled this on the client as well. We can build a script that handles some of the form validation for us.

// Note the "export default" to make this function available for other modules.
export default function initFormValidation() {
  document.getElementById("polls").addEventListener("submit", function (e) {
    const choices = this.querySelectorAll('input[name="choice"]');
    const choiceText = this.querySelector('input[name="choice_text"]');

    const hasChecked = [...choices].some(r => r.checked);
    const hasText = choiceText?.value.trim() !== "";

    if (!hasChecked && !hasText) {
      e.preventDefault();
      alert("You didn't select a choice or provide a new one.");
    }

    if (hasChecked && hasText) {
      e.preventDefault();
      alert("You can't select a choice and also provide a new option.");
    }
  });
}

Note how we use export default in the above code. This means form_validation.js is a JavaScript module. When we create our main.js file, we can import it with the import statement:

import initFormValidation from "./form_validation.js";

initFormValidation();

Lastly, we add the script to the bottom of our details.html file, using Django’s usual static template tag. Note the type="module" attribute; it tells the browser we will be using import/export statements.

<script type="module" src="{% static 'polls/js/main.js' %}"></script>

That’s it! We got the modularity benefits of modern JavaScript without needing any build process. The browser handles the module loading for us. And thanks to HTTP/2’s parallel requests, this can scale to many modules without a performance hit.

In production

To deploy, all we need is Django's support for collecting static files into one place and its support for adding hashes to filenames. In production it is a good idea to use the ManifestStaticFilesStorage backend. It stores the files it handles by appending the MD5 hash of each file’s content to the filename. This allows you to set far-future cache expiries, which is good for performance, while still guaranteeing that new versions of the file will make it to users’ browsers.

This backend is also able to update the reference to form_validation.js in the import statement, with its new versioned file name.
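In a typical settings module, enabling it looks something like this (a minimal sketch using the STORAGES setting from Django 4.2+):

STORAGES = {
    "default": {
        "BACKEND": "django.core.files.storage.FileSystemStorage",
    },
    "staticfiles": {
        "BACKEND": "django.contrib.staticfiles.storage.ManifestStaticFilesStorage",
    },
}

After running python manage.py collectstatic, the hashed file names, with the references between them rewritten, end up in STATIC_ROOT.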

Future work

ManifestStaticFilesStorage works, but a lot of its implementation details get in the way. It could be easier for developers to use.

We discussed those possible improvements at the Django on the Med đŸ–ïž sprints and I’m hopeful we can make progress.

I built django-manifeststaticfiles-enhanced to attempt to fix all these. The core work is to switch to a lexer for CSS and JS, based on Ned Batchelder’s JsLex that was used in Django previously. It was expanded to cover modern JS and CSS by working with Claude Code to do the grunt work of covering the syntax.

It also switches to using a topological sort to find dependencies. The previous, more brute-force approach of repeated processing until we saw no more changes led to extra work, particularly on storages that used the network, and it also meant we couldn't handle circular dependencies.
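As an illustration of that approach (not the package's actual code), Python's standard library graphlib can produce such an ordering:

from graphlib import TopologicalSorter

# Hypothetical asset graph: each file maps to the files it imports.
graph = {
    "main.js": {"form_validation.js"},
    "form_validation.js": set(),
}

# Dependencies come out before their dependents, so each file is
# processed only after everything it references has been hashed.
print(list(TopologicalSorter(graph).static_order()))
# ['form_validation.js', 'main.js']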

To validate that it works, I ran a performance benchmark on 50+ projects. It's been tested without issues and shows similar (often improved) performance. On average, it’s about 30% faster.


While those improvements would be welcome, do go ahead and try build-free JavaScript and CSS in your Django projects today! Modern browsers make it possible to create great frontend experiences without the complexity.

November 19, 2025 08:13 AM UTC


Python GUIs

Getting Started With DearPyGui for GUI Development — Your First Steps With the DearPyGui Library for Desktop Python GUIs

Getting started with a new GUI framework can feel daunting. This guide walks you through the essentials of DearPyGui, from installation and your first app to widgets, layouts, theming, and advanced tooling.

With DearPyGui, you can quickly build modern, high‑performance desktop interfaces using Python.

Getting to Know DearPyGui

DearPyGui is a GPU‑accelerated and cross‑platform GUI framework for Python, built on Dear ImGui with a retained‑mode Python API. It renders all UI using the GPU rather than native OS widgets, ensuring consistent, high‑performance UI across Windows, Linux, macOS, and even Raspberry Pi 4.

Note that official wheels for Raspberry Pi may lag behind. Users sometimes compile from source.

DearPyGui's key features include GPU-accelerated rendering, cross-platform support, a rich set of built-in widgets, interactive plotting, node editors, and built-in developer tools.

This GUI framework is ideal for building interfaces ranging from simple utilities to real-time dashboards, data-science tools, and interactive games.

Installing and Setting Up DearPyGui

You can install DearPyGui from PyPI using pip:

sh
$ pip install dearpygui

This command installs DearPyGui from PyPI.

Writing Your First GUI App

In general, DearPyGui apps follow this structure:

  1. dpg.create_context() — Initialize DearPyGui and call it before anything else
  2. dpg.create_viewport() — Create the main application window or viewport
  3. Define UI widgets within windows or groups — Add and configure widgets and containers to build your interface
  4. dpg.setup_dearpygui() — Set up DearPyGui internals and resources before showing the viewport
  5. dpg.show_viewport() — Make the viewport window visible to the user
  6. dpg.start_dearpygui() — Start the DearPyGui main event and render loop
  7. dpg.destroy_context() — Clean up and release all DearPyGui resources on exit

Here's a quick application displaying a window with basic widgets:

python
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Viewport", width=300, height=100)

    with dpg.window(label="DearPyGui Demo", width=300, height=100):
        dpg.add_text("Hello, World!")

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()

Inside main(), we initialize the library with dpg.create_context(), create a window (viewport) via dpg.create_viewport(), define the GUI, set up the library with dpg.setup_dearpygui(), show the viewport with dpg.show_viewport(), and run the render loop using dpg.start_dearpygui(). When you close the window, dpg.destroy_context() cleans up resources.

You define the GUI itself inside a dpg.window() context block, which parents a text label containing the "Hello, World!" text.

Always follow the lifecycle order: create context → viewport → setup → show → start → destroy. Otherwise, the app may crash.

Run it! Here's what your first app looks like.

DearPyGui first app

Exploring Widgets

DearPyGui includes a wide variety of widgets: text labels, text inputs, buttons, checkboxes, radio buttons, sliders, combo boxes, color pickers, progress bars, and more.

Here's an example that showcases some basic DearPyGui widgets:

python
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Widgets Demo", width=400, height=450)

    with dpg.window(
        label="Common DearPyGui Widgets",
        width=380,
        height=420,
        pos=(10, 10),
    ):
        dpg.add_text("Static label")
        dpg.add_input_text(
            label="Text Input",
            default_value="Type some text here...",
            tag="widget_input",
        )
        dpg.add_button(label="Click Me!")
        dpg.add_checkbox(label="Check Me!")
        dpg.add_radio_button(
            ("DearPyGui", "PyQt6", "PySide6"),
        )

        dpg.add_slider_int(
            label="Int Slider",
            default_value=5,
            min_value=0,
            max_value=10,
        )
        dpg.add_slider_float(
            label="Float Slider",
            default_value=0.5,
            min_value=0.0,
            max_value=1.0,
        )

        dpg.add_combo(
            ("DearPyGui", "PyQt6", "PySide6"),
            label="GUI Library",
        )
        dpg.add_color_picker(label="Pick a Color")
        dpg.add_progress_bar(
            label="Progress",
            default_value=0.5,
            width=250,
        )

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()

This code uses the following functions to add the widgets to the GUI: dpg.add_text(), dpg.add_input_text(), dpg.add_button(), dpg.add_checkbox(), dpg.add_radio_button(), dpg.add_slider_int(), dpg.add_slider_float(), dpg.add_combo(), dpg.add_color_picker(), and dpg.add_progress_bar().

Run it! Here's what the app will look like.

DearPyGui basic widgets

Laying Out the GUI

By default, DearPyGui stacks widgets vertically. However, positioning options include horizontal grouping with dpg.group(horizontal=True), indentation via the indent keyword argument, and absolute positioning via pos or dpg.set_item_pos().

Widgets go inside containers like dpg.window(). You can nest containers to build complex GUI layouts:

python
import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Layout Demo", width=520, height=420)

    with dpg.window(
        label="Layout Demo",
        width=500,
        height=380,
        pos=(10, 10),
    ):
        dpg.add_text("1) Vertical layout:")
        dpg.add_button(label="Top")
        dpg.add_button(label="Middle")
        dpg.add_button(label="Bottom")

        dpg.add_spacer(height=12)

        dpg.add_text("2) Horizontal layout:")
        with dpg.group(horizontal=True):
            dpg.add_button(label="Left")
            dpg.add_button(label="Center")
            dpg.add_button(label="Right")

        dpg.add_spacer(height=12)

        dpg.add_text("3) Indentation:")
        dpg.add_checkbox(label="Indented at creation (30px)", indent=30)
        dpg.add_checkbox(label="Indented after creation (35px)", tag="indent_b")
        dpg.configure_item("indent_b", indent=35)

        dpg.add_spacer(height=12)

        dpg.add_text("4) Absolute positioning:")
        dpg.add_text("Positioned at creation: (x=100, y=300)", pos=(100, 300))
        dpg.add_text("Positioned after creation: (x=100, y=320)", tag="move_me")
        dpg.set_item_pos("move_me", [100, 320])

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()

In this example, we create an app that showcases basic layout options in DearPyGui. The first section of widgets shows the default vertical stacking by adding three buttons one after another. Then, we use dpg.add_spacer(height=12) to insert vertical whitespace between sections.

Then, we create a horizontal row of buttons with dpg.group(horizontal=True), which groups items side-by-side. Next, we have an indentation section that demonstrates how to indent widgets at creation (indent=30) and after creation using dpg.configure_item().

Finally, we use absolute positioning by placing one text item at a fixed coordinate using pos=(100, 300) and moving another after creation with dpg.set_item_pos(). These patterns are all part of DearPyGui’s container and item-configuration model, which we can use to arrange the widgets in a user-friendly GUI.

Run it! You'll get a window like the following.

DearPyGui layouts

Event Handling with Callbacks

DearPyGui uses callbacks to handle events. Most widgets accept a callback argument, which is executed when we interact with the widget itself.

The example below provides a text input and a button. When you click the button, it launches a dialog with the input text:

python
import dearpygui.dearpygui as dpg

def on_click_callback(sender, app_data, user_data):
    text = dpg.get_value("input_text")
    dpg.set_value("dialog_text", f'You typed: "{text}"')
    dpg.configure_item("dialog", show=True)

def main() -> None:
    dpg.create_context()
    dpg.create_viewport(title="Callback Example", width=270, height=120)

    with dpg.window(label="Callback Example", width=250, height=80, pos=(10, 10)):
        dpg.add_text("Type something and press Click Me!")
        dpg.add_input_text(label="Input", tag="input_text")
        dpg.add_button(label="Click Me!", callback=on_click_callback)
        with dpg.window(
            label="Dialog",
            modal=True,
            show=False,
            width=230,
            height=80,
            tag="dialog",
            no_close=True,
            pos=(10, 10),
        ):
            dpg.add_text("", tag="dialog_text")
            dpg.add_button(
                label="OK",
                callback=lambda s, a, u: dpg.configure_item("dialog", show=False),
            )

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()

The button takes the on_click_callback() callback as an argument. When we click the button, DearPyGui invokes the callback with three standard arguments:

  1. sender, which holds the button's ID
  2. app_data, which holds extra data specific to certain widgets
  3. user_data, which holds custom data you could have supplied

Inside the callback, we pull the current text from the input widget using dpg.get_value(), and finally, we display the input text in a modal window.

Run it! You'll get a window like the following.

DearPyGui callbacks

To see this app in action, type some text into the input and click the Click Me! button.
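The example above leaves user_data unused, so here's a minimal sketch of passing custom data through it (the tag and label names are made up for illustration):

python
import dearpygui.dearpygui as dpg

def on_greet(sender, app_data, user_data):
    # user_data carries whatever object we attached at creation time
    dpg.set_value("greeting", f"Hello, {user_data}!")

def main():
    dpg.create_context()
    dpg.create_viewport(title="user_data Example", width=270, height=110)

    with dpg.window(label="user_data Example", width=250, height=80, pos=(10, 10)):
        dpg.add_text("", tag="greeting")
        dpg.add_button(label="Greet", callback=on_greet, user_data="DearPyGui")

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()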

Drawing Shapes and Plotting

DearPyGui comes with powerful plotting capabilities. It offers high-performance plots, including line, bar, scatter, and histogram series. These plots allow interactive zoom and pan as well as real-time data updates, making them excellent for scientific visualizations and dashboards.

Here's a quick example of how to create a plot using DearPyGui's plotting widgets:

python
import dearpygui.dearpygui as dpg
import numpy as np

def main() -> None:
    dpg.create_context()
    dpg.create_viewport(title="Plotting Example", width=420, height=320)

    x = np.linspace(0, 2 * np.pi, 100)
    y1 = np.sin(x)
    y2 = np.cos(x)

    with dpg.window(label="Plot Window", width=400, height=280, pos=(10, 10)):
        with dpg.plot(label="Sine and Cosine Plot", height=200, width=360):
            dpg.add_plot_legend()
            dpg.add_plot_axis(dpg.mvXAxis, label="X")
            with dpg.plot_axis(dpg.mvYAxis, label="Y"):
                dpg.add_line_series(x.tolist(), y1.tolist(), label="sin(x)")
                dpg.add_line_series(x.tolist(), y2.tolist(), label="cos(x)")

    dpg.setup_dearpygui()
    dpg.show_viewport()
    dpg.start_dearpygui()
    dpg.destroy_context()

if __name__ == "__main__":
    main()

In this example, we create two line series: sine and cosine curves. To plot them, we use NumPy‑generated data. We also add X and Y axes, plus a legend for clarity. You can update the series in a callback for live data dashboards.

Run it! You'll get a plot like the one shown below.

DearPyGui plotting demo
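As a minimal sketch of that live-update idea, you can drive the render loop manually and refresh a tagged series on every frame (the live_series tag is our own invention):

python
import math
import time

import dearpygui.dearpygui as dpg

def main():
    dpg.create_context()
    dpg.create_viewport(title="Live Plot", width=420, height=320)

    x = [i * 0.05 for i in range(200)]

    with dpg.window(label="Live Plot", width=400, height=280, pos=(10, 10)):
        with dpg.plot(label="Live sin(x)", height=200, width=360):
            dpg.add_plot_axis(dpg.mvXAxis, label="X")
            with dpg.plot_axis(dpg.mvYAxis, label="Y"):
                dpg.add_line_series(
                    x, [math.sin(v) for v in x], tag="live_series"
                )

    dpg.setup_dearpygui()
    dpg.show_viewport()
    # Replace start_dearpygui() with a manual loop so we can update each frame.
    while dpg.is_dearpygui_running():
        phase = time.time()
        dpg.set_value("live_series", [x, [math.sin(v + phase) for v in x]])
        dpg.render_dearpygui_frame()
    dpg.destroy_context()

if __name__ == "__main__":
    main()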

Conclusion

DearPyGui offers a powerful and highly customizable GUI toolkit for desktop Python applications. With a rich widget set, interactive plotting, node editors, and built-in developer tools, it's a great choice for both simple and complex interfaces.

Try building your first DearPyGui app and experimenting with widgets, callbacks, layouts, and other interesting features!

November 19, 2025 08:00 AM UTC

November 18, 2025


The Python Coding Stack

I Don’t Like Magic ‱ Exploring The Class Attributes That Aren’t Really Class Attributes ‱ [Club]

I don’t like magic. I don’t mean the magic of the Harry Potter kind—that one I’d like if only I could have it. It’s the “magic” that happens behind the scenes when a programming language like Python does things out of sight. You’ll often find things you have to “just learn” along the Python learning journey. “That’s the way things are,” you’re told.

That’s the kind of magic I don’t like. I want to know how things work. So let me take you back to when I first learnt about named tuples—the NamedTuple in the typing module, not the other one—and data classes. They share a similar syntax, and it’s this shared syntax that confused me at first. I found these topics harder to understand because of this.

Their syntax is different from other stuff I had learnt up to that point. And I could not reconcile it with the stuff I knew. That bothered me. It also made me doubt the stuff I already knew. Here’s what I mean. Let’s look at a standard class first:

class Person:
    classification = "Human"

    def __init__(self, name, age, profession):
        self.name = name
        self.age = age
        self.profession = profession

You define a class attribute, .classification, inside the class block, but outside any of the special methods. All instances will share this class attribute. Then you define the .__init__() special method and create three instance attributes: .name, .age, and .profession. Each instance will have its own versions of these instance attributes. If you’re not familiar with class attributes and instance attributes, you can read my seven-part series on object-oriented programming: A Magical Tour Through Object-Oriented Programming in Python ‱ Hogwarts School of Codecraft and Algorithmancy

Now, let’s assume you don’t actually need the class attribute and that this class will only store data. It won’t have any additional methods. You decide to use a data class instead:

from dataclasses import dataclass

@dataclass
class Person:
    name: str
    age: int
    profession: str

Or you prefer to use a named tuple, and you reach out for typing.NamedTuple:

from typing import NamedTuple

class Person(NamedTuple):
    name: str
    age: int
    profession: str

The syntax is similar. I’ll tell you why I used to find this confusing soon.

Whichever option you choose, you can create an instance using Person("Matthew", 30, "Python Programmer"). And each instance you create will have its own instance attributes .name, .age, and .profession.

But wait a minute! The data class and the named tuple use syntax that’s similar to creating class attributes. You define these just inside the class block and not in an .__init__() method. How come they create instance attributes? “That’s just how they work” is not good enough for me.

These aren’t class attributes. Not yet. There’s no value associated with these identifiers. Therefore, they can’t be class attributes, even though you write them where you’d normally add class attributes in a standard class. However, they can be class attributes if you include a default value:

@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

The .profession attribute now has a string assigned to it. In a data class, this represents the default value. But if this weren’t a data class, you’d look at .profession and recognise it as a class attribute. But in a data class, it’s not a class attribute, it’s an instance attribute, as are .name and .age, which look like
what do they look like, really? They’re just type hints. Yes, type hints without any object assigned. Python type hints allow you to do this:

>>> first_name: str

This line is valid in Python, but it does not create the variable first_name. You can confirm this:

>>> first_name
Traceback (most recent call last):
  File "<input>", line 1, in <module>
NameError: name 'first_name' is not defined

Although you cannot just write first_name if the identifier doesn’t exist, you can use first_name: str. This creates an annotation which serves as the type hint. Third-party tools now know that when you create the variable first_name and assign it a value, it ought to be a string.

So, let’s go back to the latest version of the Person data class with the default value for one of the attributes:

@dataclass
class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

But let’s ignore the @dataclass decorator for now. Indeed, let’s remove this decorator:

class Person:
    name: str
    age: int
    profession: str = "Python Programmer"

You define a class with one class attribute, .profession, and three type hints: .name, .age, and .profession. Only .profession is an actual attribute on the class; the other two names exist solely as annotations.
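You can confirm this in the REPL. The annotations are recorded on the class, but only .profession exists as a real attribute:

>>> Person.__annotations__
{'name': <class 'str'>, 'age': <class 'int'>, 'profession': <class 'str'>}
>>> Person.profession
'Python Programmer'
>>> Person.name
Traceback (most recent call last):
  File "<input>", line 1, in <module>
AttributeError: type object 'Person' has no attribute 'name'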

How can we convert this information into instance attributes when creating an instance of the class? I won’t try to reverse engineer NamedTuple or data classes here. Instead, I’ll explore my own path to get a sense of what might be happening in those tools.

Let’s start hacking away


Read more

November 18, 2025 10:01 PM UTC


PyCoder’s Weekly

Issue #709: deepcopy(), JIT, REPL Tricks, and More (Nov. 18, 2025)

#709 – NOVEMBER 18, 2025
View in Browser »



Why Python’s deepcopy Can Be So Slow

“Python’s copy.deepcopy() creates a fully independent clone of an object, traversing every nested element of the object graph.” That can be expensive. Learn what it is doing and how you can sometimes avoid the cost.
SAURABH MISRA

A Plan for 5-10%* Faster Free-Threaded JIT by Python 3.16

Just In Time compilation is under active development in the CPython interpreter. This blog post outlines the targets for the next two Python releases.
KEN JIN

Fast Container Builds: 202 - Check out the Deep Dive


This blog explores the causes and consequences of slow container builds, with a focus on understanding how BuildKit’s capabilities support faster container builds. →
DEPOT sponsor

The Python Standard REPL: Try Out Code and Ideas Quickly

The Python REPL gives you instant feedback as you code. Learn to use this powerful tool to type, run, debug, edit, and explore Python interactively.
REAL PYTHON

Join in the PSF Year-End Fundraiser & Membership Drive!

PYTHON SOFTWARE FOUNDATION

PEP 814: Add Frozendict Built-in Type (Added)

PYTHON.ORG

PyBay 2025 Videos Released

YOUTUBE.COM

DjangoCon US 2025 Videos Released

YOUTUBE.COM

Python Jobs

Python Video Course Instructor (Anywhere)

Real Python

Python Tutorial Writer (Anywhere)

Real Python

More Python Jobs >>>

Articles & Tutorials

Preparing Data Science Projects for Production

How do you prepare your Python data science projects for production? What are the essential tools and techniques to make your code reproducible, organized, and testable? This week on the show, Khuyen Tran from CodeCut discusses her new book, “Production Ready Data Science.”
REAL PYTHON podcast

Becoming a Core Developer

Throughout your open source journey, you have no doubt been interacting with the core development team of the projects to which you have been contributing. Have you ever wondered how people become core developers of a project?
STEFANIE MOLIN

Modern, Self-Hosted Authentication


Keep your users, your data and your stack with PropelAuth BYO. Easily add Enterprise authentication features like Enterprise SSO, SCIM and session management. Keep your sales team happy and give your CISO peace of mind →
PROPELAUTH sponsor

38 Things Python Developers Should Learn in 2025

Talk Python interviews Peter Wang and Calvin Hendrix-Parker and they discuss loads of things in the Python ecosystem that are worth learning, including free-threaded CPython, MCP, DuckDB, Arrow, and much more.
TALK PYTHON podcast

Trusted Publishing for GitLab Self-Managed and Organizations

The Trusted Publishing system for PyPI is seeing rapid adoption. This post talks about its growth along with the next steps: adding GitLab and handling organizations.
MIKE FIELDER

Decompression Is Up to 30% Faster in CPython 3.15

Zstandard compression got added in Python 3.14, but work is ongoing. Python 3.15 is showing performance improvements in both zstd and other compression modules.
EMMA SMITH

__slots__ for Optimizing Classes

Most Python objects store their attributes in __dict__, which is a dictionary. Modules and classes always use __dict__, but not everything does.
TREY HUNNER

Convert Documents Into LLM-Ready Markdown

Get started with Python MarkItDown to turn PDFs, Office files, images, and URLs into clean, LLM-ready Markdown in seconds.
REAL PYTHON

Quiz: Convert Documents Into LLM-Ready Markdown

Practice MarkItDown basics. Convert PDF, Word, Excel, and HTML documents to Markdown. Try the quiz.
REAL PYTHON

Convert Scikit-learn Pipelines into SQL Queries with Orbital

Orbital is a new library that converts Scikit-learn pipelines into SQL queries, enabling machine learning model inference directly within SQL databases.
POSIT sponsor

Python Operators and Expressions

Operators let you combine objects to create expressions that perform computations – the core of how Python works.
REAL PYTHON course

A Generator, Duck Typing, and a Branchless Conditional Walk Into a Bar

What’s your favorite line of code? Rodrigo expounds about generators, duck typing, and branchless conditionals.
RODRIGO GIRÃO SERRÃO

Projects & Code

fastapi-voyager: Explore Your API Interactively

GITHUB.COM/ALLMONDAY

Narwhals-daft: A Narwhals Plugin for Daft Dataframes

MARCO GORELLI ‱ Shared by Marco Gorelli

zensical: A Modern Static Site Generator

GITHUB.COM/ZENSICAL

django-subatomic: Control Transaction Logic in Django

GITHUB.COM/KRAKEN-TECH

httptap: CLI Measuring HTTP Request Phases

GITHUB.COM/OZERANSKII

Events

Weekly Real Python Office Hours Q&A (Virtual)

November 19, 2025
REALPYTHON.COM

DELSU Tech Invasion 3.0

November 19 to November 21, 2025
HAMPLUSTECH.COM

PyData Bristol Meetup

November 20, 2025
MEETUP.COM

PyLadies Dublin

November 20, 2025
PYLADIES.COM

Python Sul 2025

November 21 to November 24, 2025
PYTHON.ORG.BR


Happy Pythoning!
This was PyCoder’s Weekly Issue #709.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

November 18, 2025 07:30 PM UTC


Real Python

Break Out of Loops With Python's break Keyword

In Python, the break statement lets you exit a loop prematurely, transferring control to the code that follows the loop. This tutorial guides you through using break in both for and while loops. You’ll also briefly explore the continue keyword, which complements break by skipping the current loop iteration.
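For example, here's a minimal sketch that uses both keywords together:

numbers = [3, 7, 8, 5, 10]
for number in numbers:
    if number % 2 != 0:
        continue  # Skip odd numbers and move to the next iteration.
    print(f"First even number: {number}")
    break  # Exit the loop as soon as one even number is found.
else:
    print("No even number found")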

By the end of this video course, you'll understand how break exits a loop early, how execution continues with the code after the loop, and how continue skips ahead to the next iteration.



November 18, 2025 02:00 PM UTC


Mike Driscoll

Black Friday Python Deals Came Early

Black Friday deals came early this year. You can get 50% off of any of my Python books or courses until the end of November. You can use this coupon code at checkout: BLACKISBACK 

The following links already have the discount applied:

Python eBooks

Python Courses

 

The post Black Friday Python Deals Came Early appeared first on Mouse Vs Python.

November 18, 2025 01:41 PM UTC