I Just Wanted to Update My Blog to Python 3.13. It Took a Whole Day.
This is just a quick post about what I had to do to update my blog stack (from Python 3.9 to Python 3.13). Hopefully, someone finds it helpful.
Why now: Aside from the obvious (keeping everything up to date for security reasons), this blog runs on AWS Lambda (I explained that setup in this post). Since Lambda requires using a supported runtime, I had to act after getting an email from AWS saying Python 3.9 is reaching end of life soon. Otherwise, I wouldn’t be able to create or update functions next year. It took me almost a full day to get everything working again. I learned a few interesting things along the way, so I figured I’d share them.
Had to fix 💣
Docker image for building the Python bundle
I updated the Python builder Docker image from `public.ecr.aws/lambda/python:3.9` to `public.ecr.aws/lambda/python:3.13`.
The only major change was switching from `yum` to `dnf`, since the image is now based on Amazon Linux 2023, where `yum` is gone.
python-fido2
Because I like to try new things, I'd been using U2F to authenticate myself to this blog - total overkill, but cool. Well, I finally paid the price for being an early adopter.
U2F is basically old news now; FIDO2/WebAuthn/passkeys are the new standard. While WebAuthn supports U2F backward compatibility, the library I was using (yubico/python-fido2) didn’t: when it went from version 1 to 2, it removed the `U2FFido2Server` class, which I was using.
Since version 1 isn’t even published for Python 3.13, I had no choice but to upgrade, and that ended up being the biggest time sink (~4 hours).
There's an official migration doc, but it wasn’t really enough. I had to piece things together from their example server and by reading the source code (at least I can do that...). Here's what I learned that might help if you're stuck migrating from version 1:
- All server–client communication now uses JSON instead of CBOR.
- On `Fido2Server`, `register_begin` now takes a `PublicKeyCredentialUserEntity` object, and you also need to specify `user_verification` and `authenticator_attachment`.
- If you store user credentials as bytes, functions like `authenticate_begin` need extra handling: you have to parse the blob into `AttestedCredentialData` with `AttestedCredentialData.unpack_from(blob)[0]`. In the old version, you could serialize and deserialize the bytes directly, but now you need to restore the type manually.
- `PublicKeyCredentialRpEntity` now requires keyword arguments for the RP name and ID instead of positional ones.
- Old U2F credential data isn't compatible anymore, so I gave up on backward compatibility entirely, which is fine for a personal project.
I’m still grateful for the library, but if anyone’s using this in production, you really need to understand it inside out before betting your company on it.
Nice to do ⚠️
datetime.utcnow() deprecation
I like storing time as ISO 8601 strings (like `2010-07-09T02:45:00.000`) with no timezone, assuming UTC.
It used to be easy with `datetime.utcnow().isoformat()`, but now it throws a warning because it’s deprecated. I know exactly what I’m doing here; I just want a timezone-less timestamp with the time in UTC.
Now I use:
`datetime.datetime.now(datetime.UTC).replace(tzinfo=None).isoformat()`
This preserves backward compatibility with my existing code by producing a "naive" datetime (a datetime without a timezone). Obviously, don't do this unless you understand what it is doing.
If you're not familiar with naive vs. aware datetime objects, the official docs explain it well. In short, without a timezone, it is not safe to do even simple operations like "adding 1 hour" to your timestamp, because of DST and timezone shifts.
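To make the difference concrete, here's a small stdlib-only sketch (using `datetime.timezone.utc`, which `datetime.UTC` aliases on Python 3.11+):

```python
import datetime

# Aware "now" in UTC, then drop tzinfo to get a naive, UTC-assumed timestamp.
aware = datetime.datetime.now(datetime.timezone.utc)
naive = aware.replace(tzinfo=None)

stamp = naive.isoformat()  # e.g. "2010-07-09T02:45:00.000000" - no offset suffix
assert naive.tzinfo is None

# Mixing the two styles is an error, which is why picking one
# convention and sticking to it matters:
try:
    aware - naive
except TypeError as e:
    print("can't mix naive and aware:", e)
```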
beautifulsoup
`beautifulsoup` is a Python library for parsing HTML. I mainly use it to test my blog. The maintainers decided to deprecate non-PEP 8 methods like `findAll` in favor of new ones like `find_all`.
I find this change quite annoying - it adds no real value. Even the Python standard library still uses non-PEP 8 names like `logging.getLogger()`, and no one's deprecating that. I get the `datetime.utcnow()` change above because it prevents real bugs, but `findAll` vs. `find_all` is just aesthetics. Changes like that make people take `DeprecationWarning` less seriously.
Anyway, it's open source — they can do what they want. I still updated my code to remove the warnings.
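If you do want renames like this to surface early instead of scrolling by, you can escalate `DeprecationWarning` to an error in your test setup. A minimal stdlib sketch (`old_api` here is just a stand-in for a deprecated call like `findAll`):

```python
import warnings

def old_api():
    # Stand-in for a deprecated library call.
    warnings.warn("old_api is deprecated; use new_api",
                  DeprecationWarning, stacklevel=2)
    return "result"

# In tests, turn DeprecationWarning into an exception so it can't be ignored.
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    try:
        old_api()
        deprecation_raised = False
    except DeprecationWarning:
        deprecation_raised = True

print(deprecation_raised)  # True
```

With pytest, the same effect comes from `filterwarnings = ["error::DeprecationWarning"]` in your config.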
pipenv to uv migration
Apparently, things changed again in Python-land, and `uv` is now the dependency manager of choice for people with "good taste". After doing a bit of research, I was convinced: it's faster, cleaner, and better designed than `pipenv`. It doesn’t mess with global installations, doesn’t litter your home directory, and plays nicely with other tools (i.e., it's composable) via commands like `uv run` and `uv tool`.
It was very easy to migrate to uv. I just ran `uvx migrate-to-uv`, and the resulting `pyproject.toml` mostly worked. I only had to fix unnecessary version restrictions (e.g., `packageX<=3.1.1`).
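For illustration, the fix was just loosening the generated bounds in `pyproject.toml`, roughly like this (`packageX` is the placeholder from above, and the surrounding fields are made up, not my actual config):

```toml
[project]
name = "blog"
requires-python = ">=3.13"
dependencies = [
    # migrate-to-uv generated "packageX<=3.1.1"; the upper bound wasn't needed
    "packageX>=3.1",
]
```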
In the Makefile for my Python image builder, I just changed

```makefile
# from
pipenv lock -r > requirements.txt
# to
uv pip freeze > requirements.txt
```
One more thing - write automated tests, even for personal projects
I want to emphasize how important automated tests are, even for small personal projects. Once your project is more than a static page, it's impossible to manually check everything (unless you have a lot of free time).
You don't need fancy stuff like Selenium. My tests are simple — they just check things like:
- Blog article titles and text are there.
- Pagination works as expected.
```python
def test_6_published_articles_second_page_should_show_one_article(test_client):
    make_test_articles(6)
    rv = test_client.get("/")
    soup = make_soup(rv)

    nextlink = soup.find_all("a", class_="pager-next")[0]
    rv = test_client.get(nextlink["href"])
    soup = make_soup(rv)

    lst = soup.find_all("div", class_="main-content-inner")
    first_article = lst[0]
    first_article_text = first_article.get_text()
    assert "TestArticle1" in first_article_text
    assert "TestBody1" in first_article_text
```
Automating even small things saves so much time and embarrassment (especially if someone actually uses your app). Setting up a fake database or articles can be a bit annoying, but with today's "AI" tools, there's really no excuse not to.
I hope you found this useful - feel free to drop me a note if you did.