Python Bytes is a weekly podcast hosted by Michael Kennedy and Brian Okken. The show is a short discussion on the headlines and noteworthy news in the Python, developer, and data science space.
#467 Toads in my AI
- GreyNoise IP Check
- tprof: a targeting profiler
- TOAD is out
- FastAPI adds Contribution Guidelines around AI usage
- Extras
- Joke
About the show
Sponsored by us! Support our work through:
Connect with the hosts
- Michael: @mkennedy@fosstodon.org / @mkennedy.codes (bsky)
- Brian: @brianokken@fosstodon.org / @brianokken.bsky.social
- Show: @pythonbytes@fosstodon.org / @pythonbytes.fm (bsky)
Join us on YouTube at pythonbytes.fm/live to be part of the audience. Usually Monday at 11am PT. Older video versions available there too.
Finally, if you want an artisanal, hand-crafted digest of every week of the show notes in email form, add your name and email to our friends of the show list. We'll never share it.
Michael #1: GreyNoise IP Check
- GreyNoise watches the internet's background radiation—the constant storm of scanners, bots, and probes hitting every IP address on Earth.
- Is your computer sending out bot or other bad-actor traffic? What about the myriad of devices and IoT things on your network that share that IP?
- Heads up: if your IP address changed recently, a flagged result might reflect the previous holder rather than you (a false positive).
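- If you'd rather script the lookup than paste an address into the web page, GreyNoise also has a Community API. A minimal sketch below, assuming the https://api.greynoise.io/v3/community/{ip} endpoint, its 404-for-unseen-IPs behavior, and the noise/riot/classification fields (all from memory of GreyNoise's docs; heavier use needs a free API key):

```python
# Sketch only: check an IP against the GreyNoise Community API.
# Endpoint and response fields are assumptions -- verify against
# the current GreyNoise docs before relying on them.
import sys

import requests


def check_ip(ip: str) -> None:
    resp = requests.get(f"https://api.greynoise.io/v3/community/{ip}", timeout=10)
    if resp.status_code == 404:
        # GreyNoise hasn't seen this IP scanning the internet.
        print(f"{ip}: not observed by GreyNoise")
        return
    resp.raise_for_status()
    data = resp.json()
    # noise = observed scanning the internet; riot = known benign service
    print(
        f"{ip}: noise={data.get('noise')} riot={data.get('riot')} "
        f"classification={data.get('classification')}"
    )


if __name__ == "__main__":
    check_ip(sys.argv[1] if len(sys.argv) > 1 else "8.8.8.8")
```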
Brian #2: tprof: a targeting profiler
- Adam Johnson
- Intro blog post: Python: introducing tprof, a targeting profiler
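- For context on what a "targeting" profiler buys you, the sketch below shows the standard-library baseline it improves on: wrapping cProfile around just the call you care about instead of profiling the whole program. This is deliberately not tprof's API; see Adam's intro post for the real thing.

```python
# Standard-library baseline (not tprof): profile only one targeted call.
import cProfile
import io
import pstats


def slow_join(n: int) -> str:
    # Toy workload to measure.
    return ",".join(str(i) for i in range(n))


profiler = cProfile.Profile()
profiler.enable()
slow_join(100_000)  # only this call is profiled
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```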
Michael #3: TOAD is out
- Toad is a unified experience for AI in the terminal
- Front-end for AI tools such as OpenHands, Claude Code, Gemini CLI, and many more.
- Better TUI experience (e.g. @ for file context uses fuzzy search and dropdowns)
- Better prompt input (mouse, keyboard, even colored code and markdown blocks)
- Terminal within terminals (for TUI support)
Brian #4: FastAPI adds Contribution Guidelines around AI usage
- Docs commit: Add contribution instructions about LLM generated code and comments and automated tools for PRs
- Docs section: Development - Contributing: Automated Code and AI
- Great inspiration and example of how to deal with this for popular open source projects
- “If the human effort put in a PR, e.g. writing LLM prompts, is less than the effort we would need to put to review it, please don't submit the PR.”
- With sections on
- Closing Automated and AI PRs
- Human Effort Denial of Service
- Use Tools Wisely
Extras
Brian:
- Apparently Digg is back and there’s a Python Community there
- Why light-weight websites may one day save your life - Marijke Luttekes
Michael:
- Blog posts about Talk Python AI Integrations
- Announcing Talk Python AI Integrations on Talk Python’s Blog
- Blocking AI crawlers might be a bad idea on Michael’s Blog
- Already using the compile flag for faster app startup on the containers:
RUN --mount=type=cache,target=/root/.cache uv pip install --compile-bytecode --python /venv/bin/python
- I think it's speeding startup by about 1s per container.
- Biggest prompt yet? 72 pages, 11,000
Joke: A date
- via Pat Decker