I have been taking my time during an unexpected career break, but I was tempted to dive into experimenting with a broad range of AI tools.
The agentic-ai-engineering repo on GitHub was mentioned in a post on LinkedIn, so I had a browser tab open, ready to dig into it.
A few days later I started to see mentions of an exploit involving LiteLLM, a Python LLM library, that would quietly syphon away details from pretty much any environment it was running in.
https://docs.litellm.ai/blog/security-update-march-2026#what-happened
Today I took a dip into the agentic-ai-engineering repo and, sure enough, it specifies LiteLLM as a dependency in the pyproject.toml configuration file for its first demonstration of AI, 01-simple-llm-call.
If you have been running code that involves LiteLLM, there is a guide for checking whether your setup may have been compromised:
https://docs.litellm.ai/blog/security-update-march-2026#how-to-check-if-you-are-affected
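Before working through that guide, a quick first step is to find out which version of LiteLLM, if any, is present in the environment you have been running. This is a minimal sketch using Python's standard importlib.metadata; the advisory above remains the authoritative source for which versions are actually affected:

```python
from importlib.metadata import version, PackageNotFoundError

try:
    # Query the installed distribution metadata for litellm
    print("litellm version:", version("litellm"))
except PackageNotFoundError:
    # The package is not installed in this environment at all
    print("litellm is not installed in this environment")
```

Remember to run this inside each virtual environment you have used, not just the system Python.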
Update
The project's uv.lock file would have prevented the vulnerable version of LiteLLM from being pulled in for the project that I was evaluating.