Overlord
FastAPI backend integrating Langfuse prompt management with the flexibility of LiteLLM.
Using server-sent events (SSE) for real-time streaming and thread pooling for concurrency, it stays highly scalable while remaining simple.
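As a minimal sketch of the SSE side, here is how streamed chunks can be framed as server-sent events in plain Python; the token source and field names (`delta`, `done`) are illustrative, not Overlord's actual schema:

```python
import json

def sse_event(data, event=None):
    """Format a payload as one server-sent event frame."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {json.dumps(data)}")
    return "\n".join(lines) + "\n\n"

def stream_tokens(tokens):
    # In the real backend each chunk would come from the LLM provider;
    # here we only demonstrate the SSE framing of a chunk sequence.
    for tok in tokens:
        yield sse_event({"delta": tok})
    yield sse_event({"done": True}, event="end")

chunks = list(stream_tokens(["Hel", "lo"]))
```

In a FastAPI app such a generator would typically be wrapped in a streaming response with media type `text/event-stream`, so the client receives tokens as they arrive.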
Specializing in scalable backend architecture, seamless AI integration, and rapid application development (RAD).
Expert in working with various generative AI provider APIs via Python, and backed by a design-focused mindset.
Based in and around Berlin, Germany and always available.
Generative AI provider wrapper for the Lua language to interface with OpenAI, Anthropic, Google Gemini, etc.
It abstracts away provider-specific payload structures and response parsing, making it simple to switch between models.
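The library itself is written in Lua; the following Python sketch only illustrates the abstraction idea. The payload shapes approximate the real provider APIs (OpenAI includes system messages in `messages`; Anthropic takes a top-level `system` field), but field coverage here is deliberately minimal:

```python
def build_payload(provider, model, messages):
    """Map a provider-neutral chat request onto a provider-specific payload.

    `messages` is a list of {"role": ..., "content": ...} dicts.
    Only the structural differences are shown, not full API schemas.
    """
    if provider == "openai":
        # OpenAI-style APIs accept system messages inline in the list.
        return {"model": model, "messages": messages}
    if provider == "anthropic":
        # Anthropic expects system text as a separate top-level field.
        system = [m["content"] for m in messages if m["role"] == "system"]
        rest = [m for m in messages if m["role"] != "system"]
        payload = {"model": model, "messages": rest}
        if system:
            payload["system"] = "\n".join(system)
        return payload
    raise ValueError(f"unknown provider: {provider}")
```

With a single translation layer like this, application code builds one neutral request and the wrapper handles per-provider quirks, which is what makes swapping models a one-line change.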
Integrate AI and scale your business!
Reach Out