AI / Software Engineer (ALikesToCode)

Abhyudaya B Tharakan

I build production AI systems end-to-end, from data pipelines to LLM-powered products to deployment and monitoring.

What I build (in plain English)

  • LLM infrastructure: unified gateways, routing, auth, rate limiting, streaming.
  • Full-stack AI apps: chat UX, docs browsing, admin flows, secure auth.
  • MLOps/DevOps: CI/CD, containers, Kubernetes patterns, model lifecycle.
Quick links: View Projects · GitHub · LinkedIn · Resume · Email

Impact (selected)

A few highlights that reflect how I work: measurable outcomes, production constraints, and shipping.

  • Built and deployed AI tooling (LLMs/RAG) used at large scale in education.
  • Designed data pipelines and analytics workflows for large datasets.
  • Automated delivery with CI/CD to reduce manual deployment overhead.

More detail and specifics live inside the case studies below.

Featured Work

Product-minded builds with case studies, tradeoffs, and outcomes.

Architecture diagram

Client -> proxy -> router -> provider adapters, with auth, rate limits, and cache.

Systems

MultiLLM Proxy

Unified proxy server for multiple LLM providers with one consistent API.

Auth, rate limiting, streaming, monitoring, and provider health checks behind one gateway.

Auth + user management · Rate limiting · Streaming responses

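
The gateway flow above (client to proxy to router to provider adapters, with auth, rate limits, and cache in the path) compresses into a toy sketch. Everything below is illustrative shorthand, not the MultiLLM Proxy's actual code: the adapter names, limits, and in-memory stores are stand-ins.

```python
from typing import Callable, Dict, Tuple

# Hypothetical provider adapters: each maps a unified request onto one backend.
ADAPTERS: Dict[str, Callable[[str], str]] = {
    "openai": lambda prompt: f"[openai] {prompt}",
    "anthropic": lambda prompt: f"[anthropic] {prompt}",
}

class Gateway:
    def __init__(self, rate_limit_per_key: int = 60):
        self.cache: Dict[Tuple[str, str], str] = {}  # (provider, prompt) -> reply
        self.calls: Dict[str, int] = {}              # per-key request counter
        self.limit = rate_limit_per_key

    def handle(self, api_key: str, provider: str, prompt: str) -> str:
        if not api_key:                              # auth check
            raise PermissionError("missing API key")
        self.calls[api_key] = self.calls.get(api_key, 0) + 1
        if self.calls[api_key] > self.limit:         # rate limit check
            raise RuntimeError("rate limit exceeded")
        key = (provider, prompt)
        if key not in self.cache:                    # cache miss: route to adapter
            self.cache[key] = ADAPTERS[provider](prompt)
        return self.cache[key]
```

A real gateway replaces the dicts with persistent stores and streams responses instead of returning strings, but the request path is the same.
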
Chat UI

Answers with citations and source links.

Product

CrustData Customer Support Agent

AI-powered support agent for an API product (Next.js + FastAPI).

Interactive docs browsing, real-time chat, Supabase auth, and admin tooling.

Interactive docs browsing · Real-time chat · Supabase auth

Calendar widget

Agenda view with quick actions and cleaner layout.

Open Source

plasma-applet-eventcalendar

Maintained fork of a KDE Plasma Event Calendar widget focused on daily usability.

Google Calendar sync fixes, quality-of-life improvements, and ongoing Plasma 6 support.

Google Calendar sync fixes · Agenda improvements · Quality-of-life UX tweaks

Pipeline diagram

Train -> evaluate -> register -> deploy.

MLOps

Iris MLOps Pipeline

Production-style CI/CD for an ML API with CML, Docker, Kubernetes, and MLflow.

Automated testing, container builds, deployment patterns, and model lifecycle thinking.

CML-driven CI/CD · Dockerized builds · Kubernetes deployment
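
The train -> evaluate -> register -> deploy flow reduces to a quality gate in code. This is a stdlib-only sketch of that flow, not the real Iris pipeline (which uses CML, Docker, Kubernetes, and MLflow); the toy model, registry dict, and threshold are all illustrative.

```python
# Toy "model": a threshold set at the smallest positive training example.
def train(data):
    return min(x for x, y in data if y == 1)

def evaluate(model, data):
    correct = sum(1 for x, y in data if (x >= model) == (y == 1))
    return correct / len(data)

REGISTRY = {}  # stand-in for a model registry like MLflow's

def register(model, accuracy, name="iris-classifier"):
    versions = REGISTRY.setdefault(name, [])
    versions.append({"model": model, "accuracy": accuracy})
    return len(versions)  # new model version number

def deploy_if_good(data, min_accuracy=0.8):
    """CI entry point: only models that clear the gate get registered."""
    model = train(data)
    accuracy = evaluate(model, data)
    if accuracy < min_accuracy:
        return None  # gate failed: nothing ships
    return register(model, accuracy)  # in CI this step would also push a container
```

The point of the gate is that a bad model fails the pipeline loudly instead of reaching production.
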

How I work

Principles I use to ship reliable systems without overbuilding the first version.

Proof-first engineering

I optimize for outcomes and evidence: demos, docs, tests, and clear decisions.

Reliability is a feature

Auth, rate limits, retries, and observability are part of the product, not extras.

Design for iteration

Clean interfaces and adapters keep systems flexible as requirements evolve.
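As a concrete (and purely illustrative) example of what that looks like: callers depend on a small interface, and new backends plug in by implementing it. None of these names come from the projects above.

```python
from typing import Protocol

class Provider(Protocol):
    """Anything with complete() can be plugged into the system."""
    def complete(self, prompt: str) -> str: ...

class EchoProvider:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def answer(provider: Provider, prompt: str) -> str:
    # Callers depend only on the interface, so swapping
    # providers never touches this code.
    return provider.complete(prompt)
```
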

About

I work across AI/ML and software engineering, with a focus on systems that ship: APIs, pipelines, deployment, and the product layer users interact with.

I'm especially interested in education tech, healthcare tech, and applied AI where reliability and clarity matter.

Outside of work, I will happily fix the annoying edge case in an open-source tool because it makes someone's daily workflow smoother.

Now

Building reliable AI systems with tight feedback loops.

Location: Bengaluru, Karnataka, India

Focus: Retrieval eval, routing policies, and deployment hygiene.

Personal: KDE + Linux customization, small tools that improve daily flow.

Notes

When I learn something useful, I write it down.

Published · 2024

GenAI notes (IITM)

Notes and experiments from the IITM GenAI track.

Coming soon · 2024

How I think about RAG evaluation

A lightweight evaluation loop for retrieval quality and answer grounding.
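
A loop like this can be as small as two functions: one metric for retrieval, one check for grounding. The sketch below is generic; the function names and the word-overlap grounding heuristic are mine, not from the forthcoming note.

```python
def recall_at_k(retrieved, relevant, k=5):
    """Fraction of relevant docs that appear in the top-k retrieved list."""
    hits = sum(1 for doc in relevant if doc in retrieved[:k])
    return hits / len(relevant)

def grounded(answer, sources):
    """Crude grounding check: every sentence must share a word with a source."""
    source_text = " ".join(sources).lower()
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    return all(
        any(word in source_text for word in sentence.lower().split())
        for sentence in sentences
    )
```

Real grounding checks use entailment models rather than word overlap, but even this crude version catches answers with no support in the retrieved sources.
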

Coming soon · 2024

Rate limiting strategies for multi-provider LLM gateways

Guardrails and policy choices that keep routing stable under real traffic.
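
One common policy for gateways like this is a per-provider token bucket: a steady refill rate with a bounded burst. This is a generic sketch of the technique, not code from any gateway described above.

```python
import time

class TokenBucket:
    """Classic token bucket: tokens refill at `rate` per second, up to `burst`."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate            # tokens added per second
        self.capacity = burst       # maximum bucket size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False                # over limit: caller should back off or reroute
```

In a multi-provider router, one bucket per provider lets a saturated backend return "back off" while traffic shifts to others.
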

Contact

If you're hiring or building something in LLM infrastructure, full-stack AI, or MLOps, I'm happy to talk.

Email: abhyudaya@aloves.codes
Location: Bengaluru, Karnataka, India