LLM-powered search router that chooses direct answers or live web retrieval, returning concise, validated responses with optional source citations via a modular Express + LangChain backend.
Updated Feb 24, 2026 · TypeScript
A collection of runnable examples using LangChain and Groq LLMs, demonstrating different AI workflow patterns like sequence, parallel, branch, passthrough, and lambda operations. Perfect for learning modular AI development with LLMs.
LangChainRunnables is a Python-based project for learning five distinct LangChain workflows (Branch, Lambda, Parallel, Passthrough, and Sequence) using OpenRouter's free API. It demonstrates AI-driven text processing tasks: generating facts, summarizing reports, creating notes and quizzes, responding to sentiment feedback, and crafting jokes.
Examples of LangChain Runnables including Sequence, Parallel, Passthrough, Lambda, and Branch. Demonstrates how to build modular, scalable pipelines for tasks like text processing, summarization, and conditional workflows using reusable components.
Demonstrates LangChain Runnables for building modular AI workflows. It covers sequential execution, parallel processing, conditional branching, lambda-based transformations, and direct passthroughs using Google Gemini and OpenAI models. Each file showcases a different runnable: generating reports, summarization, joke creation, and content writing.