[{"data":1,"prerenderedAt":1622},["ShallowReactive",2],{"navigation":3,"\u002Fposts\u002Fagents_mcp_rag_local_foss-content":42,"\u002Fposts\u002Fagents_mcp_rag_local_foss-surround":1253,"repo-modelcontextprotocol\u002Fpython-sdk-github":1258,"repo-langchain-ai\u002Flanggraph-github":1344,"repo-huggingface\u002Fsmolagents-github":1435,"repo-gradio-app\u002Fgradio-github":1515,"repo-ColinMietka\u002Flocal-assistant-gitlab":1602},[4],{"title":5,"path":6,"stem":7,"children":8,"page":41},"Posts","\u002Fposts","posts",[9,13,17,21,25,29,33,37],{"title":10,"path":11,"stem":12},"Agents, MCP, RAG, Knowledge Graphs, all open source and local","\u002Fposts\u002Fagents_mcp_rag_local_foss","posts\u002Fagents_mcp_rag_local_foss",{"title":14,"path":15,"stem":16},"DeGoogle your phone","\u002Fposts\u002Fdegoogle_your_phone","posts\u002Fdegoogle_your_phone",{"title":18,"path":19,"stem":20},"Homelab Automation","\u002Fposts\u002Fhomelab_automation","posts\u002Fhomelab_automation",{"title":22,"path":23,"stem":24},"How I discovered Static Site Generators","\u002Fposts\u002Fhow_i_discovered_ssg","posts\u002Fhow_i_discovered_ssg",{"title":26,"path":27,"stem":28},"My Linux Journey","\u002Fposts\u002Fmy_switch_to_linux","posts\u002Fmy_switch_to_linux",{"title":30,"path":31,"stem":32},"Own your Data","\u002Fposts\u002Fown_your_data","posts\u002Fown_your_data",{"title":34,"path":35,"stem":36},"Self-host your AI assistant","\u002Fposts\u002Fself_host_your_ai_assistant","posts\u002Fself_host_your_ai_assistant",{"title":38,"path":39,"stem":40},"The move to 
Vue.js","\u002Fposts\u002Fthe_move_to_vuejs","posts\u002Fthe_move_to_vuejs",false,{"id":43,"title":10,"body":44,"date":1237,"description":1238,"extension":1239,"image":1240,"meta":1241,"navigation":340,"path":11,"readingTime":501,"seo":1242,"stem":12,"tags":1243,"__hash__":1252},"content\u002Fposts\u002Fagents_mcp_rag_local_foss.md",{"type":45,"value":46,"toc":1215},"minimark",[47,52,56,61,69,77,80,84,87,100,107,116,120,123,137,140,162,166,169,183,186,189,193,196,203,206,210,219,223,227,236,239,243,246,250,254,257,263,270,273,276,279,282,306,309,317,320,401,404,408,411,414,417,564,567,570,585,589,612,623,647,654,949,952,956,959,965,1007,1014,1017,1055,1062,1152,1159,1162,1168,1171,1175,1184,1187,1194,1198,1207,1211],[48,49,51],"h2",{"id":50},"some-definitions","Some Definitions",[53,54,55],"p",{},"Let's start with some context and definitions about what we are going to use.",[57,58,60],"h3",{"id":59},"agentic-ai","Agentic AI",[53,62,63,64,68],{},"Agentic AI refers to intelligent ",[65,66,67],"strong",{},"Agents"," that can perceive their environment, plan actions, and execute them autonomously to achieve high‑level goals.\nUnlike a simple prompt‑response model, an agent can plan sub-tasks ahead and call the appropriate tools to achieve them.\nDepending on its setup, it can browse the web, analyze data, call APIs, run code, or query databases. 
You can also include a Human-in-the-Loop to control\nthe agent's actions.",[53,70,71],{},[72,73],"img",{"alt":74,"src":75,"title":76},"Basic Agent Workflow","\u002Fposts\u002Fagents_mcp_rag_local_foss\u002Fagent_flow.png","A basic agent workflow.",[53,78,79],{},"Roughly, the LLM acts as the brain of the Agent, planning and making decisions, while the tools are its means of interacting with its environment.",[57,81,83],{"id":82},"local-llm-inference","Local LLM inference",[53,85,86],{},"Local LLM inference means running a large language model directly on your own hardware (CPU, GPU) instead of sending data to the cloud.\nThe main benefits are:",[88,89,90,94,97],"ul",{},[91,92,93],"li",{},"No network round‑trips for every query.",[91,95,96],{},"Data privacy – no sensitive text leaves your premises.",[91,98,99],{},"Cost efficiency – you don't have to pay for each query.",[53,101,102],{},[72,103],{"alt":104,"src":105,"title":106},"Ollama Logo and Name","\u002Fposts\u002Fagents_mcp_rag_local_foss\u002Follama_name.svg","Ollama is a tool to run LLMs locally.",[53,108,109,110,115],{},"I discussed how to set up Ollama in a previous ",[111,112,114],"a",{"href":113},".\u002Fself_host_your_ai_assistant","post",". Obviously, the performance is limited by\nyour hardware, but for experimenting it is a good choice. Other options would be to use free tiers of inference providers, or paid ones.\nHere, HuggingFace shines again, as it provides a simple way to call inference endpoints with the various models they offer.\nYou can also connect to your favorite mainstream service like OpenAI or Anthropic. All of it is explained in the tutorials from the HuggingFace courses.",[57,117,119],{"id":118},"model-context-protocol-mcp","Model Context Protocol (MCP)",[53,121,122],{},"The Model Context Protocol (MCP) is a set of guidelines that standardize how external data (e.g., APIs, databases, files) is exposed\nto an LLM as context. 
Think of it as a contract that defines:",[88,124,125,128,131,134],{},[91,126,127],{},"Context schema – JSON schema for the data structure.",[91,129,130],{},"Metadata – Provenance, freshness, and privacy tags.",[91,132,133],{},"Access patterns – How to retrieve, cache, and stream the data.",[91,135,136],{},"Security controls – Token scopes and rate limits.",[53,138,139],{},"MCP enables plug‑and‑play context for agents: the same agent can query a weather API, a CSV file, or a Neo4j graph, all through a uniform interface.\nIt acts as an accelerator for the development of agents, as any set of tools can be implemented independently of the AI models used to power them.",[53,141,142,143,147,148,151,152,155,156],{},"To explain how it works in simple terms, let's say that the tools are exposed by an MCP ",[144,145,146],"em",{},"server",", your application or agent is called a ",[144,149,150],{},"host",", and this host\nimplements an MCP ",[144,153,154],{},"client"," as one of its functionalities. You can find a more in-depth explanation in this\n",[111,157,161],{"href":158,"rel":159},"https:\u002F\u002Fhuggingface.co\u002Flearn\u002Fmcp-course\u002Funit1\u002Fkey-concepts",[160],"nofollow","course",[57,163,165],{"id":164},"retrieval-augmented-generation-rag","Retrieval Augmented Generation (RAG)",[53,167,168],{},"Retrieval Augmented Generation is a technique that augments the LLM’s answer with external knowledge fetched in real time. 
The typical pipeline is:",[88,170,171,174,177,180],{},[91,172,173],{},"Query formulation – The agent generates a search query or a prompt.",[91,175,176],{},"Retrieval – The system looks up the most relevant passages from a vector store, knowledge graph, or web search.",[91,178,179],{},"Fusion – The retrieved snippets are concatenated with the prompt.",[91,181,182],{},"Generation – The LLM produces the final answer.",[53,184,185],{},"RAG is especially useful when dealing with highly specialized domains or rapidly changing data that the trained LLM would not have seen.",[53,187,188],{},"A quick word about Agentic RAG here. In the traditional RAG implementation, the retrieval tool is called first, and its answer is then passed\nas additional context to the LLM along with the initial user query. In agentic RAG, the workflow is more flexible because the agent will decide\nwhether to use the retriever tool. Another big advantage is the query reformulation that the agent performs. In traditional RAG,\nthe initial user query, often a complete question, is generally passed as-is to the retriever, which is not optimal. In agentic RAG,\nthe agent LLM reformulates the query before passing it to the retriever, improving the results.",[57,190,192],{"id":191},"vector-stores","Vector Stores",[53,194,195],{},"Vector Stores are a foundation of RAG implementations. 
Their role is to turn raw knowledge into searchable embeddings,\nenabling RAG systems to use specific information without having to load entire documents into memory.",[53,197,198],{},[72,199],{"alt":200,"src":201,"title":202},"Basic RAG Workflow with a Vector Store","\u002Fposts\u002Fagents_mcp_rag_local_foss\u002FRAG.png","A basic RAG workflow using embeddings and vector stores.",[53,204,205],{},"In general, the input documents are split into smaller chunks, turning a monolithic document into a collection of searchable,\nsemantically coherent units that fit within the LLM’s context window and can be ranked accurately.",[57,207,209],{"id":208},"knowledge-graphs","Knowledge Graphs",[53,211,212,213,218],{},"A knowledge graph is a graph‑structured representation of data with nodes and edges representing entities and their relationships.\nStored in a graph database like ",[111,214,217],{"href":215,"rel":216},"https:\u002F\u002Fneo4j.com\u002F",[160],"Neo4J",", a knowledge graph supports efficient traversal, semantic search, and query languages,\nenabling agents to answer complex questions, discover hidden connections, and supply structured evidence.",[48,220,222],{"id":221},"where-to-start-where-to-learn","Where to start, where to learn?",[57,224,226],{"id":225},"huggingface-learn","HuggingFace Learn",[53,228,229,230,235],{},"When looking for good tutorials to begin with, I came upon the HuggingFace hub ",[111,231,234],{"href":232,"rel":233},"https:\u002F\u002Fhuggingface.co\u002Flearn",[160],"Learn"," section.\nThere you can find complete lessons and tutorials on LLM usage and training,\nAgentic frameworks like SmolAgent and Langchain, and even an MCP course built just a few months ago, as of this writing.\nI found these courses complete and well-written, with hands-on exercises that you can run either using their inference\noptions (beware the costs) or locally with your Ollama instance.",[53,237,238],{},"That being said, you can use whatever material you 
prefer. I often find YouTube videos to be of great use because they showcase\nsimple use cases that you can duplicate.",[57,240,242],{"id":241},"diy-is-always-a-good-choice","DIY is always a good choice",[53,244,245],{},"Nothing compares to practice when trying to learn and understand how things work. You could spend hours reading course materials\nand watching videos online, but implementing things yourself, even simple versions designed for toy use cases, will give you a much better understanding.\nI decided to try building my own agent using Langchain and Ollama.",[48,247,249],{"id":248},"my-experimental-agent-step-by-step","My Experimental Agent step by step",[57,251,253],{"id":252},"agentic-framework","Agentic Framework",[53,255,256],{},"Fortunately, we don't have to reinvent the wheel. There are multiple open source AI Agent frameworks available, all with their pros and cons.\nI tested out two of them: smolagents from HuggingFace, and Langchain\u002FLanggraph.",[258,259],"repo",{":show-thumbnail":260,"platform":261,"repo":262},"true","github","huggingface\u002Fsmolagents",[53,264,265,266,269],{},"The first one focuses mainly on ",[65,267,268],{},"Code Agents",", which are designed to generate and execute code as they advance\nthrough their answering steps. This makes them really powerful, but also more unpredictable.",[258,271],{":show-thumbnail":260,"platform":261,"repo":272},"langchain-ai\u002Flanggraph",[53,274,275],{},"On the other side of the spectrum, there is Langgraph. Its graph-based approach allows for more control over the agent workflow,\nbut it requires more setup and understanding. 
It is also an established framework among professionals, its control features\nbeing most relevant in business and enterprise use cases.",[53,277,278],{},"I decided to go with Langgraph as I thought I would get more value out of my training, but ultimately I used both, as you will see later.\nAnyway, if, like me, you're trying to experiment, I recommend trying both!",[53,280,281],{},"In Langgraph, you define a state, nodes, and edges to build the graph of your agent. The state is the content of your application that is\npassed through the graph nodes via the edges. For my use case, the state is the complete history of messages built by the different nodes. It consists\nof a series of Human, AI and Tool messages.",[283,284,289],"pre",{"className":285,"code":286,"language":287,"meta":288,"style":288},"language-python shiki shiki-themes material-theme-lighter material-theme material-theme-palenight","class AgentState(TypedDict):\n    messages: Annotated[list[AnyMessage], add_messages]\n","python","",[290,291,292,300],"code",{"__ignoreMap":288},[293,294,297],"span",{"class":295,"line":296},"line",1,[293,298,299],{},"class AgentState(TypedDict):\n",[293,301,303],{"class":295,"line":302},2,[293,304,305],{},"    messages: Annotated[list[AnyMessage], add_messages]\n",[53,307,308],{},"For the first implementation, I'm using two nodes:",[88,310,311,314],{},[91,312,313],{},"Assistant node - The LLM brain of the agent, it is defined as the start of the flow.",[91,315,316],{},"Tool node - The toolbox containing all the tools and their metadata.",[53,318,319],{},"The only specific edge is the link between assistant and tool nodes. 
It is a conditional relation activated by the LLM.",[283,321,323],{"className":285,"code":322,"language":287,"meta":288,"style":288},"builder = StateGraph(AgentState)\nmemory = InMemorySaver()\n\nbuilder.add_node(\"assistant\", assistant)\nbuilder.add_node(\"tools\", ToolNode(tools))\n\nbuilder.add_edge(START, \"assistant\")\nbuilder.add_conditional_edges(\n    \"assistant\",\n    tools_condition,\n)\nbuilder.add_edge(\"tools\", \"assistant\")\nagent = builder.compile(checkpointer=memory)\n",[290,324,325,330,335,342,348,354,359,365,371,377,383,389,395],{"__ignoreMap":288},[293,326,327],{"class":295,"line":296},[293,328,329],{},"builder = StateGraph(AgentState)\n",[293,331,332],{"class":295,"line":302},[293,333,334],{},"memory = InMemorySaver()\n",[293,336,338],{"class":295,"line":337},3,[293,339,341],{"emptyLinePlaceholder":340},true,"\n",[293,343,345],{"class":295,"line":344},4,[293,346,347],{},"builder.add_node(\"assistant\", assistant)\n",[293,349,351],{"class":295,"line":350},5,[293,352,353],{},"builder.add_node(\"tools\", ToolNode(tools))\n",[293,355,357],{"class":295,"line":356},6,[293,358,341],{"emptyLinePlaceholder":340},[293,360,362],{"class":295,"line":361},7,[293,363,364],{},"builder.add_edge(START, \"assistant\")\n",[293,366,368],{"class":295,"line":367},8,[293,369,370],{},"builder.add_conditional_edges(\n",[293,372,374],{"class":295,"line":373},9,[293,375,376],{},"    \"assistant\",\n",[293,378,380],{"class":295,"line":379},10,[293,381,382],{},"    tools_condition,\n",[293,384,386],{"class":295,"line":385},11,[293,387,388],{},")\n",[293,390,392],{"class":295,"line":391},12,[293,393,394],{},"builder.add_edge(\"tools\", \"assistant\")\n",[293,396,398],{"class":295,"line":397},13,[293,399,400],{},"agent = builder.compile(checkpointer=memory)\n",[53,402,403],{},"In this sample code, I also added a memory checkpointer to allow the LLM to remember interactions, like you would expect in\na chat application. 
It also allows you to follow up on queries with additional information.",[57,405,407],{"id":406},"mcp-tools-integration","MCP Tools integration",[53,409,410],{},"Since its release in late 2024, MCP has been at the center of the AI world. I figured that while I was experimenting with LLM tools\nand agents, I'd take a look at MCP servers and clients along the way. Obviously, for the little experiment I'm building, I didn't need\nany of this more complex setup.",[258,412],{":show-thumbnail":260,"platform":261,"repo":413},"modelcontextprotocol\u002Fpython-sdk",[53,415,416],{},"Tools are the agent's means of interaction. For a tool to be efficient, it needs precise documentation and typed inputs.\nIn the Python version of the MCP server I used, all this is done through typing and docstrings. Apart from that, the MCP implementation\nis quite simple. Look at this math MCP server:",[283,418,420],{"className":285,"code":419,"language":287,"meta":288,"style":288},"from mcp.server.fastmcp import FastMCP\n\nmcp = FastMCP(\"Math\")\n\n\n@mcp.tool()\ndef multiply(a: float, b: float) -> float:\n    \"\"\"\n    Multiplies two numbers.\n    Args:\n        a (float): the first number\n        b (float): the second number\n    \"\"\"\n    return a * b\n\n\n@mcp.tool()\ndef add(a: float, b: float) -> float:\n    \"\"\"\n    Adds two numbers.\n    Args:\n        a (float): the first number\n        b (float): the second number\n    \"\"\"\n    return a + b\n\nif __name__ == \"__main__\":\n    mcp.run(transport=\"stdio\")\n",[290,421,422,427,431,436,440,444,449,454,459,464,469,474,479,483,489,494,499,504,510,515,521,526,531,536,541,547,552,558],{"__ignoreMap":288},[293,423,424],{"class":295,"line":296},[293,425,426],{},"from mcp.server.fastmcp import FastMCP\n",[293,428,429],{"class":295,"line":302},[293,430,341],{"emptyLinePlaceholder":340},[293,432,433],{"class":295,"line":337},[293,434,435],{},"mcp = 
FastMCP(\"Math\")\n",[293,437,438],{"class":295,"line":344},[293,439,341],{"emptyLinePlaceholder":340},[293,441,442],{"class":295,"line":350},[293,443,341],{"emptyLinePlaceholder":340},[293,445,446],{"class":295,"line":356},[293,447,448],{},"@mcp.tool()\n",[293,450,451],{"class":295,"line":361},[293,452,453],{},"def multiply(a: float, b: float) -> float:\n",[293,455,456],{"class":295,"line":367},[293,457,458],{},"    \"\"\"\n",[293,460,461],{"class":295,"line":373},[293,462,463],{},"    Multiplies two numbers.\n",[293,465,466],{"class":295,"line":379},[293,467,468],{},"    Args:\n",[293,470,471],{"class":295,"line":385},[293,472,473],{},"        a (float): the first number\n",[293,475,476],{"class":295,"line":391},[293,477,478],{},"        b (float): the second number\n",[293,480,481],{"class":295,"line":397},[293,482,458],{},[293,484,486],{"class":295,"line":485},14,[293,487,488],{},"    return a * b\n",[293,490,492],{"class":295,"line":491},15,[293,493,341],{"emptyLinePlaceholder":340},[293,495,497],{"class":295,"line":496},16,[293,498,341],{"emptyLinePlaceholder":340},[293,500,502],{"class":295,"line":501},17,[293,503,448],{},[293,505,507],{"class":295,"line":506},18,[293,508,509],{},"def add(a: float, b: float) -> float:\n",[293,511,513],{"class":295,"line":512},19,[293,514,458],{},[293,516,518],{"class":295,"line":517},20,[293,519,520],{},"    Adds two numbers.\n",[293,522,524],{"class":295,"line":523},21,[293,525,468],{},[293,527,529],{"class":295,"line":528},22,[293,530,473],{},[293,532,534],{"class":295,"line":533},23,[293,535,478],{},[293,537,539],{"class":295,"line":538},24,[293,540,458],{},[293,542,544],{"class":295,"line":543},25,[293,545,546],{},"    return a + b\n",[293,548,550],{"class":295,"line":549},26,[293,551,341],{"emptyLinePlaceholder":340},[293,553,555],{"class":295,"line":554},27,[293,556,557],{},"if __name__ == \"__main__\":\n",[293,559,561],{"class":295,"line":560},28,[293,562,563],{},"    
mcp.run(transport=\"stdio\")\n",[53,565,566],{},"You can define as many MCP servers as you need: I created a math one, one for fetching the weather, one for searching the web, ...\nOne of the advantages of this MCP setup is the re-usability of these servers. You implement a tool once, and then you can use it in\nas many agents or applications as you want.",[53,568,569],{},"You can also use publicly available MCP servers from the web, like the ones from GitHub or Google, to interact with their services.\nIt is a formidable way to quickly build well-integrated toolboxes, but the downside is the potential transit of data to remote servers.",[53,571,572,573,576,577,580,581,584],{},"To ",[144,574,575],{},"connect"," your agent's LLM node to the different MCP servers, I used the provided ",[290,578,579],{},"MultiServerMCPClient"," from ",[290,582,583],{},"langchain_mcp_adapters"," and the bind_tools\nmethod.",[57,586,588],{"id":587},"adding-data-analysis-capabilities-with-code-agent","Adding Data Analysis Capabilities with Code Agent",[53,590,591,592,595,596,599,600,605,606,611],{},"Until now, everything we did was based on simple tools with no ",[144,593,594],{},"real"," added value. Adding the ability to browse the web with ",[290,597,598],{},"DuckDuckGoSearchRun"," from langchain\nis certainly useful to overcome the lack of up-to-date data in LLMs, but you can have this now in any GUI interface like\n",[111,601,604],{"href":602,"rel":603},"https:\u002F\u002Fgithub.com\u002Fn4ze3m\u002Fpage-assist",[160],"PageAssist"," or ",[111,607,610],{"href":608,"rel":609},"https:\u002F\u002Fopenwebui.com\u002F",[160],"OpenWebUI",". What can I add to my agent that could be really useful to me?",[53,613,614,615,618,619,622],{},"I'm a Data Scientist, and as such, most of my time is spent ",[144,616,617],{},"analyzing"," data. 
It means exploring, computing statistics and metrics, and visualizing them.\nIt is a repetitive task, but not so simple to automate, because the exploration will differ for each use case. What if I could ask an agent to ",[144,620,621],{},"explore"," a dataset?\nIt would mean loading, say, a CSV file, computing some statistics, drawing charts, and reflecting on the outputs.",[53,624,625,626,628,629,634,635,640,641,646],{},"Now, I said earlier that the ",[144,627,234],{}," section from HuggingFace was an excellent starting point for the journey. It's even better, it's a gold mine!\nThe ",[111,630,633],{"href":631,"rel":632},"https:\u002F\u002Fhuggingface.co\u002Flearn\u002Fcookbook\u002Findex",[160],"Open-Source AI Cookbook"," contains a lot of recipes that can be adapted to your needs.\nIt also provides a lot of inspiration about the possible applications of LLMs and Agents. I found implementations of an\n",[111,636,639],{"href":637,"rel":638},"https:\u002F\u002Fhuggingface.co\u002Flearn\u002Fcookbook\u002Fagent_data_analyst",[160],"Analytics Assistant"," and a ",[111,642,645],{"href":643,"rel":644},"https:\u002F\u002Fhuggingface.co\u002Flearn\u002Fcookbook\u002Frag_with_knowledge_graphs_neo4j",[160],"Knowledge Graph RAG",".",[53,648,649,650,653],{},"I decided to follow the implementation of the Data Analysis Agent using a smolagents ",[65,651,652],{},"CodeAgent"," and expose its analysis capabilities as a tool to my manager agent.\nI am not sure that this architecture is a recommended one, but it will do for the time being. Code Agents, as I mentioned, have the ability to\nexecute code snippets while progressing through their tasks. 
I think of it as an agent that can build its own tools, at least simple ones.\nLet's have a look at my version:",[283,655,657],{"className":285,"code":656,"language":287,"meta":288,"style":288},"from mcp.server.fastmcp import FastMCP\nfrom smolagents import CodeAgent, LiteLLMModel\n\nmcp = FastMCP(\"Data Analysis\")\n\nmodel = LiteLLMModel(\n    model_id=\"ollama_chat\u002Fyour_model_name\",\n    api_base=OLLAMA_HOST,\n    num_ctx=8192,\n)\n\nagent = CodeAgent(\n    tools=[],\n    model=model,\n    additional_authorized_imports=[\n        \"numpy\", \"pandas\", \"matplotlib.pyplot\", \"seaborn\"],\n)\n\n@mcp.tool()\ndef run_analysis(additional_notes: str, source_file: str) -> str:\n    \"\"\"Analyses the content of a given csv file.\n    Args:\n        additional_notes (str): notes to guide the analysis\n        source_file (str): path to local source file\n\n    \"\"\"\n    prompt = f\"\"\"You are an expert data analyst.\n        Please load the source file and analyze its content.\n        \n        The first analysis to perform is a generic content exploration, \n        with simple statistics, null values, outliers, and types \n        of each column.\n        \n        Secondly, according to the variables you have, list 3 \n        interesting questions that could be asked on this data, \n        for instance about specific correlations.\n        Then answer these questions one by one, by finding the \n        relevant numbers. Meanwhile, plot some figures using \n        matplotlib\u002Fseaborn and save them to the (already existing) \n        folder '.\u002Ffigures\u002F': take care to clear each figure \n        with plt.clf() before doing another plot.\n        \n        In your final answer: summarize the initial analysis and \n        these correlations and trends. After each number derive \n        real-world insights. 
Your final answer should have at \n        least 3 numbered and detailed parts.\n        \n        - Here are additional notes and query to guide \n          your analysis: {additional_notes}.\n        - Here is the file path: {source_file}.\n        \"\"\"\n\n    return agent.run(prompt)\n\nif __name__ == \"__main__\":\n    mcp.run(transport=\"stdio\")\n",[290,658,659,663,668,672,677,681,686,691,696,701,705,709,714,719,724,729,734,738,742,746,751,756,760,765,770,774,778,783,788,794,800,806,812,817,823,829,835,841,847,853,859,865,870,876,882,888,894,899,905,911,917,923,928,934,939,944],{"__ignoreMap":288},[293,660,661],{"class":295,"line":296},[293,662,426],{},[293,664,665],{"class":295,"line":302},[293,666,667],{},"from smolagents import CodeAgent, LiteLLMModel\n",[293,669,670],{"class":295,"line":337},[293,671,341],{"emptyLinePlaceholder":340},[293,673,674],{"class":295,"line":344},[293,675,676],{},"mcp = FastMCP(\"Data Analysis\")\n",[293,678,679],{"class":295,"line":350},[293,680,341],{"emptyLinePlaceholder":340},[293,682,683],{"class":295,"line":356},[293,684,685],{},"model = LiteLLMModel(\n",[293,687,688],{"class":295,"line":361},[293,689,690],{},"    model_id=\"ollama_chat\u002Fyour_model_name\",\n",[293,692,693],{"class":295,"line":367},[293,694,695],{},"    api_base=OLLAMA_HOST,\n",[293,697,698],{"class":295,"line":373},[293,699,700],{},"    num_ctx=8192,\n",[293,702,703],{"class":295,"line":379},[293,704,388],{},[293,706,707],{"class":295,"line":385},[293,708,341],{"emptyLinePlaceholder":340},[293,710,711],{"class":295,"line":391},[293,712,713],{},"agent = CodeAgent(\n",[293,715,716],{"class":295,"line":397},[293,717,718],{},"    tools=[],\n",[293,720,721],{"class":295,"line":485},[293,722,723],{},"    model=model,\n",[293,725,726],{"class":295,"line":491},[293,727,728],{},"    additional_authorized_imports=[\n",[293,730,731],{"class":295,"line":496},[293,732,733],{},"        \"numpy\", \"pandas\", \"matplotlib.pyplot\", 
\"seaborn\"],\n",[293,735,736],{"class":295,"line":501},[293,737,388],{},[293,739,740],{"class":295,"line":506},[293,741,341],{"emptyLinePlaceholder":340},[293,743,744],{"class":295,"line":512},[293,745,448],{},[293,747,748],{"class":295,"line":517},[293,749,750],{},"def run_analysis(additional_notes: str, source_file: str) -> str:\n",[293,752,753],{"class":295,"line":523},[293,754,755],{},"    \"\"\"Analyses the content of a given csv file.\n",[293,757,758],{"class":295,"line":528},[293,759,468],{},[293,761,762],{"class":295,"line":533},[293,763,764],{},"        additional_notes (str): notes to guide the analysis\n",[293,766,767],{"class":295,"line":538},[293,768,769],{},"        source_file (str): path to local source file\n",[293,771,772],{"class":295,"line":543},[293,773,341],{"emptyLinePlaceholder":340},[293,775,776],{"class":295,"line":549},[293,777,458],{},[293,779,780],{"class":295,"line":554},[293,781,782],{},"    prompt = f\"\"\"You are an expert data analyst.\n",[293,784,785],{"class":295,"line":560},[293,786,787],{},"        Please load the source file and analyze its content.\n",[293,789,791],{"class":295,"line":790},29,[293,792,793],{},"        \n",[293,795,797],{"class":295,"line":796},30,[293,798,799],{},"        The first analysis to perform is a generic content exploration, \n",[293,801,803],{"class":295,"line":802},31,[293,804,805],{},"        with simple statistics, null values, outliers, and types \n",[293,807,809],{"class":295,"line":808},32,[293,810,811],{},"        of each column.\n",[293,813,815],{"class":295,"line":814},33,[293,816,793],{},[293,818,820],{"class":295,"line":819},34,[293,821,822],{},"        Secondly, according to the variables you have, list 3 \n",[293,824,826],{"class":295,"line":825},35,[293,827,828],{},"        interesting questions that could be asked on this data, \n",[293,830,832],{"class":295,"line":831},36,[293,833,834],{},"        for instance about specific 
correlations.\n",[293,836,838],{"class":295,"line":837},37,[293,839,840],{},"        Then answer these questions one by one, by finding the \n",[293,842,844],{"class":295,"line":843},38,[293,845,846],{},"        relevant numbers. Meanwhile, plot some figures using \n",[293,848,850],{"class":295,"line":849},39,[293,851,852],{},"        matplotlib\u002Fseaborn and save them to the (already existing) \n",[293,854,856],{"class":295,"line":855},40,[293,857,858],{},"        folder '.\u002Ffigures\u002F': take care to clear each figure \n",[293,860,862],{"class":295,"line":861},41,[293,863,864],{},"        with plt.clf() before doing another plot.\n",[293,866,868],{"class":295,"line":867},42,[293,869,793],{},[293,871,873],{"class":295,"line":872},43,[293,874,875],{},"        In your final answer: summarize the initial analysis and \n",[293,877,879],{"class":295,"line":878},44,[293,880,881],{},"        these correlations and trends. After each number derive \n",[293,883,885],{"class":295,"line":884},45,[293,886,887],{},"        real-world insights. 
Your final answer should have at \n",[293,889,891],{"class":295,"line":890},46,[293,892,893],{},"        least 3 numbered and detailed parts.\n",[293,895,897],{"class":295,"line":896},47,[293,898,793],{},[293,900,902],{"class":295,"line":901},48,[293,903,904],{},"        - Here are additional notes and query to guide \n",[293,906,908],{"class":295,"line":907},49,[293,909,910],{},"          your analysis: {additional_notes}.\n",[293,912,914],{"class":295,"line":913},50,[293,915,916],{},"        - Here is the file path: {source_file}.\n",[293,918,920],{"class":295,"line":919},51,[293,921,922],{},"        \"\"\"\n",[293,924,926],{"class":295,"line":925},52,[293,927,341],{"emptyLinePlaceholder":340},[293,929,931],{"class":295,"line":930},53,[293,932,933],{},"    return agent.run(prompt)\n",[293,935,937],{"class":295,"line":936},54,[293,938,341],{"emptyLinePlaceholder":340},[293,940,942],{"class":295,"line":941},55,[293,943,557],{},[293,945,947],{"class":295,"line":946},56,[293,948,563],{},[53,950,951],{},"As you can see, the tool is just a prompt where you ask the agent to analyze the given CSV file! The first few tests I made\nwith this version are already quite impressive! It created visualizations and metrics, and reflected on them to extract insights in its final answer.\nThis first version could be improved with dedicated tools, like plotting and predefined metrics, to increase control over what the agent\nis going to achieve when we call it.",[57,953,955],{"id":954},"using-knowledge-graphs-to-enhance-rag","Using Knowledge Graphs to enhance RAG",[53,957,958],{},"I gave Vector Stores and traditional RAG a shot a few months back, when I first heard about the technique. 
The idea is to improve\nthe quality of an LLM's answers using specific data, for example data that is more recent or dedicated to a certain domain.\nThe typical example for me is a coding assistant capable of searching through the documentation of a specific language or package.\nAnother technique to achieve this would be finetuning, but in the case of LLMs the constraints are quite hard, making RAG a good alternative.",[53,960,961,962,964],{},"More recently, ",[65,963,209],{}," (KG) have been introduced as a way to improve LLM answers in semantic searches by\nadding contextual understanding of the data. They also give a way to better explain the reasoning made by the LLM.",[53,966,967,968,972,973,976,977,982,983,986,987,990,991,994,995,990,998,1001,1002,646],{},"The ",[111,969,971],{"href":643,"rel":970},[160],"recipe"," from HuggingFace uses ",[111,974,217],{"href":215,"rel":975},[160]," as\nthe graph database. I am using the Docker version of Neo4J to host my sample database, but there is a free plan for hosting on ",[111,978,981],{"href":979,"rel":980},"https:\u002F\u002Fneo4j.com\u002Fproduct\u002Fauradb\u002F",[160],"Neo4J AuraDB",".\nI'm using the proposed dataset as a base for the sake of the experiment:\na graph containing ",[144,984,985],{},"Articles",", ",[144,988,989],{},"Authors"," and ",[144,992,993],{},"Topics"," nodes, with edges building the relations between them: ",[144,996,997],{},"published by, ",[144,999,1000],{},"in topic",".\nIt is representative of a research AI assistant, with, for example, a database derived from ",[111,1003,1006],{"href":1004,"rel":1005},"https:\u002F\u002Farxiv.org\u002F",[160],"Arxiv",[53,1008,1009],{},[72,1010],{"alt":1011,"src":1012,"title":1013},"Neo4J Logo and Name","\u002Fposts\u002Fagents_mcp_rag_local_foss\u002Fneo4j-ar21.svg","Neo4J is one of the many options to build a Graph Database.",[53,1015,1016],{},"First, load the Neo4J 
graph:",[283,1018,1020],{"className":285,"code":1019,"language":287,"meta":288,"style":288},"from langchain_community.graphs import Neo4jGraph\n\ngraph = Neo4jGraph(\n    url=os.environ[\"NEO4J_URI\"],\n    username=os.environ[\"NEO4J_USERNAME\"],\n    password=os.environ[\"NEO4J_PASSWORD\"],\n)\n",[290,1021,1022,1027,1031,1036,1041,1046,1051],{"__ignoreMap":288},[293,1023,1024],{"class":295,"line":296},[293,1025,1026],{},"from langchain_community.graphs import Neo4jGraph\n",[293,1028,1029],{"class":295,"line":302},[293,1030,341],{"emptyLinePlaceholder":340},[293,1032,1033],{"class":295,"line":337},[293,1034,1035],{},"graph = Neo4jGraph(\n",[293,1037,1038],{"class":295,"line":344},[293,1039,1040],{},"    url=os.environ[\"NEO4J_URI\"],\n",[293,1042,1043],{"class":295,"line":350},[293,1044,1045],{},"    username=os.environ[\"NEO4J_USERNAME\"],\n",[293,1047,1048],{"class":295,"line":356},[293,1049,1050],{},"    password=os.environ[\"NEO4J_PASSWORD\"],\n",[293,1052,1053],{"class":295,"line":361},[293,1054,388],{},[53,1056,1057,1058,1061],{},"For a graph database, LangChain provides a ",[290,1059,1060],{},"GraphCypherQAChain"," that allows us to query the database using natural language.\nAs with the Data Analytics Assistant, the queries are handled by a dedicated agent, here from LangGraph, with its own set of tools and instructions.",[283,1063,1065],{"className":285,"code":1064,"language":287,"meta":288,"style":288},"cypher_chain = GraphCypherQAChain.from_llm(\n    cypher_llm=ChatOllama(model = \"a_local_model\", temperature=0.),\n    qa_llm=ChatOllama(model = \"a_local_model\", temperature=0.),\n    graph=graph,\n    verbose=True,\n    allow_dangerous_requests=True, # should add control in real world\n)\n\ndef graph_retriever(query: str) -> str:\n    return cypher_chain.invoke({\"query\": query})\n\ngraph_retriever_tool = Tool(\n    name=\"graph_retriever_tool\",\n    func=graph_retriever,\n    description=\"\"\"Retrieves detailed\u0020
information about \n    articles, authors and topics from graph database.\n    \"\"\"\n)\n",[290,1066,1067,1072,1077,1082,1087,1092,1097,1101,1105,1110,1115,1119,1124,1129,1134,1139,1144,1148],{"__ignoreMap":288},[293,1068,1069],{"class":295,"line":296},[293,1070,1071],{},"cypher_chain = GraphCypherQAChain.from_llm(\n",[293,1073,1074],{"class":295,"line":302},[293,1075,1076],{},"    cypher_llm=ChatOllama(model = \"a_local_model\", temperature=0.),\n",[293,1078,1079],{"class":295,"line":337},[293,1080,1081],{},"    qa_llm=ChatOllama(model = \"a_local_model\", temperature=0.),\n",[293,1083,1084],{"class":295,"line":344},[293,1085,1086],{},"    graph=graph,\n",[293,1088,1089],{"class":295,"line":350},[293,1090,1091],{},"    verbose=True,\n",[293,1093,1094],{"class":295,"line":356},[293,1095,1096],{},"    allow_dangerous_requests=True, # should add control in real world\n",[293,1098,1099],{"class":295,"line":361},[293,1100,388],{},[293,1102,1103],{"class":295,"line":367},[293,1104,341],{"emptyLinePlaceholder":340},[293,1106,1107],{"class":295,"line":373},[293,1108,1109],{},"def graph_retriever(query: str) -> str:\n",[293,1111,1112],{"class":295,"line":379},[293,1113,1114],{},"    return cypher_chain.invoke({\"query\": query})\n",[293,1116,1117],{"class":295,"line":385},[293,1118,341],{"emptyLinePlaceholder":340},[293,1120,1121],{"class":295,"line":391},[293,1122,1123],{},"graph_retriever_tool = Tool(\n",[293,1125,1126],{"class":295,"line":397},[293,1127,1128],{},"    name=\"graph_retriever_tool\",\n",[293,1130,1131],{"class":295,"line":485},[293,1132,1133],{},"    func=graph_retriever,\n",[293,1135,1136],{"class":295,"line":491},[293,1137,1138],{},"    description=\"\"\"Retrieves detailed information about \n",[293,1140,1141],{"class":295,"line":496},[293,1142,1143],{},"    articles, authors and topics from graph 
database.\n",[293,1145,1146],{"class":295,"line":501},[293,1147,458],{},[293,1149,1150],{"class":295,"line":506},[293,1151,388],{},[53,1153,1154,1155,1158],{},"I decided to bind this tool to a dedicated agent and build a multi-agent system, mostly for experimentation purposes.\nBut the ",[290,1156,1157],{},"graph_retriever_tool"," can be used as a standalone tool for the manager agent, or even exposed through MCP as I did\nfor the data analytics.",[53,1160,1161],{},"I performed the tests suggested in the recipe. They are requests that force the system to build complex Cypher queries to traverse the graph,\nsuch as:",[1163,1164,1165],"blockquote",{},[53,1166,1167],{},"Are there any pair of researchers who have published more than three articles together?",[53,1169,1170],{},"and it found the right answers! The system was able to generate a complex query and produce a coherent final response.",[57,1172,1174],{"id":1173},"gradio-for-chat-interaction","Gradio for chat interaction",[53,1176,1177,1178,1183],{},"The only missing piece of the puzzle is a way to interact with the system. That's where ",[111,1179,1182],{"href":1180,"rel":1181},"https:\u002F\u002Fwww.gradio.app\u002F",[160],"Gradio"," comes into play.",[258,1185],{":show-thumbnail":260,"platform":261,"repo":1186},"gradio-app\u002Fgradio",[53,1188,1189,1190,1193],{},"It is an open-source Python package that lets you quickly build a demo or web application for AI models.\u0020
I used the built-in\n",[290,1191,1192],{},"ChatInterface"," to create a simple chat webpage, hosted locally, to interact with the agent.",[48,1195,1197],{"id":1196},"conclusion","Conclusion",[53,1199,1200,1201,1206],{},"You can find the complete source code for this example on my ",[111,1202,1205],{"href":1203,"rel":1204},"https:\u002F\u002Fgitlab.com\u002FColinMietka\u002F",[160],"GitLab",".\nKeep in mind that everything I presented here is evolving rapidly, is subject to change, and can certainly be improved!\nIf you have any questions or suggestions, feel free to reach out!",[258,1208],{":show-thumbnail":260,"platform":1209,"repo":1210},"gitlab","ColinMietka\u002Flocal-assistant",[1212,1213,1214],"style",{},"html .light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html.light .shiki span {color: var(--shiki-light);background: var(--shiki-light-bg);font-style: var(--shiki-light-font-style);font-weight: var(--shiki-light-font-weight);text-decoration: var(--shiki-light-text-decoration);}html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight: var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}html.dark .shiki span {color: var(--shiki-dark);background: var(--shiki-dark-bg);font-style: var(--shiki-dark-font-style);font-weight:\u0020
var(--shiki-dark-font-weight);text-decoration: var(--shiki-dark-text-decoration);}",{"title":288,"searchDepth":302,"depth":302,"links":1216},[1217,1225,1229,1236],{"id":50,"depth":302,"text":51,"children":1218},[1219,1220,1221,1222,1223,1224],{"id":59,"depth":337,"text":60},{"id":82,"depth":337,"text":83},{"id":118,"depth":337,"text":119},{"id":164,"depth":337,"text":165},{"id":191,"depth":337,"text":192},{"id":208,"depth":337,"text":209},{"id":221,"depth":302,"text":222,"children":1226},[1227,1228],{"id":225,"depth":337,"text":226},{"id":241,"depth":337,"text":242},{"id":248,"depth":302,"text":249,"children":1230},[1231,1232,1233,1234,1235],{"id":252,"depth":337,"text":253},{"id":406,"depth":337,"text":407},{"id":587,"depth":337,"text":588},{"id":954,"depth":337,"text":955},{"id":1173,"depth":337,"text":1174},{"id":1196,"depth":302,"text":1197},"2025-09-15","Agentic AI and MCP are the new thing in 2025. I figured it's time to try them out and share the results. Witness the rise of my AI Agent, with Web Search, Data Analysis and Knowledge Graph-enhanced RAG.","md","\u002Fposts\u002Fagents_mcp_rag_local_foss\u002Ffeatured.svg",{},{"title":10,"description":1238},[1244,1245,1246,1247,1248,1249,1250,1251],"AI","LLM","RAG","MCP","Agent","Knowledge Graph","Embeddings","Code","Q0AbdUDOnaKNuIqngyTyx9eubENU1rONlgvdZj2HwEI",[1254,1256],{"title":34,"path":35,"stem":36,"description":1255,"children":-1},"AI assistants are now commonly used for various tasks, powered by advanced machine learning models. While cloud-based services are the go-to for most people, concerns over data privacy have led me to explore self-hosting as a cost-effective alternative.",{"title":18,"path":19,"stem":20,"description":1257,"children":-1},"It seems everyone building a homelab goes through this phase.\u0020
This is my take on homelab automation, extensively using gitlab CI\u002FCD powers and Renovate bot.",{"id":1259,"node_id":1260,"name":1261,"full_name":413,"private":41,"owner":1262,"html_url":1280,"description":1281,"fork":41,"url":1282,"forks_url":1283,"keys_url":1284,"collaborators_url":1285,"teams_url":1286,"hooks_url":1287,"issue_events_url":1288,"events_url":1289,"assignees_url":1290,"branches_url":1291,"tags_url":1292,"blobs_url":1293,"git_tags_url":1294,"git_refs_url":1295,"trees_url":1296,"statuses_url":1297,"languages_url":1298,"stargazers_url":1299,"contributors_url":1300,"subscribers_url":1301,"subscription_url":1302,"commits_url":1303,"git_commits_url":1304,"comments_url":1305,"issue_comment_url":1306,"contents_url":1307,"compare_url":1308,"merges_url":1309,"archive_url":1310,"downloads_url":1311,"issues_url":1312,"pulls_url":1313,"milestones_url":1314,"notifications_url":1315,"labels_url":1316,"releases_url":1317,"deployments_url":1318,"created_at":1319,"updated_at":1320,"pushed_at":1321,"git_url":1322,"ssh_url":1323,"clone_url":1324,"svn_url":1280,"homepage":1325,"size":1326,"stargazers_count":1327,"watchers_count":1327,"language":1328,"has_issues":340,"has_projects":340,"has_downloads":340,"has_wiki":41,"has_pages":340,"has_discussions":41,"forks_count":1329,"mirror_url":1330,"archived":41,"disabled":41,"open_issues_count":1331,"license":1332,"allow_forking":340,"is_template":41,"web_commit_signoff_required":41,"has_pull_requests":340,"pull_request_creation_policy":1338,"topics":1339,"visibility":1279,"forks":1329,"open_issues":1331,"watchers":1327,"default_branch":1340,"temp_clone_token":1330,"custom_properties":1341,"organization":1342,"network_count":1329,"subscribers_count":1343},862584018,"R_kgDOM2n80g","python-sdk",{"login":1263,"id":1264,"node_id":1265,"avatar_url":1266,"gravatar_id":288,"url":1267,"html_url":1268,"followers_url":1269,"following_url":1270,"gists_url":1271,"starred_url":1272,"subscriptions_url":1273,"organizations_url":1274,"rep
os_url":1275,"events_url":1276,"received_events_url":1277,"type":1278,"user_view_type":1279,"site_admin":41},"modelcontextprotocol",182288589,"O_kgDOCt2AzQ","https:\u002F\u002Favatars.githubusercontent.com\u002Fu\u002F182288589?v=4","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol","https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Ffollowers","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Ffollowing{\u002Fother_user}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Fgists{\u002Fgist_id}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Fstarred{\u002Fowner}{\u002Frepo}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Fsubscriptions","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Forgs","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Frepos","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Fevents{\u002Fprivacy}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fmodelcontextprotocol\u002Freceived_events","Organization","public","https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fpython-sdk","The official Python SDK for Model Context Protocol servers and 
clients","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fforks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fkeys{\u002Fkey_id}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcollaborators{\u002Fcollaborator}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fteams","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fhooks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fissues\u002Fevents{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fevents","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fassignees{\u002Fuser}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fbranches{\u002Fbranch}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Ftags","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fgit\u002Fblobs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fgit\u002Ftags{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fgit\u002Frefs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fgit\u002Ftrees{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fstatuses\u002F{sha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Flanguages","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fstargazers","https:\u002F\u002
Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcontributors","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fsubscribers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fsubscription","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fgit\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fissues\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcontents\u002F{+path}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fcompare\u002F{base}...{head}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fmerges","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002F{archive_format}{\u002Fref}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fdownloads","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fissues{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fpulls{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fmilestones{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fnotifications{?since,all,participating}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Flabels{\u002Fname}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u0
02Fpython-sdk\u002Freleases{\u002Fid}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fmodelcontextprotocol\u002Fpython-sdk\u002Fdeployments","2024-09-24T21:01:35Z","2026-04-28T14:01:10Z","2026-04-25T22:14:31Z","git:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fpython-sdk.git","git@github.com:modelcontextprotocol\u002Fpython-sdk.git","https:\u002F\u002Fgithub.com\u002Fmodelcontextprotocol\u002Fpython-sdk.git","https:\u002F\u002Fmodelcontextprotocol.github.io\u002Fpython-sdk\u002F",6652,22805,"Python",3357,null,455,{"key":1333,"name":1334,"spdx_id":1335,"url":1336,"node_id":1337},"mit","MIT License","MIT","https:\u002F\u002Fapi.github.com\u002Flicenses\u002Fmit","MDc6TGljZW5zZTEz","all",[],"main",{"repo_protection_L3":260},{"login":1263,"id":1264,"node_id":1265,"avatar_url":1266,"gravatar_id":288,"url":1267,"html_url":1268,"followers_url":1269,"following_url":1270,"gists_url":1271,"starred_url":1272,"subscriptions_url":1273,"organizations_url":1274,"repos_url":1275,"events_url":1276,"received_events_url":1277,"type":1278,"user_view_type":1279,"site_admin":41},169,{"id":1345,"node_id":1346,"name":1347,"full_name":272,"private":41,"owner":1348,"html_url":1364,"description":1365,"fork":41,"url":1366,"forks_url":1367,"keys_url":1368,"collaborators_url":1369,"teams_url":1370,"hooks_url":1371,"issue_events_url":1372,"events_url":1373,"assignees_url":1374,"branches_url":1375,"tags_url":1376,"blobs_url":1377,"git_tags_url":1378,"git_refs_url":1379,"trees_url":1380,"statuses_url":1381,"languages_url":1382,"stargazers_url":1383,"contributors_url":1384,"subscribers_url":1385,"subscription_url":1386,"commits_url":1387,"git_commits_url":1388,"comments_url":1389,"issue_comment_url":1390,"contents_url":1391,"compare_url":1392,"merges_url":1393,"archive_url":1394,"downloads_url":1395,"issues_url":1396,"pulls_url":1397,"milestones_url":1398,"notifications_url":1399,"labels_url":1400,"releases_url":1401,"deployments_url":1402,"created_at":1403,"updated_at":1404,"pushed_at":1
405,"git_url":1406,"ssh_url":1407,"clone_url":1408,"svn_url":1364,"homepage":1409,"size":1410,"stargazers_count":1411,"watchers_count":1411,"language":1328,"has_issues":340,"has_projects":340,"has_downloads":340,"has_wiki":41,"has_pages":340,"has_discussions":41,"forks_count":1412,"mirror_url":1330,"archived":41,"disabled":41,"open_issues_count":1413,"license":1414,"allow_forking":340,"is_template":41,"web_commit_signoff_required":41,"has_pull_requests":340,"pull_request_creation_policy":1338,"topics":1415,"visibility":1279,"forks":1412,"open_issues":1413,"watchers":1411,"default_branch":1340,"temp_clone_token":1330,"custom_properties":1432,"organization":1433,"network_count":1412,"subscribers_count":1434},676672661,"R_kgDOKFU0lQ","langgraph",{"login":1349,"id":1350,"node_id":1351,"avatar_url":1352,"gravatar_id":288,"url":1353,"html_url":1354,"followers_url":1355,"following_url":1356,"gists_url":1357,"starred_url":1358,"subscriptions_url":1359,"organizations_url":1360,"repos_url":1361,"events_url":1362,"received_events_url":1363,"type":1278,"user_view_type":1279,"site_admin":41},"langchain-ai",126733545,"O_kgDOB43M6Q","https:\u002F\u002Favatars.githubusercontent.com\u002Fu\u002F126733545?v=4","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai","https:\u002F\u002Fgithub.com\u002Flangchain-ai","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Ffollowers","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Ffollowing{\u002Fother_user}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Fgists{\u002Fgist_id}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Fstarred{\u002Fowner}{\u002Frepo}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Fsubscriptions","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Forgs","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Frepos","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Fevents{
\u002Fprivacy}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Flangchain-ai\u002Freceived_events","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph","Build resilient language agents as graphs.","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fforks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fkeys{\u002Fkey_id}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcollaborators{\u002Fcollaborator}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fteams","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fhooks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fissues\u002Fevents{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fevents","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fassignees{\u002Fuser}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fbranches{\u002Fbranch}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Ftags","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fgit\u002Fblobs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fgit\u002Ftags{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fgit\u002Frefs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fgit\u002Ftrees{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fstatuses\u002F{sha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Flanguages","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggr
aph\u002Fstargazers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcontributors","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fsubscribers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fsubscription","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fgit\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fissues\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcontents\u002F{+path}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fcompare\u002F{base}...{head}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fmerges","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002F{archive_format}{\u002Fref}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fdownloads","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fissues{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fpulls{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fmilestones{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fnotifications{?since,all,participating}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Flabels{\u002Fname}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Freleases{\u002Fid}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Flangchain-ai\u002Flanggraph\u002Fd
eployments","2023-08-09T18:33:12Z","2026-04-28T14:43:31Z","2026-04-28T14:41:49Z","git:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph.git","git@github.com:langchain-ai\u002Flanggraph.git","https:\u002F\u002Fgithub.com\u002Flangchain-ai\u002Flanggraph.git","https:\u002F\u002Fdocs.langchain.com\u002Foss\u002Fpython\u002Flanggraph\u002F",517719,30661,5239,511,{"key":1333,"name":1334,"spdx_id":1335,"url":1336,"node_id":1337},[1416,1417,1418,1419,1420,1421,1422,1423,1424,1425,1347,1426,1427,1428,1429,1430,287,1431],"agents","ai","ai-agents","chatgpt","deepagents","enterprise","framework","gemini","generative-ai","langchain","llm","multiagent","open-source","openai","pydantic","rag",{},{"login":1349,"id":1350,"node_id":1351,"avatar_url":1352,"gravatar_id":288,"url":1353,"html_url":1354,"followers_url":1355,"following_url":1356,"gists_url":1357,"starred_url":1358,"subscriptions_url":1359,"organizations_url":1360,"repos_url":1361,"events_url":1362,"received_events_url":1363,"type":1278,"user_view_type":1279,"site_admin":41},152,{"id":1436,"node_id":1437,"name":1438,"full_name":262,"private":41,"owner":1439,"html_url":1455,"description":1456,"fork":41,"url":1457,"forks_url":1458,"keys_url":1459,"collaborators_url":1460,"teams_url":1461,"hooks_url":1462,"issue_events_url":1463,"events_url":1464,"assignees_url":1465,"branches_url":1466,"tags_url":1467,"blobs_url":1468,"git_tags_url":1469,"git_refs_url":1470,"trees_url":1471,"statuses_url":1472,"languages_url":1473,"stargazers_url":1474,"contributors_url":1475,"subscribers_url":1476,"subscription_url":1477,"commits_url":1478,"git_commits_url":1479,"comments_url":1480,"issue_comment_url":1481,"contents_url":1482,"compare_url":1483,"merges_url":1484,"archive_url":1485,"downloads_url":1486,"issues_url":1487,"pulls_url":1488,"milestones_url":1489,"notifications_url":1490,"labels_url":1491,"releases_url":1492,"deployments_url":1493,"created_at":1494,"updated_at":1495,"pushed_at":1496,"git_url":1497,"ssh_url":1498,"clone_url"
:1499,"svn_url":1455,"homepage":1500,"size":1501,"stargazers_count":1502,"watchers_count":1502,"language":1328,"has_issues":340,"has_projects":340,"has_downloads":340,"has_wiki":340,"has_pages":41,"has_discussions":340,"forks_count":1503,"mirror_url":1330,"archived":41,"disabled":41,"open_issues_count":1504,"license":1505,"allow_forking":340,"is_template":41,"web_commit_signoff_required":41,"has_pull_requests":340,"pull_request_creation_policy":1338,"topics":1511,"visibility":1279,"forks":1503,"open_issues":1504,"watchers":1502,"default_branch":1340,"temp_clone_token":1330,"custom_properties":1512,"organization":1513,"network_count":1503,"subscribers_count":1514},898968194,"R_kgDONZUqgg","smolagents",{"login":1440,"id":1441,"node_id":1442,"avatar_url":1443,"gravatar_id":288,"url":1444,"html_url":1445,"followers_url":1446,"following_url":1447,"gists_url":1448,"starred_url":1449,"subscriptions_url":1450,"organizations_url":1451,"repos_url":1452,"events_url":1453,"received_events_url":1454,"type":1278,"user_view_type":1279,"site_admin":41},"huggingface",25720743,"MDEyOk9yZ2FuaXphdGlvbjI1NzIwNzQz","https:\u002F\u002Favatars.githubusercontent.com\u002Fu\u002F25720743?v=4","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface","https:\u002F\u002Fgithub.com\u002Fhuggingface","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Ffollowers","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Ffollowing{\u002Fother_user}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Fgists{\u002Fgist_id}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Fstarred{\u002Fowner}{\u002Frepo}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Fsubscriptions","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Forgs","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Frepos","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fhuggingface\u002Fevents{\u002Fprivacy}","https:\u002F\u002Fa
pi.github.com\u002Fusers\u002Fhuggingface\u002Freceived_events","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fsmolagents","🤗 smolagents: a barebones library for agents that think in code.","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fforks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fkeys{\u002Fkey_id}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcollaborators{\u002Fcollaborator}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fteams","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fhooks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fissues\u002Fevents{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fevents","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fassignees{\u002Fuser}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fbranches{\u002Fbranch}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Ftags","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fgit\u002Fblobs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fgit\u002Ftags{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fgit\u002Frefs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fgit\u002Ftrees{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fstatuses\u002F{sha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Flanguages","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fstarga
zers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcontributors","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fsubscribers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fsubscription","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fgit\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fissues\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcontents\u002F{+path}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fcompare\u002F{base}...{head}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fmerges","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002F{archive_format}{\u002Fref}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fdownloads","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fissues{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fpulls{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fmilestones{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fnotifications{?since,all,participating}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Flabels{\u002Fname}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Freleases{\u002Fid}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fhuggingface\u002Fsmolagents\u002Fdeployments","20
24-12-05T11:28:04Z","2026-04-28T14:43:33Z","2026-04-24T12:40:38Z","git:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fsmolagents.git","git@github.com:huggingface\u002Fsmolagents.git","https:\u002F\u002Fgithub.com\u002Fhuggingface\u002Fsmolagents.git","https:\u002F\u002Fhuggingface.co\u002Fdocs\u002Fsmolagents",7287,26953,2530,517,{"key":1506,"name":1507,"spdx_id":1508,"url":1509,"node_id":1510},"apache-2.0","Apache License 2.0","Apache-2.0","https:\u002F\u002Fapi.github.com\u002Flicenses\u002Fapache-2.0","MDc6TGljZW5zZTI=",[],{},{"login":1440,"id":1441,"node_id":1442,"avatar_url":1443,"gravatar_id":288,"url":1444,"html_url":1445,"followers_url":1446,"following_url":1447,"gists_url":1448,"starred_url":1449,"subscriptions_url":1450,"organizations_url":1451,"repos_url":1452,"events_url":1453,"received_events_url":1454,"type":1278,"user_view_type":1279,"site_admin":41},131,{"id":1516,"node_id":1517,"name":1518,"full_name":1186,"private":41,"owner":1519,"html_url":1535,"description":1536,"fork":41,"url":1537,"forks_url":1538,"keys_url":1539,"collaborators_url":1540,"teams_url":1541,"hooks_url":1542,"issue_events_url":1543,"events_url":1544,"assignees_url":1545,"branches_url":1546,"tags_url":1547,"blobs_url":1548,"git_tags_url":1549,"git_refs_url":1550,"trees_url":1551,"statuses_url":1552,"languages_url":1553,"stargazers_url":1554,"contributors_url":1555,"subscribers_url":1556,"subscription_url":1557,"commits_url":1558,"git_commits_url":1559,"comments_url":1560,"issue_comment_url":1561,"contents_url":1562,"compare_url":1563,"merges_url":1564,"archive_url":1565,"downloads_url":1566,"issues_url":1567,"pulls_url":1568,"milestones_url":1569,"notifications_url":1570,"labels_url":1571,"releases_url":1572,"deployments_url":1573,"created_at":1574,"updated_at":1575,"pushed_at":1576,"git_url":1577,"ssh_url":1578,"clone_url":1579,"svn_url":1535,"homepage":1580,"size":1581,"stargazers_count":1582,"watchers_count":1582,"language":1328,"has_issues":340,"has_projects":340,"has_downloads":
340,"has_wiki":340,"has_pages":41,"has_discussions":41,"forks_count":1583,"mirror_url":1330,"archived":41,"disabled":41,"open_issues_count":1584,"license":1585,"allow_forking":340,"is_template":41,"web_commit_signoff_required":41,"has_pull_requests":340,"pull_request_creation_policy":1338,"topics":1586,"visibility":1279,"forks":1583,"open_issues":1584,"watchers":1582,"default_branch":1340,"temp_clone_token":1330,"custom_properties":1599,"organization":1600,"network_count":1583,"subscribers_count":1601},162405963,"MDEwOlJlcG9zaXRvcnkxNjI0MDU5NjM=","gradio",{"login":1520,"id":1521,"node_id":1522,"avatar_url":1523,"gravatar_id":288,"url":1524,"html_url":1525,"followers_url":1526,"following_url":1527,"gists_url":1528,"starred_url":1529,"subscriptions_url":1530,"organizations_url":1531,"repos_url":1532,"events_url":1533,"received_events_url":1534,"type":1278,"user_view_type":1279,"site_admin":41},"gradio-app",51063788,"MDEyOk9yZ2FuaXphdGlvbjUxMDYzNzg4","https:\u002F\u002Favatars.githubusercontent.com\u002Fu\u002F51063788?v=4","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app","https:\u002F\u002Fgithub.com\u002Fgradio-app","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Ffollowers","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Ffollowing{\u002Fother_user}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Fgists{\u002Fgist_id}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Fstarred{\u002Fowner}{\u002Frepo}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Fsubscriptions","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Forgs","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Frepos","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Fevents{\u002Fprivacy}","https:\u002F\u002Fapi.github.com\u002Fusers\u002Fgradio-app\u002Freceived_events","https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio","Build and share delightful machine 
learning apps, all in Python. 🌟 Star to support our work!","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fforks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fkeys{\u002Fkey_id}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcollaborators{\u002Fcollaborator}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fteams","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fhooks","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fissues\u002Fevents{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fevents","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fassignees{\u002Fuser}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fbranches{\u002Fbranch}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Ftags","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fgit\u002Fblobs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fgit\u002Ftags{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fgit\u002Frefs{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fgit\u002Ftrees{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fstatuses\u002F{sha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Flanguages","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fstargazers","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcontributors","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fsubscribers","https:\u002F\u002Fapi.github.com\u00
2Frepos\u002Fgradio-app\u002Fgradio\u002Fsubscription","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fgit\u002Fcommits{\u002Fsha}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fissues\u002Fcomments{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcontents\u002F{+path}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fcompare\u002F{base}...{head}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fmerges","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002F{archive_format}{\u002Fref}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fdownloads","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fissues{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fpulls{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fmilestones{\u002Fnumber}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fnotifications{?since,all,participating}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Flabels{\u002Fname}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Freleases{\u002Fid}","https:\u002F\u002Fapi.github.com\u002Frepos\u002Fgradio-app\u002Fgradio\u002Fdeployments","2018-12-19T08:24:04Z","2026-04-28T11:38:59Z","2026-04-28T14:20:45Z","git:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio.git","git@github.com:gradio-app\u002Fgradio.git","https:\u002F\u002Fgithub.com\u002Fgradio-app\u002Fgradio.git","http:\u002F\u002Fwww.gradio.app",319866,42456,3423,465,{"key":1506,"name":1507,"spdx_i
d":1508,"url":1509,"node_id":1510},[1587,1588,1589,1590,1591,1518,1592,1593,1594,1595,287,1596,1597,1598],"data-analysis","data-science","data-visualization","deep-learning","deploy","gradio-interface","interface","machine-learning","models","python-notebook","ui","ui-components",{},{"login":1520,"id":1521,"node_id":1522,"avatar_url":1523,"gravatar_id":288,"url":1524,"html_url":1525,"followers_url":1526,"following_url":1527,"gists_url":1528,"starred_url":1529,"subscriptions_url":1530,"organizations_url":1531,"repos_url":1532,"events_url":1533,"received_events_url":1534,"type":1278,"user_view_type":1279,"site_admin":41},194,{"id":1603,"description":288,"name":1604,"name_with_namespace":1605,"path":1604,"path_with_namespace":1210,"created_at":1606,"default_branch":1340,"tag_list":1607,"topics":1608,"ssh_url_to_repo":1609,"http_url_to_repo":1610,"web_url":1611,"readme_url":1612,"forks_count":1613,"avatar_url":1330,"star_count":1613,"last_activity_at":1614,"visibility":1279,"namespace":1615},69426000,"local-assistant","Colin Mietka \u002F local-assistant","2025-04-30T05:09:08.658Z",[],[],"git@gitlab.com:ColinMietka\u002Flocal-assistant.git","https:\u002F\u002Fgitlab.com\u002FColinMietka\u002Flocal-assistant.git","https:\u002F\u002Fgitlab.com\u002FColinMietka\u002Flocal-assistant","https:\u002F\u002Fgitlab.com\u002FColinMietka\u002Flocal-assistant\u002F-\u002Fblob\u002Fmain\u002FREADME.md",0,"2025-09-17T15:07:47.407Z",{"id":1616,"name":1617,"path":1618,"kind":1619,"full_path":1618,"parent_id":1330,"avatar_url":1620,"web_url":1621},68571072,"Colin Mietka","ColinMietka","user","\u002Fuploads\u002F-\u002Fsystem\u002Fuser\u002Favatar\u002F14794192\u002Favatar.png","https:\u002F\u002Fgitlab.com\u002FColinMietka",1777387548788]