Tuesday, June 3, 2025

A Coding Implementation of an Intelligent AI Assistant with Jina Search, LangChain, and Gemini for Real-Time Information Retrieval

In this tutorial, we demonstrate how to build an intelligent AI assistant by integrating LangChain, Gemini 2.0 Flash, and the Jina Search tool. By combining the capabilities of a powerful large language model (LLM) with an external search API, we create an assistant that can provide up-to-date information with citations. This step-by-step tutorial walks through setting up API keys, installing the necessary libraries, binding tools to the Gemini model, and building a custom LangChain chain that dynamically calls external tools when the model requires fresh or specific information. By the end of this tutorial, we will have a fully functional, interactive AI assistant that can respond to user queries with accurate, current, and well-sourced answers.

%pip install --quiet -U "langchain-community>=0.2.16" langchain langchain-google-genai

We install the required Python packages for this project: the LangChain framework for building AI applications, the LangChain Community tools (version 0.2.16 or higher), and LangChain's integration with Google Gemini models. These packages enable seamless use of Gemini models and external tools within LangChain pipelines.

import getpass
import os
import json
from typing import Dict, Any

We bring essential modules into the project. getpass allows securely entering API keys without displaying them on screen, while os helps manage environment variables and file paths. json is used for handling JSON data structures, and typing provides type hints for variables, such as dictionaries and function arguments, improving code readability and maintainability.

if not os.environ.get("JINA_API_KEY"):
    os.environ["JINA_API_KEY"] = getpass.getpass("Enter your Jina API key: ")


if not os.environ.get("GOOGLE_API_KEY"):
    os.environ["GOOGLE_API_KEY"] = getpass.getpass("Enter your Google/Gemini API key: ")

We ensure that the necessary API keys for Jina and Google Gemini are set as environment variables. If the keys are not already defined in the environment, the script prompts the user to enter them securely using the getpass module, keeping the keys hidden from view for security purposes. This approach enables seamless access to these services without hardcoding sensitive information in the code.

from langchain_community.tools import JinaSearch
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnableConfig, chain
from langchain_core.messages import HumanMessage, AIMessage, ToolMessage


print("🔧 Setting up tools and model...")

We import key modules and classes from the LangChain ecosystem: the JinaSearch tool for web search, the ChatGoogleGenerativeAI model for accessing Google's Gemini, and essential classes from LangChain Core, including ChatPromptTemplate, RunnableConfig, and the message types (HumanMessage, AIMessage, and ToolMessage). Together, these components enable the integration of external tools with Gemini for dynamic, AI-driven information retrieval. The print statement confirms that setup has begun.

search_tool = JinaSearch()
print(f"✅ Jina Search tool initialized: {search_tool.name}")


print("\n🔍 Testing Jina Search directly:")
direct_search_result = search_tool.invoke({"query": "what is langgraph"})
print(f"Direct search result preview: {direct_search_result[:200]}...")

We initialize the Jina Search tool by creating an instance of JinaSearch() and confirming it is ready for use. The tool is designed to handle web search queries within the LangChain ecosystem. The script then runs a direct test query, "what is langgraph", using the invoke method, and prints a preview of the search result. This step verifies that the search tool is functioning correctly before integrating it into a larger AI assistant workflow.
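Under the hood, the tool wraps Jina's hosted search endpoint, which takes the query in the URL path and authenticates with a Bearer token read from JINA_API_KEY. A minimal sketch of how such a request could be assembled by hand (the s.jina.ai path-based endpoint shape is an assumption based on Jina's public docs; the helper name is ours):

```python
import os
import urllib.parse

def build_jina_search_request(query: str):
    """Build the URL and headers for a direct Jina search call.
    The path-based s.jina.ai endpoint is an assumption, not LangChain internals."""
    url = "https://s.jina.ai/" + urllib.parse.quote(query)
    headers = {"Authorization": f"Bearer {os.environ.get('JINA_API_KEY', '')}"}
    return url, headers

url, headers = build_jina_search_request("what is langgraph")
print(url)
```

This is only to make the moving parts visible; in the tutorial the JinaSearch tool handles the request for us.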

gemini_model = ChatGoogleGenerativeAI(
    model="gemini-2.0-flash",
    temperature=0.1,
    convert_system_message_to_human=True
)
print("✅ Gemini model initialized")

We initialize the Gemini 2.0 Flash model using the ChatGoogleGenerativeAI class from LangChain. The model is set with a low temperature (0.1) for more deterministic responses, and the convert_system_message_to_human=True parameter ensures system-level prompts are properly handled as human-readable messages for Gemini's API. The final print statement confirms that the Gemini model is ready for use.

detailed_prompt = ChatPromptTemplate.from_messages([
    ("system", """You are an intelligent assistant with access to web search capabilities.
    When users ask questions, you can use the Jina search tool to find current information.

    Instructions:
    1. If the question requires recent or specific information, use the search tool
    2. Provide comprehensive answers based on the search results
    3. Always cite your sources when using search results
    4. Be helpful and informative in your responses"""),
    ("human", "{user_input}"),
    ("placeholder", "{messages}"),
])

We define a prompt template using ChatPromptTemplate.from_messages() that guides the AI's behavior. It includes a system message outlining the assistant's role, a human message placeholder for user queries, and a placeholder for tool messages generated during tool calls. This structured prompt ensures the AI provides helpful, informative, and well-sourced responses while seamlessly integrating search results into the conversation.
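The placeholder slot is what makes the two-pass tool flow possible: when the chain is later reinvoked with a messages list, those messages are spliced into the prompt after the human turn. In plain Python, the assembly the template performs amounts to something like the sketch below (illustrative names only, not LangChain internals):

```python
SYSTEM_TEXT = "You are an intelligent assistant with access to web search capabilities."

def render_prompt(user_input, messages=()):
    """Sketch of template expansion: fixed system + human turns, then any
    tool-round messages spliced in where the placeholder sits."""
    return [("system", SYSTEM_TEXT), ("human", user_input), *messages]

# First pass: no tool messages yet
first = render_prompt("what is langgraph")
# Second pass: the AI's tool request and the tool's result fill the slot
second = render_prompt("what is langgraph",
                       [("ai", "<tool call>"), ("tool", "<search results>")])
print(len(first), len(second))
```

An empty `messages` list simply disappears, which is why the same template serves both the initial call and the follow-up call with search results.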

gemini_with_tools = gemini_model.bind_tools([search_tool])
print("✅ Tools bound to Gemini model")


main_chain = detailed_prompt | gemini_with_tools


def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability"""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."

We bind the Jina Search tool to the Gemini model using bind_tools(), enabling the model to invoke the search tool when needed. The main_chain combines the structured prompt template and the tool-enhanced Gemini model, creating a seamless workflow for handling user inputs and dynamic tool calls. Additionally, the format_tool_result function formats search results for a clear and readable display, ensuring users can easily understand the outputs of search queries.
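The formatting helper can be exercised without any model in the loop. LangChain represents each requested tool call as a plain dictionary with "name", "args", and "id" keys, so a self-contained check looks like this (the helper is redefined here so the snippet runs on its own; all values are illustrative):

```python
from typing import Dict, Any

def format_tool_result(tool_call: Dict[str, Any], tool_result: str) -> str:
    """Format tool results for better readability."""
    return f"Search Results for '{tool_call['args']['query']}':\n{tool_result[:800]}..."

# A tool call in LangChain's dict format (sample values, not real output)
sample_call = {"name": "jina_search",
               "args": {"query": "what is langgraph"},
               "id": "call_001"}
sample_result = "LangGraph is a library for building stateful, multi-actor applications with LLMs..."

print(format_tool_result(sample_call, sample_result))
```

The `[:800]` slice keeps long raw search payloads from flooding the console while preserving enough context to read.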

@chain
def enhanced_search_chain(user_input: str, config: RunnableConfig):
    """
    Enhanced chain that handles tool calls and provides detailed responses
    """
    print(f"\n🤖 Processing query: '{user_input}'")

    input_data = {"user_input": user_input}

    print("📤 Sending to Gemini...")
    ai_response = main_chain.invoke(input_data, config=config)

    if ai_response.tool_calls:
        print(f"🛠️  AI requested {len(ai_response.tool_calls)} tool call(s)")

        tool_messages = []
        for i, tool_call in enumerate(ai_response.tool_calls):
            print(f"   🔍 Executing search {i+1}: {tool_call['args']['query']}")

            tool_result = search_tool.invoke(tool_call)

            tool_msg = ToolMessage(
                content=tool_result,
                tool_call_id=tool_call["id"]
            )
            tool_messages.append(tool_msg)

        print("📥 Getting final response with search results...")
        final_input = {
            **input_data,
            "messages": [ai_response] + tool_messages
        }
        final_response = main_chain.invoke(final_input, config=config)

        return final_response
    else:
        print("ℹ️  No tool calls needed")
        return ai_response

We define the enhanced_search_chain using the @chain decorator from LangChain, enabling it to handle user queries with dynamic tool usage. It takes a user input and a configuration object, passes the input through the main chain (which includes the prompt and Gemini with tools), and checks whether the AI requested any tool calls (e.g., a web search via Jina). If tool calls are present, it executes the searches, creates ToolMessage objects, and reinvokes the chain with the tool results for a final, context-enriched response. If no tool calls are made, it returns the AI's response directly.
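This control flow does not depend on any LangChain internals. Stripped to its essentials, the two-pass pattern looks like the sketch below, with stub objects standing in for Gemini and the search tool (all names and strings here are illustrative, not real model output):

```python
class StubResponse:
    """Mimics the two fields the chain inspects: content and tool_calls."""
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

def stub_model(messages):
    # Pass 1: no tool results yet, so request a search.
    # Pass 2: tool results are present, so answer from them.
    has_tool_results = any(role == "tool" for role, _ in messages)
    if not has_tool_results:
        return StubResponse("", tool_calls=[
            {"name": "search", "args": {"query": "langgraph"}, "id": "1"}
        ])
    return StubResponse("LangGraph is a framework for stateful agents. [cited]")

def stub_search(tool_call):
    return f"results for {tool_call['args']['query']}"

def run(user_input):
    messages = [("human", user_input)]
    response = stub_model(messages)
    if response.tool_calls:                      # pass 1 asked for tools
        for call in response.tool_calls:
            messages.append(("tool", stub_search(call)))
        response = stub_model(messages)          # pass 2 answers with results
    return response.content

print(run("what is langgraph"))
```

The real chain follows the same shape: one model call to decide whether search is needed, tool execution, then a second model call grounded in the results.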

def test_search_chain():
    """Test the search chain with various queries"""

    test_queries = [
        "what is langgraph",
        "latest developments in AI for 2024",
        "how does langchain work with different LLMs"
    ]

    print("\n" + "="*60)
    print("🧪 TESTING ENHANCED SEARCH CHAIN")
    print("="*60)

    for i, query in enumerate(test_queries, 1):
        print(f"\n📝 Test {i}: {query}")
        print("-" * 50)

        try:
            response = enhanced_search_chain.invoke(query)
            print(f"✅ Response: {response.content[:300]}...")

            if hasattr(response, 'tool_calls') and response.tool_calls:
                print(f"🛠️  Used {len(response.tool_calls)} tool call(s)")

        except Exception as e:
            print(f"❌ Error: {str(e)}")

        print("-" * 50)

The function test_search_chain() validates the complete AI assistant setup by running a series of test queries through the enhanced_search_chain. It defines a list of varied test prompts, covering tools, AI topics, and LangChain integrations, and prints the results, indicating whether tool calls were used. This helps verify that the AI can effectively trigger web searches, process responses, and return useful information to users, ensuring a robust and interactive system.

if __name__ == "__main__":
    print("\n🚀 Starting enhanced LangChain + Gemini + Jina Search demo...")
    test_search_chain()

    print("\n" + "="*60)
    print("💬 INTERACTIVE MODE - Ask me anything! (type 'quit' to exit)")
    print("="*60)

    while True:
        user_query = input("\n🗣️  Your question: ").strip()
        if user_query.lower() in ['quit', 'exit', 'bye']:
            print("👋 Goodbye!")
            break

        if user_query:
            try:
                response = enhanced_search_chain.invoke(user_query)
                print(f"\n🤖 Response:\n{response.content}")
            except Exception as e:
                print(f"❌ Error: {str(e)}")

Finally, we run the AI assistant as a script when the file is executed directly. It first calls the test_search_chain() function to validate the system with predefined queries, ensuring the setup works correctly. Then it starts an interactive mode, allowing users to type custom questions and receive AI-generated responses enriched with dynamic search results when needed. The loop continues until the user types 'quit', 'exit', or 'bye', providing an intuitive, hands-on way to interact with the AI system.

In conclusion, we have successfully built an enhanced AI assistant that leverages LangChain's modular framework, Gemini 2.0 Flash's generative capabilities, and Jina Search's real-time web search functionality. This hybrid approach demonstrates how AI models can expand their knowledge beyond static data, providing users with timely and relevant information from reliable sources. You can now extend this project further by integrating additional tools, customizing prompts, or deploying the assistant as an API or web app for broader applications. This foundation opens up endless possibilities for building intelligent systems that are both powerful and contextually aware.


Check out the Notebook on GitHub. All credit for this research goes to the researchers of this project. Also, feel free to follow us on Twitter and don't forget to join our 95k+ ML SubReddit and Subscribe to our Newsletter.


Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of an Artificial Intelligence media platform, Marktechpost, which stands out for its in-depth coverage of machine learning and deep learning news that is both technically sound and easily understandable by a wide audience. The platform boasts over 2 million monthly views, illustrating its popularity among audiences.
