Making complex data accessible is a constant challenge. Dashboards are rigid, and query builders are often too complex for non-technical users. What if your team could simply ask your data questions in plain English, just like they would ask a colleague?
This is no longer a futuristic concept. By combining the power of Large Language Models (LLMs) with a robust data retrieval framework, you can build an intelligent, conversational search layer on top of any data source.
This guide will show you how to use Searches.do to build and deploy a powerful Search Agent that leverages an LLM to translate natural language questions into executable database queries, effectively unlocking your data for everyone.
Traditionally, accessing structured data requires a pre-defined path: a dashboard built around fixed metrics, a query builder with rigid filters, or a SQL query written by someone technical.
These methods force users to think like the database. A natural language interface does the opposite: it makes the database think like the user. Instead of painstakingly building a query with filters for "Status: Active", "Region: North America", and a specific date range, a user can simply ask:
"Show me all active users in North America from last quarter."
This is the power of conversational data access, and Searches.do is the perfect platform to build it.
Before diving into the AI integration, let's understand what Searches.do is. It’s an agentic workflow platform that lets you define complex data retrieval logic as simple, reusable APIs. You encapsulate your query logic—no matter how complex—into a "Search Agent."
Here’s a basic example of a Search Agent that finds a user by their email:
import { Search } from 'searches.do';
const findUserByEmail = new Search({
name: 'Find User By Email',
description: 'Retrieves a single user record by their email address.',
parameters: {
email: { type: 'string', required: true }
},
handler: async ({ email }) => {
// Your data retrieval logic goes here
const user = await db.collection('users').findOne({ email });
return user;
}
});
// This is now instantly available as a scalable API endpoint:
// POST /findUserByEmail
// { "email": "jane.doe@example.com" }
The key here is the handler function. It's just JavaScript/TypeScript code, meaning it can connect to any database, call any internal or external API, and, most importantly, integrate with an LLM.
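To make that flexibility concrete, here is a sketch of a Search Agent whose handler wraps an internal HTTP API instead of a database. The Search definition follows the same pattern as the example above, but the orders endpoint and its response shape are hypothetical:

import { Search } from 'searches.do';

const findOrdersByCustomer = new Search({
  name: 'Find Orders By Customer',
  description: 'Retrieves orders for a customer from an internal orders service.',
  parameters: {
    customerId: { type: 'string', required: true }
  },
  handler: async ({ customerId }) => {
    // The handler is ordinary async code, so any HTTP client or SDK works here.
    // This URL is a hypothetical internal service, not part of Searches.do.
    const response = await fetch(`https://orders.internal/api/orders?customerId=${encodeURIComponent(customerId)}`);
    if (!response.ok) {
      throw new Error(`Orders service responded with ${response.status}`);
    }
    return response.json();
  }
});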
Let's build a Search Agent that accepts a natural language query, uses an LLM to convert it into a MongoDB query, and then executes it against our database.
First, we'll define the structure of our new agent. Unlike the previous example, this one will take a single string parameter: the user's question.
import { Search } from 'searches.do';
import { LlmProvider } from './llm-service'; // Your LLM SDK
import { db } from './database'; // Your database connection
const naturalLanguageUserSearch = new Search({
name: 'Natural Language User Search',
description: 'Finds users based on a natural language query.',
parameters: {
question: { type: 'string', required: true }
},
handler: async ({ question }) => {
// LLM and database logic will go here
}
});
The real magic is in the prompt. We need to instruct the LLM on how to behave. A good prompt for this task includes three things: the schema of the data being queried, any context the model needs to resolve relative terms (such as the current date), and strict instructions on the output format.
Here’s an example of what that prompt might look like:
const getPrompt = (question: string) => {
const schema = `
Collection: 'users'
Fields:
- _id: ObjectId
- name: string
- email: string
- region: ['North America', 'Europe', 'Asia', 'South America', 'Africa']
- status: ['active', 'inactive', 'pending']
- createdAt: Date
`;
return `
You are an expert MongoDB query generator. Your task is to convert a natural language question into a valid MongoDB find query object.
Use the following schema for context:
${schema}
The current date is ${new Date().toISOString()}.
User Question: "${question}"
Return ONLY the BSON/JSON query object. Do not include any explanations or surrounding text.
`;
};
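For the question from earlier, "Show me all active users in North America from last quarter," the model can only resolve "last quarter" because the prompt supplies the current date. Its response might look something like this (illustrative output; the exact dates depend on when the prompt runs):

{
  "status": "active",
  "region": "North America",
  "createdAt": {
    "$gte": "2025-01-01T00:00:00.000Z",
    "$lt": "2025-04-01T00:00:00.000Z"
  }
}

One caveat worth noting: after JSON.parse, those date values are plain strings. Before running the query against a Date field, you would typically revive them into Date objects (or have the LLM emit MongoDB Extended JSON and parse it with a BSON-aware parser).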
Now, we'll implement the handler to tie everything together.
// ... inside the Searches.do handler ...
handler: async ({ question }) => {
// 1. Construct the prompt
const prompt = getPrompt(question);
// 2. Call the LLM to get the structured query
const llmResponse = await LlmProvider.generate(prompt);
let mongoQuery;
try {
// 3. Parse the LLM's response (expecting a JSON string)
mongoQuery = JSON.parse(llmResponse);
// Validate/sanitize the generated query before executing it in a real
// application (see the validation sketch below).
} catch (error) {
throw new Error('Failed to parse LLM response into a valid query.');
}
// 4. Execute the query against the database
console.log(`Executing generated query:`, mongoQuery);
const users = await db.collection('users').find(mongoQuery).toArray();
// 5. Return the results
return users;
}
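A note on that validation comment: an LLM-generated query should never reach your database unchecked, since the model can hallucinate field names or emit operators you never intended. Below is a minimal validation sketch; the allow-lists are assumptions based on the users schema shown earlier, not a built-in Searches.do feature.

// A minimal validation sketch: allow-list fields and operators so a
// hallucinated or malicious query object is rejected before execution.
// The field and operator lists are assumptions based on the schema above.
const ALLOWED_FIELDS = new Set(['_id', 'name', 'email', 'region', 'status', 'createdAt']);
const ALLOWED_OPERATORS = new Set(['$eq', '$ne', '$in', '$gte', '$gt', '$lte', '$lt', '$and', '$or']);

function validateQuery(query: Record<string, unknown>): void {
  for (const [key, value] of Object.entries(query)) {
    if (key.startsWith('$')) {
      if (!ALLOWED_OPERATORS.has(key)) {
        throw new Error(`Disallowed operator in generated query: ${key}`);
      }
    } else if (!ALLOWED_FIELDS.has(key)) {
      throw new Error(`Unknown field in generated query: ${key}`);
    }
    // Recurse into nested operator objects and $and/$or clause arrays.
    if (Array.isArray(value)) {
      value.forEach((clause) => validateQuery(clause as Record<string, unknown>));
    } else if (value !== null && typeof value === 'object') {
      validateQuery(value as Record<string, unknown>);
    }
  }
}

Calling validateQuery(mongoQuery) right after JSON.parse, before the query executes, turns a prompt-injection or hallucination problem into a clean, loggable error.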
With Searches.do, you simply save this file, and it’s instantly deployed as a scalable, secure API endpoint. Now, anyone can query your user data conversationally:
curl -X POST https://api.searches.do/your-instance/naturalLanguageUserSearch \
-H "Content-Type: application/json" \
-d '{ "question": "show me all active users in Europe" }'
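For that question, the LLM would typically generate a simple query object such as { "status": "active", "region": "Europe" }, which the handler then executes and returns as an array of matching user records.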
Building this kind of agentic search demonstrates the unique strengths of the Searches.do platform: complex retrieval logic encapsulated as simple, reusable APIs; handlers that are just code, free to call any database, internal API, or LLM; and instant deployment of every agent as a scalable, secure endpoint.
By combining the agentic framework of Searches.do with the intelligence of LLMs, you can move beyond rigid interfaces and create truly dynamic, user-friendly data experiences.
Ready to unlock your data? Visit Searches.do to build your first intelligent Search Agent today.
What is Searches.do?
Searches.do is an agentic workflow platform that lets you define complex data retrieval logic as simple, reusable APIs. Instead of writing one-off query scripts, you build intelligent Search Agents that can be called from anywhere.
How is this different from a standard database query?
Searches.do abstracts the underlying data source and its logic. You define what you want to find (e.g., 'Find Active Users by Region'), and the Search Agent handles how it's done. This makes your searches reusable, versionable, and scalable as services.
What kind of data sources can I search?
You can connect your Search Agents to virtually any data source, including SQL/NoSQL databases, internal APIs, data warehouses, or even third-party services. The handler logic is just code, giving you full flexibility.