
Building an intelligent AI research system: Combining ad hoc data with agentic workflows

Telechy bridges the gap between isolated AI chat and organized knowledge management.

FEB. 25, 2025
3 Min Read
by Guilherme Castro
It’s never been easier to develop greenfield software projects—even those powered by AI. I recently took the opportunity to explore retrieval-augmented generation (RAG) and AI, ultimately building a knowledge management and research tool, Telechy, tailored to my needs. 
The goal was clear: build a platform that combines the flexibility of ad hoc data integration with the sophistication of agentic workflows to support deeper, more connected research processes.
Read on to learn how I developed this innovative system by leveraging techniques like RAG, graph-based interfaces, and real-time web access powered by the Tavily API.

Evolving from existing AI knowledge systems

Tools like ChatGPT (OpenAI) and Claude (Anthropic) have revolutionized how people interact with AI. However, these tools have some limitations. Conversations with most AI-powered agents exist in isolation (although OpenAI has introduced cross-conversation continuity in recent updates), which creates friction for users who need to revisit or expand on previous threads.
Integrating external data or organizing complex research can quickly exceed the number of tokens available in a conversation. This is particularly noticeable with Claude, which notifies users past a certain threshold of queries that longer conversations reach usage limits faster.

Building a more personalized system

My inspiration for building an AI-powered knowledge management and research tool came from “second brain” applications like Notion and Evernote, which excel at organizing information. I saw an opportunity to integrate these organizational and knowledge-storing capabilities with the dynamic, conversational power of large language models (LLMs).
My goals for Telechy included:
  • Integrating diverse data sources to support various formats, from written text instructions to custom knowledge bases.
  • Enhancing flexibility by allowing agile processing of ad hoc data.
  • Creating a cohesive knowledge management solution that combines AI’s generative power with structured organizational systems.

Building the knowledge architecture

Building an AI-powered knowledge management system requires several considerations, including how the user interacts with the solution and how data is sorted and accessed.

An innovative graph-based UI

One of Telechy's standout features is its graph-based user interface (UI). Unlike traditional chat UIs, where conversations are linear, Telechy presents the user with three panels. The first is the Workbook, where the conversation is visualized as a tree. Each node represents a key point in the discussion, enabling users to branch out into related topics or revisit earlier threads without starting over.

The next panel is the Thread, which acts as a familiar chat function. When the user clicks on a node in the Workbook panel, the Thread panel goes to the relevant point in the chat. The third panel is for adding knowledge—the user can input writing rules, specific information from a database, and more.
This design supports divergent exploration, allowing me to investigate tangential ideas without disrupting the main thread. Plus, a clear map of each conversation helps users easily interpret and navigate knowledge that's surfaced.
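To make the idea concrete, here is a minimal sketch of how such a conversation tree might be modeled. The class and field names are illustrative assumptions for this post, not Telechy's actual schema.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class WorkbookNode:
    """One node in the Workbook graph: a key point in the conversation."""
    node_id: str
    summary: str                                         # short label shown on the graph
    messages: list[dict] = field(default_factory=list)   # Thread messages behind this node
    parent: Optional["WorkbookNode"] = None
    children: list["WorkbookNode"] = field(default_factory=list)

    def branch(self, summary: str) -> "WorkbookNode":
        """Start a tangent from this point without disturbing the main thread."""
        child = WorkbookNode(
            node_id=f"{self.node_id}.{len(self.children) + 1}",
            summary=summary,
            parent=self,
        )
        self.children.append(child)
        return child

# Branch into a side topic, then return to the original node later.
root = WorkbookNode("1", "Plan the legacy CRM migration")
tangent = root.branch("Side question: data privacy constraints")
```

Because each node keeps a pointer back to its parent, jumping between the Workbook and the Thread view is just a matter of walking the tree rather than scrolling a single linear transcript.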

Organizing and retrieving data

Telechy leverages RAG, which combines AI-generated responses with real-time data retrieval from external or internal sources. This helps ensure that the AI outputs are accurate and contextually relevant. For example, I leverage the Tavily API for real-time web access.
I designed Telechy to organize knowledge with features like:
  • Segmented database categories that optimize knowledge organization, access, and retrieval.
  • RAG that incorporates any knowledge added by the user and categorizes it by “Brain” (a rough sketch of this retrieval step follows below).
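As an illustration of what “Brain”-scoped retrieval can look like, here is a minimal sketch. The in-memory store and the stand-in embed function are assumptions for demonstration, not the production pipeline, which would use a real embedding model and a persistent vector store.

```python
import numpy as np

# Hypothetical in-memory store: each entry carries a "Brain" category,
# the original text, and an embedding vector.
knowledge_store: list[dict] = []

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g. a sentence-transformer)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def add_knowledge(text: str, brain: str) -> None:
    """File a piece of user-provided knowledge under a Brain category."""
    knowledge_store.append({"brain": brain, "text": text, "vec": embed(text)})

def retrieve(query: str, brain: str, k: int = 3) -> list[str]:
    """Return the k entries most similar to the query within one Brain."""
    q = embed(query)
    scoped = [e for e in knowledge_store if e["brain"] == brain]

    def cosine(v: np.ndarray) -> float:
        return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

    return [e["text"] for e in sorted(scoped, key=lambda e: cosine(e["vec"]), reverse=True)[:k]]
```

Scoping retrieval to a single Brain keeps unrelated knowledge from bleeding into a response, which is the main reason for segmenting the database in the first place.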
As the system is still in its early stages, there is room for improvement in areas such as self-aware indexing, which could empower it to refine its data structure autonomously. Future development could address this challenge while increasing the tool's ability to generate more advanced insights.

System intelligence and functionality

Telechy focuses on making research intuitive and efficient, giving users smarter tools to organize and manage data effortlessly. That said, as with any AI-powered resource, its outputs require rigorous verification before being put to use.

Core features

The system prioritizes:
  • Processing ad hoc data: Incorporating user-provided knowledge into structured formats.
  • Hierarchical search: Consulting internal knowledge bases first and falling back to web search when needed (sketched after this list).
  • Intelligent query refinement: Leveraging additional resources to contextualize and respond to queries. 
  • Curated response generation: Shaping responses based on past interactions with the user.
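The hierarchical search step reduces to a few lines. The sketch below is a simplification under an assumed threshold: it accepts any local-retrieval and web-search callables and only reaches for the web when local results come back thin.

```python
from typing import Callable

def hierarchical_search(query: str, brain: str,
                        local_retrieve: Callable[[str, str], list[str]],
                        web_search: Callable[[str], list[str]],
                        min_local_hits: int = 2) -> list[str]:
    """Consult the internal knowledge base first; fall back to the web
    only when too few local passages are found. The threshold is illustrative."""
    local = local_retrieve(query, brain)
    if len(local) >= min_local_hits:
        return local
    return local + web_search(query)
```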

Workflow automation

Agentic AI workflows are a central part of Telechy’s design. For example, users can create and manage AI agents. One agent might have access to bespoke domain knowledge and the internet, and follow specific actions defined by its prompt when answering queries (a toy version of this loop is sketched after the list below).
These agents enable the system to: 
  • Break down complex queries into actionable steps.
  • Use external tools (such as external APIs) to gather additional context.
  • Automate iterative research processes to make them more efficient.
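The sketch below shows a toy plan-and-execute loop of the kind described above. The keyword-based tool routing and the llm/tools callables are placeholders for illustration, not Telechy's actual agent framework.

```python
from typing import Callable

def run_agent(question: str,
              llm: Callable[[str], str],
              tools: dict[str, Callable[[str], str]]) -> str:
    """Toy plan-and-execute loop: ask the model for steps, run any tool a
    step names, then synthesize an answer from the gathered context."""
    plan = llm(f"Break this research question into numbered steps:\n{question}")
    context: list[str] = []
    for step in plan.splitlines():
        for name, tool in tools.items():
            if name in step.lower():          # naive keyword routing, for illustration only
                context.append(tool(step))
    prompt = f"Question: {question}\nContext:\n" + "\n".join(context) + "\nAnswer:"
    return llm(prompt)
```

A production agent would replace the keyword routing with model-driven tool selection and add retry and verification steps, but the break-down, gather, synthesize shape stays the same.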

Integration and agentic workflow components

I wanted the tool to leverage both local and web sources to provide customized and contextualized responses to my queries.

Bringing the web into research

Specifically, I integrated the Tavily API to provide high-quality, internet-sourced context alongside the model's existing internal knowledge (a minimal call is sketched after the list below).
This combination ensures that I can:
  • Perform deep research with reliable, real-time data.
  • Optimize workflows by reducing the need to switch between tools.
  • Feed information gathered during research back into domain knowledge.
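For reference, a minimal Tavily call looks roughly like the following. It uses the tavily-python client; the result fields and the max_results parameter reflect the public SDK as I understand it, so treat this as a sketch rather than a drop-in.

```python
import os
from tavily import TavilyClient   # pip install tavily-python

client = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def web_context(query: str, max_results: int = 5) -> list[str]:
    """Fetch fresh web snippets to hand to the model as extra context."""
    response = client.search(query=query, max_results=max_results)
    return [f"{r['title']} ({r['url']}): {r['content']}" for r in response["results"]]
```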

Structuring knowledge for scalability

I designed the tool’s processing framework to handle diverse file formats, including code, PDFs, and video transcripts. It also uses vectorization to structure data for efficient retrieval. Moving forward, I plan to scale Telechy to meet my evolving needs and integrate it with multiple data sources and models to increase its sophistication.
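As an illustration of that processing framework, here is a simplified extractor-and-chunker. The format routing and chunk sizes are assumptions for this post, and the PDF branch assumes the pypdf package.

```python
from pathlib import Path

def extract_text(path: Path) -> str:
    """Route each file type to a text extractor before vectorization."""
    suffix = path.suffix.lower()
    if suffix == ".pdf":
        from pypdf import PdfReader            # assumed dependency for PDFs
        return "\n".join(page.extract_text() or "" for page in PdfReader(str(path)).pages)
    if suffix in {".py", ".js", ".ts", ".md", ".txt", ".vtt", ".srt"}:
        return path.read_text(encoding="utf-8", errors="ignore")
    raise ValueError(f"Unsupported format: {suffix}")

def chunk(text: str, size: int = 800, overlap: int = 100) -> list[str]:
    """Split extracted text into overlapping chunks ready for embedding."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text), 1), step)]
```

Each chunk is then embedded and filed under the appropriate Brain, which keeps retrieval fast even as the number of ingested documents grows.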

Applications and future development

Current applications

Telechy is already proving its versatility across various domains, including:
Research:
  • Code analysis and migration
  • Document processing
  • Transcript analysis
  • Multi-source research projects
Legacy system integrations:
  • Context gathering and indexing
  • Marketing technology integrations
From a compliance standpoint, the system is built to follow industry standards and best practices for security and data privacy.

Business impact: Transforming research operations

While technically innovative, this system’s true value lies in two key business impacts:

Operational excellence

  • Transforms research timelines by condensing days of work into minutes.
  • Automates repetitive research tasks, significantly reducing operational overhead.
  • Streamlines knowledge management through intelligent organization.
  • Integrates seamlessly with existing business systems, maximizing ROI.
  • Enables more efficient allocation of research resources.

Employee empowerment

  • Eliminates tedious manual research tasks, allowing staff to focus on analysis and insights.
  • Creates a more engaging work environment through intuitive research tools.
  • Supports the development of deeper domain expertise.
  • Enables both independent and collaborative research workflows.
  • Provides clear visualization of research progress through graph-based interfaces.
This combination of operational efficiency and employee empowerment makes the system a strategic asset for organizations looking to transform their research capabilities.

What’s next?

Looking ahead, I plan to make ongoing refinements, including:
  • Advanced workflow automation: Enhancing the interaction between agents and data sources.
  • UI refinements: Improving usability and accessibility with enhanced graph-based interactions.
  • Scalable knowledge bases: Supporting long-term growth with optimized data processing capabilities.
  • Improved data processing capabilities: Increasing the scope of data formats the tool can process. 
There is also room to continue improving the system's performance and growing its long-term knowledge base.

Broadening the horizon

The tool has the potential to evolve into an adaptable system for tackling complex projects. By refining its agentic workflows and integrating with existing business systems, it could develop into a platform that supports dynamic research workflows, enhances data organization, and provides tailored solutions for diverse challenges. The focus remains on empowering users to innovate, make informed decisions, and achieve their goals efficiently.