
feat: support semantic search in AI chat and embedding ability #1510

Open

hgaol wants to merge 3 commits into apache:dev from hgaol:1468

Conversation


@hgaol hgaol commented Mar 2, 2026

Implementation of #1468: adds a semantic search tool to keep the user experience consistent.

[Screenshot: Settings]

[Screenshot: calling the semantic search tool]

[Screenshot: logs]

@LinkinStars LinkinStars self-requested a review March 5, 2026 12:22
```go
// SearchSimilar performs brute-force cosine similarity search in Go.
func (r *embeddingRepo) SearchSimilar(ctx context.Context, queryVector []float32, topK int) ([]SimilarResult, error) {
```
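For context, a brute-force search of this kind can be sketched as below. This is a minimal illustration, not the PR's actual implementation; the `SimilarResult` field names and the in-memory `map` store are assumptions.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// SimilarResult is a hypothetical result shape: an ID plus its similarity score.
type SimilarResult struct {
	ID    string
	Score float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// searchSimilar scores every stored vector against the query and keeps the topK.
// This is O(n * d) per query, which is the scalability concern raised below.
func searchSimilar(stored map[string][]float32, query []float32, topK int) []SimilarResult {
	results := make([]SimilarResult, 0, len(stored))
	for id, vec := range stored {
		results = append(results, SimilarResult{ID: id, Score: cosine(query, vec)})
	}
	sort.Slice(results, func(i, j int) bool { return results[i].Score > results[j].Score })
	if len(results) > topK {
		results = results[:topK]
	}
	return results
}

func main() {
	stored := map[string][]float32{
		"q1": {1, 0, 0},
		"q2": {0, 1, 0},
		"q3": {0.9, 0.1, 0},
	}
	top := searchSimilar(stored, []float32{1, 0, 0}, 2)
	fmt.Println(top[0].ID, top[1].ID) // q1 is an exact match, q3 is close
}
```

Because every stored vector is scanned on every query, cost grows linearly with the dataset, which motivates the review comment that follows.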
@LinkinStars (Member) commented:

The biggest issue with this approach lies here: all data queries are performed in-memory. While this works for small datasets, it is undoubtedly unacceptable for large-scale data.

Therefore, my suggestion is that if users are using PostgreSQL, they could directly utilize PostgreSQL + pgvector to store vector data and perform searches within the database.
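A pgvector-backed search might look roughly like the following sketch. The table and column names (`embeddings`, `embedding`) are assumptions for illustration; `<=>` is pgvector's cosine-distance operator, and the helper renders a `[]float32` as pgvector's input literal.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// vectorLiteral renders a []float32 as a pgvector input literal, e.g. "[0.1,0.2]".
func vectorLiteral(v []float32) string {
	parts := make([]string, len(v))
	for i, f := range v {
		parts[i] = strconv.FormatFloat(float64(f), 'f', -1, 32)
	}
	return "[" + strings.Join(parts, ",") + "]"
}

func main() {
	// Hypothetical schema: embeddings(id text, embedding vector(3)).
	// With pgvector, <=> is cosine distance; smaller means more similar,
	// so ordering ascending returns the nearest neighbors first.
	query := `SELECT id, embedding <=> $1 AS distance
FROM embeddings
ORDER BY embedding <=> $1
LIMIT $2`
	fmt.Println(query)
	fmt.Println(vectorLiteral([]float32{0.1, 0.25, 0.5}))
}
```

The key difference from the in-memory approach is that the database can use an index (e.g. HNSW or IVFFlat in pgvector) instead of scanning every row per query.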

Furthermore, a more 'ideal' approach would be to expose this component as a plugin, allowing user Q&A data to be synchronized to external systems. This design is similar to how search plugins operate. For instance, an Elasticsearch (ES) plugin synchronizes Q&A data to ES for retrieval. The benefit of a plugin-based implementation is extensibility: users aren't restricted to the built-in database and can use their own custom vector databases. The required interfaces would likely mirror those of a search plugin, such as a data synchronization interface and a search interface.

What do you think?

@hgaol (Member, Author) replied:

First of all, thank you @LinkinStars for the careful review! The goal of this PR is also to make the design clear, so feel free to raise your concerns and suggestions!

> The biggest issue with this approach lies here: all data queries are performed in-memory. While this works for small datasets, it is undoubtedly unacceptable for large-scale data.
>
> Therefore, my suggestion is that if users are using PostgreSQL, they could directly utilize PostgreSQL + pgvector to store vector data and perform searches within the database.

That makes sense; it's better to store vectors in a vector DB. And I think it could support multiple vector DBs, while still supporting storage in the main database or in memory for testing purposes. WDYT?

> Furthermore, a more 'ideal' approach would be to expose this component as a plugin, allowing user Q&A data to be synchronized to external systems. This design is similar to how search plugins operate. For instance, an Elasticsearch (ES) plugin synchronizes Q&A data to ES for retrieval. The benefit of a plugin-based implementation is extensibility: users aren't restricted to the built-in database and can use their own custom vector databases. The required interfaces would likely mirror those of a search plugin, such as a data synchronization interface and a search interface.

About the plugin: do you mean making all of the functionality a plugin, or just the storage for vectors? From your description, I assume the latter?
