Great idea. For the LLM integrations, will they live outside the Quick core? For example, will each library be an independent repository? Also, will these integrations come after the CLI, or will both arrive together?
🌐 Proposal for Integration with LLMs and AI Ecosystem
The main objective is to connect with LLMs and orchestrate communication between them, facilitating the development of applications such as Retrieval-Augmented Generation (RAG) pipelines and intelligent agents.
🔗 Communication with LLMs
The idea is to support communication with multiple LLM providers.
We intend to develop a LangChain-inspired library, with a focus on modularity and ease of use.
🧠 Vectors / Embeddings / RAG
Our scope also includes integration with vector databases, embeddings, and RAG pipelines.
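The retrieval half of a RAG pipeline boils down to ranking stored chunks by vector similarity to a query embedding. The sketch below shows that core step in plain Go with cosine similarity; the `doc` and `topK` names are illustrative, and in practice the vectors would come from an embedding model rather than being hard-coded.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// doc pairs a text chunk with its embedding; in a real pipeline the
// vector would be produced by an embedding model, not written by hand.
type doc struct {
	text string
	vec  []float64
}

// topK ranks documents by similarity to the query vector and returns
// the k closest — the chunks a RAG pipeline would feed to the LLM.
func topK(query []float64, docs []doc, k int) []doc {
	sort.Slice(docs, func(i, j int) bool {
		return cosine(query, docs[i].vec) > cosine(query, docs[j].vec)
	})
	if k > len(docs) {
		k = len(docs)
	}
	return docs[:k]
}

func main() {
	docs := []doc{
		{"Go web routing", []float64{0.9, 0.1}},
		{"image generation", []float64{0.1, 0.9}},
	}
	best := topK([]float64{1, 0}, docs, 1)
	fmt.Println(best[0].text) // Go web routing
}
```

A dedicated vector database replaces the linear scan with an approximate nearest-neighbor index, but the interface a Quick library exposes could stay this simple.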
🖼️ AI Image Processing
We will also support communication with image generation and analysis platforms.
🎙️ Text-to-Speech (TTS) / Voice
The proposal also includes integration with voice and text-to-speech APIs.
💡 Name suggestions for Quick's LLM library
I have some simple name ideas in mind for our Quick LLM library.
Feel free to comment, suggest other names, or vote for your favorites! 🚀
🤔 What do you think of this idea?
I'm curious to hear your thoughts!
Let's build something powerful, modular, and easy to integrate with the current AI ecosystem.