For OneCo we have implemented an AI knowledge engine built on a vector database and a dynamic RAG (Retrieval-Augmented Generation) architecture.

This is not a traditional chatbot with predefined answers. It is a semantic search engine combined with a language model, connected to a structured and continuously updated knowledge base that reflects OneCo's organisation.
What was delivered
We have set up:
- Integration with OpenAI (model: gpt-5-nano)
- A dedicated vector database (the retrieval layer of the RAG pipeline) where OneCo's data is stored securely, in compliance with the GDPR and the EEA data processing agreement
- Dynamic indexing of OneCo's business data
- Continuous updating of the knowledge base
- AI chat and internal knowledge assistant
How the solution works
All relevant information about:
- What OneCo delivers
- Service areas (Electrical, Power, Telecom, Infrastructure, etc.)
- Projects and references
- Who to contact within different disciplines
- Organisational structure
... is continuously indexed in a vector database.
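The indexing step can be sketched roughly as follows. This is a minimal illustration, not the production pipeline: the `embed` function here is a toy bag-of-words counter standing in for a real embedding model, `chunk` splits naively on word count, and the document text is invented for the example.

```python
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding" for illustration only; the real pipeline
    # would call an embedding model before writing to the vector database.
    return Counter(text.lower().split())

def chunk(document: str, size: int = 8) -> list[str]:
    # Split a source document into fixed-size word chunks (a simplification;
    # production chunkers respect sentence and section boundaries).
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

# Illustrative source record, not actual OneCo content
doc = ("OneCo delivers services within Electrical Power Telecom and "
       "Infrastructure with projects references and contact persons "
       "for each discipline")

# Each chunk is stored alongside its vector, ready for semantic search
index = [(c, embed(c)) for c in chunk(doc)]
print(len(index))
```

Re-running this over the source data whenever it changes is what keeps the knowledge base current.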
When a user asks a question, the following happens:
- The question is embedded and semantically searched in the vector database
- Only relevant text extracts are retrieved
- These are passed on to the language model
- The language model generates a precise, contextually grounded answer
This ensures that answers are always based on OneCo's own, up-to-date content rather than generic internet sources.
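The question-answering flow above can be sketched as below. Again a hedged, self-contained toy: `embed` and `cosine` stand in for model embeddings and vector-database search, the chunk texts are invented, and the final prompt would in practice be sent to the language model rather than printed.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; a real deployment uses a model embedding.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexed knowledge-base extracts (illustrative content only)
chunks = [
    "Power grid services and substation maintenance",
    "Telecom infrastructure: fibre and mobile networks",
    "Contact for electrical projects: installation department",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Embed the question and return the k most similar text extracts
    q = embed(question)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

question = "Who handles telecom networks?"
context = retrieve(question)
# Only the retrieved extracts are passed to the language model
prompt = ("Answer using only this context:\n" + "\n".join(context)
          + "\n\nQ: " + question)
```

The key property is that the model only sees the retrieved extracts, which is what keeps answers anchored to OneCo's own content.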
Dynamic and self-updating
The solution is not static.
When:
- New projects are published
- Employees change
- Service areas are updated
- The organisational structure is adjusted
... this is automatically indexed and available to the AI assistant.
This gives OneCo a living and constantly updated knowledge engine.
Results
- Faster access to the right contact person
- More precise answers to customers
- Better navigation in complex service offerings
- Scalable AI infrastructure running directly on OneCo's own applications
- Full control over data and structure
The system is already seeing significant usage, with more than 300,000 tokens processed over the last 30 days and the chat module as the most used feature.