Bring your GenAI models together in one powerful chat portal. Narus gives you the control to confidently and securely manage LLMs in your organisation.
Compare chats for diverse insights and ensure quality responses.
Instantly optimise your requests with a predefined tone of voice, writing style, and more.
Save and share useful prompts with the curated prompt library, speeding up work and reducing repetition.
Monitor user activities with a clear visualisation of AI model utilisation.
Control which model is available for different users and teams.
Track AI expenditure and set budgets to keep your costs in check.
Control LLM access permission by user roles to ensure security.
Configure security alerts to flag prompts that contain PII, sensitive topics or banned words.
Track user activity and get a full audit trail when a user violates a safeguard policy.
Narus is a generative AI chat portal that helps your teams work smarter, offering multiple large language model connections like GPT-4o, Claude, and Gemini. It’s designed for businesses that want to adopt AI securely with complete administrative controls, budget management, and oversight of AI usage.
Narus has been built for businesses that want to exploit the competitive advantage that AI offers without compromising security.
Narus uses a dedicated Amazon Web Services (AWS) instance administered by Kolekti (part of The Adaptavist Group) for data storage. Employing robust encryption protocols and maintaining meticulous data segregation, we guarantee the confidentiality and integrity of your organisation's sensitive information, ensuring full compliance with industry-leading data privacy standards. See our Kolekti Data Protection Addendum (DPA) for further details.
No. As a rule, when an LLM is accessed through an API, your data is not used to train the model. However, some providers offer an opt-in for training, so make sure your company has not actively opted in if you do not want your data used to train models.
To integrate your own large language models (LLMs) with the Narus platform, you will require an API key. The specific method for obtaining this key varies depending on your chosen LLM provider. You can learn more about the LLMs compatible with Narus on our integrations page.
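As an illustration, most providers expect the API key to be supplied as a bearer token on each request. A minimal sketch, assuming an OpenAI-style authorisation scheme and a hypothetical environment variable name (check your provider's documentation for the exact header format):

```python
import os

def build_auth_header(env_var: str = "LLM_API_KEY") -> dict:
    """Read an API key from the environment and build the
    Authorization header most LLM APIs expect.

    The environment variable name is hypothetical; use whichever
    name your provider or deployment convention prescribes.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before connecting your model")
    # OpenAI-style bearer-token scheme; some providers use a
    # different header (e.g. "x-api-key") instead.
    return {"Authorization": f"Bearer {key}"}
```

Storing the key in an environment variable (rather than in source code) keeps it out of version control and makes rotation straightforward.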
Narus estimates your company's AI costs from the number of tokens used when interacting with each LLM (large language model), counting both input and output. We then suggest an approximate cost based on each LLM's price per token, which is why the figure shown in Narus is only an estimate. LLM usage is billed separately by your LLM provider, so if you have a custom price per token with them, the amount shown in Narus may not match your actual bill. If this happens, feel free to contact us for clarification.
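The arithmetic behind the estimate can be sketched as follows. The prices below are placeholders, not Narus or provider figures; real providers typically quote separate prices for input and output tokens, usually per 1,000 or per 1,000,000 tokens:

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_price_per_1k: float,
                  output_price_per_1k: float) -> float:
    """Approximate spend for one interaction: tokens consumed in each
    direction multiplied by that direction's per-1k-token price."""
    return (input_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# Example with illustrative prices: 1,000 input tokens at $0.005/1k
# and 500 output tokens at $0.015/1k.
cost = estimate_cost(1000, 500, 0.005, 0.015)
```

Because the list prices baked into such an estimate may differ from a negotiated custom rate, the figure is indicative rather than an invoice amount.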