Internal AI Platform with OpenWebUI
EU-first, secure & cost-transparent
Building an internal AI platform – sounds like a big project?
Our experience shows: it can also be done pragmatically, step by step – yet still with a clear strategy. What's crucial is asking the right questions from the start: How can data privacy and EU hosting be consistently secured? How do costs stay transparent even as usage grows? And how do you create company-wide acceptance so the platform doesn't just exist, but gets actively used?
That's exactly why we introduced OpenWebUI at eggs unimedia – connected to Amazon Bedrock and Azure, integrated into enterprise SSO, and monitored via CloudWatch plus our own cost dashboard. From software development support to HR chatbots: use cases keep growing without compromising customer data. We're currently also working on automatic prompt routing that should provide even more convenience in the future.
User Adoption: Broad Usage and Clear Benefits
OpenWebUI isn't a niche tool for us – it's actively used by most of our workforce. Two-thirds of our team members in Munich and Cluj are already registered on the platform, and over one-quarter use it daily. This shows: the platform has arrived in everyday work life.
While larger enterprises often operate their own AI systems, we use OpenWebUI mainly internally and – when approved – also for smaller businesses that don't have their own AI infrastructure.
The use cases are diverse: developers use it to understand technical concepts, improve existing code, or analyze complex error messages faster. Marketing and sales teams draft texts or prepare market analyses. Project leaders benefit when creating presentations or communication drafts. Even HR uses it productively: a chatbot answers frequent questions and relieves colleagues in daily operations.
This broad adoption doesn't happen automatically. What's crucial is that users feel the direct benefit in their daily work. Whether saving time on routine tasks, getting to better drafts faster, or simplifying internal processes like HR requests – the platform creates noticeable relief. That's what drives the usage growth we're currently observing.
Tech Setup: Models, Routing and Integration
The technical foundation is deliberately broad. Through Amazon Bedrock in Frankfurt, various Amazon Nova variants are available (Micro, Lite, Pro), alongside models from Anthropic (Claude 3 through Claude 4) and Mistral Pixtral. Via Azure, the OpenAI models GPT-4o and GPT-4o-mini are available, as well as the reasoning models o3-mini and o4-mini, which are sometimes more capable at complex tasks than GPT-4o. Users can therefore choose the right model for each task – from quick idea scouting to in-depth analysis.
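Model choice can be encoded as a simple routing table that maps a task type to a model ID. The sketch below is illustrative only: the task categories are our own invention, and the model IDs follow Bedrock's naming scheme but should be verified against the current model catalog.

```python
# Minimal sketch of a task-to-model routing table (illustrative only).
# The category names are hypothetical; the model IDs follow Amazon
# Bedrock's naming scheme but must be checked against the live catalog.
MODEL_TABLE = {
    "quick": "amazon.nova-micro-v1:0",    # fast, cheap idea scouting
    "standard": "amazon.nova-lite-v1:0",  # everyday drafting tasks
    "complex": "amazon.nova-pro-v1:0",    # in-depth analysis
    "reasoning": "o4-mini",               # multi-step reasoning (Azure)
}

def pick_model(task_type: str) -> str:
    """Return the model ID for a task type, falling back to 'standard'."""
    return MODEL_TABLE.get(task_type, MODEL_TABLE["standard"])

print(pick_model("quick"))    # amazon.nova-micro-v1:0
print(pick_model("unknown"))  # amazon.nova-lite-v1:0 (fallback)
```

The fallback keeps the platform usable even when a task doesn't fit a predefined category; the planned automatic prompt routing would replace this manual choice entirely.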
Integration into enterprise SSO runs smoothly – employees log in with their familiar credentials. For operations and monitoring, we rely on AWS CloudWatch: this is where logs, usage metrics, and system alerts come together. The cost dashboard also builds on this foundation – more on that later.
Governance & Data Privacy: EU-first and Clear Policies
Data privacy and data sovereignty are our top priorities. That's why we consistently rely on EU-hosted infrastructure. Amazon Bedrock runs exclusively from Frankfurt, Azure models in European data centers. For particularly sensitive cases, there's also the option to integrate local models – internal guides show how to do this. So teams have a choice: maximum performance in the cloud or maximum control on premise.
The foundation is a clear data classification according to common standards (Public, Internal, Confidential, Restricted). Pricing information, for example, counts as Confidential; particularly sensitive information as Restricted. Only the latter is subject to stricter usage restrictions.
When it comes to customer data, we emphasize transparency and responsible use. We proactively inform customers about how we leverage AI technologies in our projects to create innovative and efficient solutions. Customers are, of course, given the option to opt out of having their data used for AI processes – ensuring they retain full control at all times.
For the future, the planned AWS European Sovereign Cloud is interesting. This infrastructure, planned for Germany, is expected to launch by the end of 2025 and to be operated entirely by EU personnel. All data stays in the EU, and the cloud will be run by an independent European company free from controlling US influence. Decisions on operations and security lie entirely in European hands. This gives companies with high sovereignty requirements a strong option. We're already preparing to migrate workloads seamlessly once it becomes available.
Cost Control: Transparency and Forecasts in Daily Operations
AI brings new cost models. To keep them from getting out of hand, we built our own cost dashboard early on. The basis is CloudWatch – through it we capture and visualize total costs per model. So we can see at any time how heavily each model is used and what that costs. With growing user numbers, this transparency provides security.
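The per-model view boils down to a simple aggregation over usage records. A minimal sketch, assuming a hypothetical record format and placeholder token prices (the real dashboard pulls its numbers from CloudWatch, and actual pricing must come from the provider's price list):

```python
from collections import defaultdict

# Hypothetical prices in USD per 1,000 tokens (input, output).
# These are placeholders, not real Bedrock pricing.
PRICE_PER_1K = {
    "nova-micro": (0.000035, 0.00014),
    "nova-pro":   (0.0008, 0.0032),
}

def cost_per_model(records):
    """Aggregate usage records into total cost per model.

    Each record is a tuple: (model, input_tokens, output_tokens).
    """
    totals = defaultdict(float)
    for model, tokens_in, tokens_out in records:
        price_in, price_out = PRICE_PER_1K[model]
        totals[model] += tokens_in / 1000 * price_in
        totals[model] += tokens_out / 1000 * price_out
    return dict(totals)

usage = [
    ("nova-micro", 10_000, 2_000),
    ("nova-pro", 5_000, 1_000),
    ("nova-micro", 4_000, 500),
]
print(cost_per_model(usage))
```

Grouping by model first is what makes the dashboard actionable: a spike becomes immediately attributable to a specific model rather than disappearing into a monthly total.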
Besides measurement, we've established thresholds and alerts. If individual models or total usage exceed set budgets, the responsible parties are notified automatically. Additionally, we use AWS forecast functions to identify trends early. This way we see whether a monthly budget is at risk and can take action in time. The limits are deliberately generous – they prevent misuse without slowing down normal work.
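The threshold logic itself is straightforward. Below is a minimal sketch of a month-to-date check with a naive linear projection – AWS's own forecast functions are considerably more sophisticated, and the budget figure here is a placeholder:

```python
def budget_check(spend_to_date: float, day_of_month: int,
                 days_in_month: int, budget: float):
    """Project month-end spend linearly and flag a budget breach.

    A naive stand-in for AWS Cost Explorer forecasts: it assumes spend
    continues at the month-to-date daily rate.
    """
    daily_rate = spend_to_date / day_of_month
    projected = daily_rate * days_in_month
    return projected, projected > budget

# 300 USD spent after 10 of 30 days against a 1,000 USD budget:
projected, alert = budget_check(300.0, 10, 30, 1000.0)
print(projected, alert)  # 900.0 False
```

Triggering on the *projection* rather than the raw spend is the point: the alert fires mid-month while there is still time to react, not after the budget is already exhausted.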
We're currently making the dashboard even more granular. The plan is to transparently display token usage and costs per user. This will show how often individual colleagues use which models – and what that costs. This fine-tuning is important for efficient long-term operation, and together with the planned automatic routing it opens up additional opportunities to optimize costs.
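One way such per-user accounting could look – a sketch under the assumption that each request log entry carries a user ID, model name, and token counts; the field names are hypothetical:

```python
from collections import defaultdict

def tokens_per_user(request_log):
    """Sum input and output tokens per (user, model) pair.

    request_log: iterable of dicts with the hypothetical keys
    'user', 'model', 'input_tokens', 'output_tokens'.
    """
    totals = defaultdict(lambda: [0, 0])
    for entry in request_log:
        key = (entry["user"], entry["model"])
        totals[key][0] += entry["input_tokens"]
        totals[key][1] += entry["output_tokens"]
    return {k: tuple(v) for k, v in totals.items()}

log = [
    {"user": "alice", "model": "nova-pro",
     "input_tokens": 1200, "output_tokens": 300},
    {"user": "alice", "model": "nova-pro",
     "input_tokens": 800, "output_tokens": 200},
    {"user": "bob", "model": "nova-micro",
     "input_tokens": 500, "output_tokens": 100},
]
print(tokens_per_user(log))
```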
Lessons Learned: Introduction, Adoption and Prioritization
Introducing an internal AI platform isn't just technically challenging, but also communicatively. Key insight: OpenWebUI is so feature-rich that, without explanation, users often stay at the surface. Features like prompt sharing or AI-assisted notes only unfold their value when actively demonstrated. That's why demos, training, and regular reminders are needed – just providing the tool isn't enough.
Looking back, we should have prioritized the rollout more. With more visibility at leadership level and clear messages about strategic importance, the start would have been even smoother. Comprehensive workshops and short video tutorials would have also helped lower entry barriers.
But: the investment in this learning curve pays off. Today OpenWebUI is an integral part of our work approach. In parallel, we're also developing advanced chatbots with RAG integration that support us in our Sociocracy 3.0 practice – an organizational model for self-organized work that we use at eggs. These bots help formulate driver statements and domain descriptions. We'll report on this in more detail in a separate post.
Conclusion: What Companies Can Take Away
OpenWebUI shows: building an internal AI platform pays off when technology, governance, and cost control are thought through from the beginning. For us, it was crucial to consistently implement EU hosting and data privacy while still enabling broad model diversity and easy usage. The cost dashboard ensures operations stay transparent even with growing usage. Equally important: continuous communication. Only when features are explained and made experienceable do platforms unfold their full potential.
The message for other companies: an internal AI platform doesn't have to start big and complex. What's crucial is building acceptance step by step, clearly defining governance, and keeping costs in view. Those who take this path early gain operational advantages and collect experience for the next generation of sovereign cloud offerings.