Since the hype surrounding ChatGPT erupted in November 2022, new AI tools have been springing up constantly, and artificial intelligence offers numerous use cases for meaningful integration into internal company processes. But what is the ideal way to combine internal expertise with modern technology? This article explains exactly that.

The old vs. the new IT landscape

When companies started ‘digitalising’ several years ago, this usually meant: I have Outlook for emails and drives for documents. Over time, more and more systems (e.g. intranets, DMS systems, ERPs, collaboration tools, etc.) were added. On top of this came the switch from the on-premise world to the online world – the digital transformation to the cloud is in full swing in many companies. Nevertheless, IT infrastructures today remain highly fragmented, consisting of a wide variety of systems, and are mostly hybrid – partly on-premise, partly in the cloud.

Most companies have also realised by now that consolidating everything into a single system – for example Microsoft 365 – is more wishful thinking than a truly feasible option beyond a certain company size. Nevertheless, everyone is talking about artificial intelligence and the potential that this technology holds for companies.

Most AI solutions are offered as cloud solutions, and deployment in this form makes sense for almost all companies: the hardware costs required to operate large language models on-premise are simply too high and negate any business case. As there is currently a lot of talk about AI agents, AI assistants and co-pilots, this blog article will focus primarily on integrating these solutions into the IT landscape.

Integration of AI into existing IT systems

There are basically five relevant ways in which AI can be combined with internal systems:

  1. Deployment of chatbots
  2. Integrated solutions
  3. In-house AI models
  4. AI assistants
  5. AI agents

These have already been explained in detail in this blog article. However, when it comes to combining AI with existing IT systems, specific connections to the company’s own systems must be created so that the desired processes can be carried out.

Certain security standards must also be observed: GDPR compliance, the AI Act and access rights are just some of the requirements that need to be taken into account.

In addition to IT security, however, the technical prerequisites must also be taken into account. For example, information must first be vectorised before it can be processed by a large language model. This means it is converted into numerical vectors (embeddings) that an AI model can work with. This happens both during ‘pre-processing’ and within the search process itself. Vectorisation is also the reason why the use of AI is so resource-intensive – but also why the quality is significantly better than that of conventional mechanisms.
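To make the idea of vectorisation more tangible, here is a minimal Python sketch that turns a few internal text snippets into embedding vectors. The sentence-transformers library and the model name are illustrative choices for this example, not a statement about how any particular product implements this step.

```python
# Minimal vectorisation sketch: texts become numerical vectors (embeddings).
# Library and model are illustrative choices for this example only.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, general-purpose embedding model

documents = [
    "Travel expenses must be submitted within 30 days.",
    "The VPN guide for new employees is stored in the IT handbook.",
]

# encode() returns one vector per text; semantically similar texts
# end up close to each other in this vector space.
vectors = model.encode(documents)
print(vectors.shape)  # e.g. (2, 384): two texts, 384 dimensions each
```

Semantic similarity between a question and a document can then be measured as the distance between their vectors, which is what makes vector search more robust than pure keyword matching.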

Enterprise search as the basis for AI processes

Taking into account the constraints mentioned in the previous section, an enterprise search addresses all of these challenges. An enterprise search is an internal search engine that makes internal information quickly and easily accessible. Modern enterprise search solutions such as amberSearch are based on large language models and vectorise the data. Enterprise search solutions are designed to fulfil the requirements of existing access rights, IT security and the GDPR.
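To illustrate what ‘respecting existing access rights’ can mean at query time, the following sketch trims the searchable documents down to those the querying user is actually allowed to see before any ranking happens. The data structures and the group-based permission check are hypothetical simplifications, not the mechanism of any specific product.

```python
# Simplified permission trimming: only documents the querying user may see
# are considered as search results at all. The ACL model is a hypothetical
# simplification for illustration purposes.
from dataclasses import dataclass

@dataclass
class IndexedDoc:
    doc_id: str
    text: str
    allowed_groups: set  # mirrored from the source system's permissions

def permission_trimmed_search(query, user_groups, index):
    visible = [d for d in index if d.allowed_groups & set(user_groups)]
    # A real system would now rank `visible` by vector similarity to the query;
    # a naive substring match keeps this sketch short.
    return [d for d in visible if query.lower() in d.text.lower()]

docs = [
    IndexedDoc("hr-1", "Salary bands for 2024", {"hr"}),
    IndexedDoc("it-1", "VPN guide for new employees", {"all-staff"}),
]
print(permission_trimmed_search("vpn", {"all-staff"}, docs))  # only the IT document
```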

To enable an enterprise search to answer queries in fractions of a second, it pre-processes the data and builds the required index. In the past, this index was built from keywords (keyword search); today, the data is vectorised. Important: the index must be kept continuously up to date.
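A heavily reduced sketch of what this pre-processing and continuous updating could look like: each document is embedded and stored together with its modification timestamp, and a refresh step only re-vectorises documents that are new or have changed. All names, data structures and the toy embed function are illustrative assumptions.

```python
import time

# Hypothetical in-memory index: doc_id -> (last_modified, embedding vector)
index = {}

def embed(text):
    """Placeholder for a real embedding call (see the vectorisation sketch above)."""
    return [float(len(text))]  # deliberately not a real embedding

def refresh_index(documents):
    """documents maps doc_id -> (last_modified_timestamp, text)."""
    for doc_id, (modified, text) in documents.items():
        cached = index.get(doc_id)
        # Only (re-)vectorise documents that are new or have changed;
        # this is what keeping the index continuously up to date means in practice.
        if cached is None or cached[0] < modified:
            index[doc_id] = (modified, embed(text))

refresh_index({"it-handbook": (time.time(), "VPN guide for new employees ...")})
```

In production, this refresh would be driven by connectors that watch the source systems for changes rather than by a manual call.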

One use case would be, for example, for users or systems (in the latter case, fully automated) to submit queries to the enterprise search in order to answer questions within certain processes. Another AI use case is retrieval-augmented generation (RAG), which forms the basis of many companies’ AI systems.
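The following sketch shows the basic RAG flow: retrieve relevant passages from the enterprise search, then let a language model answer grounded in exactly those passages. Both search and call_llm are hypothetical placeholders standing in for whichever search backend and language model a company actually uses.

```python
# Minimal RAG sketch: retrieve internal context first, then generate an answer
# that is grounded in that context. `search` and `call_llm` are placeholders.

def search(query, top_k=3):
    """Placeholder for a (permission-trimmed) enterprise search query."""
    raise NotImplementedError

def call_llm(prompt):
    """Placeholder for a call to whichever large language model is in use."""
    raise NotImplementedError

def answer_with_rag(question):
    chunks = search(question)
    context = "\n\n".join(chunks)
    prompt = (
        "Answer the question using only the internal context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
    return call_llm(prompt)
```

The decisive point is that the model’s answer is constrained to retrieved internal content rather than to whatever it memorised during training.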

AI assistants, AI agents & multi-agent systems

A RAG system essentially implements a chat system and therefore falls into the AI assistant category. However, autonomous AI agents – i.e. systems that make and implement decisions independently – can also be developed and integrated on this basis. The data is already available in vectorised form and can be made usable via the enterprise search. Multi-agent systems, in which several AI agents work and interact with one another to achieve certain goals, represent a further expansion stage.
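To make the jump from assistant to agent more concrete, here is a sketch of a simple agent loop on top of the enterprise search: the agent repeatedly decides whether to gather more internal information or to act. search_internal and decide_next_step are hypothetical placeholders; in real agent systems, this decision is usually delegated to a language model.

```python
# Sketch of a single-agent loop built on top of an enterprise search.
# `search_internal` and `decide_next_step` are hypothetical placeholders.

def search_internal(query):
    """Placeholder for a query against the vectorised enterprise search."""
    raise NotImplementedError

def decide_next_step(goal, findings):
    """Return 'search' while information is still missing, otherwise 'act'."""
    return "search" if len(findings) < 3 else "act"

def run_agent(goal):
    findings = []
    while decide_next_step(goal, findings) == "search":
        findings.append(search_internal(goal))
    # In a multi-agent system, this result would be handed over to another
    # agent, e.g. one that drafts the email or updates the ticket.
    return f"Completed '{goal}' based on {len(findings)} internal findings."
```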

AI systems are worth nothing without internal knowledge

There was a lot of hype when ChatGPT was released: no one would have to write texts any more, the AI would do it all. However, it soon became clear that AI-generated texts quickly turn into interchangeable filler if the context and insights from everyday work are missing. Likewise, agent systems can only make very poor decisions if they do not have a comprehensive information basis for decision-making.

It therefore does not help companies to invest in modern AI systems if they are unable to feed them with the necessary internal expertise. Of course, for smaller use cases you can develop a connector to certain systems yourself, but such connectors need to be maintained and developed further. So if you are serious about realising the added value of AI in your company in the long term, you should start with an enterprise search as the foundation. It ensures that existing technical and regulatory requirements are met and draws on the knowledge of the entire company rather than just certain areas.