Our Open Source Contributions
Over the years, we have published parts of our code and our AI models.
We have summarised these contributions on this page.
Multilingual Passage Reranking Model
- This reranking model, trained by us, uses an LLM to evaluate content against various relevance factors.
- It has been rated as one of the most efficient of its kind in several scientific papers.
- It has now been downloaded several hundred thousand times.
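As a sketch of how such a passage reranker is typically used: given a query and a set of candidate passages, each (query, passage) pair is scored and the passages are returned in descending order of relevance. The scoring function below is a toy lexical-overlap stand-in (so the example is self-contained), not the actual model; all names are illustrative.

```python
# Minimal reranking sketch. A real reranker scores each (query, passage)
# pair with a neural cross-encoder; a toy lexical-overlap scorer stands
# in here so the example runs without downloading a model.

def score(query: str, passage: str) -> float:
    """Toy relevance score: fraction of query terms found in the passage."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    return len(q_terms & p_terms) / len(q_terms) if q_terms else 0.0

def rerank(query: str, passages: list[str]) -> list[str]:
    """Return passages sorted from most to least relevant to the query."""
    return sorted(passages, key=lambda p: score(query, p), reverse=True)

passages = [
    "Electra is a pretraining method for language models.",
    "Reranking reorders retrieved passages by relevance to the query.",
    "amberSearch indexes internal data silos.",
]
ranked = rerank("passage reranking relevance", passages)
print(ranked[0])  # the passage sharing the most query terms comes first
```

In a retrieval pipeline, this rerank step runs after a fast first-stage search has produced a candidate list, trading a little latency for noticeably better ordering.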
High-Quality German Training Dataset
- High-quality datasets are required to train AI models and to avoid certain biases.
- Together with Philip May of T-Systems, we compiled this training dataset to train our LLMs.
German Electra Model
- Together with Philip May of T-Systems, we published an improved Electra model, trained for 1.5 million steps.
- At the time of publication, it was briefly the most efficient German-language LLM.
Read interesting posts on this topic
DB Regio AG – Revolutionising customer service together
On 6 September 2023, as part of the DB mindbox Accelerator, amberSearch was invited to Station Berlin to present a solution for automating customer dialogue with generative AI at the Zukunft Nahverkehr trade fair. amberSearch won the pitch. Here is a recording of it:
Generative AI: knowledge transfer through the intelligent combination of technology and existing data silos
Use generative AI to break down internal data silos. Find out how it works with an AI search here!
Alternatives to Microsoft’s Copilot
How can generative AI be used across systems and why are the systems of other providers not suitable for this?
Get started right away!
Take the first step today to start benefiting from amberSearch.