
AI lie detector: How HallOumi’s open-source approach to hallucination could unlock enterprise AI adoption




One obstacle continues to stand in the way of enterprise AI adoption: hallucinations. These fabricated responses from AI systems have already caused real-world problems for companies, including lawyers who have faced legal sanctions for citing AI-invented cases.

Organizations have tried various approaches to solving the hallucination problem, including retrieval-augmented generation (RAG), better training data and guardrails. Open-source development firm Oumi is now offering a new approach, albeit with a somewhat ‘cheesy’ name.

The company’s name is an acronym for Open Universal Machine Intelligence (Oumi). It is led by ex-Apple and Google engineers on a mission to build an unconditionally open-source AI platform.

On April 2, the company announced HallOumi, an open-source claim-verification model designed to tackle the accuracy problem through a novel approach to hallucination detection. Halloumi, of course, is a firm type of cheese, but that has nothing to do with the model’s name. The name is a combination of Hallucination and Oumi, though the timing, so close to April Fools’ Day, might have made some suspect a joke. It is anything but a joke; it is a solution to a very real problem.

“Hallucinations are frequently cited as one of the most critical challenges in deploying generative models,” Manos Koukoumidis, CEO of Oumi, told VentureBeat. “It ultimately boils down to a matter of trust: generative models are trained to produce outputs that are probabilistically likely, but not necessarily true.”

How HallOumi works to solve enterprise AI hallucinations

HallOumi analyzes AI-generated content sentence by sentence. The system accepts both a source document and an AI response, then determines whether the source material supports each claim in the response.

“What HallOumi does is analyze every single sentence independently,” Koukoumidis said. “For each sentence it analyzes, it tells you the specific sentences in the input document that you should check, so you don’t need to read the whole document to verify whether what the [large language model] LLM said is accurate or not.”

The model provides three key outputs for each analyzed sentence:

  • A confidence score indicating the likelihood of hallucination.
  • Specific citations linking claims to supporting evidence.
  • A human-readable explanation of why each claim is supported or unsupported.
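Conceptually, each analyzed sentence yields a record combining those three outputs. The sketch below is purely illustrative; the field names, threshold and structure are assumptions for the sake of the example, not HallOumi’s actual API:

```python
from dataclasses import dataclass, field


@dataclass
class SentenceVerdict:
    """Illustrative per-sentence result: the three outputs described above."""
    sentence: str          # the claim being checked
    confidence: float      # likelihood the claim is supported (0.0 to 1.0)
    citations: list[int] = field(default_factory=list)  # indices of supporting source sentences
    explanation: str = ""  # human-readable reason for the verdict

    @property
    def hallucinated(self) -> bool:
        # Treat weakly supported claims as likely hallucinations
        # (the 0.5 cutoff is arbitrary, chosen for illustration).
        return self.confidence < 0.5


verdict = SentenceVerdict(
    sentence="The company was founded in 1999.",
    confidence=0.12,
    citations=[],
    explanation="No sentence in the source document mentions a founding date.",
)
print(verdict.hallucinated)  # True: confidence is below the 0.5 cutoff
```

The point of the structure is that a flagged sentence carries its own evidence trail: the citations say where to look in the source, and the explanation says why the claim failed.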

“We have trained it to be very nuanced,” said Koukoumidis. “Even for our linguists, when the model flags something as a hallucination, it is not always obvious why. That is why we made sure HallOumi can explain why it thinks a given sentence is a hallucination, whether the inaccuracy is nuanced or outright wrong.”

Integrating HallOumi into enterprise AI workflows

There are several ways HallOumi can be used with and integrated into enterprise AI today.

One option is to try out the model through a somewhat manual process, via the online demo interface.

For production and enterprise AI workflows, an API-driven approach will be more optimal. Manos explained that the model is fully open source and can be plugged into existing workflows, run locally or in the cloud, and used with any LLM.

The process involves feeding the original context and the LLM’s response into HallOumi, which then verifies the output. Enterprises can integrate HallOumi to add a verification layer to their AI systems, helping to detect and prevent hallucinations in AI-generated content.
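As a sketch, such a validation layer might wrap an existing LLM call along these lines. The `verify` function here is a stand-in for an actual HallOumi invocation (its name, signature and naive word-overlap scoring are assumptions for illustration, not the project’s real interface):

```python
import re


def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter, for illustration only.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def verify(context: str, sentence: str) -> float:
    """Stand-in for a HallOumi call: returns a support score for one sentence.
    Faked here with simple word overlap against the source context."""
    ctx_words = set(context.lower().split())
    sent_words = set(sentence.lower().split())
    return len(ctx_words & sent_words) / max(len(sent_words), 1)


def validate_response(context: str, response: str, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Return response sentences whose support score falls below the threshold."""
    return [
        (s, score)
        for s in split_sentences(response)
        if (score := verify(context, s)) < threshold
    ]


context = "Oumi released HallOumi on April 2."
response = "Oumi released HallOumi on April 2. It costs $99 per month."
flagged = validate_response(context, response)
print(len(flagged))  # 1: the pricing claim has no overlap with the source context
```

In a real deployment the stand-in scorer would be replaced with the model itself; the surrounding plumbing, splitting the response into sentences and flagging those below a support threshold, is the shape of the validation layer the article describes.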

Oumi has released two versions: a generative 8B model that provides detailed analysis, and a classifier model that delivers only a score but with greater computational efficiency.

HallOumi vs RAG vs guardrails for AI hallucination protection

What sets HallOumi apart from other grounding approaches is how it complements, rather than replaces, existing techniques like RAG (retrieval-augmented generation), while offering more detailed analysis than typical guardrails.

“The input document that you feed through the LLM could be RAG,” Koukoumidis said. “In some other cases, it’s not precisely RAG, because people say, ‘I’m not retrieving anything. I already have the document I care about.’ So HallOumi applies to RAG, but not only to RAG scenarios.”

This distinction matters because, while RAG aims to improve generation by providing relevant context, HallOumi verifies the output after generation, regardless of how that context was obtained.

Compared with guardrails, HallOumi provides more than binary verification. Its sentence-level analysis with confidence scores and explanations gives users a detailed understanding of where and how hallucinations occur.
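The contrast with a binary guardrail can be sketched as a simple triage over per-sentence confidence scores. The bucket names and thresholds below are arbitrary, chosen only to illustrate the idea:

```python
def triage(confidence: float) -> str:
    """Map a per-sentence support score to a review bucket.

    A binary guardrail collapses everything to pass/fail; sentence-level
    scores allow a middle band that can be routed to human review.
    """
    if confidence >= 0.8:
        return "supported"
    if confidence >= 0.4:
        return "needs review"
    return "likely hallucination"


scores = {"Claim A": 0.95, "Claim B": 0.55, "Claim C": 0.10}
for claim, score in scores.items():
    print(claim, "->", triage(score))
# Claim A -> supported
# Claim B -> needs review
# Claim C -> likely hallucination
```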

HallOumi incorporates a specialized form of reasoning in its approach.

“There was definitely a variant of reasoning that we did to synthesize the data,” said Koukoumidis. “We guided the model to reason step by step, or claim by claim, to think through how it should classify a bigger claim or a bigger sentence and make the prediction.”

The model can detect not only accidental hallucinations but also intentional misinformation. In one demonstration, Koukoumidis showed how HallOumi determined that DeepSeek’s model had ignored supplied Wikipedia content and instead generated propaganda-like content about China’s COVID-19 response.

What this means for enterprise AI adoption

For enterprises looking to deploy generative AI systems safely in production environments, HallOumi offers a potentially essential tool.

“I really hope this unblocks many scenarios,” Koukoumidis said. “Many enterprises can’t deploy their models because the existing implementations were not very ergonomic or effective. I hope HallOumi gives them something with which to instill the trust they need.”

For enterprises taking a slower approach to AI adoption, HallOumi’s open-source nature means they can experiment with the technology now, while Oumi also offers commercial support options.

“If any company wants to customize HallOumi for their particular domain, or use it in some specific commercial way, we are always very happy to help them develop the solution,” Koukoumidis said.

As AI systems continue to advance, tools like HallOumi may become standard components of enterprise AI stacks: essential infrastructure for separating AI fact from fiction.



