
AI Search Engine Multilingual Evaluation Report – Complex Query (v1.1)

In our previous assessment, we observed that existing AI search engines fell short when tackling intricate challenges. Given how frequently such complex problems arise in everyday work and life, this evaluation focuses on each AI search engine's ability to resolve them.

During our evaluation, however, we found that the Basic versions of the various products were not entirely satisfactory. Out of necessity, we therefore added Perplexity Pro to our testing scope to see how well the best products on the market can perform. After rigorous testing, we reached the following conclusions:

  1. Perplexity Pro significantly outperformed the rest, achieving an accuracy rate of 80%, while the free versions of the other products fell short of this standard.

  2. When the retrieved sources are insufficient, LLMs tend to fall back on their internal knowledge to infer answers, which leads to frequent hallucinations.

  3. The LLMs that generate answers for Metaso and Perplexity (Basic) performed poorly, often producing incorrect answers even when the relevant information had been retrieved.
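For clarity, an accuracy rate like the 80% figure above is simply the fraction of test queries judged correct. A minimal sketch of that calculation follows; the engine name and per-query verdicts are illustrative placeholders, not our actual evaluation data.

```python
# Hypothetical per-query correctness verdicts for one engine.
# True = the answer was judged correct, False = incorrect.
verdicts = [True, True, True, True, False]

def accuracy(results):
    """Return the fraction of queries judged correct."""
    return sum(results) / len(results)

# With 4 of 5 queries correct, accuracy is 0.8 (80%).
print(f"Perplexity Pro (illustrative): {accuracy(verdicts):.0%}")
```

In a real evaluation the verdicts would come from human grading of each engine's answers against a shared query set, so the scores are directly comparable across products.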