"We are working on models that are based on 7 billion to 20 billion parameters…we are not doing the 500 parameter models as of now. We also want to have our very own graphics processing units (GPUs) infrastructure as that is cheaper in the long term,” Vembu said at CNBC TV18-Moneycontrol's Global AI Conclave.
Notably, Zoho integrates a range of large language models (LLMs) within its workflows. These LLMs are used to improve AI output by infusing them with customer and industry-specific data. Zoho essentially plays one LLM against another to achieve better results.
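Zoho has not published the details of how it pits models against each other, but the general pattern can be sketched as generating candidate answers from two models and letting a scoring step pick the stronger one. The functions below are stand-ins, not Zoho's actual pipeline or any real model API.

```python
# Hypothetical sketch of "playing one LLM against another": get candidate
# answers from two models, score each with a judge, keep the best.
# model_a, model_b, and judge are placeholder functions for illustration.

def model_a(prompt: str) -> str:
    # Stand-in for a call to a first LLM (e.g. a smaller 7B model).
    return f"Draft answer to: {prompt}"

def model_b(prompt: str) -> str:
    # Stand-in for a call to a second LLM, here imagined as one
    # augmented with customer- and industry-specific data.
    return f"Alternative answer to: {prompt}, grounded in customer data"

def judge(prompt: str, candidate: str) -> float:
    # Stand-in scorer: in practice a third model (or one of the two)
    # would rate each candidate; a trivial length heuristic is used
    # here purely as a placeholder.
    return float(len(candidate))

def best_answer(prompt: str) -> str:
    # Generate candidates from both models and return the one the
    # judge scores highest.
    candidates = [model_a(prompt), model_b(prompt)]
    return max(candidates, key=lambda c: judge(prompt, c))
```

In a real deployment the judge would be a model-based comparison rather than a heuristic, but the selection structure stays the same.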
Zoho takes full advantage of Small(er) Language Models (SLMs) to control AI operating costs while maintaining high-quality outputs. By using smaller models with 7 billion to 20 billion parameters, Zoho aims to solve domain-specific problems for its customers.
SLMs like Llama, Mistral, Qwen, Gemma, or Phi-3 are designed to be more efficient at focused tasks such as conversation, translation, summarization, and categorization. They offer tailored solutions that are not only cost-effective but also more accessible, allowing for a broader range of applications and innovations.
Zoho's Chief Evangelist, Raju Vegesna, emphasizes that the best AI implementation is when customers don't even notice they're using AI. In other words, the AI seamlessly enhances their experience without being intrusive.
Additionally, Zoho has revealed plans to develop its own large language model (LLM), similar to OpenAI's GPT model and Google's PaLM 2. Furthermore, the company is venturing into chipmaking and seeking incentives from the Indian government for this endeavor.