Our Research

Exploring the frontiers of agentic AI, user behavior analysis, and multi-agent frameworks to revolutionize human-computer interaction.

Academic Publications

Our team regularly publishes research in leading academic journals and conferences

SalesRLAgent: A Reinforcement Learning Approach for Real-Time Sales Conversion Prediction and Optimization

Nandakishor M

arXiv:2503.23303, 2025

Current approaches to sales conversation analysis and conversion prediction typically rely on Large Language Models (LLMs) combined with basic retrieval augmented generation (RAG). These systems, while capable of answering questions, fail to accurately predict conversion probability or provide strategic guidance in real time. In this paper, we present SalesRLAgent, a novel framework leveraging specialized reinforcement learning to predict conversion probability throughout sales conversations. Unlike systems from Kapa.ai, Mendable, Inkeep, and others that primarily use off-the-shelf LLMs for content generation, our approach treats conversion prediction as a sequential decision problem, training on synthetic data generated using GPT-4o to develop a specialized probability estimation model. Our system incorporates Azure OpenAI embeddings, turn-by-turn state tracking, and meta-learning capabilities to understand its own knowledge boundaries. Evaluations demonstrate that SalesRLAgent achieves 96.7% accuracy in conversion prediction, outperforming LLM-only approaches by 34.7% while offering significantly faster inference. Furthermore, integration with existing sales platforms shows a 43.2% increase in conversion rates when representatives utilize our system's real-time guidance. SalesRLAgent represents a fundamental shift from content generation to strategic sales intelligence, providing moment-by-moment conversion probability estimation with actionable insights for sales professionals.
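The sequential framing described above can be sketched as follows. This is a minimal illustration only, not the SalesRLAgent model: it substitutes toy hand-crafted features and a logistic-regression classifier for the paper's RL model and Azure OpenAI embeddings, and the training data is invented for the example.

```python
# Sketch: turn-by-turn conversion-probability estimation over a sales
# conversation. Features, data, and the logistic model are placeholder
# assumptions standing in for the paper's specialized RL model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def turn_features(turn: str) -> np.ndarray:
    # Toy features: turn length and count of buying-intent keywords.
    keywords = ("price", "demo", "buy", "yes")
    hits = sum(w in turn.lower() for w in keywords)
    return np.array([len(turn.split()), hits], dtype=float)

# Synthetic per-conversation feature/label pairs (invented for the sketch).
X = np.array([[5, 0], [12, 1], [20, 2], [8, 0], [15, 3], [30, 2]])
y = np.array([0, 0, 1, 0, 1, 1])
model = LogisticRegression().fit(X, y)

conversation = [
    "Hi, what does your product do?",
    "Can you show me a demo and the price?",
]
state = np.zeros(2)
for turn in conversation:
    state = state + turn_features(turn)  # running turn-by-turn state
    prob = model.predict_proba(state.reshape(1, -1))[0, 1]
    print(f"conversion probability after turn: {prob:.2f}")
```

The key design point the paper makes is that the probability is re-estimated after every turn from accumulated conversation state, rather than generated once at the end.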

DeepRAG: Building a Custom Hindi Embedding Model for Retrieval Augmented Generation from Scratch

Nandakishor M

arXiv:2503.08213, 2025

This paper presents our work on DeepRAG, a specialized embedding model built specifically for Hindi in RAG systems. While LLMs have become highly capable at generating text, their retrieval performance still depends heavily on quality embeddings, which have been lacking for Hindi despite its being one of the world's most widely spoken languages. The author tackled this by creating embeddings from the ground up rather than fine-tuning existing models. The process involved collecting diverse Hindi texts (over 2.7M samples), training a custom SentencePiece tokenizer that captures Hindi morphology, designing a transformer architecture with Hindi-specific attention mechanisms, and optimizing with contrastive learning. Results showed a 23% improvement in retrieval precision over the multilingual models commonly used. The paper details a methodology that could help others working with low-resource languages where one-size-fits-all multilingual models fall short. The embeddings have also been integrated with LangChain to build complete Hindi RAG systems. While there is more to explore, this work addresses a critical gap in Hindi NLP and demonstrates why language-specific approaches matter.
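The contrastive-learning step mentioned above can be illustrated with an InfoNCE-style objective: matching (anchor, positive) sentence pairs are pulled together while all other pairs in the batch act as negatives. The NumPy sketch below is a generic illustration of that objective, not the DeepRAG training code; the embeddings are random placeholders.

```python
# Sketch of an InfoNCE-style contrastive loss over a batch of
# L2-normalized sentence embeddings (placeholder data, not DeepRAG's).
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """anchors, positives: (n, d) unit-norm embedding batches."""
    sims = anchors @ positives.T / temperature       # (n, n) similarities
    sims = sims - sims.max(axis=1, keepdims=True)    # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    # Matching pairs sit on the diagonal; the loss maximizes their
    # log-probability against in-batch negatives.
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
anchors /= np.linalg.norm(anchors, axis=1, keepdims=True)
positives = anchors + 0.01 * rng.normal(size=anchors.shape)
positives /= np.linalg.norm(positives, axis=1, keepdims=True)
print(f"contrastive loss: {info_nce_loss(anchors, positives):.4f}")
```

With near-identical anchor/positive pairs as here, the loss is close to zero; dissimilar pairs drive it up, which is what pushes the encoder toward semantically meaningful representations.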

Open Source Models

We contribute to the AI community by releasing specialized language models with a focus on Indic languages

Hindi Sentence Embeddings Model

DeepMostInnovations/hindi-embedding-foundational-model

A custom state-of-the-art sentence embedding model trained specifically for Hindi text. It leverages an advanced transformer architecture with specialized pooling strategies to create high-quality semantic representations of Hindi sentences.

Features:

  • Specialized for Hindi language text
  • Advanced transformer architecture with optimized attention mechanism
  • Multiple pooling strategies for enhanced semantic representations
  • Creates normalized vector representations for semantic similarity
  • Supports semantic search and text similarity applications
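The semantic-search use case in the feature list boils down to comparing normalized sentence vectors by cosine similarity. The sketch below shows that retrieval step only; `embed()` returns random placeholder vectors, since the actual loading code for DeepMostInnovations/hindi-embedding-foundational-model is repo-specific and not reproduced here.

```python
# Sketch: semantic search over Hindi sentences with unit-norm embeddings.
# embed() is a placeholder; in practice the vectors would come from the
# hindi-embedding-foundational-model repository.
import numpy as np

rng = np.random.default_rng(42)

def embed(sentences):
    vecs = rng.normal(size=(len(sentences), 384))
    # Normalized vectors, as the model's feature list describes.
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

corpus = ["मौसम आज अच्छा है", "बाजार में बहुत भीड़ है", "कल बारिश होगी"]
corpus_vecs = embed(corpus)

query_vec = embed(["आज का मौसम कैसा है?"])[0]
scores = corpus_vecs @ query_vec   # cosine similarity (vectors are unit-norm)
best = int(np.argmax(scores))
print(f"best match: {corpus[best]} (score {scores[best]:.3f})")
```

Because the vectors are normalized, the dot product equals cosine similarity, so ranking the corpus is a single matrix-vector product.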

Interested in Collaborating?

We're always open to research partnerships, academic collaborations, and contributions from the AI community.