Hey there! I'm Yashaswini Ippili,

Know More

About Me


I’m a B.Tech Computer Science student specializing in Machine Intelligence & Data Science at PES University, blending technical know-how with practical experience across multiple domains. With a passion for all things tech, I’ve honed my skills in Python, Machine Learning, and the MERN Stack, and gained hands-on Generative AI experience during my internship at Slang Labs, where I contributed to pioneering conversational AI solutions. I also built front-end expertise during a summer stint at Exam Trakker, where I developed dynamic web interfaces using Angular and NX.


I thrive in leadership roles, whether it’s heading The Alcoding Club and driving high-energy hackathons or co-leading Shunya, where I helped orchestrate ideathons and hackathons for over 300 participants. My work as a web developer for AIKYA and in managing operations for the Apple Developer's Group reflects my blend of technical skill and organizational expertise.


On the side, I’m a polyglot, fluent in English, Telugu, Kannada, and Hindi, with a touch of Spanish. Outside the tech world, I’m a professional inline skater and runner, and I love exploring diverse cuisines, photography, and music.

View Resume

Projects

Agent Arena

Tools Used : Python, Llama3-8b, Claude-3.5, GPT-3.5, DSPy, Firestore, ReactJS, FastAPI

Features head-to-head comparisons of LLMs where models are randomly paired and their names remain undisclosed until after the user selects the better response.

Utilizes a human vote-based Elo-rating system to evaluate and rank LLM performance on prompt generation quality, with results stored in a Firestore database.
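For illustration, below is a minimal sketch of how a single human vote could drive the Elo update for two anonymized models. The K-factor, function names, and example ratings are assumptions for this sketch, not the platform's actual implementation.

```python
# Minimal Elo-update sketch for one head-to-head human vote
# (K-factor and names are illustrative assumptions).

K = 32  # assumed sensitivity of rating updates

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a: float, rating_b: float, a_won: bool) -> tuple[float, float]:
    """Return both models' new ratings after a single vote."""
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + K * (score_a - expected_score(rating_a, rating_b))
    new_b = rating_b + K * ((1.0 - score_a) - expected_score(rating_b, rating_a))
    return new_a, new_b

# Example: the user prefers model A's response
print(update_elo(1200, 1200, a_won=True))  # -> (1216.0, 1184.0)
```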

Source Code | Try it out

Generative AI Chatbot

Tools Used : Python, Llama2, Huggingface, Langchain, Transformers, Selenium, Gradio

Automated the end-to-end processing of a large volume of GEHC-related PDFs, covering download, text extraction, merging, and data cleaning. The refined dataset was used to train a multilingual bot that answers queries about installations, manuals, and services, and maintains continually updated transcripts.
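As a rough illustration of the extract-and-clean step, here is a minimal sketch using the pypdf package; the helper names and cleaning rules are assumptions, and the actual pipeline additionally automated downloads with Selenium and fed the corpus to the Llama2-based bot via LangChain.

```python
# Illustrative extract-merge-clean sketch (assumed helpers and cleaning rules).
import re
from pathlib import Path
from pypdf import PdfReader

def extract_clean_text(pdf_path: Path) -> str:
    """Extract text from every page of a PDF and normalize whitespace."""
    reader = PdfReader(str(pdf_path))
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    text = re.sub(r"[ \t]+", " ", text)     # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)  # collapse excessive blank lines
    return text.strip()

def merge_corpus(pdf_dir: Path, out_file: Path) -> None:
    """Merge all PDFs in a directory into a single cleaned text corpus."""
    merged = [extract_clean_text(p) for p in sorted(pdf_dir.glob("*.pdf"))]
    out_file.write_text("\n\n".join(merged), encoding="utf-8")

merge_corpus(Path("gehc_pdfs"), Path("corpus.txt"))  # hypothetical paths
```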

Secured the second runner-up position at Hack'E'lth'23, a national-level hackathon organized by GE Healthcare.

PEScholar

Tools Used : Python, PyMySQL, Scholarly, Streamlit, SQL, Selenium

The project scrapes Google Scholar data for university professors; after preprocessing and cleaning, the data is stored in a MySQL database. A Streamlit front end populates the database and displays statistics for individual professors as well as aggregates filtered by year or combined across years. It also supports detailed professor views showing citations, publications, conferences, and other scholarly contributions.
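To give a feel for the scraping and storage steps, here is a simplified sketch using the scholarly and PyMySQL packages; the database credentials, table name, and columns are hypothetical, not the project's actual schema.

```python
# Simplified scrape-and-store sketch (schema and credentials are placeholders).
import pymysql
from scholarly import scholarly

def fetch_professor(name: str) -> dict:
    """Fetch a professor's Google Scholar profile and citation metrics."""
    author = scholarly.fill(next(scholarly.search_author(name)))
    return {
        "name": author.get("name"),
        "affiliation": author.get("affiliation"),
        "citations": author.get("citedby", 0),
        "h_index": author.get("hindex", 0),
        "num_publications": len(author.get("publications", [])),
    }

def store_professor(conn, prof: dict) -> None:
    """Insert one professor row into a hypothetical `professors` table."""
    with conn.cursor() as cur:
        cur.execute(
            "INSERT INTO professors (name, affiliation, citations, h_index, num_publications) "
            "VALUES (%s, %s, %s, %s, %s)",
            (prof["name"], prof["affiliation"], prof["citations"],
             prof["h_index"], prof["num_publications"]),
        )
    conn.commit()

conn = pymysql.connect(host="localhost", user="root", password="", database="pescholar")
store_professor(conn, fetch_professor("Professor Name"))  # placeholder name
```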

Source Code

PESEat

Tools Used : JavaScript, SQL, PHP, CSS, SCSS, Xampp, Selenium

Built a full-stack web application following software engineering principles and the Agile methodology. The project involved comprehensive requirements engineering, automated testing with Selenium, and project-management artifacts such as an SRS (Software Requirements Specification), an RTM (Requirements Traceability Matrix), and Gantt charts.
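As an example of the kind of automated check Selenium enables, here is a minimal login-test sketch in Python; the URL and element IDs are hypothetical placeholders rather than PESEat's actual markup.

```python
# Minimal Selenium smoke test (URL and element IDs are assumed placeholders).
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://localhost/peseat/login.php")  # local XAMPP deployment (assumed path)
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "password").send_keys("test_pass")
    driver.find_element(By.ID, "login-btn").click()
    # A passing test should land on the menu page after login
    assert "menu" in driver.current_url.lower()
finally:
    driver.quit()
```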

Source Code

Research

Optimization of LLMs with RAG and MTKD

Tools Used : Python, PyTorch, Flan-T5, Knowledge Distillation

A novel architecture for optimizing Large Language Models that combines Retrieval-Augmented Generation (RAG) with Multi-Teacher Knowledge Distillation (MTKD).

The use of multiple teacher models allows the student model to learn a diverse range of information and patterns in order to generate the most accurate response for the user's query.
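Below is a compact sketch of what a multi-teacher distillation objective can look like in PyTorch; the temperature, loss weighting, and equal averaging over teachers are illustrative assumptions, not the exact formulation used in this work, and the logits are assumed to be flattened to (N, vocab).

```python
# Illustrative multi-teacher knowledge distillation loss (assumed hyperparameters).
import torch
import torch.nn.functional as F

def mtkd_loss(student_logits: torch.Tensor,
              teacher_logits: list[torch.Tensor],
              labels: torch.Tensor,
              temperature: float = 2.0,
              alpha: float = 0.5) -> torch.Tensor:
    """Blend cross-entropy on labels with the mean KL divergence to the teachers."""
    ce = F.cross_entropy(student_logits, labels)
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl_terms = [
        F.kl_div(log_p_student, F.softmax(t / temperature, dim=-1),
                 reduction="batchmean") * temperature ** 2
        for t in teacher_logits
    ]
    kl = torch.stack(kl_terms).mean()  # every teacher weighted equally (assumption)
    return alpha * ce + (1 - alpha) * kl
```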

Source Code

Meta-Reflexion

Tools Used : Python, Huggingface, Gemini 1.5 Pro, Llama3.1-8b-Instruct

A user enters a query and a response is generated. A judge model then produces N judgments at a high sampling temperature. A second judge compares two judgments at a time, performing 2·C(N,2) = N(N−1) pairwise comparisons, and the judgment with the most wins is chosen as the best. This winning judgment is then used to refine the initial response, aiming to produce the best possible answer.

The process continues until an evaluation criterion is met, at which point the loop ends.
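The following sketch captures that judgment-tournament loop; the generate, judge, compare, and refine callables stand in for the actual LLM calls, and the stopping criterion shown is a simplified assumption.

```python
# Schematic of the Meta-Reflexion loop (LLM calls and stopping rule are placeholders).
from collections import Counter
from itertools import permutations

def best_judgment(query: str, response: str, judgments: list[str], compare) -> str:
    """Run 2*C(N,2) ordered pairwise comparisons and return the judgment with
    the most wins (comparing both orders is assumed to mitigate position bias)."""
    wins = Counter()
    for i, j in permutations(range(len(judgments)), 2):  # all ordered pairs
        if compare(query, response, judgments[i], judgments[j]):  # True if the first wins
            wins[i] += 1
        else:
            wins[j] += 1
    return judgments[wins.most_common(1)[0][0]]

def meta_reflexion(query: str, generate, judge, compare, refine,
                   n_judgments: int = 4, max_rounds: int = 3) -> str:
    """Generate, judge N times at high temperature, pick the winning judgment, refine."""
    response = generate(query)
    for _ in range(max_rounds):
        judgments = [judge(query, response) for _ in range(n_judgments)]
        feedback = best_judgment(query, response, judgments, compare)
        if "no further issues" in feedback.lower():  # assumed evaluation criterion
            break
        response = refine(query, response, feedback)
    return response
```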

Source Code

MCTS-MetaJudge

Tools Used : Python, Gemini 1.5 Pro, Phi3, Llama3.1-8b-Instruct

A framework designed to combine the Monte-Carlo Tree Search (MCTS) algorithm with a custom self-refine algorithm, effectively managing the exploration-exploitation trade-off to consistently find the best solution.

While the original paper focuses on mathematical reasoning, this project adapts the approach, combined with Meta's "Meta-Judge" concept, to enhance an LLM's logical reasoning capabilities. Notably, the phi3-mini model reached GPT-3.5-level logical reasoning performance.
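To make the exploration-exploitation balance concrete, here is a minimal sketch of the UCT selection rule at the heart of MCTS; the node fields and exploration constant are illustrative, not this project's exact implementation.

```python
# Minimal UCT (Upper Confidence bound applied to Trees) selection sketch.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    answer: str                   # candidate refined answer held at this node
    visits: int = 0
    total_reward: float = 0.0     # accumulated judge scores (assumed reward signal)
    children: list["Node"] = field(default_factory=list)

def uct_select(parent: Node, c: float = 1.4) -> Node:
    """Pick the child maximizing mean reward plus an exploration bonus."""
    def uct(child: Node) -> float:
        if child.visits == 0:
            return float("inf")   # always explore unvisited candidates first
        exploit = child.total_reward / child.visits
        explore = c * math.sqrt(math.log(max(parent.visits, 1)) / child.visits)
        return exploit + explore
    return max(parent.children, key=uct)
```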

Source Code

Blogs

DSPy vs Conva.AI : Building the Best AI Assistant

Exploring and comparing DSPy and Conva.AI to determine the optimal AI assistant solution.

The article provides a thorough comparison of DSPy and Conva.AI, evaluating their strengths to determine which platform delivers the most sophisticated and efficient AI assistant solution. It also shares my personal experiences with both tools across various use cases, offering insight into their underlying processes.

Source

Agent Arena: Leaderboard for LLM-built capabilities

Inspired by LMSYS, a benchmarking platform to evaluate and compare the prompt generation capabilities of Conva.AI against other approaches.

The article provides a complete walkthrough of Agent Arena and the story behind building it, comparing Conva.AI against competitors such as DSPy, Claude (Anthropic Workbench), and human-written prompts, and explaining the unbiased approach the platform follows.

Source

Contact Me

Reach Out for Collaboration!

Get in Touch