Amit Yadav

One Hot Encoding vs Dummy Variables

Have you ever heard the saying, “Not everything that counts can be counted”? Well, in machine learning, everything must be counted—especially categorical data. That’s where one-hot encoding and dummy variables come into play. Brief Overview: If you’ve worked with machine learning models, you’ve likely come across categorical data—features that represent distinct categories rather than continuous values…
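To make the distinction concrete, here is a minimal sketch using pandas; the toy “color” column is an assumption for illustration, not data from the post:

```python
# Minimal sketch: one-hot encoding vs. dummy variables in pandas.
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# One-hot encoding: one binary column per category (k columns).
one_hot = pd.get_dummies(df["color"], prefix="color")

# Dummy variables: drop the first category (k - 1 columns); the dropped
# category becomes the implicit baseline.
dummies = pd.get_dummies(df["color"], prefix="color", drop_first=True)

print(one_hot.columns.tolist())  # ['color_blue', 'color_green', 'color_red']
print(dummies.columns.tolist())  # ['color_green', 'color_red']
```

Dropping one column matters most for linear models: keeping all k columns alongside an intercept makes the design matrix perfectly collinear, the classic “dummy variable trap.”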


One Hot Encoding vs. Dummy Encoding

What is Encoding in Machine Learning? “In life, everything is about translation—sometimes between languages, other times between thoughts and actions. In machine learning, we’re constantly translating too—only here, we’re turning categories into numbers.” When you’re working with machine learning models, one thing you’ll come across often is categorical data—you know, things like colors, job titles…
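As a quick sketch of that translation, here is what it looks like with scikit-learn (assuming version 1.2 or later, where the dense-output flag is named sparse_output); the job-title data is made up for illustration:

```python
# Minimal sketch: translating categories into numbers with scikit-learn.
from sklearn.preprocessing import OneHotEncoder

jobs = [["engineer"], ["teacher"], ["nurse"], ["teacher"]]

# One-hot encoding: one column per category (categories sort alphabetically).
onehot = OneHotEncoder(sparse_output=False)
print(onehot.fit_transform(jobs))
# [[1. 0. 0.]   engineer
#  [0. 0. 1.]   teacher
#  [0. 1. 0.]   nurse
#  [0. 0. 1.]]  teacher

# Dummy encoding: drop the first category so only k - 1 columns remain.
dummy = OneHotEncoder(sparse_output=False, drop="first")
print(dummy.fit_transform(jobs))
```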


Contrastive Learning for Recommender System

Imagine walking into a bookstore, and instead of searching through hundreds of titles, the store instantly presents you with five books you’re likely to love. That’s the magic of recommender systems, and they’re everywhere—from your Netflix queue to the suggested products on Amazon. But here’s the catch: providing personalized recommendations isn’t as easy as it…


Contrastive Learning for Label Efficient Semantic Segmentation

“Data is the new oil”—this phrase couldn’t be more accurate in today’s AI-driven world. But what happens when your model needs oceans of labeled data to perform, and you have barely a drop? This is where label-efficient learning becomes a lifesaver, especially for complex tasks like semantic segmentation. Semantic segmentation is no ordinary task. Imagine…


Contrastive Learning for Sequential Recommendation

Imagine you’re browsing your favorite e-commerce platform, and you notice the suggestions seem to evolve based on what you’ve just looked at or purchased. Whether it’s your streaming service offering a lineup of shows after a binge or a music app predicting your next favorite song, these are the magic moments powered by sequential recommendation…


Contrastive Learning with Hard Negative Samples

Let’s start with something simple but powerful: contrastive learning. It might sound fancy, but the core idea is straightforward. It’s a method used in self-supervised learning where the goal is to learn useful representations of data—without needing labeled examples. What’s the secret sauce? Contrast! In contrastive learning, you teach the model to pull similar things together and push dissimilar things apart…
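That “pull together, push apart” objective is commonly implemented as an InfoNCE-style loss. Here is a minimal PyTorch sketch; the function name, temperature value, and toy batch are assumptions for illustration, not the post’s exact method:

```python
# Minimal sketch of an InfoNCE-style contrastive loss in PyTorch.
import torch
import torch.nn.functional as F

def info_nce(anchors, positives, temperature=0.1):
    """anchors, positives: (N, D) embeddings; row i of each forms a positive pair."""
    a = F.normalize(anchors, dim=1)   # unit-normalize so dot product = cosine similarity
    p = F.normalize(positives, dim=1)
    logits = a @ p.T / temperature    # (N, N) similarity matrix
    labels = torch.arange(len(a))     # each row's positive sits on the diagonal
    # Cross-entropy pulls diagonal (positive) similarities up and pushes
    # off-diagonal (negative) similarities down.
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (stand-ins for two augmented views of a batch).
loss = info_nce(torch.randn(8, 16), torch.randn(8, 16))
print(loss.item())
```

Roughly speaking, hard negative sampling, the subject of this post, is about choosing which examples fill the off-diagonal (negative) slots of that similarity matrix.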

