Graph Theory to Harmonize Model Integration | by Ahmad Albarqawi | Feb, 2024

Optimising multi-model collaboration with graph-based orchestration

Integrating the capabilities of various AI models unlocks a symphony of potential, from automating complex tasks that require multiple abilities like vision, speech, writing, and synthesis to enhancing decision-making processes. Yet, orchestrating these collaborations presents a significant challenge in managing the inner relations…
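As a rough illustration of the graph idea (not the author's framework), the sketch below models each AI capability as a node in a dependency graph and runs the tasks in topological order over a shared context. The task names and functions are invented placeholders; only Python's standard-library graphlib is used.

```python
# A minimal sketch of graph-based orchestration: each "model" is a node,
# edges express dependencies, and tasks run in topological order.
from graphlib import TopologicalSorter

# Hypothetical stand-ins for real model calls (speech, writing, vision, ...).
def transcribe_audio(ctx): ctx["text"] = "transcript of " + ctx["audio"]
def summarise_text(ctx): ctx["summary"] = ctx["text"][:40]
def generate_image(ctx): ctx["image"] = "image for: " + ctx["summary"]

# Each task maps to the set of tasks it depends on.
graph = {
    "summarise_text": {"transcribe_audio"},
    "generate_image": {"summarise_text"},
}
tasks = {
    "transcribe_audio": transcribe_audio,
    "summarise_text": summarise_text,
    "generate_image": generate_image,
}

context = {"audio": "meeting.wav"}
for name in TopologicalSorter(graph).static_order():
    tasks[name](context)   # each step reads and writes the shared context
print(context["image"])
```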

Read More

How to Create Powerful Embeddings from Your Data to Feed into Your AI | by Eivind Kjosbakken | Feb, 2024

This article will show you different approaches you can take to create embeddings for your data

Creating quality embeddings from your data is crucial for your AI system's efficacy. This article will show you different approaches you can use to convert your data from formats like images, text, and audio into powerful embeddings that can…
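For text, one common approach is a pretrained sentence-embedding model. The snippet below is a minimal sketch assuming the sentence-transformers package; the model name is a popular default, not necessarily the one the article uses.

```python
# Minimal text-embedding sketch with sentence-transformers.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # common default checkpoint
texts = ["A volcano erupting at night", "Quarterly sales report for 2023"]
embeddings = model.encode(texts, normalize_embeddings=True)  # shape: (2, 384)

# With normalized vectors, cosine similarity is just a dot product.
similarity = float(np.dot(embeddings[0], embeddings[1]))
print(embeddings.shape, similarity)
```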

Read More

Satellites Can See Invisible Lava Flows and Active Wildfires, But How? (Python) | by Mahyar Aboutalebi, Ph.D. šŸŽ“ | Feb, 2024

Visualizing satellite images captured over volcanoes and wildfires in various spectral bands

🌟 Introduction Ā· šŸ” Sentinel-2 (Spectral Bands) Ā· 🌐 Downloading Sentinel-2 Images Ā· āš™ļø Processing Sentinel-2 Images (Clipping and Resampling) Ā· šŸŒ‹ Visualization of Sentinel-2 Images (Volcano) Ā· šŸ”„ Visualization of Sentinel-2…
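As a hedged sketch of the kind of band visualization the article covers, the snippet below reads three Sentinel-2 bands with rasterio and stacks them into a false-colour composite. The file names are hypothetical, and the band choice (SWIR bands such as B12/B11 are often used to reveal active lava and fire fronts) is one common convention, not necessarily the author's exact recipe.

```python
# Rough sketch: build a SWIR false-colour composite from three Sentinel-2 bands.
import numpy as np
import rasterio
import matplotlib.pyplot as plt

def load_band(path):
    with rasterio.open(path) as src:
        band = src.read(1).astype("float32")
    return band / band.max()  # crude scaling for display only

# Hypothetical paths to band rasters clipped/resampled to the same grid.
swir2 = load_band("B12.tif")
swir1 = load_band("B11.tif")
nir   = load_band("B8A.tif")

composite = np.dstack([swir2, swir1, nir])  # (rows, cols, 3) false-colour image
plt.imshow(composite)
plt.title("Sentinel-2 SWIR false-colour composite")
plt.axis("off")
plt.show()
```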

Read More

Building a Semantic Book Search: Scale an Embedding Pipeline with Apache Spark and AWS EMR Serverless | by Eva Revear | Jan, 2024

Using OpenAI’s Clip model to support natural language search on a collection of 70k book covers

In a previous post I did a little PoC to see if I could use OpenAI’s Clip model to build a semantic book search. It worked surprisingly well, in my opinion, but I couldn’t help wondering if it would…
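A minimal single-machine sketch of the CLIP part (the Spark/EMR Serverless scaling the article adds is omitted here): it embeds a couple of hypothetical cover images and one text query with the openai/clip-vit-base-patch32 checkpoint from Hugging Face and ranks covers by cosine similarity.

```python
# Single-machine CLIP sketch: embed covers and a text query, rank by similarity.
from transformers import CLIPModel, CLIPProcessor
from PIL import Image
import torch

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

covers = [Image.open(p) for p in ["cover1.jpg", "cover2.jpg"]]  # placeholder files
inputs = processor(images=covers, return_tensors="pt")
with torch.no_grad():
    image_emb = model.get_image_features(**inputs)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

query = processor(text=["a mystery novel set in Paris"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**query)
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

scores = (text_emb @ image_emb.T).squeeze(0)  # cosine similarities
print("Best matching cover index:", scores.argmax().item())
```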

Read More

Interpreting R²: a Narrative Guide for the Perplexed | by Roberta Rocca | Feb, 2024

An accessible walkthrough of fundamental properties of this popular, yet often misunderstood metric from a predictive modeling perspective

R² (R-squared), also known as the coefficient of determination, is widely used as a metric to evaluate the performance of regression models. It is commonly used to quantify goodness of fit in…
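For a concrete anchor, R² is usually defined as 1 āˆ’ SS_res/SS_tot: the fraction of variance in the target explained by the model relative to a mean-only baseline. The toy numbers below are invented for illustration and simply check the manual formula against scikit-learn.

```python
# R² = 1 - SS_res / SS_tot, computed by hand and with scikit-learn.
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.3, 6.9, 9.2])

ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2_manual = 1 - ss_res / ss_tot

print(r2_manual, r2_score(y_true, y_pred))       # the two values agree
```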

Read More

Advanced Retrieval-Augmented Generation: From Theory to LlamaIndex Implementation | by Leonie Monigatti | Feb, 2024

This article will guide you through implementing a naive and an advanced RAG pipeline using LlamaIndex in Python. It covers the required packages and API keys you need to follow along, as well as additional ideas on how to improve the performance of your RAG pipeline to make it production-ready…
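To give a rough sense of the naive pipeline, here is a minimal LlamaIndex sketch. The imports follow the 0.10+ package layout (older releases import from llama_index directly); the ./data folder and query string are placeholders, and an OpenAI API key is assumed for the default embedding model and LLM.

```python
# Minimal naive-RAG sketch with LlamaIndex: load, index, retrieve, answer.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()   # load and parse files
index = VectorStoreIndex.from_documents(documents)        # chunk, embed, store
query_engine = index.as_query_engine(similarity_top_k=3)  # retrieve-then-read

response = query_engine.query("What does the document say about evaluation?")
print(response)
```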

Read More

Understanding Latent Dirichlet Allocation (LDA) — A Data Scientist’s Guide (Part 2) | by Louis Chan | Feb, 2024

LDA Convergence Explained with a Dog Pedigree Model

ā€œWhat if my a priori understanding of dog breed group distribution is inaccurate? Is my LDA model doomed?ā€ my wife asked. Welcome back to part 2 of the series, where I share my journey of explaining LDA to my wife. In the previous blog post, we discussed…
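For readers who want to poke at the question themselves, here is a tiny LDA sketch with gensim, using toy "documents" in place of the dog-pedigree example from the series; the words and topic count are invented, and alpha is the knob that encodes the a priori belief about topic proportions discussed above.

```python
# Minimal LDA sketch with gensim on toy documents.
from gensim import corpora
from gensim.models import LdaModel

docs = [
    ["labrador", "retriever", "friendly", "family"],
    ["husky", "sled", "snow", "working"],
    ["poodle", "show", "grooming", "toy"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# alpha encodes the prior over topic (breed-group) proportions;
# passes controls how many sweeps the inference makes over the corpus.
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, alpha="auto", passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)
```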

Read More

An Introduction To Fine-Tuning Pre-Trained Transformers Models | by Ram Vegiraju | Feb, 2024

Simplified using the HuggingFace Trainer object

HuggingFace serves as a home to many popular open-source NLP models. Many of these models are effective as is, but often require some sort of training or fine-tuning to improve performance for your specific use case. As the LLM explosion continues, we will take a…
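The compact sketch below shows the general Trainer pattern the article walks through; the dataset (IMDB) and checkpoint (DistilBERT) are common defaults chosen for illustration, not necessarily the ones used in the article.

```python
# Fine-tuning sketch with the HuggingFace Trainer on a small IMDB subset.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

dataset = load_dataset("imdb")
def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=256)
tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=1,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
                  eval_dataset=tokenized["test"].select(range(500)))
trainer.train()
```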

Read More