Cybersecurity leaders are being asked impossible questions: “What’s the likelihood of a breach this year?” “How much would it cost?” And “How much should we spend to stop it?”
Yet most risk models used today are still built on guesswork, gut instinct, and colorful heatmaps, not data.
In fact, PwC’s 2025 Global Digital Trust…
Deploying your Large Language Model (LLM) is not necessarily the final step in productionizing your Generative AI application. An often forgotten yet crucial part of the MLOps lifecycle is properly load testing your LLM and ensuring it is ready to withstand your expected production traffic. Load testing at a high level is the practice of…
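To make this concrete, here is a minimal sketch of a simple load test: it fires concurrent requests at an LLM endpoint and reports throughput and tail latency. The URL, payload, concurrency, and request count are illustrative assumptions, not the article's setup.

```python
import asyncio
import time

import aiohttp

# Hypothetical local OpenAI-compatible endpoint; adjust URL and payload for your server.
URL = "http://localhost:8000/v1/completions"
PAYLOAD = {"model": "my-llm", "prompt": "Hello, world", "max_tokens": 64}


async def one_request(session: aiohttp.ClientSession) -> float:
    """Send a single completion request and return its latency in seconds."""
    start = time.perf_counter()
    async with session.post(URL, json=PAYLOAD) as resp:
        await resp.read()
    return time.perf_counter() - start


async def run_load_test(concurrency: int = 32, total: int = 256) -> None:
    """Fire `total` requests with at most `concurrency` in flight at any time."""
    sem = asyncio.Semaphore(concurrency)

    async def bounded(session: aiohttp.ClientSession) -> float:
        async with sem:
            return await one_request(session)

    async with aiohttp.ClientSession() as session:
        t0 = time.perf_counter()
        latencies = await asyncio.gather(*(bounded(session) for _ in range(total)))
        elapsed = time.perf_counter() - t0

    latencies = sorted(latencies)
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print(f"throughput: {total / elapsed:.1f} req/s, p95 latency: {p95:.2f} s")


if __name__ == "__main__":
    asyncio.run(run_load_test())
```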

Recently, Sesame AI published a demo of their latest Speech-to-Speech model: a conversational AI agent that is really good at speaking. It gives relevant answers, it speaks with expression, and honestly, it is just very fun and interactive to play with.
Note that a technical paper is not out yet, but they do have a…

For an ML model to be useful, it needs to run somewhere, and this somewhere is most likely not your local machine. A not-so-good model that runs in a production environment is better than a perfect model that never leaves your local machine.
However, the production machine is usually different from the one you developed the…
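One common way to close that gap is to package the model behind a small HTTP service so it can run on any machine that exposes a port. The sketch below uses FastAPI for this; the framework, the model file path, and the endpoint are my own illustrative choices, not necessarily what the article describes.

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Hypothetical path to a trained model artifact saved during development.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)


class Features(BaseModel):
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    # Assumes an sklearn-style estimator with a .predict() method.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Saved as serve.py, this can be started with `uvicorn serve:app --host 0.0.0.0 --port 8000` and then containerized so the production machine runs the same environment as development.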

This is the sixth (and likely last) part of a Linear Programming series I’ve been writing. With the core concepts covered in the prior articles, this one focuses on goal programming, a less common linear programming (LP) use case. Goal programming is a specific linear programming setup that can handle the optimization of…
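To give a flavor of how this works, the sketch below sets up a tiny goal program in PuLP: two production targets are written as soft constraints balanced by deviation variables, and the objective minimizes the weighted shortfalls under one hard resource constraint. All numbers and weights are made up for illustration and are not taken from the series.

```python
import pulp

# Decision variables: how many units of two hypothetical products to make.
a = pulp.LpVariable("units_A", lowBound=0)
b = pulp.LpVariable("units_B", lowBound=0)

# Deviation variables: how far under/over each production goal we end up.
a_under = pulp.LpVariable("A_under", lowBound=0)
a_over = pulp.LpVariable("A_over", lowBound=0)
b_under = pulp.LpVariable("B_under", lowBound=0)
b_over = pulp.LpVariable("B_over", lowBound=0)

prob = pulp.LpProblem("goal_programming_sketch", pulp.LpMinimize)

# Objective: minimize the weighted sum of undesirable deviations
# (only falling short is penalized; missing goal A hurts twice as much as missing B).
prob += 2 * a_under + 1 * b_under

# Hard constraint: a limited budget of labour hours.
prob += 2 * a + 3 * b <= 150, "labour_hours"

# Soft goals: equalities balanced by the deviation variables.
prob += a + a_under - a_over == 40, "goal_A"
prob += b + b_under - b_over == 30, "goal_B"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], "A =", a.value(), "B =", b.value())
```

Because both goals cannot be met within 150 labour hours, the solver meets the more heavily weighted goal A in full and lets goal B absorb the shortfall.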

In my previous article, I discussed how morphological feature extractors mimic the way biological experts visually assess images.
This time, I want to go a step further and explore a new question: Can different architectures complement each other to build an AI that “sees” like an expert?
Introduction: Rethinking Model Architecture Design
While building a…

Jupyter AI brings generative AI capabilities right into the JupyterLab interface. Having a local AI assistant ensures privacy, reduces latency, and provides offline functionality, making it a powerful tool for developers. In this article, we’ll learn how to set up a local AI coding assistant in JupyterLab using Jupyter AI, Ollama, and Hugging Face. By…
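As a rough preview of the flow, the outline below installs the extension, pulls a local model, and queries it through the %%ai cell magic; the package, provider, and model names are assumptions on my part, not necessarily the article's exact steps.

```python
# Rough outline, shown as separate steps; names are illustrative assumptions.

# --- terminal ---------------------------------------------------------------
#   pip install jupyter-ai        # Jupyter AI extension and magics
#   ollama pull llama3            # download a local model with Ollama

# --- notebook cell 1: enable the magics -------------------------------------
%load_ext jupyter_ai_magics

# --- notebook cell 2: query the local Ollama model (cell magic on first line) -
%%ai ollama:llama3
Explain the difference between a list and a tuple in Python.
```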

GenAI systems affect how we work; this general notion is well known. However, the exact impact of GenAI is still unclear. For example, how much do these tools affect our work? Do they have a larger impact on certain tasks? And what does this mean for our daily work?
To answer these…

What if you want to write an entire object detection training pipeline from scratch, so you can understand each step and customize it? That’s what I set out to do. I examined several well-known object detection pipelines and designed one that best suits my needs and tasks. Thanks to Ultralytics, YOLOX, DAMO-YOLO,…
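For orientation, here is a bare-bones sketch of the kind of training loop such a pipeline is built around, following the torchvision convention in which a detection model returns a dict of losses in training mode; the model, dataset, and hyperparameters are placeholders rather than the design described in the article.

```python
import torch
from torch.utils.data import DataLoader


def train(model, dataset, epochs: int = 10, lr: float = 1e-3, device: str = "cuda"):
    """Minimal detection loop; assumes torchvision-style model(images, targets) -> loss dict."""
    loader = DataLoader(
        dataset,
        batch_size=16,
        shuffle=True,
        collate_fn=lambda batch: tuple(zip(*batch)),  # keep variable-sized targets per image
    )
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    model.to(device).train()

    for epoch in range(epochs):
        for images, targets in loader:
            images = [img.to(device) for img in images]
            targets = [{k: v.to(device) for k, v in t.items()} for t in targets]

            loss_dict = model(images, targets)  # per-component losses (box, cls, objectness, ...)
            loss = sum(loss_dict.values())

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        scheduler.step()
        print(f"epoch {epoch}: last batch loss {loss.item():.3f}")
```

A real pipeline adds augmentation, evaluation, checkpointing, and logging around this skeleton, which is where most of the design decisions live.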

“Conduct a comprehensive literature review on the state-of-the-art in Machine Learning and energy consumption. […]”
With this prompt, I tested the new Deep Research function, which has been integrated into the OpenAI o3 reasoning model since the end of February, and obtained a literature review of the state of the art within 6 minutes.
This function goes beyond a normal web…