Sensor-Invariant Tactile Representation for Zero-Shot Transfer Across Vision-Based Tactile Sensors

Tactile sensing is a crucial modality for intelligent systems to perceive and interact with the physical world. The GelSight sensor and its variants have emerged as influential tactile technologies, providing detailed information about contact surfaces by transforming tactile data into visual images. However, vision-based tactile sensing lacks transferability between sensors due to design and manufacturing…

Read More
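The zero-shot transfer problem described above can be pictured as learning a shared embedding space: images of the same contact captured by different sensors should map to nearby representations. The sketch below is only an illustration of that general idea, not the architecture proposed in the article; the encoder layout, embedding size, and cosine-alignment loss are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TactileEncoder(nn.Module):
    """Illustrative encoder mapping a tactile image to a sensor-invariant embedding.
    (Hypothetical architecture, not the one proposed in the article.)"""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.backbone(x).flatten(1)           # (batch, 64)
        return F.normalize(self.proj(h), dim=-1)  # unit-norm embedding

def alignment_loss(emb_a: torch.Tensor, emb_b: torch.Tensor) -> torch.Tensor:
    """Pull together embeddings of the same contact seen by two different sensors."""
    return (1 - F.cosine_similarity(emb_a, emb_b)).mean()

# Usage: paired images of the same contact from sensor A and sensor B
encoder = TactileEncoder()
img_a, img_b = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
loss = alignment_loss(encoder(img_a), encoder(img_b))
```

A representation trained this way could, in principle, be reused on a sensor unseen during training, which is the transferability gap the article highlights.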

Advancing Vision-Language Reward Models: Challenges, Benchmarks, and the Role of Process-Supervised Learning

Process-supervised reward models (PRMs) offer fine-grained, step-wise feedback on model responses, aiding in selecting effective reasoning paths for complex tasks. Unlike output reward models (ORMs), which evaluate responses based on final outputs, PRMs provide detailed assessments at each step, making them particularly valuable for reasoning-intensive applications. While PRMs have been extensively studied in language tasks,…

Read More
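The ORM/PRM distinction summarized above can be made concrete with a small scoring sketch: an ORM assigns one score to the final answer, while a PRM scores every intermediate step and the path score aggregates them. The scoring functions and the min-aggregation below are placeholders for illustration, not the models or benchmarks discussed in the article.

```python
from typing import Callable, List

Step = str  # one intermediate reasoning step

def orm_score(final_answer: str, score_fn: Callable[[str], float]) -> float:
    """Output reward model: a single score based on the final answer only."""
    return score_fn(final_answer)

def prm_score(steps: List[Step], step_score_fn: Callable[[Step], float]) -> float:
    """Process-supervised reward model: score each step, then aggregate.
    Using the minimum means one weak step penalizes the whole path."""
    return min(step_score_fn(s) for s in steps)

def select_best_path(paths: List[List[Step]],
                     step_score_fn: Callable[[Step], float]) -> List[Step]:
    """Pick the reasoning path whose weakest step is strongest."""
    return max(paths, key=lambda p: prm_score(p, step_score_fn))
```

The step-wise scores are what make PRMs useful for selecting reasoning paths: two paths with the same final answer can still be ranked by the quality of their intermediate steps.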

Taking a responsible path to AGI

We’re exploring the frontiers of AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community. Artificial general intelligence (AGI), AI that’s at least as capable as humans at most cognitive tasks, could be here within the coming years. Integrated with agentic capabilities, AGI could supercharge AI to understand, reason, plan, and execute…

Read More

The Art of Hybrid Architectures

In my previous article, I discussed how morphological feature extractors mimic the way biological experts visually assess images. This time, I want to go a step further and explore a new question: Can different architectures complement each other to build an AI that “sees” like an expert? Introduction: Rethinking Model Architecture Design While building a…

Read More
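One common way to make architectures complement each other, in the spirit of the question posed above, is late fusion: concatenate a learned CNN embedding with handcrafted morphological descriptors before a small classifier head. The sketch below is a generic illustration of that pattern, not the hybrid architecture developed in the article; the feature names and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    """Illustrative hybrid model: learned CNN features fused with handcrafted
    morphological features (e.g., area, perimeter, eccentricity)."""
    def __init__(self, num_morph_features: int = 8, num_classes: int = 10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 + num_morph_features, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, image: torch.Tensor, morph: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cnn(image), morph], dim=1)  # late fusion
        return self.head(fused)

# Usage with dummy inputs: a batch of images plus their morphological descriptors
model = HybridClassifier()
logits = model(torch.rand(2, 3, 128, 128), torch.rand(2, 8))
```

Concatenation is the simplest fusion choice; attention- or gating-based fusion would follow the same interface but learn how much weight to give each feature source.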