Overview
While large language models can deliver impressive results, they often demand substantial computing resources, posing barriers in cost, accessibility, and environmental impact. Compact models, which are more practical for real-world settings such as healthcare, enable secure, on-site diagnostic support and minimize the need to transmit sensitive patient data to external servers.
However, these smaller models are particularly reliant on the clarity and effectiveness of their prompts to perform well. Thus, creating a single, cost-efficient “super” prompt—an instruction that enables resource-limited models to approach expert-level diagnostic performance—can make advanced AI tools more accessible in clinical settings without compromising privacy.
Objective
In this competition, you’ll be challenged to create a single, highly effective prompt for small language models (under 70B parameters) that enables them to generate accurate, one-sentence medical diagnoses from a diverse set of >200 clinical case reports.
Rather than building a new AI model, you will design a prompt that consistently guides existing, resource-constrained models to extract and summarize complex clinical narratives with precision.
By advancing prompt engineering, this challenge aims to help models make accurate diagnoses from complex clinical cases—contributing to safer, more efficient, and privacy-preserving AI tools for healthcare professionals.
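To make the task concrete, here is a minimal sketch of how a single fixed prompt could be paired with a case report and run through a small open-weights model. The model choice, prompt wording, and generation settings below are illustrative assumptions only, not the competition's official evaluation pipeline.

```python
# Illustrative sketch: applying one fixed "super" prompt to a clinical case
# with a small (<70B-parameter) instruction-tuned model. Requires a recent
# version of the transformers library with chat-style pipeline support.
from transformers import pipeline

# Hypothetical single prompt reused unchanged across all case reports.
SUPER_PROMPT = (
    "You are an experienced clinician. Read the case report below and state "
    "the single most likely diagnosis in one sentence."
)

def diagnose(case_report: str, generator) -> str:
    """Combine the fixed prompt with one case report and return the model's answer."""
    messages = [
        {"role": "system", "content": SUPER_PROMPT},
        {"role": "user", "content": case_report},
    ]
    output = generator(messages, max_new_tokens=64, do_sample=False)
    # The pipeline returns the full conversation; the last message is the model's reply.
    return output[0]["generated_text"][-1]["content"]

if __name__ == "__main__":
    # Any instruction-tuned model under 70B parameters could stand in here;
    # this particular checkpoint is an arbitrary example.
    generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")
    case = "A 54-year-old man presents with crushing substernal chest pain radiating to the left arm..."
    print(diagnose(case, generator))
```

In this setup only SUPER_PROMPT changes between submissions; the model and decoding settings stay fixed, so scores reflect the quality of the prompt rather than the surrounding code.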
PRIZES
TRACKS
This competition is divided into two tracks:
Clinical Professionals Track
Open to physicians, medical specialists, and healthcare professionals.
General Track
Open to all AI researchers, data scientists, and developers.