Overview

'Anaesthesia Support through Knowledge Retrieval with Large Language Models'

A comparative study assessing the speed and accuracy of a mini-Large Language Model in retrieving information from anaesthesia guidelines compared with manual clinician look-up. The study aims to demonstrate that LLMs can safely and rapidly retrieve information from verified sources, potentially enhancing guideline accessibility for clinicians.

Investigators:

Dr. Ben Evans

Dr. Allan Xu

Anaesthetists operate in a demanding environment requiring rapid recall of extensive knowledge, often relying on established guidelines published by organisations such as the Association of Anaesthetists. However, these guidelines can be difficult to access, navigate and extract specific information from, particularly in stressful situations. 'ASK-LLM' addresses the critical need for swift and accurate access to this vast body of evidence-based information by simplifying that process.

Large Language Models (LLMs) offer significant promise for information retrieval, but there are concerns about their tendency to 'hallucinate', generating plausible but unverified information. Our project tackles this directly by constraining the LLM to a curated and validated mini-dataset of evidence.
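
This document does not describe the implementation, but one common way to enforce such a constraint is a retrieval-augmented pattern: fetch the most relevant passages from the curated guideline set, then instruct the model to answer only from them. The sketch below is a minimal illustration under that assumption; the corpus, the keyword-overlap scoring and the prompt wording are all hypothetical placeholders, not the study's actual stack.

```python
# Minimal retrieval-constrained QA sketch (illustrative only; the study's
# real pipeline is not described in this document). Names are placeholders.

# Curated mini-dataset: (source citation, passage text) pairs drawn from
# verified guidelines, so every answer remains traceable to its source.
GUIDELINE_PASSAGES = [
    ("Guideline A, section 1", "Passage text from a verified guideline ..."),
    ("Guideline B, section 4", "Another verified passage ..."),
]

def retrieve(question: str, k: int = 3) -> list[tuple[str, str]]:
    """Rank passages by naive keyword overlap (a stand-in for a proper
    embedding index) and return the top k."""
    q_words = set(question.lower().split())
    return sorted(
        GUIDELINE_PASSAGES,
        key=lambda p: len(q_words & set(p[1].lower().split())),
        reverse=True,
    )[:k]

def build_prompt(question: str) -> str:
    """Confine the model to the retrieved excerpts to limit hallucination."""
    context = "\n\n".join(f"[{src}] {text}" for src, text in retrieve(question))
    return (
        "Answer ONLY using the excerpts below, citing the source in "
        "brackets. If the answer is not present, say 'not found'.\n\n"
        f"{context}\n\nQuestion: {question}"
    )
```

The safety property this pattern targets is that the prompt carries only vetted guideline text, so any answer can be checked against the source it cites.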

We are conducting a comparative study pitting a custom mini-LLM against anaesthetists of varying seniority. A set of real-world clinical questions was designed, each with an objective correct answer verifiable within the Association of Anaesthetists guideline repository. We compare the speed and accuracy of manual clinician look-up against LLM search to assess whether LLMs can safely and rapidly provide information from verified sources, enhancing guideline accessibility for clinicians.
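
The core measurement in such a comparison is time-to-answer and correctness per question, per arm. The sketch below shows one way a single trial might be recorded, assuming each arm (clinician or LLM) is wrapped as a simple lookup function; the function names and exact-match scoring rule are hypothetical, not the study's actual marking scheme.

```python
import time
from typing import Callable

def run_trial(question: str, key_answer: str,
              lookup_fn: Callable[[str], str]) -> dict:
    """Time one look-up attempt and score it against the objectively
    correct answer from the guideline repository. Exact-match scoring
    is a placeholder for whatever marking scheme the study uses."""
    start = time.perf_counter()
    answer = lookup_fn(question)  # clinician's typed answer or LLM output
    elapsed = time.perf_counter() - start
    return {
        "time_s": elapsed,
        "correct": answer.strip().lower() == key_answer.strip().lower(),
    }
```

Running this over the full question set for both arms yields paired speed and accuracy observations per question, which is what the comparison needs.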

We anticipate publication of the results in mid-2025.