Large Language Models (LLMs) have shown promising potential in the medical domain, assisting with tasks such as clinical note generation and patient communication. However, current LLMs are limited to text-based communication, hindering their ability to interact with the diverse forms of information found in clinical environments. Although existing clinical agents can handle diverse signals, each is tailored to a single clinical scenario and therefore generalizes poorly to broader applications. To evaluate clinical agents holistically, we propose ClinicalAgent Bench (CAB), a comprehensive medical agent benchmark consisting of 18 tasks across five key realistic clinical dimensions.
Building on this benchmark, we introduce ReflecTool, a novel framework that excels at utilizing domain-specific tools in two stages. The first, an optimization stage, progressively grows a long-term memory by saving the agent's successful solving processes and tool-wise experience on a small predefined training set. In the subsequent inference stage, ReflecTool retrieves supportive demonstrations from the built long-term memory to guide its tool selection strategy, while a verifier improves tool usage according to the tool-wise experience via two verification methods: Iterative Refinement and Candidate Selection.
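The two-stage loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the `Demo` record, the word-overlap retrieval, and the `agent`/`verifier` callables are all hypothetical stand-ins (a real system would use an LLM agent, embedding-based retrieval, and LLM-based verification).

```python
from dataclasses import dataclass


@dataclass
class Demo:
    """One saved success: the task, its solving trajectory, tool-wise notes."""
    task: str
    trajectory: list
    tool_notes: str


class ReflecToolSketch:
    """Hypothetical sketch of the two-stage ReflecTool loop."""

    def __init__(self, agent, verifier):
        self.agent = agent        # callable: (task, demos) -> trajectory
        self.verifier = verifier  # callable: (candidates, notes) -> best trajectory
        self.memory: list[Demo] = []

    # Stage 1 (optimization): grow long-term memory from a small training set
    # by keeping only trajectories judged successful.
    def optimize(self, train_set, is_success):
        for task in train_set:
            traj = self.agent(task, [])
            if is_success(task, traj):
                self.memory.append(Demo(task, traj, tool_notes="..."))

    # Toy similarity: shared-word overlap; a real system would embed the task.
    def retrieve(self, task, k=2):
        def sim(d):
            return len(set(task.split()) & set(d.task.split()))
        return sorted(self.memory, key=sim, reverse=True)[:k]

    # Stage 2 (inference): retrieved demos guide tool selection; the verifier
    # picks or refines among candidate trajectories using tool-wise notes
    # (standing in for Iterative Refinement / Candidate Selection).
    def solve(self, task, n_candidates=3):
        demos = self.retrieve(task)
        candidates = [self.agent(task, demos) for _ in range(n_candidates)]
        notes = [d.tool_notes for d in demos]
        return self.verifier(candidates, notes)
```

Here Candidate Selection corresponds to the verifier choosing among `candidates`, while Iterative Refinement would instead loop the verifier's feedback back into `self.agent` to revise a single trajectory.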
Extensive experiments on ClinicalAgent Bench demonstrate that ReflecTool surpasses pure LLMs by more than 10 points and well-established agent-based methods by 3 points, highlighting its adaptability and effectiveness in solving complex clinical tasks.
Figure 1: The overview of ClinicalAgent Bench.
Figure 2: Data Statistics of ClinicalAgent Bench.
Table 1: Description of the tools in the clinical toolbox. The Input column shows each tool's input format, indicating the form of information the tool can leverage.
Figure 3: Overview of the ReflecTool.
Table 2: Experimental results of four types of models on ClinicalAgent Bench. The "CoT" method indicates the agent runs without the pre-built tools. "*" indicates the model uses 4-bit GPTQ quantization. "-" means the model is not capable of solving that task. The best results for each type of task are in bold.
Table 3: Ablation results of the Refinement and Selection verification methods. All experiments are conducted with Qwen2-72B. The modules of ReflecTool comprise Reflective Memory and Tool-wise Reflection.
Figure 4: Impact of the verification size on Iterative Refinement and Candidate Selection methods.
Figure 5: Impact of the verification size on Iterative Refinement and Candidate Selection methods.
In this paper, we introduce ClinicalAgent Bench, a holistic benchmark for clinical agents comprising 18 tasks across five key dimensions. Building upon it, we propose ReflecTool, a reflection-aware tool-augmented framework that optimizes tool utilization through long-term memory and tool-wise verification. To adaptively improve agent performance across varying backbones, we adopt Iterative Refinement and Candidate Selection to verify actions. Empirical results show that ReflecTool outperforms existing clinical agents, demonstrating superior adaptability and efficacy in real-world healthcare scenarios.
@misc{liao2024reflectoolreflectionawaretoolaugmentedclinical,
title={ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents},
author={Yusheng Liao and Shuyang Jiang and Yanfeng Wang and Yu Wang},
year={2024},
eprint={2410.17657},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.17657},
}