🎓 ReflecTool

Towards Reflection-Aware Tool-Augmented Clinical Agents

Yusheng Liao1,3, Shuyang Jiang2,3, Yu Wang1,3βœ‰οΈ, Yanfeng Wang1,3
1Shanghai Jiao Tong University
2Fudan University
3Shanghai Artificial Intelligence Laboratory

Abstract

Large Language Models (LLMs) have shown promising potential in the medical domain, assisting with tasks like clinical note generation and patient communication. However, current LLMs are limited to text-based communication, hindering their ability to interact with the diverse forms of information present in clinical environments. Although existing clinical agents can interact with diverse signals, each is tailored to a single clinical scenario and therefore fails to generalize to broader applications. To evaluate clinical agents holistically, we propose ClinicalAgent Bench (CAB), a comprehensive medical agent benchmark consisting of 18 tasks across five key realistic clinical dimensions.


Building on this, we introduce ReflecTool, a novel framework that excels at utilizing domain-specific tools in two stages. The first, optimization stage progressively builds up a long-term memory by saving the successful solving processes and tool-wise experience of agents on a small pre-defined training set. In the following inference stage, ReflecTool searches this long-term memory for supportive successful demonstrations to guide its tool selection strategy, while a verifier improves tool usage according to the tool-wise experience using one of two verification methods: Iterative Refinement and Candidate Selection.


Extensive experiments on ClinicalAgent Bench demonstrate that ReflecTool surpasses pure LLMs by more than 10 points and well-established agent-based methods by 3 points, highlighting its adaptability and effectiveness in solving complex clinical tasks.

ClinicalAgent Bench

Overview

Although existing clinical agents can interact with diverse signals, each is tailored to a single clinical scenario and therefore fails to generalize to broader applications. To evaluate clinical agents holistically, we propose ClinicalAgent Bench, a comprehensive medical agent benchmark consisting of 18 tasks across five key realistic clinical dimensions.

Figure 1: The overview of ClinicalAgent Bench.

Data Statistics

The statistics of the ClinicalAgent Bench dataset are shown below. We surveyed existing public medical datasets and grouped them according to the abilities they require of agents. ClinicalAgent Bench contains 18 tasks across five capability dimensions: Knowledge&Reasoning, MultiModal, Numerical Analysis, Data Understanding, and Trustworthiness. All data examples are divided into two subsets, test and optimization:
  • test: 14,879 samples for standard evaluation, averaging about 826 samples per dataset.
  • optimization: 200 samples per dataset for agent optimization.

Figure 2: Data Statistics of ClinicalAgent Bench.

Clinical ToolBox

Based on the ClinicalAgent Bench, we develop a toolbox that contains 17 types of tools to enable agents to handle diverse tasks.

Table 1: The description of the tools in the clinical toolbox. The Input column shows each tool's input format, which indicates the form of information the tool can leverage.
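To make the idea of a uniform toolbox concrete, here is a minimal sketch of what a shared tool interface could look like. The `Tool` fields, the `make_toolbox` helper, and the toy `Calculator` are illustrative assumptions for this sketch, not the actual ReflecTool toolbox API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """Hypothetical uniform wrapper around one clinical tool."""
    name: str
    description: str   # shown to the agent when it selects a tool
    input_format: str  # e.g. "text", "image", "table"
    run: Callable[[str], str]

def make_toolbox(tools):
    """Index tools by name so the agent can dispatch calls uniformly."""
    return {t.name: t for t in tools}

# Toy tool for illustration only.
calculator = Tool(
    name="Calculator",
    description="Evaluate a numerical expression.",
    input_format="text",
    run=lambda expr: str(eval(expr)),  # toy implementation for the sketch
)

toolbox = make_toolbox([calculator])
result = toolbox["Calculator"].run("2 + 3")  # -> "5"
```

Keeping `description` and `input_format` alongside the callable lets the agent reason over the tool inventory in text before committing to a call.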

Method: ReflecTool

We introduce ReflecTool, a novel framework that excels at utilizing domain-specific tools in two stages. ReflecTool searches the long-term memory built during the optimization stage for supportive successful demonstrations to guide the tool selection strategy, while a verifier improves tool usage according to the tool-wise experience using one of two verification methods: Iterative Refinement and Candidate Selection.
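The demonstration retrieval described above can be sketched as a similarity search over stored solving processes. The lexical-overlap similarity and the memory record fields below are stand-in assumptions for this sketch, not the paper's actual retriever.

```python
def similarity(a, b):
    """Crude lexical overlap between two queries (a stand-in for a learned embedding similarity)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def retrieve_demonstrations(memory, query, k=2):
    """Return the k stored solving processes whose query is most similar to the new query."""
    ranked = sorted(memory, key=lambda rec: similarity(rec["query"], query), reverse=True)
    return ranked[:k]

# Toy long-term memory: each record pairs a past query with its successful trajectory.
memory = [
    {"query": "summarize this clinical note", "trajectory": "..."},
    {"query": "compute the patient BMI", "trajectory": "..."},
]
demos = retrieve_demonstrations(memory, "compute BMI from height and weight", k=1)
```

The retrieved demonstrations are then prepended to the agent's context so the tool selection strategy is grounded in previously successful behavior.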

Figure 3: Overview of ReflecTool.
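The two verification methods named above can be sketched as follows. Here `verify`, `refine`, and `propose` are placeholder callables standing in for the verifier and the agent; this is a sketch of the control flow, not the paper's implementation.

```python
def iterative_refinement(action, verify, refine, max_iters=3):
    """Repeatedly revise a single action until the verifier accepts it or the budget runs out."""
    for _ in range(max_iters):
        score, feedback = verify(action)
        if score >= 1.0:
            break
        action = refine(action, feedback)
    return action

def candidate_selection(propose, verify, n_candidates=3):
    """Sample several candidate actions and keep the one the verifier scores highest."""
    candidates = [propose() for _ in range(n_candidates)]
    return max(candidates, key=lambda a: verify(a)[0])
```

Iterative Refinement spends its budget improving one trajectory, while Candidate Selection spends it exploring alternatives; which works better depends on the backbone model, which motivates supporting both.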

Performance & Analysis

Performance on ClinicalAgent Bench

Table 2: Experimental results of four types of models on ClinicalAgent Bench. The 'CoT' method indicates the agent runs without the pre-built tools. '*' indicates the model uses 4-bit GPTQ quantization. '-' means the model is not capable of solving such a task. The best results for each type of task are in bold.

Ablation Experiments

Table 3: Ablation results of the Refinement and Selection verification methods. All experiments are conducted with Qwen2-72B. The modules of ReflecTool comprise Reflective Memory and Tool-wise Reflection.

Analysis

Figure 4: Impact of the verification size on the Iterative Refinement and Candidate Selection methods.

Figure 5: Impact of the verification size on the Iterative Refinement and Candidate Selection methods.

Conclusion

In this paper, we introduce ClinicalAgent Bench, a holistic benchmark for clinical agents comprising 18 tasks across five key dimensions. Building upon it, we propose ReflecTool, a reflection-aware tool-augmented framework that optimizes tool utilization through long-term memory and tool-wise verification. To adaptively improve agent performance given varying backbones, we adopt Iterative Refinement and Candidate Selection to verify actions. Empirical results show that ReflecTool outperforms existing clinical agents, demonstrating superior adaptability and efficacy in real-world healthcare scenarios.

BibTeX

@misc{liao2024reflectoolreflectionawaretoolaugmentedclinical,
  title={ReflecTool: Towards Reflection-Aware Tool-Augmented Clinical Agents},
  author={Yusheng Liao and Shuyang Jiang and Yanfeng Wang and Yu Wang},
  year={2024},
  eprint={2410.17657},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2410.17657}
}