Prompt-driven LLM agents for content modification in ranking competitions
Sun 02.02 13:00 - 13:30
Abstract: Automated document optimization using Large Language Models (LLMs) may play a crucial role in ranking competitions, where strategic modifications can enhance retrieval rankings. This paper introduces a structured, prompt-based approach for LLM-driven document modification that aims to improve rankings while maintaining quality and faithfulness to the original content. Our method formalizes the modification process through a structured prompt architecture consisting of a system prompt that provides competition guidelines, a user prompt that includes the candidate document for modification and the query it aims to answer, and a contextual component that incorporates past ranking data. We define and evaluate four distinct prompting strategies (Pointwise, Pairwise, Listwise, and Dynamic), each designed to leverage ranking history differently. The effectiveness of these approaches is assessed through both offline and online experiments across multiple ranking competitions, comparing LLM agents to human participants and a baseline modification method. To measure the impact of LLM-driven modifications, we introduce novel faithfulness and contextual consistency metrics that utilize dense and sparse document representations in addition to previously used rank promotion estimation methods. Experimental results demonstrate that our structured prompting methods improve ranking performance while preserving document coherence and relevance. These findings highlight the effectiveness of structured LLM prompting in competitive retrieval environments and provide insights into optimal strategies for ranking-oriented document modifications.
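To make the three-part prompt architecture described in the abstract more concrete, the sketch below shows one way such a prompt could be assembled. It is a minimal illustration, not the authors' implementation: the function and field names (build_messages, RankingRecord, the exact instruction wording, and the strategy-dependent serialization of ranking history) are assumptions introduced for this example.

```python
# Illustrative sketch of a structured prompt for ranking-oriented document
# modification: a system prompt with competition guidelines, a contextual
# component built from past ranking data, and a user prompt carrying the
# candidate document and its target query. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class RankingRecord:
    """One past competition round: the query, a document, and its rank."""
    query: str
    document: str
    rank: int


def build_messages(query: str, candidate_doc: str,
                   history: list[RankingRecord],
                   strategy: str = "listwise") -> list[dict]:
    """Assemble chat-style messages for an LLM agent in a ranking competition."""
    system_prompt = (
        "You are a participant in a document ranking competition. "
        "Edit the candidate document so that it ranks higher for the query "
        "while staying faithful to its original content."
    )
    # Contextual component: how past ranking data is serialized would differ
    # across the pointwise, pairwise, listwise, and dynamic strategies; only
    # two simplified variants are sketched here.
    if strategy == "pointwise":
        context = "\n".join(
            f"Previous round: rank {r.rank} for query '{r.query}'."
            for r in history
        )
    else:  # listwise-style serialization of the full ranked list
        context = "\n\n".join(
            f"[rank {r.rank}] {r.document}"
            for r in sorted(history, key=lambda r: r.rank)
        )
    user_prompt = (
        f"Query: {query}\n\n"
        f"Candidate document to modify:\n{candidate_doc}\n\n"
        f"Ranking history from previous rounds:\n{context}"
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]
```

The returned message list could then be passed to any chat-completion API; the resulting modified document would subsequently be scored with the faithfulness and contextual consistency metrics mentioned in the abstract.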