
Neural Approaches to Feedback in Information Retrieval

Thursday, 06/17/2021 10:00am to 12:00pm
Zoom Meeting
PhD Seminar
Speaker: Keping Bi

Zoom Meeting: https://umass-amherst.zoom.us/j/7469168220?pwd=RUFYYm1Kdk5UeFFWS3hBWGRFT3RHdz09

Relevance feedback on search results indicates users' search intent and preferences. Extensive studies have shown that incorporating relevance feedback (RF) on the top k (usually 10) ranked results significantly improves re-ranking performance. However, most existing research on user feedback focuses on word-based retrieval models such as the vector space model (VSM) and language models (LM) for information retrieval (IR). Recently, neural retrieval models have shown their efficacy in capturing relevance matching, but little research has been conducted on neural approaches to feedback. This leads us to study different aspects of feedback with neural approaches.
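For readers unfamiliar with word-based RF, the classic technique in the VSM literature is the Rocchio update, which moves the query vector toward judged-relevant results and away from judged-non-relevant ones. The sketch below is illustrative only (toy term-weight vectors, standard textbook parameters), not the method presented in the talk:

```python
import numpy as np

def rocchio_update(query_vec, rel_docs, nonrel_docs,
                   alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio relevance-feedback update over term-weight vectors.

    rel_docs / nonrel_docs are lists of vectors judged (non-)relevant;
    alpha, beta, gamma are the usual textbook mixing weights.
    """
    q = alpha * np.asarray(query_vec, dtype=float)
    if rel_docs:
        q += beta * np.mean(rel_docs, axis=0)   # pull toward relevant centroid
    if nonrel_docs:
        q -= gamma * np.mean(nonrel_docs, axis=0)  # push from non-relevant centroid
    return q

# Toy example: a 4-term vocabulary, one positive and one negative judgment.
q0 = [1.0, 0.0, 0.0, 0.0]
q1 = rocchio_update(q0, rel_docs=[[1.0, 1.0, 0.0, 0.0]],
                    nonrel_docs=[[0.0, 0.0, 1.0, 0.0]])
```

The updated query gains weight on terms shared with the relevant result and negative weight on terms from the non-relevant one; neural RF approaches replace these sparse term vectors with learned representations.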

RF techniques are seldom used in real search scenarios since obtaining explicit judgments on search results can require significant manual effort. However, as intelligent assistants become more popular, user feedback on result quality could potentially be collected during interactions with the assistant. This scenario argues for retrieval of answer passages instead of documents, and for iterative feedback on one result at a time rather than feedback on a batch of results. To this end, we study iterative feedback versus top-k feedback with a focus on answer passages. Moreover, we study both positive and negative RF to refine re-ranking performance. Although effective, positive feedback is not always available, since relevant results may not be ranked at the top, especially for difficult queries. Also, in most cases, finding the first relevant result is more beneficial than finding additional relevant results. Thus, incorporating only negative feedback to identify relevant results is an important research topic. However, finding relevant results based on negative feedback is much more challenging than with positive feedback, since relevant results are usually similar to one another while non-relevant results can vary considerably.
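The iterative setting above can be sketched as a simple loop: present one result per round, collect a binary judgment, stop on positive feedback, and on negative feedback down-weight unseen candidates that resemble the rejected result. The multiplicative penalty and cosine scoring here are hypothetical illustrations, not the models discussed in the talk:

```python
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def iterative_feedback(query_vec, doc_vecs, judge, rounds=3):
    """Iterative RF sketch: one result per round, binary judgment per result.

    judge(i) -> bool is the user's relevance judgment for document i.
    Negative feedback multiplicatively penalizes remaining candidates by
    their similarity to the rejected result (an illustrative scheme).
    """
    scores = {i: cosine(query_vec, d) for i, d in enumerate(doc_vecs)}
    shown = []
    for _ in range(rounds):
        remaining = [i for i in scores if i not in shown]
        if not remaining:
            break
        top = max(remaining, key=lambda i: scores[i])
        shown.append(top)
        if judge(top):                        # positive feedback: relevant found
            break
        for i in remaining:                   # negative feedback: penalize lookalikes
            if i != top:
                scores[i] *= max(0.0, 1.0 - cosine(doc_vecs[top], doc_vecs[i]))
    return shown

# Toy 2-d embeddings: doc 1 nearly duplicates the rejected doc 0,
# so the dissimilar doc 2 is surfaced (and judged relevant) next.
docs = [[1.0, 0.0], [0.99, 0.14], [0.2, 0.98]]
shown = iterative_feedback([1.0, 0.0], docs, judge=lambda i: i == 2)
```

This illustrates why negative-only feedback is hard: the penalty can only steer away from what was rejected, and non-relevant results may differ from each other in many directions.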

We focus on the tasks of text retrieval and product search to study the different aspects of incorporating feedback for ranking refinement with neural approaches. Our contributions are: (1) we show that iterative relevance feedback (IRF) is more effective than top-k RF on answer passages and we further improve IRF with neural approaches; (2) we propose an effective RF technique based on neural models for product search; (3) we study how to refine re-ranking with negative feedback for conversational product search; (4) we leverage negative feedback in user responses to ask clarifying questions in open-domain conversational search. Our research improves retrieval performance by incorporating feedback in interactive retrieval and approaches multi-turn conversational information-seeking tasks with a focus on positive and negative feedback.

Advisor: W. Bruce Croft