Recommender systems play a critical role in modern digital platforms, yet their effectiveness depends on how well they extract and utilize implicit information from available data. This dissertation explores informative learning—a paradigm that enhances recommendation quality by extracting, refining, and transferring knowledge from structured and unstructured data.
We introduce a series of informative learning techniques that advance market adaptation, confidence calibration, collaborative-semantic alignment, knowledge extraction, and self-optimizing ranking. Specifically, we propose a transferable attention mechanism that adapts recommendation models across diverse geographic markets without requiring additional training data. We then introduce a confidence-aware fine-tuning framework that integrates conformal prediction to quantify uncertainty in recommendations. To bridge collaborative and semantic signals, we develop a method that leverages textual information to improve performance in both warm-start and cold-start scenarios. We further explore how large language models (LLMs) can generate structured representations from unstructured game text to enhance personalization and content integrity. Finally, we present Auto-Guided Prompt Refinement (AGP), a dynamic framework that refines user profiles based on ranking feedback, eliminating the need for manual prompt engineering.
Extensive experiments on multiple real-world datasets validate the effectiveness of the proposed techniques. Collectively, these contributions advance the development of adaptive, confidence-aware, and knowledge-enriched recommendation systems.