We live in a fast-changing, technology-driven world with a growing demand for intelligent applications. Intensive research efforts have gone into building intelligent agents that enable smoother interaction between people and such applications. For natural language understanding, one of the most fundamental tasks is named entity understanding: interpreting a sentence at the word level in both semantic and syntactic terms. Intelligent agents have raised the ability to identify key entities in a sentence to a new standard, yet their capability remains limited without sufficient annotated data.
Faced with the challenge of preserving language models' power in low-resource scenarios, this thesis focuses on effective entity knowledge transfer by uncovering the nature of data coherence and bias, presented in four works: (1) Coarse-grained named entity out-of-domain transfer. I tackle data bias across domains by extracting domain-invariant features that support positive transfer. (2) Fine-grained named entity in-domain transfer. I explore the structure of the label space to enable effective transfer from well-performing labels to new labels. (3) Lifelong transfer across scalable domains for attribute value extraction. I study knowledge transfer while maintaining the model's parameter efficiency and lifelong learning ability. (4) Prompting-style multi-label-space transfer for long-document information extraction. I study the feasibility of using a large language model for relation extraction from lengthy contexts under the challenge of a huge and complicated label space.