In the era of big data, we have access to various sources of potentially unlimited data, but collecting labels for these data remains very costly in computer vision. For example, object detection requires images to be annotated with class labels and bounding boxes for all objects, and instance segmentation requires pixel-level annotation of images. Given a limited annotation budget and the non-uniform distribution of real-world data, the available labels usually follow a long-tail distribution, where a few frequent classes have many annotations while rare classes have very few. Moreover, with the rapid growth of the Internet, people create new content and concepts almost every day, and it is hard for machines to recognize and classify such novel content. This gives rise to another kind of label scarcity, addressed by zero-shot recognition, where we want to train models to recognize new classes that they have never seen during training.

In this work, we study these two types of label scarcity (i.e., the long-tail distribution of classes and novel classes without annotations) in different applications. On one hand, we address the long-tail distribution in scene graph parsing, which requires the model to not only detect objects in the input images but also predict the relations between those objects. We propose a general framework that can be applied to, and improve, many existing models by decomposing the problem into classification and ranking sub-problems. On the other hand, to deal with the label scarcity caused by novel classes without annotations, we design generative models and leverage external knowledge from text to solve different zero-shot recognition problems in image classification. Specifically, we propose a unified framework for single-label zero-shot recognition based on generative adversarial networks, and use graph convolutional networks to bridge the gap between seen and unseen classes for multi-label zero-shot image recognition. Additionally, we propose a translational embedding model that recognizes novel attribute-object compositions. All of the work mentioned above uses open-source public datasets such as ImageNet, MS-COCO, NUS-WIDE, and CUB.