In recent years, deep convolutional neural networks (CNNs) have achieved remarkable advances in image semantic segmentation. However, conventional deep learning networks require large quantities of densely annotated training samples, which hinders the further development of semantic segmentation. Few-shot semantic segmentation aims to accurately segment objects of a target class from only a small number of annotated images. Most current few-shot segmentation methods are highly sensitive to the target category: they generalize poorly, and segmentation quality varies greatly across categories. Moreover, many of these methods underutilize the semantic information available in the support set, leading to suboptimal segmentation performance. To address these challenges, this paper introduces a novel strategy for encoding contextual information in few-shot image semantic segmentation. First, we propose a metric network that obtains class prototype representations from the support images. Second, we introduce a novel Contextual Advanced Semantic Extraction (CASE) module that learns the trade-off among network depth, width, and resolution. Finally, to mitigate the detrimental effects of foreground-background class imbalance, we propose a hybrid loss strategy as an additional contribution.
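The abstract does not state how the class prototypes are computed; a common choice in prototype-based few-shot segmentation is masked average pooling over the support feature map. The sketch below is an illustrative assumption, not the paper's actual method: the function name, shapes, and the single-shot setting are all hypothetical.

```python
import numpy as np

def masked_average_pooling(features, mask):
    """Hypothetical prototype extraction via masked average pooling.

    features: (C, H, W) support feature map from a backbone CNN.
    mask:     (H, W) binary foreground mask of the support image.
    Returns the class prototype: the mask-weighted mean feature vector, shape (C,).
    """
    mask = mask.astype(features.dtype)
    denom = mask.sum() + 1e-8  # guard against an empty mask
    # Zero out background positions, then average over foreground pixels only.
    pooled = (features * mask[None]).reshape(features.shape[0], -1).sum(axis=1)
    return pooled / denom
```

At query time, such a prototype would typically be compared against every query feature vector (e.g. by cosine similarity) to produce a foreground score map.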
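The abstract does not specify the form of the hybrid loss; one typical remedy for foreground-background imbalance combines a per-pixel cross-entropy term with a region-level Dice term. The following is a minimal sketch under that assumption; the weighting factor `alpha` and the exact formulation are illustrative, not taken from the paper.

```python
import numpy as np

def hybrid_loss(probs, target, alpha=0.5, eps=1e-8):
    """Hypothetical hybrid segmentation loss: alpha * BCE + (1 - alpha) * Dice.

    probs:  (H, W) predicted foreground probabilities in [0, 1].
    target: (H, W) binary ground-truth mask.
    """
    # Per-pixel binary cross-entropy (dominated by the majority background class).
    ce = -np.mean(target * np.log(probs + eps)
                  + (1 - target) * np.log(1 - probs + eps))
    # Dice term: an overlap measure that is far less sensitive to class imbalance.
    inter = (probs * target).sum()
    dice = 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)
    return alpha * ce + (1 - alpha) * dice
```

The Dice term normalizes by the total foreground area, so a small object contributes as strongly to the gradient as a large one, which is the usual motivation for mixing it with cross-entropy.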