Light field (LF) imaging is an emerging technology with applications in many fields. LF cameras capture both spatial and angular information of real-world 3D scenes, and this information is beneficial for image super-resolution (SR). However, most existing LF SR approaches fail to fully exploit global-view information, which encodes the correlations among all views of an LF image. To exploit the complementary information from different views, we propose a novel SR method that adapts each view to a global domain under the guidance of global-view information. Our method, called LF-GAGNet, adopts a dual-branch network that aligns features across views with deformable convolutions and fuses them with an attention mechanism. The upper branch extracts global-view information and generates adaptive guidance factors for each view through a global-view adaptation-guided module (GAGM). The lower branch uses these factors as offsets for deformable convolutions to achieve feature alignment in the global domain. Furthermore, we design an angular attention fusion module (AAFM) to enhance the angular features of each view according to their importance. We evaluate our method on various real-world scenes and show that it surpasses other state-of-the-art methods in SR quality. We also demonstrate that our method handles complex realistic LF scenarios effectively.
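To make the global-view-guided alignment idea concrete, the following is a minimal PyTorch sketch of the general mechanism described above: a global branch pools all views into a guidance feature, predicts per-view offsets, and a deformable convolution aligns each view before a simple angular re-weighting. All module names, channel sizes, and the attention design here are illustrative assumptions, not the paper's actual LF-GAGNet implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class GlobalGuidedAlign(nn.Module):
    """Illustrative sketch (not the authors' code): a global branch predicts
    per-view offsets that a deformable convolution uses to align each view."""
    def __init__(self, channels=32, views=25, ksize=3):
        super().__init__()
        # Global branch: squeeze all view features into one guidance map
        self.global_conv = nn.Conv2d(views * channels, channels, 1)
        # Predict deformable-conv offsets (2 coordinates per kernel tap)
        self.offset_conv = nn.Conv2d(2 * channels, 2 * ksize * ksize, 3, padding=1)
        # Deformable convolution aligns each view toward the global domain
        self.align = DeformConv2d(channels, channels, ksize, padding=ksize // 2)
        # Toy angular attention: weight each aligned view by a learned scalar
        self.attn = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, view_feats):            # list of [B, C, H, W], one per view
        global_feat = self.global_conv(torch.cat(view_feats, dim=1))
        aligned = []
        for f in view_feats:
            # Guidance factors from the view feature and the global feature
            offsets = self.offset_conv(torch.cat([f, global_feat], dim=1))
            a = self.align(f, offsets)        # align this view via the offsets
            aligned.append(a * self.attn(a))  # re-weight by angular importance
        return torch.stack(aligned, dim=1)    # [B, V, C, H, W]
```

In this sketch the offsets play the role of the adaptive guidance factors produced by the GAGM, and the scalar re-weighting stands in for the AAFM; the actual modules in the paper are more elaborate.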