Future wireless networks are expected to support a growing number of artificial intelligence (AI)-enabled applications, such as Metaverse services, at the network edge. AI algorithms such as deep learning play a key role in extracting useful information from large datasets, but conventional centralized learning requires collecting data that are distributed across users and often contain their personal information. Federated learning (FL) has been widely investigated to address these issues by performing learning in a distributed manner; however, it suffers performance degradation in heterogeneous networks. In this paper, we introduce asynchronous and personalized FL to address heterogeneity from different aspects. We first propose a semi-asynchronous FL (Semi-Async-FL) scheme that adds a time lag to the distributed global model and allows the server to aggregate after receiving updates from only a small subset of users. We then propose a new asynchronous personalized FL (Async-PFL) algorithm that accounts for the staleness of personalized models in classic personalized FL. Simulations show that the proposed Async-PFL achieves better learning performance than both Semi-Async-FL and personalized FL.
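To make the semi-asynchronous idea concrete, the sketch below shows one plausible server-side aggregation rule: the server waits only until a small subset of clients has reported, then combines their updates with a staleness discount. This is a minimal illustration, not the paper's algorithm; the function names, the polynomial staleness weight, and the `min_clients` threshold are all assumptions introduced here.

```python
import numpy as np

def staleness_weight(tau, a=0.5):
    # Polynomial staleness discount (illustrative choice; the exact
    # weighting function is not specified in the abstract).
    return (1.0 + tau) ** (-a)

def semi_async_aggregate(global_model, updates, current_round,
                         min_clients=2, lr=1.0):
    """Aggregate once at least `min_clients` client updates have arrived,
    discounting each update by its staleness (rounds elapsed since the
    client pulled the global model). All names here are illustrative."""
    if len(updates) < min_clients:
        return global_model, False  # keep waiting for more clients
    weights = np.array([staleness_weight(current_round - pulled_round)
                        for (_, pulled_round) in updates])
    weights /= weights.sum()  # normalize the staleness-discounted weights
    delta = sum(w * (model - global_model)
                for w, (model, _) in zip(weights, updates))
    return global_model + lr * delta, True

# Toy usage: scalar "models", two clients with different staleness.
g = np.array([0.0])
updates = [(np.array([1.0]), 4),   # (client model, round when it pulled g)
           (np.array([2.0]), 2)]   # staler client, discounted more heavily
new_g, done = semi_async_aggregate(g, updates, current_round=5)
```

Because the fresher client (staleness 1) receives a larger normalized weight than the staler one (staleness 3), the aggregate lands closer to the fresher client's model than a plain average would.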