This paper discusses how to combine large language models (LLMs) such as GPT and ERNIE with AIGC tools, represented by Stable Diffusion, to generate images with a consistent style, fixed character traits, and a continuous plot from an arbitrary story script. We first describe in detail how to build a pipeline in which an LLM converts a story script into the prompts required by Stable Diffusion. We then compare the characteristics of traditional picture-book production with the images generated from these LLM-produced prompts and summarize the limitations of text-to-image generation. This motivates a supervised, multi-round iterative LoRA fine-tuning scheme that uses CLIP to keep character IP consistent across images. In addition, the ControlNet model and inpainting are used to preprocess and postprocess the images, making character poses controllable and backgrounds consistent throughout the picture book. Finally, we evaluate and summarize the proposed scheme and analyze its strengths for picture-book creation.