Recent advances in deep learning-based compression have demonstrated remarkable performance, surpassing traditional methods. Nevertheless, deep neural networks are known to be vulnerable to backdoor attacks, in which an added pre-defined trigger pattern can induce malicious behavior in the model. In this paper, we propose a novel approach to launching a backdoor attack with multiple triggers against learned image compression models. Drawing inspiration from the discrete cosine transform (DCT) widely used in existing compression codecs and standards, we propose a frequency-based trigger injection model that adds triggers in the DCT domain. In particular, we design several attack objectives adapted to a range of scenarios, including: 1) attacking compression quality, in terms of bit-rate and reconstruction quality; and 2) attacking task-driven measures, such as face recognition and semantic segmentation in downstream applications. To facilitate more efficient training, we develop a dynamic loss function that adaptively balances the influence of the different loss terms with fewer hyper-parameters, which also yields more effective optimization of the attack objectives and improved performance. Furthermore, we consider several advanced scenarios. We evaluate the resistance of the proposed backdoor attack to defensive pre-processing methods, and then propose a two-stage training schedule together with a robust frequency selection design to further improve resistance. To strengthen both the cross-model and cross-domain transferability of attacks on downstream CV tasks, we propose shifting the classification boundary in the attack loss during training.
Extensive experiments also demonstrate that, by employing our trained trigger injection models and making only slight modifications to the encoder parameters of the compression model, our proposed attack can successfully inject multiple backdoors, each with its corresponding trigger, into a single image compression model.
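The core idea of frequency-domain trigger injection can be illustrated with a minimal sketch: transform an image into the DCT domain, add a trigger pattern to the coefficients, and transform back. This is only an illustrative simplification under assumed names (`inject_trigger`, the `trigger` array, and the `strength` scalar are hypothetical); the paper's actual trigger injection model is learned, not a fixed additive pattern.

```python
# Illustrative sketch of additive DCT-domain trigger injection.
# Assumptions (not from the paper): a fixed trigger pattern and a
# scalar strength; the actual model learns the injection end-to-end.
import numpy as np
from scipy.fft import dctn, idctn


def inject_trigger(image: np.ndarray, trigger: np.ndarray,
                   strength: float = 0.1) -> np.ndarray:
    """Add a trigger pattern to an image in the DCT domain.

    image:   H x W float array (a single channel, for simplicity)
    trigger: H x W frequency-domain pattern (hypothetical example)
    """
    coeffs = dctn(image, norm="ortho")          # forward 2-D DCT
    coeffs += strength * trigger                # perturb frequency coefficients
    return idctn(coeffs, norm="ortho")          # inverse 2-D DCT back to pixels
```

With `norm="ortho"` the DCT pair is exactly invertible, so a zero trigger leaves the image unchanged; a non-zero trigger produces a spatially spread, low-visibility perturbation concentrated in the selected frequencies.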