Domain adaptive LiDAR point cloud segmentation aims to learn a target segmentation model from labelled source point clouds and unlabelled target point clouds, a task that has recently attracted increasing attention due to the challenges of point cloud annotation. However, its performance remains constrained, as most existing studies do not capture the data-specific characteristics of LiDAR point clouds well. Inspired by the observation that the domain discrepancy of LiDAR point clouds is highly correlated with point density, we design a density-aware self-training (DAST) technique that introduces point density into the self-training framework for domain adaptive point cloud segmentation. DAST consists of two novel and complementary designs. The first is density-aware pseudo labelling, which exploits point density to produce accurate pseudo labels for target data and enable effective self-supervised network retraining. The second is density-aware consistency regularization, which encourages the network to learn density-invariant representations by enforcing target predictions to be consistent across points of different densities. Extensive experiments over multiple large-scale public datasets show that DAST achieves superior domain adaptation performance compared with the state-of-the-art.
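As a rough, illustrative sketch (not the paper's exact formulation), the density-aware consistency objective could take the form
\[
\mathcal{L}_{\mathrm{cons}} \;=\; \frac{1}{|X_t|} \sum_{x \in X_t} D\big(f_\theta(x),\, f_\theta(\rho(x))\big),
\]
where $X_t$ denotes the unlabelled target point clouds, $f_\theta$ an assumed segmentation network, $\rho(\cdot)$ a density-altering transformation (e.g., random point dropping), and $D(\cdot,\cdot)$ a divergence such as KL; all symbols here are assumptions introduced for illustration rather than the authors' notation.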