Sketch re-identification (Sketch Re-id) is the task of matching a full-body sketch drawn by a professional against all RGB photos of the target person in a gallery database. The large gap between the sketch and RGB domains makes Sketch Re-id challenging. This paper addresses the problem with a new framework, built on a CNN backbone, that learns domain-invariant features. To make the model focus on the regions of the RGB photo that are related to the sketch, we propose a novel cross-domain attention (CDA) mechanism: its two branches split the feature maps in different ways and compute the relationships between parts of the sketch images and the RGB photos. Moreover, we design a cross-domain center loss (CDC), which removes the traditional center loss's restriction that all samples come from the same domain; it effectively reduces the gap between the two domains and pulls features with the same identity closer together. Experiments are performed on the Sketch Re-id dataset, in which each person has one sketch and two RGB photos. To evaluate generalization, we also experiment on two popular sketch-photo face datasets. On the Sketch Re-id dataset the model outperforms previous methods by 3.7%, and on the CUHK student dataset it outperforms the state-of-the-art methods by 0.38%.
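
To illustrate the attention idea, the following is a minimal PyTorch sketch of one CDA branch. It assumes horizontal-stripe part splitting (a common Re-id convention) and scaled dot-product attention between part vectors; both choices, along with the function name and the `num_parts` parameter, are assumptions for illustration rather than the paper's implementation, and the second branch with its different splitting scheme is omitted.

```python
import torch
import torch.nn.functional as F

def cross_domain_part_attention(sketch_map, rgb_map, num_parts=6):
    """Hedged sketch of one cross-domain attention (CDA) branch.

    Assumption: feature maps (B, C, H, W) are split into horizontal
    stripes, and each sketch part attends over all RGB parts so the RGB
    representation emphasizes regions the sketch actually depicts.
    """
    b, c, h, w = rgb_map.shape
    # Average-pool each horizontal stripe into one part vector: (B, P, C).
    rgb_parts = F.adaptive_avg_pool2d(rgb_map, (num_parts, 1)).squeeze(-1).transpose(1, 2)
    sketch_parts = F.adaptive_avg_pool2d(sketch_map, (num_parts, 1)).squeeze(-1).transpose(1, 2)
    # Relationship between every sketch part and every RGB part: (B, P, P).
    attn = torch.softmax(sketch_parts @ rgb_parts.transpose(1, 2) / c ** 0.5, dim=-1)
    # RGB parts re-weighted by their relevance to the sketch parts: (B, P, C).
    return attn @ rgb_parts
```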
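
Similarly, a minimal sketch of how a cross-domain center loss could work: one learnable center per identity is shared by both domains, so sketch and RGB features with the same ID are pulled toward the same point, which is exactly where the single-domain assumption of the classic center loss is dropped. The class name, random initialization, and plain mean-squared-distance formulation are assumptions; the paper may weight or update the centers differently.

```python
import torch
import torch.nn as nn

class CrossDomainCenterLoss(nn.Module):
    """Hedged sketch of a cross-domain center loss (CDC).

    Each identity has a single center shared across the sketch and RGB
    domains; minimizing the distance of every feature to its identity
    center shrinks the domain gap for same-ID features.
    """

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per identity, shared by both domains.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, sketch_feats, rgb_feats, sketch_ids, rgb_ids):
        # Features from both domains are compared against the same centers.
        feats = torch.cat([sketch_feats, rgb_feats], dim=0)
        ids = torch.cat([sketch_ids, rgb_ids], dim=0)
        # Mean squared Euclidean distance to the corresponding center.
        diff = feats - self.centers[ids]
        return diff.pow(2).sum(dim=1).mean()

# Toy usage: 4 identities, 8-dim features, 3 sketches and 6 photos per batch.
loss_fn = CrossDomainCenterLoss(num_classes=4, feat_dim=8)
loss = loss_fn(torch.randn(3, 8), torch.randn(6, 8),
               torch.tensor([0, 1, 2]), torch.tensor([0, 0, 1, 1, 2, 2]))
loss.backward()
```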