Think about it this way: how do you define a distance between two numbers? One natural choice is the absolute value of their difference, which is straightforward to define. Another is their squared difference; once again, this is pretty straightforward.
However, how do you define the distance between two vectors? One way is to take the 2-norm of the difference vector; once again, pretty straightforward.
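To make the scalar and vector cases concrete, here is a minimal sketch in Python (NumPy assumed; the function names are purely illustrative) of the distances described above.

```python
import numpy as np

def abs_distance(a, b):
    """Distance between two numbers: the absolute value of their difference."""
    return abs(a - b)

def squared_distance(a, b):
    """Another option for two numbers: the squared difference."""
    return (a - b) ** 2

def l2_distance(u, v):
    """Distance between two vectors: the 2-norm of the difference vector."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(np.linalg.norm(u - v))

print(abs_distance(3, 7))           # 4
print(squared_distance(3, 7))       # 16
print(l2_distance([1, 2], [4, 6]))  # 5.0
```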
Now, how do you define the distance between two distributions? One answer is to use relative entropy. That is likely where the motivation for relative entropy comes from: people wanted a way to judge how far apart two distributions are, and relative entropy is what they came up with.
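For concreteness: for two discrete distributions p and q over the same outcomes, the relative entropy (also called the KL divergence) is D(p || q) = sum over x of p(x) * log(p(x) / q(x)). Below is a minimal sketch in Python (NumPy assumed; the function name and the coin example are purely illustrative) of how one might compute it.

```python
import numpy as np

def relative_entropy(p, q):
    """Relative entropy D(p || q) between two discrete distributions,
    given as arrays of probabilities over the same outcomes.

    Uses the natural log, so the result is in nats; use np.log2 for bits.
    Terms with p[i] == 0 contribute nothing; if q[i] == 0 while p[i] > 0,
    the divergence is infinite.
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    if np.any(q[mask] == 0):
        return np.inf
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Example: a fair coin versus a heavily biased coin.
p = [0.5, 0.5]
q = [0.9, 0.1]
print(relative_entropy(p, q))  # positive, and not equal to relative_entropy(q, p)
print(relative_entropy(p, p))  # 0.0: a distribution is at "distance" zero from itself
```

One caveat worth keeping in mind: relative entropy behaves like a distance in that it is nonnegative and zero exactly when the two distributions are equal, but it is not a metric in the strict sense, since it is not symmetric and does not satisfy the triangle inequality.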