There are two common definitions. The first is the easy-to-remember "you must sample at 2x (or faster than) the highest frequency in the measured signal". This is actually false in general, but many courses and even interview questions focus on the very common but specific case of baseband sampling.
The more accurate definition is that "the uniform sampling rate has a lower bound of 2x the information bandwidth of the signal". For example, if you have a 4 kHz sine wave, the required average sampling rate approaches zero -- from only a handful of samples the signal is exactly known. If the bandwidth were 4 kHz-5 kHz, you could sample at 2 kHz, because no two frequencies between 4 and 5 kHz would map to the same digital frequency. Had the band been 4.5 kHz-5.5 kHz, the (uniform) sampling rate would need to be higher to prevent destructive aliasing.
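To make the bandpass case concrete, here's a small sketch using the standard fold-to-baseband formula (the 2 kHz rate and the two bands are the numbers from the example above):

```python
def alias_freq(f_hz, fs_hz):
    """Apparent (folded) frequency of a real tone at f_hz after
    uniform sampling at fs_hz, mapped into [0, fs_hz / 2]."""
    f_mod = f_hz % fs_hz
    return f_mod if f_mod <= fs_hz / 2 else fs_hz - f_mod

fs = 2000.0  # the 2 kHz sampling rate from the example

# Tones inside the 4-5 kHz band all land on distinct digital frequencies:
print([alias_freq(f, fs) for f in (4100.0, 4400.0, 4900.0)])  # [100.0, 400.0, 900.0]

# With a 4.5-5.5 kHz band, two different tones collide after sampling:
print(alias_freq(4500.0, fs), alias_freq(5500.0, fs))  # both fold to 500.0
```

The collision in the second case is exactly the destructive aliasing mentioned above: once 4.5 kHz and 5.5 kHz both fold to 500 Hz, no reconstruction filter can tell them apart.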
The second definition only comes up in certain applications, but it is common in modern communication systems.
And yes, you CAN assume values between samples. You just need a correctly designed anti-aliasing filter (or, in the original formulation, the assumption of a bandlimited signal). In that case, the analog signal can be reconstructed exactly from the samples. The example in the picture shows a signal with a bandpass filter as the anti-aliasing filter, and then what happens when the signal is reconstructed using a different (mismatched) reconstruction filter.
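To show that the values between samples really are recoverable, here is a minimal sketch of ideal Whittaker-Shannon (sinc) reconstruction for the baseband case. The specific numbers (a 300 Hz tone sampled at 1 kHz) are illustrative assumptions; a practical implementation would use a windowed, truncated filter rather than a raw sinc sum:

```python
import numpy as np

fs = 1000.0                 # sampling rate (Hz), assumed for illustration
T = 1.0 / fs
f0 = 300.0                  # tone well below fs / 2, so baseband Nyquist holds
n = np.arange(-500, 500)    # a finite slice of the sample stream
x = np.sin(2 * np.pi * f0 * n * T)

def reconstruct(t):
    """Whittaker-Shannon interpolation: x(t) = sum_n x[n] * sinc((t - n*T) / T).
    np.sinc is the normalized sinc, sin(pi*u)/(pi*u), which is what the
    reconstruction formula needs."""
    return np.sum(x * np.sinc((t - n * T) / T))

t = 0.25 * T  # a point strictly between two samples
print(reconstruct(t), np.sin(2 * np.pi * f0 * t))  # nearly identical
```

With the mismatched reconstruction filter from the picture, the same samples would instead reassemble into a signal occupying a different band, which is why the filter assumption is part of the theorem, not an afterthought.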