As Keith pointed out, unlike DACs, which ideally output a single precise voltage for each binary coded input, ADCs intentionally represent a range of analog input values with each binary coded output value.
Ideally this range/accuracy is one Least Significant Bit (LSB); in reality it can vary with the input value, the particular ADC device, etc.
In the case of the ATMEL ATmega32A, this range/accuracy is spec'd at +/- 2 LSB, or if using the internal voltage reference at 10-bit resolution, +/- 5 mV.
The above issue can also be a source of confusion, as the representable range of a unipolar ADC is NOT 0 to Vref; instead it is 0 to Vref * (2^n - 1)/2^n, or in other words 0 to Vref - 1 LSB. Therefore at full scale the maximum calculated value will be less than Vref, in this case 2.56V * 1023/1024 = 2.56V - 2.5mV = 2.5575V.
Some of the additional sources of error are the ATmega32A's spec'd Integral Non-Linearity of 0.5 LSB, gain error, and, in the case of SAR ADCs in general, insufficient sample-and-hold capacitor charge time and a source output impedance which exceeds the ADC's recommended value.
How many of these factors the simulation takes into account is difficult to say with certainty as I rarely use Proteus these days.
However, in real-world hardware the above-mentioned factors and others can play a significant role in any design.
BigDog