To be clearer, I was referring to changing the texture rectangle only. That is, adding (n, n) to the rectangle's position and subtracting (n, n) * 2 from its size, where n is the fractional amount.
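As a rough sketch (assuming SFML 2's sf::FloatRect and a textured quad built from sf::Vertex, since a sprite's texture rect is integer-only; the helper names here are just made up for illustration), that adjustment could look like this:

```cpp
#include <SFML/Graphics.hpp>

// Shrink a texture rectangle by a fractional amount n on every side:
// move the position in by (n, n) and remove (n, n) * 2 from the size.
sf::FloatRect insetTextureRect(sf::FloatRect rect, float n)
{
    rect.left   += n;
    rect.top    += n;
    rect.width  -= n * 2.f;
    rect.height -= n * 2.f;
    return rect;
}

// Apply the inset rectangle to a quad's texture coordinates.
void setQuadTexCoords(sf::Vertex quad[4], const sf::FloatRect& r)
{
    quad[0].texCoords = { r.left,           r.top };
    quad[1].texCoords = { r.left + r.width, r.top };
    quad[2].texCoords = { r.left + r.width, r.top + r.height };
    quad[3].texCoords = { r.left,           r.top + r.height };
}
```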
The main issue is that when a texture co-ordinate lands exactly on a whole number, OpenGL can't know for certain which pixel to use, because that whole number is the border between two pixels, not the middle of one. The result is then at the mercy of floating point precision, since the actual value could end up on either side of that whole number.
e.g. (1, 1) could be (simplified):
a) (1.0000001, 1.0000001)
b) (1.0000001, 0.9999999)
c) (0.9999999, 1.0000001)
d) (0.9999999, 0.9999999)
When the value is >= 1, it samples one pixel (the one covering 1 - 1.9999999...); when it's < 1, it samples a different one (the one covering 0 - 0.9999999...).
Again, the usually simpler solution is to add an extra border of pixels around each image in the texture, duplicating that image's edge pixels, while still setting the texture rectangle to ignore the border. If sampling "spills" into the surrounding area (for the reason shown above), it still hits the expected pixel.
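Something along these lines, for example (a minimal sketch assuming SFML 2's sf::Image; the function name and layout are just for illustration, and the four corner pixels of the border are left unfilled for brevity):

```cpp
#include <SFML/Graphics.hpp>

// Copy one tile from 'source' into 'atlas' at (destX, destY), surrounded
// by a 1-pixel border made of the tile's own edge pixels. The texture
// rectangle used for drawing should still point at the inner tile area,
// ignoring the border.
void blitTileWithPadding(sf::Image& atlas, const sf::Image& source,
                         unsigned int destX, unsigned int destY,
                         sf::IntRect tileRect)
{
    // The tile itself goes one pixel in from the destination corner.
    atlas.copy(source, destX + 1, destY + 1, tileRect);

    // Duplicate the edge rows/columns into the 1-pixel border.
    atlas.copy(source, destX + 1, destY,                                          // top
               { tileRect.left, tileRect.top, tileRect.width, 1 });
    atlas.copy(source, destX + 1, destY + 1 + tileRect.height,                    // bottom
               { tileRect.left, tileRect.top + tileRect.height - 1, tileRect.width, 1 });
    atlas.copy(source, destX, destY + 1,                                          // left
               { tileRect.left, tileRect.top, 1, tileRect.height });
    atlas.copy(source, destX + 1 + tileRect.width, destY + 1,                     // right
               { tileRect.left + tileRect.width - 1, tileRect.top, 1, tileRect.height });
}
```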
Remember, the problem isn't the position of the vertices but the texture co-ordinates. However, the texture co-ordinates can become unreliable when the vertex positions are not integers.
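If whole-pixel positioning is acceptable for what you're drawing, one simple way to keep that reliable is to round the final position before drawing (again just a sketch, not the only way to handle it):

```cpp
#include <cmath>
#include <SFML/Graphics.hpp>

// Snap a position to whole pixels so the resulting texture co-ordinates
// stay on predictable texel boundaries.
sf::Vector2f snapToPixels(sf::Vector2f position)
{
    return { std::round(position.x), std::round(position.y) };
}
```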