Yes, that's how it's supposed to work. Unfortunately, it doesn't work that way. The problem was already discussed here: https://github.com/SFML/SFML/issues/235 Binding a texture with sf::Texture::Pixels changes the texture matrix, while sf::Texture::Normalized does not reset it to the identity as one would expect.
I already understood that, and Laurent already explained it in the GitHub issue. You are mixing sfml-graphics rendering (which uses pixel coordinates) and what you think of as "OpenGL rendering", for lack of a better term. If you use .draw() to render something, it counts as sfml-graphics rendering, and the convention there is pixel coordinates. If you want to use normalized texture coordinates, then you are probably better off doing the rendering directly with OpenGL code, which is what sf::Texture::bind() with sf::Texture::Normalized was made for: interaction with OpenGL.

You can't mix and match pixel and normalized coordinates if you rely on sfml-graphics to do the rendering for you. SFML doesn't expect the user to bind a texture with sf::Texture::Normalized while it is binding textures with pixel coordinates internally, so it also doesn't reset the texture matrix for you.
No. Again, it does not work as one would expect. The following Render() code produces exactly the same wrong image (the commented lines are the ones that changed):
In this line
window.draw(vertices, 4, sf::Quads, &randomTexture);
you pass the texture to SFML as part of a sf::RenderStates object. You are telling SFML to bind it using pixel coordinates.
In these lines
sf::Texture::bind(&randomTexture, sf::Texture::Normalized); //this line is changed. image is still wrong.
//sf::Texture::bind(0, sf::Texture::Normalized); //has no effect either
window.draw(vertices, 4, sf::Quads, &shader);
you bind the texture using normalized coordinates. As I said above, don't mix the two modes; stick to one mode and everything should work fine. If you want to pass textures through RenderStates, you will have to use pixel coordinates everywhere. If you don't need to pass your textures as part of RenderStates, then you can use normalized coordinates everywhere if you want, because then SFML won't try to force pixel coordinates on you.
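To make the two consistent options concrete, here is a rough sketch (assuming an SFML 2.x setup; randomTexture and the vertex array are the ones from your code, texW/texH are placeholders for the texture's size in pixels):

```cpp
// Option A: everything through sfml-graphics, pixel coordinates.
// The texture coordinates span [0, size], and the texture travels in the
// RenderStates, so SFML binds it with sf::Texture::Pixels internally.
sf::Vertex vertices[4];
vertices[0].texCoords = sf::Vector2f(0.f,  0.f);
vertices[1].texCoords = sf::Vector2f(texW, 0.f);
vertices[2].texCoords = sf::Vector2f(texW, texH);
vertices[3].texCoords = sf::Vector2f(0.f,  texH);
window.draw(vertices, 4, sf::Quads, &randomTexture);

// Option B: bind the texture yourself with normalized coordinates and
// issue the draw calls with raw OpenGL instead of window.draw().
sf::Texture::bind(&randomTexture, sf::Texture::Normalized);
// ... raw GL draw calls here, with texture coordinates in [0, 1] ...
sf::Texture::bind(NULL); // unbind when done
```

The point is that each option is internally consistent: SFML manages the binding (and the texture matrix) in option A, and you manage it in option B, but never both at once.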
And to add to what I said in my previous post: calling .resetGLStates() resets the GL states to the values required by sfml-graphics to function properly. It does not reset the GL states to the default values they had when the context was created, although some people might understand it that way.
I don't use the texture matrix at all; SFML does. So I have to write a vertex shader that skips the multiplication by a texture matrix that I don't need.
It is not uncommon to create multiple shaders for different coordinate systems. Just because SFML uses the texture matrix somewhere doesn't mean you have to use it as well, if you know that the coordinates your shader receives will always be normalized. Shaders are very flexible; you can change conventions as you wish. Unless you plan on using the same shader to render sfml-graphics objects (sf::Sprite, sf::Shape, etc.) as well as your custom vertex objects, you won't have to make sure it is compatible with SFML's rendering. I also recommend splitting shaders up depending on the conventions that are used in them. Some people might be inclined to toss everything into one gigantic "uber shader", but that isn't recommended from a performance or maintainability standpoint.
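For reference, a vertex shader that deliberately ignores the texture matrix could look something like this (a sketch in the GLSL 1.20 compatibility style that sf::Shader uses in SFML 2.x; it assumes the incoming texture coordinates are already normalized):

```glsl
// Pass-through vertex shader: gl_TextureMatrix[0] is intentionally
// NOT applied, so the texture coordinates reach the fragment shader
// exactly as supplied by the caller.
void main()
{
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0; // no gl_TextureMatrix[0] multiply
    gl_FrontColor = gl_Color;
}
```

SFML's own default vertex shader multiplies by gl_TextureMatrix[0] at this point, which is exactly the step you are opting out of here.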
I am wondering: is there a reason why you aren't just using raw OpenGL in your project?
Your code already looks very similar to what the equivalent OpenGL code would look like, and you wouldn't be running into these issues if you went down the raw OpenGL route, at least for some portions of your code. SFML has had a tradition of being very picky about how it wants the user to render things, but that will probably change in the future.