-
Ditto.
A possible backwards-compatible way could be another field in sf::RenderStates plus a few trivial changes in sf::RenderTarget.
-
Could you be more specific? Normals for what, and why can't you use unnormalized values?
-
I mean normalized coordinates. SFML actually does a tiny bit of extra work to give you pixel coordinates, but 'support' for both exists: http://www.sfml-dev.org/documentation/2.2/classsf_1_1Texture.php#ae9a4274e7b95ebf7244d09c7445833b0
The problem is that all SFML code (i.e. around 10 relevant lines in RenderTarget.cpp) currently uses pixel mode, so all texture coordinates you provide are interpreted as pixels.
You can normalize them yourself, but it's tedious, backwards, and requires you to know the size of the texture.
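For illustration, this is roughly what 'normalizing yourself' looks like (just a sketch; the function name is made up and ver/texture are placeholders):

// Manually scaling '0 to 1' coordinates back up to the pixel values SFML expects;
// note that you can't do it without knowing the texture size.
void mapFullTexture(sf::Vertex ver[4], const sf::Texture& texture)
{
    const sf::Vector2f size(texture.getSize()); // texture size in pixels
    ver[0].texCoords = sf::Vector2f(0.f, 0.f);
    ver[1].texCoords = sf::Vector2f(0.f, size.y);
    ver[2].texCoords = sf::Vector2f(size.x, size.y);
    ver[3].texCoords = sf::Vector2f(size.x, 0.f);
}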
-
I'm sorry, I'm not sure I fully understand. What's the problem with using CoordinateType::Normalized?
-
You can only use it in your own OpenGL code.
You can NOT use it for drawing to an sf::RenderTexture or sf::RenderWindow, because they always use Pixels; it's hardcoded in their (common) code.
Because of this, there is no way to create your own* custom drawable class that uses Normalized coordinates, which is pretty sad, considering that sf::Vertex already gives us so much power and low-level access to design our own drawable classes.
*SFML's built-in drawables assume Pixels, so you couldn't use them with Normalized even if Pixels weren't hardcoded.
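To show what 'hardcoded' means, the internal draw path boils down to something like this (paraphrased from RenderTarget.cpp, not the exact source):

// When RenderTarget applies the texture from the render states,
// the coordinate type is always Pixels; nothing in the public draw() API can change it.
sf::Texture::bind(states.texture, sf::Texture::Pixels);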
-
Are you referring to using RenderTarget::draw() with normalized coordinates, such as
sf::RectangleShape rectangle(sf::Vector2f(0.5, 0.5));
rectangle.setPosition(0.25, 0.25); // Remember that this sets the top left corner of the rectangle
window.draw(rectangle);
to produce a rectangle that is centered on the window and half the size of the window? If so, you can accomplish this with views, using window.setView(sf::View(sf::FloatRect(0.f, 0.f, 1.f, 1.f))). If that's not what you meant, could you provide some example code to show exactly what you would like SFML to do?
-
I am talking about normalized texture coordinates.
It's a pretty basic concept in computer graphics (among other things): instead of using texture coordinates in the range from 0 to size, we use coordinates in the range from 0 to 1.
The top-left corner is (0, 0) in Pixels and (0, 0) in Normalized.
The bottom-right corner is (size.x, size.y) in Pixels and (1, 1) in Normalized.
To get Pixels out of Normalized, or the other way around, you just multiply or divide by the size. That's what SFML does internally to give us Pixels: it sets a texture matrix in GL that divides x and y by the respective texture dimensions on each access, mapping Pixels back into normalized coordinates (look at Texture.cpp).
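Roughly, what SFML sets up for Pixels looks like this (paraphrased, not the actual Texture.cpp code; 'texture' is a placeholder and the GL calls need <SFML/OpenGL.hpp>):

// A GL_TEXTURE matrix that scales incoming coordinates by 1/size,
// so pixel coordinates arrive normalized at sampling time.
sf::Vector2f size(texture.getSize());
GLfloat matrix[16] = { 1.f / size.x, 0.f,          0.f, 0.f,
                       0.f,          1.f / size.y, 0.f, 0.f,
                       0.f,          0.f,          1.f, 0.f,
                       0.f,          0.f,          0.f, 1.f };
glMatrixMode(GL_TEXTURE);
glLoadMatrixf(matrix);
glMatrixMode(GL_MODELVIEW);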
The advantage of Pixels is that it's more intuitive for most people.
The advantage of Normalized is that it doesn't rely on the texture size, only on ratios (i.e. you can draw a single 100x100 quad and map the entire texture onto it without knowing the texture's size).
A quad with these texture coordinates, in Normalized, covers the entire texture, no matter its size:
sf::Vertex ver[4];
ver[0].texCoords = sf::Vector2f(0.f, 0.f); // top-left
ver[1].texCoords = sf::Vector2f(0.f, 1.f); // bottom-left
ver[2].texCoords = sf::Vector2f(1.f, 1.f); // bottom-right
ver[3].texCoords = sf::Vector2f(1.f, 0.f); // top-right
-
You still haven't answered the question: why? Do you have a use case, or a more complete explanation of why this would be beneficial to SFML?
-
As I said:
The advantage of Normalized is that it doesn't rely on the texture size, only on ratios
no way to create your own* custom drawable class that uses Normalized coordinates
The use case is not spectacular: I had to map an entire texture onto a quad and had to pass the texture's size to it somehow, since I couldn't just use (0, 0), ..., (1, 1) as coordinates and not care about the texture (it would come in the render states anyway).
But I know it probably won't be accepted, since it's not within SFML's definition of 'Simple'; at least I can't say I didn't try.
-
Going just on what you wrote in that post, it sounds like at some point you had to write "0, 0, texture.getSize().x, texture.getSize().y" and are asking for the ability to shorten that to "0, 0, 1, 1". If so, that doesn't really seem like enough of a simplification to justify additional functions in the API.
I think Laurent's asking for a significantly more detailed and concrete argument. Maybe some before/after code snippets would help.
-
I think Laurent's asking for a significantly more detailed and concrete argument. Maybe some before/after code snippets would help.
No, his description of the use case was detailed enough.
But I know it probably won't be accepted, since it's not within SFML's definition of 'Simple'; at least I can't say I didn't try.
I don't know what others think about it, but to me it's probably not worth adding something to the public API.
-
The rule of thumb is: If a feature can easily be added on top of SFML without modifying the core library, it is probably not a good candidate to be integrated.
It's not a big deal, but it is a limitation, and an artificial one that cannot be worked around in any sane way.
I don't know what others think about it, but to me it's probably not worth adding something to the public API.
Then why is there Normalized already in the public API?
-
Then why is there Normalized already in the public API?
You said it: for your own OpenGL code. Look at the Texture::bind function:
static void bind(const Texture* texture, CoordinateType coordinateType = Normalized);
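For example, a minimal sketch of that kind of code (assumes an active GL context and <SFML/OpenGL.hpp>):

sf::Texture::bind(&texture, sf::Texture::Normalized); // from here on, your own GL calls
glBegin(GL_QUADS);
glTexCoord2f(0.f, 0.f); glVertex2f(-1.f, -1.f);
glTexCoord2f(1.f, 0.f); glVertex2f( 1.f, -1.f);
glTexCoord2f(1.f, 1.f); glVertex2f( 1.f,  1.f);
glTexCoord2f(0.f, 1.f); glVertex2f(-1.f,  1.f);
glEnd();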
-
Considering I linked to it and read its source before, I think I know that function. :P
I was just making a point: for sf::Vertex + sf::RenderTarget it's "too many additions to the API"/"out of scope"/"not a big advantage" to add one field to the render states plus some insanely small internal changes, but it's okay to write a bind that also allows Normalized and expose that in the API.
Also, this functionality is redundant: you can get the id of the GL texture from sf::Texture and then use that to bind it however you want; you might as well, since you're already making GL calls anyway. SFML is not blocking that possibility, so sf::Texture::bind with Normalized is just a shortcut, not the only way.
On the other hand, using Normalized with sf::Vertex and sf::RenderTarget is not possible, neither through SFML nor by working around it with GL.
-
I was just making a point: for sf::Vertex + sf::RenderTarget it's "too many additions to the API"/"out of scope"/"not a big advantage" to add one field to the render states plus some insanely small internal changes.
Maybe you could expand on this possible implementation a bit more, as one usually does in the opening post of a feature request (instead of writing "Ditto" and assuming everyone knows exactly what you're thinking). ;)
Also, this functionality is redundant: you can get the id of the GL texture from sf::Texture and then use that to bind it however you want
As you probably already know, I've no idea about all the GL magic, but last I checked SFML doesn't provide the actual OpenGL IDs, which is exactly the reason why the bind functions were introduced. But maybe I understood that wrong in the past. :)
-
"Ditto" was a replacement for writing the title again, "Way to draw..." and so on.
Implementation is usually (sadly) a secondary issue here; the primary issues are 'out of scope', 'not useful enough', 'what for', and so on...
I assumed people know what Normalized is... at least the people relevant to the acceptance of this feature know (hello Laurent).
last I checked SFML doesn't provide the actual OpenGL IDs, which is exactly the reason why the bind functions were introduced.
http://en.sfml-dev.org/forums/index.php?topic=11303.msg79962#msg79962
Using this, we only ever need a plain sf::Texture::bind(&texture); call (it wouldn't even matter if Pixels were the implicit mode) to do anything at all with the texture via GL, including getting its ID and then rebinding with another coordinate translation matrix (or just getting the ID and setting things up yourself, without rebinding).
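Sketch of that trick (standard GL calls only; assumes an sf::Texture named texture and an active context):

sf::Texture::bind(&texture);                      // let SFML bind it
GLint nativeId = 0;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &nativeId);  // ask GL what is bound: that's the texture's GL id
// From here on you can glBindTexture(GL_TEXTURE_2D, nativeId) whenever you like
// and set up whatever texture matrix (Pixels, Normalized, anything else) yourself.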
This 'method' was approved by Laurent a few posts later while he fiercely defended against adding this accessor. ;) I know he really isn't against it, but it's still funny...
Although in the end he did say:
It will be added as soon as possible (not in SFML 2.1 which is a bug-fix release).
we still don't have it...
Also, bind is a bit of an 'internal' call too, one that does a tiny bit of extra work: it switches back to GL_MODELVIEW and says in a comment that sf::RenderTarget expects that of it.
Gratuitous Laurent/SFML bashing aside, the implementation of THIS feature would consist of a few things (rough sketch after the list):
1. Add an enum field to sf::RenderStates (plus possibly the relevant constructors, but that's another can of worms; I'm talking about minimal support here).
2. Add an enum field to the render states cache (two draws with the same texture but different coordinate modes require a rebind, or a reset of the matrix, but that would be messier).
3. Instead of just checking the texture id (SFML's 64-bit id, not the GL id) before rebinding, also check the last coordinate mode.
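From the user's side, point 1 would look something like this (pure sketch; the field name 'coordinateType' is made up, it doesn't exist in SFML):

sf::RenderStates states;
states.texture        = &texture;
states.coordinateType = sf::Texture::Normalized; // hypothetical new field
window.draw(ver, 4, sf::Quads, states);          // ver: the quad with 0..1 texCoords from earlier (positions set elsewhere)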
-
I also requested this feature some time ago. It was deemed 'too specific' at that time.
http://en.sfml-dev.org/forums/index.php?topic=7243
-
That bodes well...
-
I also requested this feature some time ago. It was deemed 'too specific' at that time.
http://en.sfml-dev.org/forums/index.php?topic=7243
I'm guessing that by "too specific" it was meant that the usage is limited to a few cases, making the implementation not worth it. Personally I haven't needed support for normalized coordinates, so I'm indifferent to its implementation, but in the end it can't hurt, right?