
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - Tyrendel

General / Re: Render to several render textures in one draw ?
« on: December 14, 2022, 03:25:52 pm »
Thanks, I'll have a look out of curiosity, but I'm developing on macOS and Windows, using the Rust-SFML binding, and I'm not confident at all about trying to modify the library + binding ;)

General / Re: VertexArray size impacting gl_Vertex value in Vertex shader
« on: December 14, 2022, 03:22:43 pm »
Thanks for the quick answer!

How did you work around the issue? I'm thinking of leaving the VertexArray declared with 5 vertices, though I find it quite a dirty solution... :)

General / VertexArray size impacting gl_Vertex value in Vertex shader
« on: December 14, 2022, 01:55:33 pm »
Hi, I'm facing a really weird behavior involving VertexArrays and the vertex shader:

Given the same 3 vertices, a VertexArray doesn't give the vertex shader the same gl_Vertex value depending on the VertexArray's declared size.

For VertexArrays of size 5 and upwards, gl_Vertex seems to contain world coordinates (as expected), but for VertexArrays of size 4 or less, gl_Vertex seems to contain coordinates centered on and aligned with the window.

I've been able to reproduce it with the minimal example described below, BUT in rust-sfml, so maybe the issue lies there... Anyway, if anyone could try to reproduce it in native C++, it would be helpful to corner the issue.

In the attached image there are three VertexArrays:
- one line strip representing x and y axes (red and green)
- one large triangle, a vertex array declared with size 3
- one small triangle, a vertex array declared with size 5

Both triangles are made of the same coordinates centered on origin, just scaled differently. They are drawn using the same shader.
The render state is translated (diagonal), rotated (90deg) and scaled (x10) to reveal the issue.
The Vertex shader passes the gl_Vertex coordinates to the fragment shader which maps x and y to the gl_FragColor red and green.

As you can see, the small triangle displays coordinates rotated 90deg, still centered on itself, and the transition between colors is blurred over 10 pixels, as expected. The large triangle, on the other hand, displays colors as if no changes had been made to the render state.

Thanks for reading and for your help!

Here is the source code if you want to check things further:
Rust:
fn minimal_bug_reproduction() {
    let mut window = RenderWindow::new(
        (1500, 1000),
        "Vertex Array issue with shader and gl_Vertex.xy",
        Style::CLOSE,
        &ContextSettings::default(),
    );
    window.set_view(&View::new((0., 0.).into(), Vector2f::new(1500., 1000.)));

    let rotate = |vector: Vector2f, sin_cos: (f32, f32)| {
        let (sin, cos) = sin_cos;
        Vector2f::new(vector.x * cos - vector.y * sin, vector.y * cos + vector.x * sin)
    };

    let shader = Shader::from_file(Some("resources/shaders/dummy_vertex.glsl"), None, Some("resources/shaders/triangle_fragment.glsl"));
    let mut render_state = RenderStates::default();
    render_state.transform.translate(50., 50.);
    render_state.transform.rotate(90.);
    render_state.transform.scale(10., 10.);
    render_state.shader = shader.as_ref();

    let mut angle = 0_f32;
    let angle_speed = 0.5_f32;

    let left_pos = Vector2f::new(0., 0.);
    let right_pos = Vector2f::new(0., 0.);

    // Axes: a line strip showing x (red) and y (green)
    let mut up_shape = VertexArray::new(PrimitiveType::LineStrip, 3);
    up_shape[0] = Vertex::with_pos_color(Vector2f::new(30., 0.), Color::RED);
    up_shape[1] = Vertex::with_pos_color(Vector2f::new(0., 0.), Color::BLACK);
    up_shape[2] = Vertex::with_pos_color(Vector2f::new(0., 30.), Color::GREEN);

    // Large triangle: declared with exactly 3 vertices
    let size_1 = 20.;
    let mut shape_1 = VertexArray::new(PrimitiveType::Triangles, 3);
    shape_1[0] = Vertex::with_pos(Vector2f::new(-size_1, -size_1) + left_pos);
    shape_1[1] = Vertex::with_pos(Vector2f::new(size_1, -size_1) + left_pos);
    shape_1[2] = Vertex::with_pos(Vector2f::new(0., size_1) + left_pos);

    // Small triangle: declared with size 5, only 3 vertices set
    let size_2 = 10.;
    let mut shape_2 = VertexArray::new(PrimitiveType::Triangles, 5);
    shape_2[0] = Vertex::with_pos(Vector2f::new(-size_2, -size_2) + right_pos);
    shape_2[1] = Vertex::with_pos(Vector2f::new(size_2, -size_2) + right_pos);
    shape_2[2] = Vertex::with_pos(Vector2f::new(0., size_2) + right_pos);

    let mut clock_updates: Clock = Clock::start();
    let mut delta_time_physics: Time = Time::ZERO;
    let mut delta_time_update: Time = Time::ZERO;
    while window.is_open() {
        let delta_time = clock_updates.restart();
        delta_time_physics += delta_time;
        delta_time_update += delta_time;

        // Game::SIMULATION_TIME_STEP is a project constant (fixed physics step)
        while delta_time_physics.as_seconds() >= Game::SIMULATION_TIME_STEP as f32 {
            delta_time_physics -= Time::seconds(Game::SIMULATION_TIME_STEP as f32);
            angle += angle_speed * Game::SIMULATION_TIME_STEP as f32;

            shape_1[0] = Vertex::with_pos(rotate(Vector2f::new(-size_1, -size_1), angle.sin_cos()) + left_pos);
            shape_1[1] = Vertex::with_pos(rotate(Vector2f::new(size_1, -size_1), angle.sin_cos()) + left_pos);
            shape_1[2] = Vertex::with_pos(rotate(Vector2f::new(0., size_1), angle.sin_cos()) + left_pos);

            shape_2[0] = Vertex::with_pos(rotate(Vector2f::new(-size_2, -size_2), angle.sin_cos()) + right_pos);
            shape_2[1] = Vertex::with_pos(rotate(Vector2f::new(size_2, -size_2), angle.sin_cos()) + right_pos);
            shape_2[2] = Vertex::with_pos(rotate(Vector2f::new(0., size_2), angle.sin_cos()) + right_pos);
        }

        while let Some(event) = window.poll_event() {
            match event {
                Event::Closed => window.close(),
                Event::KeyPressed { code: Key::Escape, .. } => window.close(),
                _ => {}
            }
        }

        window.clear(Color::BLACK);
        window.draw_vertex_array(&shape_1, render_state);
        window.draw_vertex_array(&shape_2, render_state);
        render_state.shader = None;
        window.draw_vertex_array(&up_shape, render_state);
        render_state.shader = shader.as_ref();
        window.display();
    }
}
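As an aside, the `rotate` closure above is the standard 2D rotation formula; here is a dependency-free sketch of the same math in plain Rust (no SFML types), just to show it behaves as expected:

```rust
// Standard 2D rotation, matching the `rotate` closure in the snippet above.
fn rotate(x: f32, y: f32, angle: f32) -> (f32, f32) {
    let (sin, cos) = angle.sin_cos();
    (x * cos - y * sin, y * cos + x * sin)
}

fn main() {
    // Rotating (1, 0) by 90 degrees lands on (0, 1), up to float error.
    let (x, y) = rotate(1.0, 0.0, std::f32::consts::FRAC_PI_2);
    assert!(x.abs() < 1e-6 && (y - 1.0).abs() < 1e-6);
    println!("rotation ok");
}
```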
Vertex shader (GLSL):
#version 120

varying vec2 io_world_position;

void main() {
    // transform the vertex position
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    io_world_position = gl_Vertex.xy;

    // transform the texture coordinates
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;

    // forward the vertex color
    gl_FrontColor = vec4(gl_Vertex.xy, 0., 1.);
}
Fragment shader (GLSL):
#version 120

varying vec2 io_world_position;

void main() {
    gl_FragColor = vec4(io_world_position, 0., 1.0); // alpha hard-coded for this minimal repro
}

General / Re: Render to several render textures in one draw ?
« on: December 14, 2022, 01:07:19 pm »
Thanks for your answer. I think I'll still try this without the MRT in the future!

General / Re: Render to several render textures in one draw ?
« on: December 11, 2022, 11:54:41 am »
Ok so I found the terms for what I was looking for, and the answer to my questions.

I was talking about Multiple Render Targets (MRT); it's a standard way of doing things, but it isn't currently possible in SFML.

-> Any plan of having it added in the future?

General / Render to several render textures in one draw ?
« on: December 10, 2022, 04:22:51 pm »
Hi, I'm still working on my Jabos project, currently on the rendering part, adding lighting effects:

At this point I'm drawing each frame bit by bit. Each kind of element to draw is a Vertex Array using UV coords to manage normals, drawn with its own shader to manage lights:
  • stars
  • atmosphere
  • ocean background (necessary to mask the stars)
  • ship parts
  • ocean foreground (applied with a multiply blend mode)
  • planet ground

I feel this is suboptimal, as the lighting code is duplicated in each shader, and I'm running into a few limitations, for example in how light affects the ocean. I'm thinking about switching to using a few render textures:
  • base material colors
  • normals
  • environment kind (water, air, void)
and, in one last step, mix everything together using a list of lights. I'm worried, though, about the performance impact of drawing everything to 2 or 3 render textures each frame.

Is there a way to draw to several render textures in one shader call? Is that a standard way of doing things?
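For context on what MRT looks like at the GLSL 1.20 level used elsewhere in this thread: the fragment shader writes to gl_FragData[i] instead of gl_FragColor, one index per color attachment bound to the framebuffer, all filled in a single draw call. A sketch only (this isn't reachable through SFML's RenderTexture, which exposes a single color attachment); the three outputs mirror the list above and their values here are placeholders:

```glsl
#version 120

void main() {
    // One write per bound color attachment, all in a single draw call:
    gl_FragData[0] = vec4(1.0, 0.0, 0.0, 1.0); // e.g. base material color
    gl_FragData[1] = vec4(0.5, 0.5, 1.0, 1.0); // e.g. encoded normal
    gl_FragData[2] = vec4(0.0, 0.0, 1.0, 1.0); // e.g. environment kind
}
```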

General / Re: change View or change RenderStates?
« on: April 28, 2021, 09:12:52 am »
What I meant is that I used to change the View (set_center, set_size, rotate), sometimes storing it so I could temporarily switch to the default View to render things like the GUI (drawing on the RenderTarget with no RenderStates specified). I've now fully switched to leaving the View alone and instead changing the Transform stored in a RenderStates that I pass along to each draw function; I can change it locally without impacting the calling functions (and I use the default RenderStates when I want to draw the GUI).

I understand from your answer that both approaches are broadly equivalent; I just have to decide which one I prefer. Thanks!

Water? It's because I didn't make it to the moon in the video ;-)

General / change View or change RenderStates?
« on: April 26, 2021, 11:34:45 am »

I'm working on a lunar lander (video below) and I managed to achieve the same camera movements either by moving the View or by changing the RenderStates. I don't clearly understand which way I should go. I've read the docs and searched the forum and Google, but I didn't find similar questions.

1) Is there a concrete reason to use one over the other depending on the use case (code organisation and readability, ...), or is it more a philosophical choice?
2) What is the performance impact of changing the View vs. changing the RenderStates?



I recompiled SFML in my current environment and the bug is now gone.

Thanks !

Hi everyone, I'm experiencing a weird bug with std::thread and SFML, and I couldn't find any answer on this forum / Google / Stack Overflow:

When I run the piece of code that I attached at the end of this post, I get a "Segmentation fault" on the t.join(); instruction.
However, if I delete everything SFML-related and return EXIT_SUCCESS right after t.join();, the program exits without error.

I'm using the Code::Blocks IDE on Windows with the TDM-GCC toolchain (codeblocks-13.12, mingw-TDM-GCC-481), and I installed SFML 2.1 (SFML-2.1-windows-gcc-4.7-tdm-32bits). I guess my problem comes from the difference in GCC versions; sadly TDM-GCC-471 doesn't seem to support std::thread...
The same code, compiled with Xcode on OS X against SFML 2.1, doesn't produce any bug.

Can you confirm the problem is indeed coming from the difference in GCC versions?
Do you think recompiling SFML with GCC 4.8.1 will solve this?

#include <SFML/Graphics.hpp>
#include <thread>
#include <iostream>

int main()
{
    std::thread t = std::thread([](){
        std::cout << "test\n";
    });
    t.join(); // the segfault happens here

    // Create the main window
    sf::RenderWindow app(sf::VideoMode(800, 600), "SFML window");
    sf::Texture texture;
    if (!texture.loadFromFile("cb.bmp"))
        return EXIT_FAILURE;
    sf::Sprite sprite(texture);

    while (app.isOpen())
    {
        sf::Event event;
        while (app.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                app.close();
        }
        app.clear();
        app.draw(sprite);
        app.display();
    }

    return EXIT_SUCCESS;
}
