
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - binary1248

1
Graphics / Re: sf::VertexBuffer::update() memory reallocation
« on: January 11, 2022, 09:50:07 am »
Performance characteristics change from vendor to vendor, GPU to GPU, OS to OS and driver version to driver version. The best you can do is measure the performance of each approach, pick what currently suits you best and hope it stays that way. Welcome to the OpenGL club.
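Below is a rough sketch of what "measure it" could look like, not code from this thread: a hypothetical micro-benchmark comparing in-place updates against recreating the buffer every time. It only captures the CPU-side cost of issuing the calls; whole-frame timings under a realistic workload are what ultimately matter, and the numbers will differ per vendor and driver, which is the whole point.

#include <SFML/Graphics.hpp>
#include <iostream>
#include <vector>

int main()
{
    // The window guarantees an active OpenGL context for the vertex buffer calls below.
    sf::RenderWindow window(sf::VideoMode(640, 480), "VertexBuffer timing");

    const std::size_t count = 100000;
    std::vector<sf::Vertex> vertices(count);

    sf::VertexBuffer buffer(sf::Triangles, sf::VertexBuffer::Stream);
    buffer.create(count);

    sf::Clock clock;
    for (int i = 0; i < 100; ++i)
        buffer.update(vertices.data(), count, 0);   // update in place, no reallocation
    std::cout << "in-place updates:  " << clock.restart().asMicroseconds() << " us\n";

    for (int i = 0; i < 100; ++i)
    {
        buffer.create(count);                       // force a fresh allocation every iteration
        buffer.update(vertices.data(), count, 0);
    }
    std::cout << "recreate + update: " << clock.restart().asMicroseconds() << " us\n";
}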

3
Graphics / Re: Update fails in sf::VertexBuffer
« on: January 27, 2021, 02:07:31 am »
As you noticed yourself, the CPU synchronization primitives mean close to nothing to the GPU. What you were initially attempting could be seen as "undefined behaviour" in GPU-land.

What not many people realize when they start out with such things is that OpenGL was never meant to be used in multi-threaded scenarios. Back in the 90s when OpenGL came to be, people were happy that they could get something on their screen at all, using a single-core processor running their application on a single thread. All the facilities around OpenGL contexts and the like were just a half-baked solution bolted onto an already broken programming model. The model was so broken that to this day multi-threading with OpenGL can still be considered a dark art, subject to the mood of the driver in any given situation. Because of this, serious engines such as Unreal Engine never even attempted to multi-thread any part of their OpenGL implementations, because it would never work properly.

Now you might be wondering: Why does glFinish() or glFlush() seem to fix the problem?

This is the other thing that many beginners in this field seem to misunderstand: glFinish() and glFlush() were never intended to be, and will never be, synchronization mechanisms.

Again, back in the 90s, when dedicated graphics hardware was more or less limited to privileged (i.e. rich) companies that had to do e.g. CAD work, the idea was that it would be a waste for such hardware to serve only a single workstation, so it should be shared by many users instead (a little like modern render farms). If I had to guess, the experienced engineers working on the solution were so accustomed to working with mainframes in the prior decades that they thought access to the expensive graphics hardware could be modelled after access to a mainframe as well. This is the reason that, even today, if you read through the OpenGL specification, the GPU is always referred to as the "Server" and your application as the "Client"; this can be seen in the naming of some OpenGL functions as well. More modern APIs use the more sensible terms "Device" and "Host" instead.

The problem with modelling something using a client-server model is always going to be latency, buffering, head-of-line blocking and all the other problems that one might be more accustomed to in the networking world.

Just like in the networking world, because every packet has some overhead attached to it, any efficient implementation is going to try to group data together so it can be sent in bigger chunks at a time. In the world of TCP this is known as "Nagle's Algorithm". The problem comes when there isn't enough data to satisfy the threshold at which a new packet would be sent out. Either you wait for the internal timeout to expire or you force the implementation to send the data out straight away. This is known as flushing the buffer and is more or less what glFlush() was always intended to do.

Now obviously, if you open up connections to a web server on 2 different computers and force them to send their requests as fast as possible, common sense will tell you that that still doesn't guarantee the order in which the computers will receive their responses from the server because you can't predict which request will actually reach the server first due to the unpredictability of the internet. If you replace "computer" with "OpenGL context" and "internet" with "graphics driver" you basically end up with what is going on here. The fact that you are calling glFlush() doesn't have to mean much. It might work or it might not work, but there is never going to be any guarantee.

The only real guarantee you are going to get is by using glFinish(). It goes a step further than glFlush(). glFinish() basically blocks execution of your code until the graphics driver can guarantee that all issued commands have been completed by the GPU. It's like saying you don't do anything else in your life until the web page finishes loading on your screen. Obviously, if a single person did this with 2 computers, the order in which requests are processed by the web server will be guaranteed. The main downside of glFinish(), and the only reason people need to stay far away from it, is that it completely and utterly destroys any performance you might gain from accelerating your rendering on a GPU. I would go so far as to say you might as well render your graphics solely on your CPU if you intend to use glFinish().
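If you want to see that stall for yourself, a small sketch like the following (my own illustration, not from the thread) times how long glFinish() blocks the calling thread after a batch of queued draw calls:

#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>
#include <iostream>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "glFinish stall");
    sf::RectangleShape shape(sf::Vector2f(10.f, 10.f));

    window.clear();
    for (int i = 0; i < 10000; ++i)
        window.draw(shape);                          // commands are merely queued by the driver here

    sf::Clock clock;
    glFinish();                                      // block until the GPU has executed everything
    std::cout << "glFinish() blocked for "
              << clock.getElapsedTime().asMicroseconds() << " us\n";

    window.display();
}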

So, now you are asking yourself what you can even do in the current situation if glFlush() and glFinish() are obviously not the way to go.

I hate to say it, but your current architecture shows some signs of premature optimization. Because any graphics API is just queueing up commands to be sent to the GPU at a later point, timing the duration spent calling graphics API functions makes little sense. As such, I assume you didn't base your current solution on such data. What does show up in profiling data is the time spent doing CPU-intensive work like loading and decompressing resources from disk. It is these tasks that you should try to off-load to multiple threads if you feel like it.

I must admit, SFML tends to hide the performance costs of some of its API a bit too well at times, making it seem like throwing everything into threads will magically make everything faster.

As a first step, I would really break all the tasks down into CPU-side tasks and GPU-side tasks. The GPU-side tasks will have to be submitted via a single dedicated thread for the reasons above. How you optimize the CPU-side tasks will be up to you.
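As a very rough illustration of that split (my own sketch, with hypothetical file names): the disk I/O and image decoding happen on worker threads with no OpenGL involved, while the texture uploads and all drawing stay on the one thread that owns the context.

#include <SFML/Graphics.hpp>
#include <future>
#include <string>
#include <vector>

int main()
{
    // Hypothetical assets; error handling omitted for brevity.
    const std::vector<std::string> files = {"a.png", "b.png", "c.png"};

    // CPU-side tasks: disk I/O and image decoding in parallel, no OpenGL involved.
    std::vector<std::future<sf::Image>> jobs;
    for (const auto& file : files)
        jobs.push_back(std::async(std::launch::async, [file]
        {
            sf::Image image;
            image.loadFromFile(file);
            return image;
        }));

    // GPU-side tasks: texture uploads and drawing stay on the thread that owns the context.
    sf::RenderWindow window(sf::VideoMode(640, 480), "Loader");
    std::vector<sf::Texture> textures(files.size());
    for (std::size_t i = 0; i < files.size(); ++i)
        textures[i].loadFromImage(jobs[i].get());

    // ... render loop using the textures ...
}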

5
Feature requests / Re: macOS Metal support?
« on: December 25, 2020, 02:16:22 am »
I think at this point the limiting factor isn't our willingness to support any alternative graphics API but just finding people who are experienced enough and have the time to do the work. Considering Apple already made it clear they are moving away from OpenGL in the long term, I don't think anybody is going to push back on a Metal implementation if a pull request is submitted.

This draft pull request I submitted over a year ago suggests one direction the library could take to start moving that way.

6
General / Re: Failed to set DirectInput device axis mode: 1
« on: November 15, 2020, 02:48:10 am »
This was changed in commit 3557c46. The last release of Attract-Mode was built before this commit was merged. Try building Attract-Mode with SFML master and see if this problem is fixed.

7
Since we pass the anti-aliasing level directly to the driver and this behaviour depends on its value, this sounds more like a driver bug.

8
This should already be fixed in master.

Can you give master a try?

9
Can you try using the master branch? This might have been fixed in the last 2 years.

10
Graphics / Re: Just cannot get sf::Text to draw without crash
« on: September 02, 2020, 01:28:19 am »
Did you try running any of the SFML examples? If I had to guess, they will crash too, because this looks like another case of a buggy Intel OpenGL driver.

11
Feature requests / Re: SFML on NVIDIA Jetson nano
« on: May 03, 2020, 02:10:12 pm »
That probably means that desktop OpenGL support is emulated by the OpenGL ES driver. SFML still makes use of the pretty old OpenGL API, meaning there is a high likelihood most of the calls it makes will have to go through the emulation layer. This might lead to poor performance.

You can try to build and run SFML on the Jetson Nano. In theory it should work, however whether the performance holds up to expectations is another story.

12
Feature requests / Re: SFML on NVIDIA Jetson nano
« on: April 30, 2020, 02:31:12 am »
Same story as all the other embedded systems including mobile devices: SFML's support for OpenGL ES is currently still "lacking".

13
General / Re: General Questions
« on: February 09, 2020, 12:51:15 pm »
Multi-threading in SFML is no different than in any other programming library. Unless stated otherwise, simultaneous access to the same object from multiple threads requires synchronization.
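As a minimal sketch of what "requires synchronization" means in practice, here is a plain std::mutex guarding a container shared by two threads; the same pattern applies to any shared SFML object.

#include <SFML/System/Vector2.hpp>
#include <mutex>
#include <thread>
#include <vector>

std::vector<sf::Vector2f> points;   // shared between both threads
std::mutex                pointsMutex;

void producer()
{
    for (int i = 0; i < 1000; ++i)
    {
        std::lock_guard<std::mutex> lock(pointsMutex);   // every access goes through the same lock
        points.emplace_back(static_cast<float>(i), 0.f);
    }
}

int main()
{
    std::thread a(producer);
    std::thread b(producer);
    a.join();
    b.join();
}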

Using views allows you to control a 2D virtual "camera". You choose where to place it in your 2D scene, how big the area it captures should be, and the region of your window onto which its contents are projected. There is no right or wrong here; it depends on what you are trying to achieve artistically.

Rendering through a view whose source rectangle size does not exactly correspond to its viewport size will cause the contents to be stretched or squashed along one or both axes. If you want your rendered graphics to be reproduced 1:1, you should avoid any kind of scaling and make sure the view's source rectangle size matches its viewport size in pixels exactly.
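A minimal sketch of the 1:1 case, assuming an 800x600 window: the view's source rectangle covers 400x300 world units and its viewport covers exactly 400x300 pixels, so nothing gets stretched.

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "1:1 view");

    sf::View view(sf::FloatRect(0.f, 0.f, 400.f, 300.f));    // source rectangle: 400x300 world units
    view.setViewport(sf::FloatRect(0.f, 0.f, 0.5f, 0.5f));   // 50% of 800x600 = 400x300 pixels -> 1:1

    sf::CircleShape circle(50.f);
    circle.setPosition(100.f, 100.f);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.setView(view);
        window.draw(circle);    // appears unscaled in the top-left quarter of the window
        window.display();
    }
}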

14
General / Re: Need Advices for performances + shaders
« on: January 18, 2020, 07:36:48 pm »
As for your first question:

Judging by your pictures... you could benefit from using geometry shaders to generate the vertex information within the shader, based on the data you pass in the position vertex attributes. That way you can perform the matrix multiply within the shader as well, which will probably lead to a huge performance improvement, because that is exactly what GPUs are made to do.
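As a rough sketch of the idea (my own, not code from the thread): each object is submitted as a single point, and a geometry shader expands it into a quad, doing the matrix multiply per emitted corner on the GPU. This assumes a driver that exposes geometry shaders and the compatibility-profile built-ins (gl_Vertex, gl_ModelViewProjectionMatrix) that SFML's fixed-function setup relies on; the quad size uniform is made up for illustration.

#include <SFML/Graphics.hpp>
#include <vector>

const char* vertexSrc = R"(
    #version 150 compatibility
    void main()
    {
        gl_Position = gl_Vertex; // pass the raw position through, untransformed
    }
)";

const char* geometrySrc = R"(
    #version 150 compatibility
    layout(points) in;
    layout(triangle_strip, max_vertices = 4) out;

    uniform vec2 halfSize; // half the quad size, in world units

    void main()
    {
        vec4 center = gl_in[0].gl_Position;
        vec2 corners[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
        for (int i = 0; i < 4; ++i)
        {
            vec4 corner = center + vec4(corners[i] * halfSize, 0.0, 0.0);
            gl_Position = gl_ModelViewProjectionMatrix * corner; // matrix multiply done on the GPU
            EmitVertex();
        }
        EndPrimitive();
    }
)";

const char* fragmentSrc = R"(
    #version 150 compatibility
    void main()
    {
        gl_FragColor = vec4(1.0); // solid white quads, just for illustration
    }
)";

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Geometry shader quads");

    // One point per object instead of four pre-expanded corners.
    std::vector<sf::Vertex> points;
    for (int i = 0; i < 100; ++i)
        points.emplace_back(sf::Vector2f(i * 8.f, i * 6.f));

    sf::Shader shader;
    if (!sf::Shader::isGeometryAvailable() ||
        !shader.loadFromMemory(vertexSrc, geometrySrc, fragmentSrc))
        return 1;
    shader.setUniform("halfSize", sf::Glsl::Vec2(4.f, 4.f));

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        window.draw(points.data(), points.size(), sf::Points, &shader);
        window.display();
    }
}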

As for your second question:

In theory you can move the decision of whether to "apply" a shader to a primitive from the CPU to the GPU. In effect, the same shader program stays enabled and runs over all vertices, but you can program it to only really "do stuff" when certain conditions are met (which you can control via uniforms) and just pass the data through in all other cases.
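Something along these lines, as a minimal sketch (the "highlight" uniform and "sprite.png" are made up for illustration): one fragment shader stays bound for everything, and a uniform decides whether the effect is applied or the texture sample is passed through untouched.

#include <SFML/Graphics.hpp>

const char* fragmentSrc = R"(
    uniform sampler2D texture;
    uniform float highlight;   // 0.0 = pass the pixel through, 1.0 = apply the effect

    void main()
    {
        vec4 pixel  = texture2D(texture, gl_TexCoord[0].xy) * gl_Color;
        vec4 tinted = vec4(pixel.r, pixel.g * 0.5, pixel.b * 0.5, pixel.a);
        gl_FragColor = mix(pixel, tinted, highlight);   // branchless select on the GPU
    }
)";

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Uniform toggle");

    sf::Texture texture;
    if (!texture.loadFromFile("sprite.png"))   // hypothetical asset
        return 1;
    sf::Sprite sprite(texture);

    sf::Shader shader;
    if (!shader.loadFromMemory(fragmentSrc, sf::Shader::Fragment))
        return 1;
    shader.setUniform("texture", sf::Shader::CurrentTexture);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        // The per-object condition becomes a uniform instead of a CPU-side shader switch.
        shader.setUniform("highlight", sf::Keyboard::isKeyPressed(sf::Keyboard::Space) ? 1.f : 0.f);

        window.clear();
        window.draw(sprite, &shader);   // the same shader stays bound for every primitive
        window.display();
    }
}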

I think you overestimate the "difficulty" of using OpenGL. Sure, it might not be as much of a walk in the park as learning SFML is, but as I have always said, the hardest part about learning OpenGL is not getting familiar with the API itself but learning the general 3D graphics programming concepts it is built on. Based on what you have shown, you are already familiar with these concepts, so I don't think extending your knowledge to OpenGL will be that much work.

Look at it this way: achieving the look you want for your game is probably doable using SFML and GLSL alone, but the effort you will have to invest will be greater than just learning OpenGL and doing it with that instead. The nice thing about learning new skills is that the knowledge keeps benefiting you even after you are done with your project. The techniques/hacks you would have to come up with to do everything with SFML and GLSL will probably only be applicable to your current project and almost useless anywhere else.

15
Graphics / Re: Embedded RenderWindow depth buffer issues on linux
« on: December 09, 2019, 02:12:28 am »
In the Linux Xlib world, the pixel/surface format of a window can only be set when the window is created; it cannot be retroactively changed. Since you create your window using wxWidgets, you will have to get wxWidgets to create a window with the correct format and pass it to SFML. The context settings parameter of the window handle overload is more or less meaningless on Linux; it is there because of the other platforms, where retroactively changing the pixel format is possible.

You should still be using wxGLCanvas even when intending to render with SFML. The normal wxControl object doesn't know much about OpenGL surface attributes.
