
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - binary1248

1
Graphics / Re: Update fails in sf::VertexBuffer
« on: January 27, 2021, 02:07:31 am »
As you noticed yourself, the CPU synchronization primitives mean close to nothing to the GPU. What you were initially attempting could be seen as "undefined behaviour" in GPU-land.
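
To make the distinction concrete, here is a minimal sketch (not SFML code; it assumes a GL 3.2+/ARB_sync context and some loader such as GLEW). A fence sync lives on the GPU timeline, which is something no std::mutex wrapped around your calls can ever do:

Code: [Select]
#include <GL/glew.h> // assumption: some GL loader is in use

void uploadThenWait()
{
    // ... issue upload commands here, e.g. glBufferSubData(...) ...

    // Insert a fence into the command stream after the uploads...
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

    // ...then block (with a 1 second timeout) until the GPU has actually
    // executed everything queued before the fence. GL_SYNC_FLUSH_COMMANDS_BIT
    // ensures the fence itself doesn't sit in an unflushed command buffer.
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000);
    glDeleteSync(fence);
}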

What not many people realize when they start out with such things is that OpenGL was never meant to be used in multi-threaded scenarios. Back in the 90s, when OpenGL came to be, people were happy that they could get something on their screen at all, using a single-core processor running their application on a single thread. All the facilities around OpenGL contexts and the like were just a half-baked solution bolted onto an already broken programming model. The programming model was so broken that, up until today, multi-threading with OpenGL can still be considered a dark art, subject to the mood of the driver in any given situation. Because of this, serious engines such as Unreal Engine never even attempted to multi-thread any part of their OpenGL implementations, because it would never work properly.

Now you might be wondering: Why does glFinish() or glFlush() seem to fix the problem?

This is the other thing that many beginners in this field seem to misunderstand: glFinish() and glFlush() were never intended to be, and will never be, synchronization mechanisms.

Again, back in the 90s, when dedicated graphics hardware was more or less limited to privileged (i.e. rich) companies that had to do e.g. CAD work, the idea was that it would be a waste for such hardware to be used by only a single workstation when it could be shared by many users (a little like modern render farms). If I had to guess, the experienced engineers working on the solution were so accustomed to working with mainframes in the prior decades that they thought access to the expensive graphics hardware could be modelled after access to a mainframe as well. This is the reason that, to this day, if you read through the OpenGL specification, the GPU is always referred to as the "Server" and your application as the "Client"; this can be seen in the naming of some OpenGL functions as well. In more modern APIs the more sensible terms "Device" and "Host" are used instead.

The problems with modelling anything using a client-server model are always going to be latency, buffering, head-of-line blocking and all the other issues that one might be more accustomed to in the networking world.

Just like in the networking world, because every packet has some overhead attached to it, any efficient implementation is going to try to group data together so it can be sent in bigger chunks at a time. In the world of TCP this is known as "Nagle's algorithm". The problem comes when there isn't enough data to reach the threshold at which a new packet would be sent out. Either you wait for the internal timeout to expire, or you force the implementation to send the data out straight away. The latter is known as flushing the buffer, and is more or less what glFlush() was always intended to do.

Now obviously, if you open up connections to a web server on 2 different computers and force them to send their requests as fast as possible, common sense tells you that this still doesn't guarantee the order in which the computers will receive their responses from the server, because you can't predict which request will actually reach the server first due to the unpredictability of the internet. If you replace "computer" with "OpenGL context" and "internet" with "graphics driver", you basically end up with what is going on here. The fact that you are calling glFlush() doesn't have to mean much. It might work or it might not, but there is never going to be any guarantee.

The only real guarantee you are going to get is by using glFinish(), which goes a step further than glFlush(). glFinish() blocks execution of your code until the graphics driver can guarantee that all issued commands have been completed by the GPU. It's like saying you won't do anything else in your life until the web page finishes loading on your screen. If a single person did this with 2 computers, the order in which the requests are processed by the web server would obviously be guaranteed. The main downside of glFinish(), and the reason people need to stay far away from it, is that it completely and utterly destroys any performance you might gain from accelerating your rendering on a GPU. I would go so far as to say you might as well render your graphics solely on your CPU if you intend to use glFinish().
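
To put the difference into code form (just a sketch of the semantics, with the two calls shown side by side for contrast; in practice you would use one or the other, never both):

Code: [Select]
void submitFrame()
{
    // ... queue up draw commands ...

    glFlush();  // "Send the buffered commands to the GPU now." A submission
                // guarantee only; says nothing about execution or completion.

    glFinish(); // Block this thread until the driver confirms the GPU has
                // completed every command issued so far: a full pipeline stall.
}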

So, now you are asking yourself what you can even do in the current situation if glFlush() and glFinish() are obviously not the way to go.

I hate to say it, but your current architecture shows some signs of premature optimization. Because any graphics API is just queueing up commands to be sent to the GPU at a later point, timing the duration spent calling graphics API functions makes little sense, so I assume that you didn't base your current solution on such data. What does show up in profiling data is the time spent doing CPU-intensive work like loading and decompressing resources from disk. It is these tasks that you should try to off-load to multiple threads if you feel like it.
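
As a rough illustration of that split (a hedged sketch; the file name is made up): decode on a worker thread, and do the upload on the thread that owns the GL context:

Code: [Select]
#include <SFML/Graphics.hpp>
#include <future>

int main()
{
    // CPU-only work: disk I/O and image decoding happen on a worker thread.
    std::future<sf::Image> pending = std::async(std::launch::async, [] {
        sf::Image image;
        image.loadFromFile("assets/spritesheet.png"); // hypothetical path
        return image;
    });

    // ... do other setup work while the image decodes ...

    // On the thread that owns the GL context: a quick upload of ready data.
    sf::Texture texture;
    texture.loadFromImage(pending.get());
}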

I must admit, SFML tends to hide the performance costs of some of its API a bit too well at times, making it seem like throwing everything into threads will magically make everything faster.

As a first step, I would really break all the tasks down into CPU-side tasks and GPU-side tasks. The GPU-side tasks will have to be submitted via a single dedicated thread for the reasons above. How you optimize the CPU-side tasks will be up to you.
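
Something like the following could serve as the skeleton (only a sketch of the pattern, not SFML API): worker threads enqueue closures containing ready-to-submit data, and the single GL thread drains the queue and performs all GPU-side submission:

Code: [Select]
#include <functional>
#include <mutex>
#include <queue>

// Hypothetical helper: the only thread that ever calls drain() is the one
// that owns the GL context; any thread may call push().
class GpuSubmitQueue
{
public:
    void push(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_tasks.push(std::move(task));
    }

    void drain() // called once per frame on the GL thread
    {
        std::queue<std::function<void()>> local;
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            std::swap(local, m_tasks); // grab pending work under the lock
        }
        while (!local.empty())
        {
            local.front()(); // issues the actual GL / SFML calls
            local.pop();
        }
    }

private:
    std::mutex m_mutex;
    std::queue<std::function<void()>> m_tasks;
};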

3
Feature requests / Re: macOS Metal support?
« on: December 25, 2020, 02:16:22 am »
I think at this point the limiting factor isn't our willingness to support an alternative graphics API, but simply finding people who are experienced enough and have the time to do the work. Considering Apple has already made it clear that they are moving away from OpenGL in the long term, I don't think anybody is going to push back on a Metal implementation if a pull request is submitted.

This draft pull request I submitted over a year ago suggests one way the library could start moving in that direction.

4
General / Re: Failed to set DirectInput device axis mode: 1
« on: November 15, 2020, 02:48:10 am »
This was changed in commit 3557c46. The last release of Attract-Mode was built before this commit was merged. Try building Attract-Mode with SFML master and see if this problem is fixed.

5
Since we pass the anti-aliasing level directly to the driver and this behaviour depends on its value, this sounds more like a driver bug.
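
For reference, this is how the level is requested and how you can check what the driver actually granted (standard SFML 2 API; the values are just examples):

Code: [Select]
#include <SFML/Graphics.hpp>

int main()
{
    sf::ContextSettings settings;
    settings.antialiasingLevel = 8; // forwarded to the driver as-is

    sf::RenderWindow window(sf::VideoMode(800, 600), "AA test",
                            sf::Style::Default, settings);

    // The driver may grant a different level than the one requested:
    unsigned int granted = window.getSettings().antialiasingLevel;
    (void)granted;
}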

6
This should already be fixed in master.

Can you give master a try?

7
Can you try using the master branch? This might have been fixed in the last 2 years.

8
Graphics / Re: Just cannot get sf::Text to draw without crash
« on: September 02, 2020, 01:28:19 am »
Did you try running any of the SFML examples? If I had to guess, they will crash too, because this is looking like another case of a buggy Intel OpenGL driver.

9
Feature requests / Re: SFML on NVIDIA Jetson nano
« on: May 03, 2020, 02:10:12 pm »
That probably means that desktop OpenGL support is emulated on top of the OpenGL ES driver. SFML still makes use of a pretty old version of the OpenGL API, meaning there is a high likelihood that most of the calls it makes will have to go through the emulation layer. This might lead to poor performance.

You can try to build and run SFML on the Jetson Nano. In theory it should work; however, whether the performance holds up to expectations is another story.

10
Feature requests / Re: SFML on NVIDIA Jetson nano
« on: April 30, 2020, 02:31:12 am »
Same story as all the other embedded systems including mobile devices: SFML's support for OpenGL ES is currently still "lacking".

11
General / Re: General Questions
« on: February 09, 2020, 12:51:15 pm »
Multi-threading in SFML is no different than in any other programming library. Unless stated otherwise, simultaneous access to the same object from multiple threads requires synchronization.
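
For example (a minimal sketch): if two threads touch the same sf::VertexArray, both sides must take the same lock, because SFML will not do it for you:

Code: [Select]
#include <SFML/Graphics.hpp>
#include <mutex>

std::mutex vertexMutex;
sf::VertexArray vertices(sf::Triangles);

void workerThread()
{
    std::lock_guard<std::mutex> lock(vertexMutex);
    vertices.append(sf::Vertex(sf::Vector2f(10.f, 10.f))); // mutate under lock
}

void renderThread(sf::RenderWindow& window)
{
    std::lock_guard<std::mutex> lock(vertexMutex);
    window.draw(vertices); // read under the same lock
}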

Using views allows you to control a 2D virtual "camera": you choose where to place it in your 2D scene, how big the area it captures should be, and the region of your window onto which the contents should be projected. There is no right or wrong here; it depends on what you are trying to achieve artistically. Rendering with a view whose source rectangle size does not exactly correspond to its viewport size will cause the contents to be stretched or squashed along one or both axes. If you want your rendered graphics to be reproduced 1:1, you should avoid any kind of scaling and make sure the view's source rectangle size matches its viewport size in pixels exactly.
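
A quick sketch of the 1:1 case (the numbers are made up): an 800x600 source rectangle projected onto the full area of an 800x600 window means no scaling at all:

Code: [Select]
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "View demo");

    sf::View view;
    view.reset(sf::FloatRect(0.f, 0.f, 800.f, 600.f));   // capture an 800x600 area
    view.setViewport(sf::FloatRect(0.f, 0.f, 1.f, 1.f)); // project onto the whole window
    window.setView(view);                                // 1:1, no stretching
}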

12
General / Re: Need Advices for performances + shaders
« on: January 18, 2020, 07:36:48 pm »
As for your first question:

Judging by your pictures... you could benefit from using geometry shaders to generate the vertex information on the GPU, based on data you pass in via the position vertex attributes. That way you can perform the matrix multiplication within the shader as well, which will probably lead to a huge performance improvement, because that is exactly what GPUs are made to do.
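
A rough sketch of the idea (heavily hedged: vertexSrc and fragmentSrc are placeholders, and the GLSL below is a simplified point-to-quad expansion in clip space, which is fine under a 2D orthographic projection):

Code: [Select]
#include <SFML/Graphics.hpp>

// Geometry shader: expand each point the CPU uploads into a full quad on
// the GPU, so only one vertex per sprite crosses the bus.
const char* geometrySrc = R"(
#version 150

layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

uniform vec2 halfSize; // half the quad size, already in clip-space units

void main()
{
    vec4 c = gl_in[0].gl_Position;
    gl_Position = c + vec4(-halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( halfSize.x, -halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4(-halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
    gl_Position = c + vec4( halfSize.x,  halfSize.y, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}
)";

// Usage sketch:
// sf::Shader shader;
// if (sf::Shader::isGeometryAvailable())
//     shader.loadFromMemory(vertexSrc, geometrySrc, fragmentSrc);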

As for your second question:

In theory you can move the decision on whether to "apply" a shader to a primitive from CPU to GPU. In effect, the same shader program is left enabled and run over all vertices, but you can program the shader to only really "do stuff" when certain conditions are met (which you can control via uniforms) and just pass through the data in all other cases.
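
Something along these lines (a sketch; the grayscale branch is just an arbitrary stand-in for your real effect):

Code: [Select]
#include <SFML/Graphics.hpp>
#include <string>

// One fragment shader stays bound for everything; a uniform decides whether
// it "does stuff" or just passes the pixel through untouched.
const std::string fragmentSrc = R"(
uniform sampler2D texture;
uniform bool applyEffect;

void main()
{
    vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
    if (applyEffect)
        pixel.rgb = vec3(dot(pixel.rgb, vec3(0.299, 0.587, 0.114)));
    gl_FragColor = gl_Color * pixel;
}
)";

// Usage sketch:
// sf::Shader shader;
// shader.loadFromMemory(fragmentSrc, sf::Shader::Fragment);
// shader.setUniform("texture", sf::Shader::CurrentTexture);
// shader.setUniform("applyEffect", false); // pass-through for this batch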

I think you overestimate the "difficulty" of using OpenGL. Sure, it might not be as much of a walk in the park as learning SFML is, but as I have always said, the hardest part about learning OpenGL is not familiarizing yourself with the API itself, but learning the general 3D graphics programming concepts it is built on. Based on what you have shown, you are already familiar with these concepts, so I don't think extending your knowledge to OpenGL will be that much work.

Look at it this way: achieving what you want your game to look like is probably doable using SFML and GLSL alone, but the effort you would have to invest would be greater than just learning OpenGL and doing it with that instead. The nice thing about learning new skills is that the knowledge keeps benefiting you even after you are done with your project. The techniques/hacks you would have to come up with to do everything with SFML and GLSL will probably only be applicable to your current project and almost useless anywhere else.

13
Graphics / Re: Embedded RenderWindow depth buffer issues on linux
« on: December 09, 2019, 02:12:28 am »
In the Linux Xlib world, the pixel/surface format of a window can only be set when creating the window; it cannot be retroactively changed. Since you create your window using wxWidgets, you will have to get wxWidgets to create a window with the correct format and pass it to SFML. The context settings parameter of the window handle overload is more or less meaningless on Linux; it is there because of the other platforms, where retroactively changing the pixel format is possible.

You should still be using wxGLCanvas even when intending to render with SFML. The normal wxControl object doesn't know much about OpenGL surface attributes.
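
Roughly like this (heavily hedged: obtaining the native X11 window ID from the wxGLCanvas is left out because it differs per wxWidgets port; the point is only where the handle goes):

Code: [Select]
#include <SFML/Graphics.hpp>

// Hypothetical glue: 'handle' is the native X11 window ID of the
// wxGLCanvas, which was created with the surface format you need
// (e.g. a depth buffer).
void attachSfml(sf::WindowHandle handle)
{
    sf::ContextSettings settings;
    settings.depthBits = 24; // largely ignored on Linux, as noted above

    sf::RenderWindow renderWindow(handle, settings);

    // ... render with renderWindow; the surface format itself comes from
    // whatever wxWidgets chose when it created the canvas ...
}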

14
The linked forum thread is from 2013 and the issue is from 2016 (and didn't really discuss any concrete benefits of supporting Wayland other than "it's the new thing").

We're in the latter half of 2019 now, six and a half years since the original thread was written. A lot has happened since then. Many mainstream distros have decided to make Wayland the default session, the latest being Debian, which is a pretty significant milestone and a testament to the stability and support Wayland has gained over the years.

I remember experimenting with Wayland back then as well. It wasn't pretty, and still really experimental; if something went wrong with your exotic setup, you were pretty much on your own. It wasn't a very nice environment to develop in. This, coupled with the fact that not many "casual" users would even have made use of potential Wayland support on their daily drivers, and the fact that Wayland didn't provide any tangible advantages over X11 back then, led to Laurent deciding not to dedicate developer time to supporting the platform.

What hasn't changed since then is that SFML is still constrained by developer time. In contrast to other libraries, none of SFML's contributors are compensated in any way for working on SFML, and its development isn't endorsed by any organization either. Just as back then, we need to prioritize features we feel provide a meaningful benefit to the library, as opposed to features that are just "nice to have" or "would be cool".

Considering that Wayland has matured a lot since 2013 and development should no longer be as painful as it was back then, I wouldn't say it is off the table that SFML support for it will come one day. The question is who will implement it. Since this is a pure backend enhancement with no public API changes, the necessary discussion should be kept to a minimum. I might have a look when I have a bigger chunk of free time on my hands.

15
General discussions / Re: Is it worth it to build my own engine using SFML
« on: September 14, 2019, 02:54:05 pm »
The M in SFML stands for multimedia. You shouldn't expect SFML to realistically help you with anything else that belongs in a usable game engine.

You have to ask yourself what your goal is. Do you want to end up with a game of sorts? Or do you "just" want to gain experience with programming, libraries, and the other 1000 things that engine developers have to care about?

There are some people (myself included) who aren't interested in the game itself so much as the underlying technology; that is why I enjoy programming and working on libraries even though I never actually end up with anything "shippable" to a consumer. And then there are the people who really just want to make games they can show their friends once they are done, which is also a task that shouldn't be underestimated.

The effort that goes into games consists far more of actual content creation than of raw programming. If you look at teams, you will find way more artists, designers, writers, testers etc. than actual programmers. The programmers do their part of the job by writing an engine the rest of the team can work with, and the "stuff" that makes the game what it is comes after that. In smaller teams, you will find that each developer has to fulfil multiple of these roles.

If you really want to do both, you will be doing the jobs that normally rest on the shoulders of multiple people, workload included. A beginner misconception is that one must write an "engine" if one wants to make a game. This is not true. You write an engine so that non-programmers can also contribute to making the game in their own way. At the present time, if a game is big enough, writing an engine is the only way to realistically ship it before the budget runs out, because it spreads the work out among many people. If you are a smaller team or even just one person, you have to ask yourself: would I really benefit from writing an engine, considering I will be the only one using it?

Most indie games either use pre-existing engines or, if they feel those wouldn't get the job done as they envision their game, they just write the game from scratch. I haven't heard of a small indie team writing an engine first and then building their game around it; that just wouldn't be financially feasible. What does happen, though, is that if the first game becomes a financial success and they decide to write a sequel, they reuse parts of the first game to make the sequel. Would this be some rudimentary form of engine? I wouldn't count it as such. It is just code reuse to me.

TL;DR: Engines are just a tool to help make development of complex projects financially viable. Bigger projects always go in that direction. Smaller/Tiny projects should focus on getting deliverables out the door ASAP however they choose to do it. You have to choose now what you want to have in your hands after 2 years. It won't be both.
