
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - dialer

1
Graphics / Re: Drawing Text on transparent RenderTexture leaves outline
« on: August 19, 2017, 11:43:58 pm »
No, that's not how this works.

Multiplying a color value of (1, 0, 0) by an alpha value of 0.5 yields the premultiplied color value (0.5, 0, 0) with an alpha of 0.5.

2
Graphics / Re: Drawing Text on transparent RenderTexture leaves outline
« on: August 19, 2017, 05:15:48 pm »
source = (1, 0, 0, 0.5) on destination = (x1, x2, x3, 0.0):

co = 0.5 * (1, 0, 0) + 0.0 * (x1, x2, x3) * (1 - 0.5) = 0.5 * (1, 0, 0)

Maybe I should have clarified that Cs and Cb are not premultiplied, only co is.

3
Graphics / Re: Drawing Text on transparent RenderTexture leaves outline
« on: August 19, 2017, 03:38:01 pm »
You say that this is correct behavior when alpha blending the semi-transparent white text edges onto the (0, 0, 0, 0) background, but I disagree. I believe this is a bug (be it in SFML or OpenGL).

Firstly, the terms "Alpha Blending" and "Blend Mode Alpha" are vague. It should really be clarified that this refers to the Porter-Duff compositing mode "Source Over", and if it does not, well, that's a bug because it should.

The color and alpha formulae for this mode are:

co = αs · Cs + αb · Cb · (1 − αs)
αo = αs + αb · (1 − αs)


Where
co is the output color (with premultiplied alpha),
αo is the output alpha,
αs is source alpha,
αb is destination alpha,
Cs is source color,
Cb is destination color.

In this specific example, αs is a value between 0 and 1. It is 1 for pixels completely inside the glyphs, 0 for pixels completely outside the glyphs, and something in between on the edges. When blending this onto a (0, 0, 0, 0) background, αb is always 0, meaning that the Cb term has no effect on the output color, no matter how you look at it. But you can clearly see that this is not the case. When changing the background to, say, (255, 0, 0, 0), the artifacts become red.
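A quick numeric sketch of the formula (plain C++; the values match the example above, and Cb is deliberately set to red to show that it cannot matter):

Code:
#include <array>
#include <cstdio>

int main()
{
    std::array<float, 3> Cs = {1.f, 0.f, 0.f}; // source color (glyph edge)
    std::array<float, 3> Cb = {1.f, 0.f, 0.f}; // destination color, arbitrary
    float alphaS = 0.5f;                       // semi-transparent glyph edge
    float alphaB = 0.f;                        // fully transparent background

    std::array<float, 3> co;                   // premultiplied output color
    for (int i = 0; i < 3; ++i)
        co[i] = alphaS * Cs[i] + alphaB * Cb[i] * (1.f - alphaS);
    float alphaO = alphaS + alphaB * (1.f - alphaS);

    // Prints co = (0.5, 0, 0), alphaO = 0.5: the Cb term is multiplied by
    // alphaB = 0 and drops out, so no color from the destination should survive.
    std::printf("co = (%g, %g, %g), alphaO = %g\n", co[0], co[1], co[2], alphaO);
}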

4
Apparently, wglMakeCurrent also causes that busy wait, which is a much bigger problem than glGetError. This is an issue with OpenGL though, not with SFML.

That means you cannot switch GL contexts every frame, which in turn means you cannot render to RenderTextures every frame.

On top of that, I encountered another problem with vsync and posted it on Stack Overflow:

https://stackoverflow.com/questions/45676892/reliable-windowed-vsync-with-opengl-on-windows

I'm starting to believe that vsync with OpenGL is the devil and NOBODY has ever tested it thoroughly before. Speaks volumes about the quality of the OpenGL developers' work IMO, but I'm also beginning to understand why Jon Blow thinks OpenGL is designed by crazy people.

5
To be clear: I'm not requesting that this should be changed, but I do think it should be mentioned in the documentation, and also very early in those tutorials.

I just wanted to add that I personally think it could be a good idea to disable it upon explicit request at *runtime* (note *disable*, meaning I agree that it should be enabled by default). If others who stumble upon the problem agree, they can easily implement it themselves (after all, developing with the SFML sources is pretty reasonable, unlike Some Dumb Library I could name, so I have my own modified version of SFML anyway).

6
This is more of an informational post than a question or a feature request. I'm making this thread because other people must have had this problem, yet I was unable to find anything coherent on Google that presents the issue well enough, and I want to save others from the frustration.

Problem:

When rendering with vsync enabled, you would expect CPU usage to be low because the application blocks while waiting for the next frame. This is not the case when using SFML in debug mode; CPU usage is 100% on one core.

Explanation:

This happens if you have "Threaded Optimization" enabled in the NVIDIA Control Panel (it is enabled by default). The issue is that in SFML debug builds, the glGetError function is automatically called after every OpenGL call to help find errors. That function causes a busy wait.

Typically, with vsync enabled (though this is technically not specified), the call to Window::display (or rather, SwapBuffers) causes the thread to block in a non-busy wait until it is appropriate to actually swap the buffers with respect to vsync. All the other rendering functions that OpenGL offers are highly asynchronous to improve performance. Calling glGetError (and a number of other functions that read back the current rendering state) requires the CPU thread to synchronize with the rendering process, and the driver isn't very smart about it: if vsync is enabled, it spins until the next frame is synchronized, leaving nothing for SwapBuffers to wait on in a sensible manner. In other words, you should absolutely not call glGetError every frame (definitely not in a Release build, and IMO not even in a Debug build).
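For illustration, here is a minimal loop that triggers the spin (a sketch assuming SFML 2.x, vsync on, and an NVIDIA driver with Threaded Optimization enabled; the explicit glGetError() stands in for the per-call checks of a debug build):

Code:
#include <SFML/Graphics.hpp>
#include <SFML/OpenGL.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "vsync test");
    window.setVerticalSyncEnabled(true);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
            if (event.type == sf::Event::Closed)
                window.close();

        window.clear();
        // ... draw calls ...

        glGetError();     // forces the CPU thread to sync with the driver;
                          // with Threaded Optimization this is a busy wait
        window.display(); // SwapBuffers then has nothing left to block on
    }
}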

Afterword:

In SFML, the error checking is currently disabled via a compiler switch in the Release build (GLCheck.hpp). However, in case you want to avoid high CPU load even in Debug builds, I propose a different approach. Allow short-circuiting the error checking with a global bool. Yes, that sounds like an intern writing nightmarish code, but that was a reasonable choice for me at least. I actually have that in my Release build as well.

It allows you to enable error checking during interesting sections that don't happen every frame, while having it disabled during normal, dull rendering. It also allows you to easily switch it on and off for a few frames, should you ever need to actually debug the dull rendering.
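Roughly what I mean, as a sketch of the idea only (hypothetical names, not SFML's actual GLCheck.hpp; glCheckError is assumed to do the usual glGetError reporting):

Code:
extern bool glCheckEnabled; // defined once, e.g. in GLCheck.cpp, initialized to true

void glCheckError(const char* file, unsigned int line, const char* expression);

#define glCheck(expr)                                        \
    do                                                       \
    {                                                        \
        expr;                                                \
        if (glCheckEnabled)                                  \
            glCheckError(__FILE__, __LINE__, #expr);         \
    } while (false)

// In application code: disable around the dull per-frame rendering,
// re-enable around the sections you actually want to verify.
//     glCheckEnabled = false;
//     drawEverything();
//     glCheckEnabled = true;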

7
Feature requests / Set hardware cursor for Window
« on: November 30, 2013, 09:52:50 pm »
(This has been requested before, but the forum apparently discourages thread necromancy)

Even though Laurent has stated multiple times that you can just implement a software cursor using a sprite, I think nowadays this is simply bad practice, and even terribly annoying in certain genres. There is an absolutely noticeable difference even at 60 fps, especially when playing RTS or MOBA games (there is a good reason why all the top titles like SC2 or LoL use hardware cursors). The delay introduced by software cursors makes the game feel sluggish and unresponsive.

The recent version of SDL (SDL2) supports setting the hardware cursor to a Surface (an image resource, which can for example be loaded from a PNG, including transparency) using the SDL_CreateColorCursor function. While the code behind it is somewhat non-trivial for both Windows and Linux (I haven't examined the Mac OS code), it certainly is manageable and cross-platform. Although SDL's API cannot create animated hardware cursors, having the ability to set the hardware cursor is a nice enough advantage in my opinion. Furthermore, having the concept of a cursor available without the need to program any additional rendering should be a good thing either way.
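For reference, this is roughly what it looks like with SDL2 (a sketch; assumes SDL2 plus SDL_image for the PNG loading, error handling omitted):

Code:
#include <SDL.h>
#include <SDL_image.h>

SDL_Cursor* createCursorFromPng(const char* path, int hotX, int hotY)
{
    SDL_Surface* surface = IMG_Load(path);                        // PNG, alpha included
    SDL_Cursor* cursor = SDL_CreateColorCursor(surface, hotX, hotY);
    SDL_FreeSurface(surface);                                     // the cursor keeps its own copy
    return cursor;
}

// Somewhere after window creation:
//     SDL_SetCursor(createCursorFromPng("cursor.png", 0, 0));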

I'd be happy to implement this functionality in SFML by myself (if it is not underway already), but I have no idea how you handle user contributions. Since I'm most likely going to implement it for my project anyway, it would be nice to know if I can make it available to others as well.
