
Show Posts



Messages - kfj

1
Window / Re: Linux: occasional white bar on top of fullscreen window
« on: December 02, 2023, 11:39:37 am »
Seems like the run-full-throttle bug has gone with the last updates - I haven't seen it any more. For the white-bar-on-top bug, my workaround in my initial post works okay, though I do get a bit of flicker if the erroneous size is detected several times in a row - this is, luckily, rare. Regarding the third bug - the confinement of the mouse pointer - I needed some trickery. I have code to switch the mouse cursor on if there is mouse activity (like mouse moves, clicks, scroll wheel), and off if there is no activity for a little while. I intended to start my program with the mouse cursor off to avoid the bug, but try as I might, the cursor was switched on. I finally tracked it down to a single MouseMoved event which occurred without any interaction on my side. Is this an artificial event sent by SFML to provide the initial mouse position? If I catch and ignore the very first MouseMoved event, the mouse cursor remains off, as intended, and the bug does not seem to occur any more.
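
Roughly, the trickery looks like this - a minimal sketch rather than my actual code; 'window' and the cursor-switching logic are stand-ins:

        bool first_mouse_move_seen = false ;

        sf::Event event ;
        while ( window.pollEvent ( event ) )
        {
          if (    event.type == sf::Event::MouseMoved
               && ! first_mouse_move_seen )
          {
            // swallow the spurious initial MouseMoved event, so the mouse
            // cursor stays switched off right after program start
            first_mouse_move_seen = true ;
            continue ;
          }
          // ... normal event handling, switching the cursor on only when
          // genuine mouse activity arrives
        }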

2
Window / Re: Linux: occasional white bar on top of fullscreen window
« on: December 01, 2023, 06:41:53 pm »
To me it sounds like some Gnome issue, not sure if SFML can do much or anything about it.
Have you tried other games, do they experience the same issue or not?

I don't have any games. My program is an image viewer. I also think it's a Gnome issue, or to be more precise, a mutter issue. I haven't seen the problems I describe with other programs, which is why I've come here: to see if anyone else has had similar problems, and maybe a workaround - until the bugs are, hopefully, fixed on the Gnome side. My workaround - catching and reacting to the Resize events which contain the erroneous extent - does work, but I was hoping for something more elegant. For the other two issues I haven't yet found anything.

It happens with both X11 and Wayland on Gnome.
Just to double check, when you say X11, you really switched to the Xserver and aren't just using a translation layer (XWayland, etc.), correct?

I used both. At the login screen you can choose which desktop environment you want for your next session, so I tried both 'Gnome' and 'Gnome with Xorg'. I think the first one uses Wayland and XWayland, whereas the second uses plain X11. The problem occurs with both, but when I start a session with openbox instead, the problems are gone.
 
- both with vsync and with a fixed frame rate, occasionally, the capped rate is not honoured and frames are consumed as they come. I expect Window::display() to block until a new frame can be displayed, instead it seems to return without delay as if there were no frame rate limit or vsync at all. This problem does not occur with openbox.
Make sure you're not mixing vsync and frame rate limit, that can cause odd issues. Also check that you haven't disabled vsync globally. And ensure your GPU driver is up to date.

I sure won't try and mix the two. Normally I run with vsync enabled, because I use a fixed delta loop. When I use a fixed frame rate instead, my system does not run smoothly. Nevertheless I have command-line options to choose one or the other, and with both settings I get the problem I described: every now and then the 'brake' fails and I get 100% CPU load due to the frames being accepted without blocking. With Debian 12 and unstable packages activated, I should have the latest GPU drivers.
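
For reference, the choice boils down to something like this - a sketch with made-up names ('use_vsync', 'frame_rate') standing in for my command-line options; setFramerateLimit(0) switches the CPU-side limiter off:

        if ( use_vsync )
        {
          p_window->setVerticalSyncEnabled ( true ) ;
          p_window->setFramerateLimit ( 0 ) ;   // no CPU-side limit on top of vsync
        }
        else
        {
          p_window->setVerticalSyncEnabled ( false ) ;
          p_window->setFramerateLimit ( frame_rate ) ;   // e.g. 60
        }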

- sometimes, after program start in fullscreen mode, the mouse pointer is confined to the outline of a rectangular region of the screen. Switching the mouse cursor off and on again removes the confinement. This is hard to reproduce, but when it occurs it's very annoying.
Is this again with X11 and Wayland?

Same thing, both on Gnome and on Gnome with Xorg.

3
Window / Re: Linux: occasional white bar on top of fullscreen window
« on: December 01, 2023, 10:19:03 am »
No helpful hints?

I have more problems which I think are related to Gnome:

- both with vsync and with a fixed frame rate, occasionally, the capped rate is not honoured and frames are consumed as they come. I expect Window::display() to block until a new frame can be displayed, instead it seems to return without delay as if there were no frame rate limit or vsync at all. This problem does not occur with openbox.

- sometimes, after program start in fullscreen mode, the mouse pointer is confined to the outline of a rectangular region of the screen. Switching the mouse cursor off and on again removes the confinement. This is hard to reproduce, but when it occurs it's very annoying.

4
Window / Re: Linux: occasional white bar on top of fullscreen window
« on: November 28, 2023, 06:50:27 pm »
Sorry for not replying straight away, I did not see your response!

It happens with both X11 and Wayland on Gnome. I'm running a Debian 12 system with unstable packages activated, to make it close to a rolling release. SFML is built here from the 2.6.x branch. I had similar issues on a recent Ubuntu install. I think the problem may be with Gnome, and that it's not an X11/Wayland issue. Searching for a solution, I have, among other things, tried installing openbox, and there the problem does not occur. My hope in posting here was that maybe some of you who are 'deep inside' SFML can make something of the hint that, even though I request a full-screen window, I get the thing with the white bar on top, plus the SFML event telling me of the erroneous size. This only happens every now and then: I can toggle window/full-screen mode with a key, and sometimes I can switch back and forth 5-6 times and all is well. Sometimes my workaround fires once and I have the full screen, but sometimes I get the Resized event repeatedly, up to 5-6 times or so, until the system finally relents and gives me the proper full screen, and every time I retry the screen flickers annoyingly. At least the workaround holds - it saves me from having to toggle full-screen mode manually until the white bar disappears.

If you have a recent Debian or Ubuntu at hand, you could try to reproduce the bug - I distribute binaries, and the Linux version is in AppImage format, so all it takes is downloading it and making it executable. Load any image and press F11 repeatedly until you get the white bar thing (it may even show up straight away). Here's my download page, just pick a recent AppImage:

https://bitbucket.org/kfj/pv/downloads/

If you need more info, I'll gladly help - this thing is bugging me big time. And if you could reproduce the bug, this would at least reassure me that it's not just my system (Intel® Core™ i5-4570 × 4, Intel® HD Graphics 4600 (HSW GT2)) acting up.

5
Window / Linux: occasional white bar on top of fullscreen window
« on: November 21, 2023, 11:09:32 am »
Dear all!
For some time, I've been wrestling with an annoying bug when running my SFML program on Linux. When switching to fullscreen mode, I at times get a white bar at the top of the screen. I have seen one bug report with similar content: https://gitlab.gnome.org/GNOME/mutter/-/issues/2937
I tried every trick I could think of to find the place where things go wrong, but came to the conclusion that the windowing system messes with the window 'from the outside', because I get Resize events with the wrong size (desktop height minus the white bar) without having triggered anything along these lines - the window is created with Fullscreen style. I use the Resize event to 'notice' that the bug is happening (it only occurs occasionally) and then immediately re-create the window with Fullscreen style. Usually one detect-recreate cycle is enough, but at times the system is 'obstinate' and sends a few wrong Resize events before yielding to my re-creates. This produces a bit of flicker, but at least the erroneous state does not persist.

I post here in case anyone else is having the same problem. Maybe my workaround can help, until there is a fix system-side which makes it unnecessary. Of course, a 'real' bug fix would be much appreciated!

This is the gist of my workaround code:

...
        if ( event.type == sf::Event::Resized )
        {
          auto window_size = p_window->getSize() ;

          // if we're meant to run full-screen but the reported height falls
          // short of the desktop height, the bug has struck: re-create the
          // window with Fullscreen style
          if (    ui::run_fullscreen
               && ( window_size.y != desktop.height ) )
          {
            ui::p_screen->p_window->create ( desktop ,
                                             std::string ( "" ) ,
                                             sf::Style::Fullscreen ) ;
          }
        }
...


6
General discussions / Re: sRGB/RGB conversions
« on: November 06, 2022, 11:46:11 am »
There is one issue with using an sRGB-capable frame buffer and SFML as it is: AFAICT I can only pass uchar data to create a texture. If I do so, the resolution is insufficient to display dark areas without banding. So this would be another reason to allow passing in other data types.
In the context of my image/panorama viewer, the gain in rendering speed when passing LRGB data is significant - somewhere in the order of magnitude of 1 msec per frame on my machine - but the banding can be annoying for some content. So in lux, I use a non-sRGB-capable frame buffer per default and pass in sRGB data (this is also SFML's default), but I allow the user to override that via --gpu_for_srgb=yes (currently master branch only, the binaries are still 1.1.4).
Thanks @eXpl0it3r for the link to the discussion about textures with different data types - I wonder if this is still an issue? The discussion went on over a long time and never really seemed to get anywhere. How about this: rather than creating openGL textures with a different internal data type, one might overload the SFML texture's c'tor to accept several data types and subsequently convert the incoming data to a uchar sRGB texture. The conversion could be done on the GPU, and the resulting texture would be the same as the textures created by SFML's usual texture creation routines, so no issues would arise from a multitude of texture data types, and the remainder of the code wouldn't be affected. The overloads would not interfere with the status quo.
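
To illustrate what such an overload would encapsulate, here's a scalar sketch of the standard per-channel linear-to-sRGB plus float-to-uchar conversion which currently has to happen on the CPU - not lux or SFML code, just the textbook transfer function:

        #include <cmath>
        #include <cstdint>

        // convert one linear-RGB channel in [0,1] to an 8-bit sRGB value,
        // using the standard sRGB transfer function
        std::uint8_t linear_to_srgb_u8 ( float c )
        {
          c = std::fmin ( std::fmax ( c , 0.0f ) , 1.0f ) ;
          float s = ( c <= 0.0031308f )
                    ? 12.92f * c
                    : 1.055f * std::pow ( c , 1.0f / 2.4f ) - 0.055f ;
          return static_cast<std::uint8_t> ( s * 255.0f + 0.5f ) ;
        }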

7
General discussions / Re: sRGB/RGB conversions
« on: October 27, 2022, 10:56:40 am »
When I scanned the openGL documentation, I found that openGL supports float textures, so I thought that passing in float data and the appropriate tag might be easy, but I haven't investigated further. My rendering pipeline directly produces 8bit RGBA in a final stage, where it converts the float vectors to uchar and interleaves them. The alternative would be to directly interleave the float vectors to memory, but the memory footprint would be four times as large for a float texture, and the question is whether the added memory traffic outweighs the float-to-uchar conversion, which can be done quite efficiently with SIMD operations since the data is in the vector registers already. When handling large amounts of data the best approach is to get memory access down to a minimum, because that's usually the slowest bit. If I had a simple way of passing float textures to openGL, I'd give it a try and see if it's any faster than what I use now - hence my request.

Thanks for giving lux a try! What you see in your abortive attempt is not a crash but a message box stating that lux can't figure out the FOV. Oftentimes the metadata in panoramas generated with smartphones don't work with lux without passing additional information. Try and pass both projection and HFOV (in degrees) on the command line, like

lux --projection=spherical --hfov=200 pano.jpg

The projection and field of view are vital for a correct display, and if lux can't figure them out, that's a show-stopper: lux terminates, emitting the message box as its 'last gasp'. Sadly, panorama-specific metadata aren't standardized well. lux should be able to process GPano metadata and metadata in the UserComment tag, as written by hugin. It also supports its own flavour, have a look at https://kfj.bitbucket.io/README.html#image-metadata.

lux can do much more beyond simply displaying panoramas, but most 'advanced' capabilities (like stitching or exposure fusion) require PTO input and the use of additional command line parameters. Being FOSS, lux does not have 'selling points', but I'd say that my b-spline-based reimplementation of the Burt&Adelson image splining algorithm makes it unique. I am a panorama photographer and hugin contributor, and lux is the result of many years of personal research into b-splines and image processing - mainly 'scratching an itch', but, feeling generous, I publish my software for everyone who wants to give it a try. My approach uses SIMD to do efficient rendering on the CPU, hence the use of SIMD-specific code. If you're working on an Intel machine, you can see what difference SIMD makes. Let's assume you have an AVX2-capable machine. lux will figure out that your processor has AVX2 units and use AVX2 ops, but you can tell it to fall back to lesser ISAs (try --isa=fallback which uses some 'lowest common denominator' SSE level). If you do a 1000-frame sweep over your image and look at the average frame calculation duration (echoed near the end of the console output) you'll see the difference. Try

lux -ps -h360 -z1000 -A.05 some_image.jpg

lux is a large program with tons of features, but most of its capabilities aren't immediately obvious, and many require the use of command line parameters. Please give it a bit more time, and if you find something amiss, post an issue to its issue tracker!

8
General discussions / Re: sRGB/RGB conversions
« on: October 02, 2022, 11:37:16 am »
So far, sending linear RGB to the openGL code works just fine, and for me it creates a significant performance improvement. lux (https://bitbucket.org/kfj/pv/) now does this by default (needs to be built from master; the precompiled binaries still pass SRGB). How about my proposal to allow passing in single precision float data, rather than 8bit RGBA?

9
General discussions / sRGB/RGB conversions
« on: September 12, 2022, 10:45:14 am »
Dear all!

I have just discovered that SFML can handle textures in linear RGB (I'll use 'LRGB' in this post, in contrast to SRGB). You may say this is old hat, but the capability is not glaringly obvious from the documentation. I'll reiterate what I have understood so far:

There are two components involved, the texture and the window. The texture may contain LRGB or SRGB data, and to tell the system which type of data the texture holds, you call sf::Texture::setSrgb(); you can inquire with sf::Texture::isSrgb() what the current state is. The second component is the window, and there it depends on the ContextSettings passed to the window's c'tor. The ContextSettings have a boolean field 'sRgbCapable'. You can pass true here if you want an SRGB-capable frame buffer, and false if you don't. AFAICT passing false will certainly succeed, but passing true may not succeed - you have to inquire about the state after creating the window to figure out what type of frame buffer you have.

Now let's assume you're working in LRGB. To offload LRGB to SRGB conversion to the GPU, you'd call setSrgb() with a false argument to tag the texture as holding LRGB data and work with an SRGB-capable frame buffer. If the latter can't be had (or the window was deliberately created without SRGB capability), you have to 'manually' convert your data to SRGB on the CPU before storing it to the texture.
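
In code, the setup I describe looks roughly like this - just a sketch, with window size and title as placeholders:

        sf::ContextSettings settings ;
        settings.sRgbCapable = true ;        // ask for an SRGB-capable frame buffer

        sf::RenderWindow window ( sf::VideoMode ( 1280 , 720 ) , "srgb test" ,
                                  sf::Style::Default , settings ) ;

        // the request may not be honoured - check what we actually got
        bool have_srgb_fb = window.getSettings().sRgbCapable ;

        sf::Texture texture ;
        texture.setSrgb ( false ) ;          // texture will hold LRGB data

        // if have_srgb_fb is false, the LRGB data has to be converted to
        // SRGB on the CPU before it's stored to the texture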

I assume that passing in LRGB data should be preferable: first, the time to convert the texture data to LRGB (which is used by the GPU) is saved, and second, since the texture data is quantized to 8 bits already, doing the conversion from SRGB to LRGB will produce loss of image quality. Is this assumption correct?

How about transparency? Is the alpha value processed as-is, regardless of the texture's property?

I'm processing photographic images, so SRGB/LRGB is an issue for me - in fact, my data are initially in single precision float LRGB, and I'd be happy if I could offload the float-to-uchar conversion to the GPU as well. Is there a way to do that simply with SFML? And if not, maybe this would make a nice new feature?

10
Feature requests / crossfading images
« on: June 30, 2020, 10:16:38 am »
I'm using SFML to write an image viewer. I'd like a simple way to crossfade from the current image to the next one. I haven't found a way to do this with SFML yet - if there is one, please let me know! I have the images as textures on the GPU and I'd like to use the GPU to produce the effect.

I think the simplest way to do this is to draw the new texture on top of the old one several times, increasing the new frame's alpha values every time until it's fully opaque, but touching every single pixel with the CPU just for that is too expensive. What I'd like is a function to globally change the transparency of a complete texture, or a blending mode to the same effect. I suppose this would be easy to do with a fragment shader, but I haven't worked with shaders so far and I'd prefer a simple prefabricated solution.
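
To make the idea concrete, here's a sketch of the effect done with per-sprite colour modulation - as far as I can see, sf::Sprite::setColor modulates the texture's colour and alpha when the sprite is drawn, so no pixels are touched on the CPU; old_texture, new_texture and window are placeholders:

        sf::Sprite old_sprite ( old_texture ) ;
        sf::Sprite new_sprite ( new_texture ) ;

        for ( int alpha = 0 ; alpha <= 255 ; alpha += 5 )
        {
          // raise the new sprite's global alpha a bit more each frame
          new_sprite.setColor ( sf::Color ( 255 , 255 , 255 , alpha ) ) ;

          window.clear() ;
          window.draw ( old_sprite ) ;   // fully opaque 'old' image
          window.draw ( new_sprite ) ;   // increasingly opaque 'new' image
          window.display() ;
        }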

Kay

11
Feature requests / Re: query the system's current FPS setting
« on: April 30, 2017, 08:54:38 am »
Quote
tk = k * dt + ek
This formula rather matches a single measurement for the whole k-frames duration, but you said you were cumulating single frame times, which would rather be expressed as:
tk = sum(dt + ek)
Sorry to go on about this, but I did say that I am cumulating the deltas *inside an animated sequence*. My software switches between showing still images and animated sequences, which happens when the user zooms, pans, etc. So my measurement does indeed look at an average over several k-frame durations. I have a frame counter which is reset to zero whenever the system is at rest. When I get to the call to display I look at this counter, and if it's greater than zero, I know the previous frame was already inside the animation sequence and I cumulate the delta. Cumulating single deltas has the advantage that the average is right at hand at every moment, rather than having to bother with recording sequence starting times and counting frames per sequence.

When it comes to cumulating individual independent measurements, you and Hapax are of course totally right about the error cumulating, and the formula you give is the correct one to use.

Here's the relevant part of the code I'm using:


          p_window->display() ;
     
          display_called = std::chrono::system_clock::now() ;
          float dt = std::chrono::duration_cast<std::chrono::microseconds>
                      (display_called - display_last_called).count() ;
          dt /= 1000.0 ; // use float milliseconds
          display_last_called = display_called ;
         
          if ( frames_in_sequence > 1 )
          {
            dt_count++ ;
            total_ms += dt ;
            if ( dt_count > 20 ) // only start using dt_average after having cumulated 20 deltas
              dt_average = total_ms / dt_count ;
          }


12
Feature requests / Re: query the system's current FPS setting
« on: April 29, 2017, 08:45:21 am »
I believe that what Hapax meant by error accumulation is the error (inaccuracy) caused by std::chrono, which does accumulate when using multiple measurements.
It does not, in this case. Let me explain. What we are looking at is a sequence of time points:

{ t0, t1, ..., tk }

where the individual times tn are composed of a fixed 'true' time n * dt (dt being the fixed, unvarying GPU frame time) and a small arbitrary error en, where en is never larger than the maximal measurement error (std::chrono's inaccuracy):

tn = n * dt + en

Now we increase k. We have

tk = k * dt + ek

When we take the average a, we get

a = tk / k
a = dt + ek / k

So, after k measurements, the error of the average is precisely ek / k, which, with large k, approaches 0. There is no way for an error to accumulate in this system; the error only depends on the error of the last measurement, divided by the total number of measurements.

I have to eat my words concerning another statement I made, though. I wrote earlier that the average frame time did not come out at 16.6 / 20.0 respectively. This was due to sloppy programming on my part: I stupidly did a std::chrono::duration_cast<std::chrono::milliseconds> on my times, resulting in the value being truncated to an integer millisecond value, which produced the .5 time difference, because my faulty maths sure *did* cumulate. I now changed the code to use std::chrono::duration_cast<std::chrono::microseconds>, assign to float and divide by 1000.0. Lo and behold, the timing is spot on. And it only takes a few frames to stabilize, so 25 is ample for a sample.

13
Feature requests / Re: query the system's current FPS setting
« on: April 28, 2017, 06:44:47 pm »
Timing a frame is fairly accurate but not great so adding those frame times would cumulate their error.
It would be more accurate - if you need it - to just measure the time after 25 frames have passed and then divide for the average.
The error does not cumulate because the times I measure actually vary around a fixed value, the GPU frame time (at least I think that is constant!). After many observations, the variations cancel out and the underlying period manifests in the mean. What I'm timing is the time from one call to display to the next; there is nothing to get 'in between'.

I was only omitting the first 25 frames because there the deviation from the true value was often so great that the average initially varied too much, but that was a quick shot. It's a good idea to include the first measurements as well, but to only use the average after some time, as you have suggested. So now I wait until 25 frames have passed before I *use* the average. On my system, it takes a few hundred frames for the value to really stabilize. Thanks for the hint!

14
Feature requests / Re: query the system's current FPS setting
« on: April 27, 2017, 11:19:01 am »
Since the time between two GPU frames seems hard to get, I have now followed your alternative proposal and resorted to taking measurements of the time between successive calls to display(). Just looking at the first two frames did not work for me. So initially I looked at the time differences between all frames in an animated sequence, but I noticed that these values did vary a good deal initially and stabilized only after some time. So now I use this method:

- take the time delta between successive calls to display()
- if the animated sequence has run for at least 25 frames, cumulate the delta
- calculate the time between GPU frames as the average of the cumulated deltas

The value stabilizes after some time, but keeps on fluctuating by a small amount. An alternative would be to use a moving average, which has the advantage that the value adapts more quickly to a change of the FPS via the system settings, but I think this is too much of a corner case to bother with.
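
For comparison, here's a sketch - with made-up names, not my actual code - contrasting the cumulative average I use with an exponential moving average that would adapt faster to an FPS change:

        double total_ms   = 0.0 ;
        long   dt_count   = 0 ;
        double dt_average = 0.0 ;   // cumulative average - converges, but adapts slowly
        double dt_ema     = 0.0 ;   // exponential moving average - adapts faster

        void note_frame_delta ( double dt_ms )
        {
          ++dt_count ;
          total_ms  += dt_ms ;
          dt_average = total_ms / dt_count ;

          const double weight = 0.05 ;   // smoothing factor for the moving average
          dt_ema = ( dt_count == 1 )
                   ? dt_ms
                   : weight * dt_ms + ( 1.0 - weight ) * dt_ema ;
        }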

I noticed one surprising thing, which shows how apt your comment on the unreliability of settings vs. reality is: the result of my measurements does not approach 16.6 / 20.0 msec for 60 / 50 fps, respectively, but instead converges on ca. 16.0 / 19.5. I think this is a genuine difference, not due to false measurements - to take the times I use std::chrono::system_clock::now(), which should be fairly accurate.

15
Feature requests / Re: query the system's current FPS setting
« on: April 26, 2017, 11:59:30 am »
I don't think you understand what I'm trying to explain...
But I do! I want precisely the time between two GPU frames. I don't think the graphics driver can even know what the monitor does, and I am not concerned with this datum. I thought the time between two GPU frames is what you set when setting the FPS in the system settings, but I may have misunderstood that bit. I thought the graphics hardware simply produces a signal which the monitor adapts to - or stays black if it can't handle the signal.

So is it possible to query the time between two GPU frames?

I understand and agree that the refresh rate of the monitor is irrelevant. I don't want to know the monitor's refresh rate, I want the time between two GPU frames, if it can be provided.
