
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Messages - kfj

General discussions / Re: sRGB/RGB conversions
« on: November 06, 2022, 11:46:11 am »
There is one issue with using an sRGB-capable frame buffer and SFML as it is: AFAICT I can only pass uchar data to create a texture. If I do so, the resolution is insufficient to display dark areas without banding. So this would be another reason to allow passing in other data types.
In the context of my image/panorama viewer, the gain in rendering speed when passing LRGB data is significant - on the order of 1 msec per frame on my machine - but the banding can be annoying for some content. So in lux, I use a non-sRGB-capable frame buffer by default and pass in sRGB data (this is also SFML's default), but I allow the user to override that via --gpu_for_srgb=yes (currently master branch only, the binaries are still 1.1.4).
Thanks @eXpl0it3r for the link to the discussion about textures with different data types - I wonder if this is still an issue? The discussion went on over a long time and never really seemed to get anywhere. How about this: rather than creating openGL textures with a different internal data type, one might overload the SFML texture's c'tor to accept several data types and convert the incoming data to a uchar sRGB texture. The conversion could be done on the GPU, and the resulting texture would be the same as one created by SFML's usual texture creation routines. So no issues would arise from a multitude of texture data types, the remainder of the code wouldn't be affected, and the overloads would not interfere with the status quo.

General discussions / Re: sRGB/RGB conversions
« on: October 27, 2022, 10:56:40 am »
When I scanned the openGL documentation, I found that openGL supports float textures, so I thought that passing in float data and the appropriate tag might be easy, but I haven't investigated further. My rendering pipeline directly produces 8bit RGBA in a final stage, where it converts the float vectors to uchar and interleaves them. The alternative would be to directly interleave the float vectors to memory, but the memory footprint would be four times as large for a float texture, and the question is whether the added memory traffic outweighs the float-to-uchar conversion, which can be done quite efficiently with SIMD operations since the data is in the vector registers already. When handling large amounts of data the best approach is to get memory access down to a minimum, because that's usually the slowest bit. If I had a simple way of passing float textures to openGL, I'd give it a try and see if it's any faster than what I use now - hence my request.

Thanks for giving lux a try! What you see in your abortive attempt is not a crash but a message box stating that lux can't figure out the FOV. Oftentimes the metadata in panoramas generated with smartphones don't work with lux without passing additional information. Try and pass both projection and HFOV (in degrees) on the command line, like

lux --projection=spherical --hfov=200 pano.jpg

The projection and field of view are vital for a correct display, and if lux can't figure them out, that's a show-stopper: lux terminates, emitting the message box as its 'last gasp'. Sadly, panorama-specific metadata aren't well standardized. lux should be able to process GPano metadata and metadata in the UserComment tag, as written by hugin. It also supports its own flavour; have a look at https://kfj.bitbucket.io/README.html#image-metadata.

lux can do much more beyond simply displaying panoramas, but most 'advanced' capabilities (like stitching or exposure fusion) require PTO input and the use of additional command line parameters. Being FOSS, lux does not have 'selling points', but I'd say that my b-spline-based reimplementation of the Burt & Adelson image splining algorithm makes it unique. I am a panorama photographer and hugin contributor, and lux is the result of many years of personal research into b-splines and image processing - mainly 'scratching an itch', but, feeling generous, I publish my software for everyone who wants to give it a try.

My approach uses SIMD to do efficient rendering on the CPU, hence the use of SIMD-specific code. If you're working on an intel machine, you can see what difference SIMD makes. Let's assume you have an AVX2-capable machine: lux will figure out that your processor has AVX2 units and use AVX2 ops, but you can tell it to fall back to lesser ISAs (try --isa=fallback, which uses some 'lowest common denominator' SSE level). If you do a 1000-frame sweep over your image and look at the average frame calculation time (echoed near the end of the console output), you'll see the difference. Try

lux -ps -h360 -z1000 -A.05 some_image.jpg

lux is a large program with tons of features, but most of its capabilities aren't immediately obvious, and many require the use of command line parameters. Please give it a bit more time, and if you find something amiss, post an issue to its issue tracker!

General discussions / Re: sRGB/RGB conversions
« on: October 02, 2022, 11:37:16 am »
So far, sending linear RGB to the openGL code works just fine, and for me it creates a significant performance improvement. lux (https://bitbucket.org/kfj/pv/) now does this by default (needs to be built from master; the precompiled binaries still pass SRGB). How about my proposal to allow passing in single precision float data, rather than 8bit RGBA?

General discussions / sRGB/RGB conversions
« on: September 12, 2022, 10:45:14 am »
Dear all!

I have just discovered that SFML can handle textures in linear RGB (I'll use 'LRGB' in this post, in contrast to SRGB). You may say this is old hat, but the capability is not glaringly obvious from the documentation. I'll reiterate what I have understood so far:

There are two components involved: the texture and the window. The texture may contain LRGB or SRGB data, and to tell the system which type of data the texture holds, you call sf::Texture::setSrgb(); you can inquire with sf::Texture::isSrgb() what the current state is. The second component is the window, and there it depends on the ContextSettings passed to the window's c'tor. ContextSettings has a boolean field 'sRgbCapable'. You can pass true here if you want an SRGB-capable frame buffer, and false if you don't. AFAICT passing false will certainly succeed, but passing true may not - you have to inquire about the state after creating the window to figure out what type of frame buffer you have.

Now let's assume you're working in LRGB. To offload LRGB to SRGB conversion to the GPU, you'd call setSrgb() with a false argument to tag the texture as holding LRGB data and work with an SRGB-capable frame buffer. If the latter can't be had (or the window was deliberately created without SRGB capability), you have to 'manually' convert your data to SRGB on the CPU before storing it to the texture.
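Spelled out as code, the setup just described might look like this (a sketch against SFML 2.x, untested here; the names are the real SFML ones, but whether the sRGB request is honoured depends on the system):

```cpp
#include <SFML/Graphics.hpp>

int main()
{
    // Request an SRGB-capable frame buffer - the request may be refused,
    // so inquire about the actual state after creating the window.
    sf::ContextSettings settings;
    settings.sRgbCapable = true;
    sf::RenderWindow window(sf::VideoMode(800, 600), "sRGB demo",
                            sf::Style::Default, settings);
    bool haveSrgbFrameBuffer = window.getSettings().sRgbCapable;

    // Tag the texture according to the data it will hold; note that
    // setSrgb() must be called before loading the pixel data.
    sf::Texture texture;
    texture.setSrgb(false); // texture will hold LRGB data
    // ... load pixel data, draw, display ...
    (void) haveSrgbFrameBuffer;
    return 0;
}
```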

I assume that passing in LRGB data should be preferable: first, the time to convert the texture data to LRGB (which is what the GPU uses) is saved, and second, since the texture data is quantized to 8 bits already, a conversion from SRGB to LRGB loses image quality. Is this assumption correct?

How about transparency? Is the alpha value processed as-is, regardless of the texture's property?

I'm processing photographic images, so SRGB/LRGB is an issue for me - in fact, my data are initially in single precision float LRGB, and I'd be happy if I could offload the float-to-uchar conversion to the GPU as well. Is there a way to do that simply with SFML? And if not, maybe this would make a nice new feature?

Feature requests / crossfading images
« on: June 30, 2020, 10:16:38 am »
I'm using SFML to write an image viewer. I'd like a simple way to crossfade from the current image to the next one. I haven't found a way to do this with SFML yet - if there is one, please let me know! I have the images as textures on the GPU and I'd like to use the GPU to produce the effect.

I think the simplest way to do this is to draw the new texture on top of the old one several times, increasing the new frame's alpha values every time until it's fully opaque, but touching every single pixel with the CPU just for that is too expensive. What I'd like is a function to globally change the transparency of a complete texture, or a blending mode to the same effect. I suppose this would be easy to do with a fragment shader, but I haven't worked with shaders so far and I'd prefer a simple prefabricated solution.
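What I have in mind would look something like this (a sketch, untested here; sf::Sprite::setColor does offer a per-sprite colour modulation, alpha included, which happens on the GPU and might serve):

```cpp
#include <SFML/Graphics.hpp>

// Draw the old image opaque and the new one on top with increasing
// alpha - no per-pixel CPU work involved.
void drawCrossfade(sf::RenderWindow& window,
                   sf::Sprite& oldSprite, sf::Sprite& newSprite,
                   float t) // t in [0,1]: 0 = old image, 1 = new image
{
    newSprite.setColor(sf::Color(255, 255, 255,
                                 static_cast<sf::Uint8>(t * 255.0f)));
    window.draw(oldSprite);
    window.draw(newSprite);
}
```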


Feature requests / Re: query the system's current FPS setting
« on: April 30, 2017, 08:54:38 am »
tk = k * dt + ek
This formula rather matches a single measurement for the whole k-frames duration, but you said you were cumulating single frame times, which would rather be expressed as:
tk = sum(dt + ek)
Sorry to go on about this, but I did say that I am cumulating the deltas *inside an animated sequence*. My software switches between showing still images and animated sequences, which happens when the user zooms, pans, etc. So my measurement does indeed look at an average over several k-frame durations. I have a frame counter which is reset to zero whenever the system is at rest. When I get to the call to display(), I look at this counter, and if it's greater than zero, I know the previous frame was already inside the animation sequence and I cumulate the delta. Cumulating single deltas has the advantage that the average is right at hand at every moment, rather than having to bother with recording sequence starting times and counting frames per sequence.

When it comes to cumulating individual independent measurements, you and Hapax are of course totally right about the error accumulating, and the formula you give is the correct one to use.

Here's the relevant part of the code I'm using:

          p_window->display() ;
          display_called = std::chrono::system_clock::now() ;
          float dt = std::chrono::duration_cast<std::chrono::microseconds>
                      (display_called - display_last_called).count() ;
          dt /= 1000.0 ; // use float milliseconds
          display_last_called = display_called ;
          if ( frames_in_sequence > 1 )
          {
            dt_count++ ;
            total_ms += dt ;
            if ( dt_count > 20 ) // only use dt_average after cumulating 20 deltas
              dt_average = total_ms / dt_count ;
          }

Feature requests / Re: query the system's current FPS setting
« on: April 29, 2017, 08:45:21 am »
I believe that what Hapax meant by error accumulation is the error (inaccuracy) caused by std::chrono, which does accumulate when using multiple measurements.
It does not, in this case. Let me explain. What we are looking at is a sequence of time points:

{ t0, t1, ...tk }

where the individual times tn are composed of a fixed 'true' time n * dt (dt being the fixed, unvarying GPU frame time) and an arbitrary small error en, where en is never larger than the maximal measurement error (std::chrono's inaccuracy):

tn = n * dt + en

Now we increase k. We have

tk = k * dt + ek

When we take the average a, we get

a = tk / k
a = dt + ek / k

So, after k measurements, the error of the average is precisely ek / k, which, with large k, approaches 0. There is no way for an error to accumulate in this system; the error only depends on the error of the last measurement, divided by the total number of measurements.

I have to eat my words concerning another statement I made, though. I wrote earlier that the average frame time did not come out at 16.6 / 20.0 respectively. This was due to sloppy programming on my part: I stupidly did a std::chrono::duration_cast<std::chrono::milliseconds> on my times, resulting in the value being truncated to an integer millisecond value, which produced the .5 time difference, because my faulty maths sure *did* cumulate. I now changed the code to use std::chrono::duration_cast<std::chrono::microseconds>, assign to float and divide by 1000.0. Lo and behold, the timing is spot on. And it only takes a few frames to stabilize, so 25 is ample for a sample.

Feature requests / Re: query the system's current FPS setting
« on: April 28, 2017, 06:44:47 pm »
Timing a frame is fairly accurate but not great so adding those frame times would cumulate their error.
It would be more accurate - if you need it - to just measure the time after 25 frames have passed and then divide for the average.
The error does not cumulate because the times I measure actually vary around a fixed value, the GPU frame time (at least I think that is constant!). After many observations, the variations cancel out and the underlying period manifests in the mean. What I'm timing is the time from one call to display to the next; there is nothing to get 'in between'.

I was only omitting the first 25 frames because there the deviation from the true value was often so great that the average initially varied too much, but that was a quick shot. It's a good idea to use the first measurements, but only after some time, as you have suggested. So now I wait until 25 frames have passed before I *use* the average. On my system, it takes a few hundred frames for the value to really stabilize. Thanks for the hint!

Feature requests / Re: query the system's current FPS setting
« on: April 27, 2017, 11:19:01 am »
Since the time between two GPU frames seems hard to get, I have now followed your alternative proposal and resorted to taking measurements of the time between successive calls to display(). Just looking at the first two frames did not work for me. So initially I looked at the time differences between all frames in an animated sequence, but I noticed that these values did vary a good deal initially and stabilized only after some time. So now I use this method:

- take the time delta between successive calls to display()
- if the animated sequence has run for at least 25 frames, cumulate the delta
- calculate the time between GPU frames as the average of the cumulated deltas

The value stabilizes after some time, but keeps on fluctuating by a small amount. An alternative would be to use a gliding average, which has the advantage that the value adapts more quickly to a change to the FPS via the system settings, but I think this is too much of a corner case to bother.

I noticed one surprising thing, which shows how apt your comment on the unreliability of settings vs. reality is: the result of my measurements does not approach 16.6 / 20.0 msec for 60 / 50 fps, respectively, but instead converges on ca. 16.0 / 19.5. I think this is a genuine difference, not due to false measurements - to take the times I use std::chrono::system_clock::now(), which should be fairly accurate.

Feature requests / Re: query the system's current FPS setting
« on: April 26, 2017, 11:59:30 am »
I don't think you understand what I'm trying to explain...
But I do! I want precisely the time between two GPU frames. I don't think the graphics driver can even know what the monitor does, and I am not concerned with this datum. I thought the time between two GPU frames is what you set when setting the FPS in the system settings, but I may have misunderstood that bit. I thought the graphics hardware simply produces a signal which the monitor adapts to - or stays black if it can't handle the signal.

So is it possible to query the time between two GPU frames?

I understand and agree that the refresh rate of the monitor is irrelevant. I don't want to know the monitor's refresh rate, I want the time between two GPU frames, if it can be provided.

Feature requests / Re: query the system's current FPS setting
« on: April 26, 2017, 08:42:02 am »
I beg to differ: It would be a good starting point.
Have you read what I said about the GPU driver?
I did read that. Be assured, I'm glad that you take the time to give good advice and I carefully read everything you write. Let me explain in more detail what my problem is:
On my system (Kubuntu 16.10 running on an Intel(R) Core(TM) i5-4570, no graphics card) I have lots of problems with stuttering. The only way I have found so far to get the system to display my graphics smoothly is by using setVerticalSyncEnabled(true) and making sure I call display() in time. With the method I have outlined above, I need an estimate of the time budget I have for coming up with new frames, so I can adapt my processing if my calculations take too long. I would like to base my estimate on the currently active FPS setting, which, as you say, is easy to extract but not available in the public API. All I'd like to see is a way to query this setting, never mind the driver. What the driver does or does not do to the stream of frames is outside my control anyway, but it can be tweaked by the user to look good on their hardware.
I have timed the difference from one call to display() to the next and found that, at least on my system, the times do eventually settle on an average close to the figure I'm after, but extracting it this way means watching the frame-to-frame times for a while, waiting for them to stabilize and then extracting an estimate. It would be much easier to have a value to start out with, namely the 20 ms for 50 fps or 16.7 ms for 60 fps etc.
Maybe it's not a good idea to put the value inside the sf::VideoMode structure, but a simple getter function à la sf::getCurrentFPSSetting() would not harm anyone, especially if the documentation says that there is no guarantee that the system will actually display frames at this rate and that it is merely the setting in the window manager.

Feature requests / Re: query the system's current FPS setting
« on: April 25, 2017, 12:02:42 pm »
The refresh rate of monitor could easily be added to sf::VideoMode; I think all back-ends already have this information, it's just a matter of exposing it in the public API.
Please do!
But I don't think it is relevant.
I beg to differ: It would be a good starting point. Currently I am using some default which seems sensible, assuming 50fps. If I had information from the system, I could use that as the starting point and with a bit of luck this starting point is better than guessing a 'plausible' value.

Feature requests / Re: query the system's current FPS setting
« on: April 25, 2017, 11:47:05 am »
The refresh rate of monitor(s) is not hard to extract

This is precisely what I want. I'm not sure if one can query the refresh rate of the monitor itself, but I'd like to get the rate at which the system is sending frames out to it. If you know of a portable way of extracting the monitor's refresh rate, please let me know!

One solution is to measure the duration between two consecutive (empty) frames when the application starts.

I'd like to avoid trickery like this - and if the information is easy to get, why choose a complicated way of getting it  ;)

... but without knowing exactly what you do, it's hard to say more.

I'm writing an image viewer allowing all kinds of manipulations like zoom, pan and rotate, and I animate these manipulations smoothly, relying on a fixed frame rate and the vsync to display every frame in time. I can adapt rendering times by rendering to a smaller frame, which I let SFML display enlarged by using a view. This way, my (CPU-based) rendering time can be adapted to what the system can handle, while the upscaling of the small frame I've rendered is done via SFML/openGL and takes next to no time. When at rest after animating the zoom, pan, etc., I render to full resolution again.

To keep this process running smoothly, I render in a separate thread and buffer a few frames. This way, even if the rendering times vary from frame to frame I don't get stutter because with a few frames buffered it all averages out. Works like a dream, and the latency is so small it doesn't matter - after all, I'm writing an image viewer, not a shooter.

Feature requests / query the system's current FPS setting
« on: April 25, 2017, 10:43:59 am »
Hi group!

I'm looking for a portable way to query the FPS (frames per second) setting of the system my SFML program is running on. So I am *not* looking for a way to measure how many FPS my program is producing, but I want the currently active setting for the display driver. My program is driven by the vertical sync and can adapt its computational load to produce frames faster or slower, but to get an idea how much time it has to produce a frame, it should know what the system is expecting: if the graphics unit feeds 60 FPS to the monitor, my time budget to render a frame is obviously smaller than when it's running at 50 or even 30 FPS, if I want to supply frames at the currently set rate.

While there may be ways to get the information via the system settings, I'd like to do a query from *inside* my program which will work on all systems SFML supports, so that it's portable. I suspect the information should not be too hard to extract - ideally I'd like it to be incorporated, for example, into the sf::VideoMode structure, or to be returned by a simple getter function.
