Author Topic: sf::Texture crashes when used in a static object  (Read 15260 times)


Laurent

  • Administrator
  • Hero Member
  • *****
  • Posts: 32504
    • View Profile
    • SFML's website
    • Email
sf::Texture crashes when used in a static object
« Reply #15 on: March 14, 2012, 03:03:59 pm »
Quote
Believe me I ran into so many problems regarding exactly that, that I decided to drop the idea of using a context in another thread than the "primary" thread.

The behaviour when using contexts in multiple threads is well defined, and as far as I can tell, it has always worked perfectly so far. Many users do that.

Quote
What you can do is loading the image data in a separate thread.

That would add unnecessary complexity and verbosity to the user code.

Quote
Also think about how the interaction with the driver/GPU works: There's one bus which isn't threaded by any means. That means you don't have a benefit at all when you're preparing your textures in a separate thread.

If you have 1 texture it's pretty useless, for sure. But if you want to load 1000 textures, you'll be happy to be able to run your game loop smoothly while the textures are being loaded in the background.
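A rough sketch of what I mean, using standard C++ threads and a fake loader (the names and timings here are made up, not SFML API): the game loop keeps spinning while the textures load in the background.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <thread>

// Hypothetical background loader: stands in for sf::Texture::loadFromFile()
// calls made from a loading thread. Not actual SFML code.
std::atomic<int> g_loaded{0};

void loadTextures(int count) {
    for (int i = 0; i < count; ++i) {
        std::this_thread::sleep_for(std::chrono::milliseconds(1)); // fake decode/upload
        ++g_loaded;
    }
}

// The "game loop" keeps rendering frames while the loader runs.
int runGameLoopWhileLoading(int total) {
    std::thread loader(loadTextures, total);
    int framesRendered = 0;
    while (g_loaded.load() < total) {
        ++framesRendered;                         // imagine update() + draw() here
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
    loader.join();
    return framesRendered;
}
```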

Quote
Out of curiosity: Who? I really can't think of a single use-case where multiple windows couldn't be emulated by OpenGL.

I guess you'll have to search the related posts, because I can't remember them :P
Laurent Gomila - SFML developer

Tank

  • SFML Team
  • Hero Member
  • *****
  • Posts: 1486
    • View Profile
    • Blog
    • Email
« Reply #16 on: March 14, 2012, 03:19:46 pm »
Quote
The behaviour when using contexts in multiple threads is well defined, and as far as I can tell, it has always worked perfectly so far. Many users do that.

Is that an assumption or a fact? If it's a fact, then I'm probably much worse at this than I thought, because I wasn't able to do that with SFML on either Windows or Linux, with Nvidia and AMD graphics cards in several configurations.

Some setups crashed, some did not display anything. And I wrote minimal examples that did nothing but load a texture in a separate thread with context switching -- also protected by a mutex. It DOES lead to problems.

And again, it does not help you with "smooth" texture loading. The texture still goes over the same bus; there's no separate texture bus being used.

Quote
That would add unnecessary complexity and verbosity to the user code.

It adds complexity, yes. But it's not unnecessary; it's needed to make it work flawlessly. And besides that, it's not complicated (complicated != complex). In a separate thread you call prepareTexture() or whatever, and in the main thread you finalize the textures, easy as pie.
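A minimal sketch of that two-phase idea, assuming placeholder Image/Texture types and standard C++ threads (prepareTextures/finalizeTextures are hypothetical names, not SFML API): the worker thread only does disk/decode work, and all "GL" work stays on the main thread.

```cpp
#include <cassert>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Hypothetical stand-ins for sf::Image / sf::Texture.
struct Image   { std::string source; };          // CPU-side pixel data
struct Texture { bool uploaded = false; };       // GPU-side object

std::mutex        g_queueMutex;
std::queue<Image> g_readyImages;                 // images decoded by the worker

// Phase 1 (worker thread): disk I/O + decoding only, no GL calls.
void prepareTextures(const std::vector<std::string>& files) {
    for (const auto& file : files) {
        Image img{file};                         // imagine sf::Image::loadFromFile() here
        std::lock_guard<std::mutex> lock(g_queueMutex);
        g_readyImages.push(img);
    }
}

// Phase 2 (main thread, once per frame): the GL upload happens here only.
int finalizeTextures(std::vector<Texture>& out) {
    int finalized = 0;
    std::lock_guard<std::mutex> lock(g_queueMutex);
    while (!g_readyImages.empty()) {
        g_readyImages.pop();                     // imagine texture.loadFromImage() here
        out.push_back(Texture{true});
        ++finalized;
    }
    return finalized;
}

int runDemo() {
    std::vector<Texture> textures;
    std::thread worker(prepareTextures,
                       std::vector<std::string>{"a.png", "b.png", "c.png"});
    worker.join();                               // in a real loop you'd poll instead
    return finalizeTextures(textures);           // all "GL" work on the main thread
}
```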

Quote
If you have 1 texture it's pretty useless, for sure. But if you want to load 1000 textures, you'll be happy to be able to run your game loop smoothly while the textures are being loaded in the background.

As soon as the bus is used, you lose every benefit. I'm quite sure of that, but you can write an example to prove otherwise: load 1000 textures in one context and 1000 textures in another context (or even another thread, it doesn't matter). FPS will drop in both cases, because the bus is used equally. And make sure not to count the time needed for sf::Image::loadFromFile(); that's not related to textures in any way.

Quote
I guess you'll have to search the related posts, because I can't remember them

Probably because there are too few of them, or they're unimportant. ;-) So if anyone reading this needs multiple windows, please scream.

Laurent

« Reply #17 on: March 14, 2012, 03:35:39 pm »
Quote
Is that an assumption or a fact?

It's a fact. Regular use cases work fine; you might run into trouble only if you do complicated things (but in this case it's probably not SFML which is causing the problems), or if you use crappy graphics drivers.
I'd be very interested to see your most minimal/simple example that doesn't work as expected.

Quote
It adds complexity, yes. But it's not unnecessary, it's needed to make it work flawlessly.

It already works flawlessly ;)
And like I said, many users already do it, so it would be silly to change that.

Quote
And besides that, it's not complicated (complicated != complex). In a separate thread you call prepareTexture() or whatever, and in the main thread you finalize the textures, easy as pie.

I think it's already complicated for what it is -- a little more than what you said: you have to store all the intermediate images, consuming unnecessary memory, and map them to some unique ID (which may not be a filename) so that you can assign them to their matching sf::Texture.
But anyway, let's talk about this later, only if we agree to go in this direction ;)

Quote
As soon as the bus is used, you lose every benefit

All that I'm saying is that it's much easier to scatter the loading of 1000 textures among multiple frames if it's done in a parallel thread, than if it's done in the same one. And avoid a big freeze of the application when you do it.
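For comparison, the single-threaded alternative can be sketched as a per-frame time budget (the names and types here are hypothetical, not SFML API): each frame uploads pending textures only until its budget runs out, spreading the work over many frames at the cost of bookkeeping in the main loop.

```cpp
#include <cassert>
#include <chrono>
#include <queue>
#include <string>

// Hypothetical single-threaded loader: spread texture uploads over many
// frames with a per-frame time budget instead of a loading thread.
struct PendingTexture { std::string file; };

// Upload textures until the queue is empty or the budget is exhausted.
int uploadSome(std::queue<PendingTexture>& pending,
               std::chrono::microseconds budget) {
    using Clock = std::chrono::steady_clock;
    const auto deadline = Clock::now() + budget;
    int uploaded = 0;
    while (!pending.empty() && Clock::now() < deadline) {
        pending.pop();                           // imagine the real GL upload here
        ++uploaded;
    }
    return uploaded;
}

// Count how many frames it takes to drain the queue with a given budget.
int framesNeeded(int textureCount, std::chrono::microseconds budget) {
    std::queue<PendingTexture> pending;
    for (int i = 0; i < textureCount; ++i)
        pending.push({"tex" + std::to_string(i)});
    int frames = 0;
    while (!pending.empty()) {                   // one uploadSome() call per frame
        uploadSome(pending, budget);
        ++frames;
    }
    return frames;
}
```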

Quote
And make sure not to count the time needed for sf::Image::loadFromFile(); that's not related to textures in any way.

Ok, but it adds even more complexity to your solution. We're still comparing the global task of preloading textures smoothly ;)

Quote
Probably because there are too few of them, or they're unimportant.

Seriously, am I the kind of guy to implement such a big feature if it is useless? :twisted:

Laurent

« Reply #18 on: March 14, 2012, 03:40:15 pm »
By the way... can you remind me how disabling shared contexts and multiple windows would help to solve the initial problem? Why are we discussing these details?

Tank

« Reply #19 on: March 14, 2012, 04:20:17 pm »
Quote
but in this case it's probably not SFML which is causing the problems

That's right, it's generally multi-threading and shared contexts with OpenGL.

Here are two very interesting links:

- http://www.opengl.org/wiki/OpenGL_and_multithreading
- http://developer.amd.com/gpu_assets/GDC2005_OpenGL_Performance.pdf

In the PDF, start at slide 33.

Quote
I'd be very interested to see your most minimal/simple example that doesn't work as expected.

Here you go: http://pastebin.com/bMeGSRpa

Keep in mind what I said before: it only fails for some OS/GPU combinations! I didn't have any problems myself; it was 2 out of 12 users who had problems when I loaded textures in parallel (both on Win7 64 bit, AMD GPUs, latest drivers at the time).

Quote
you have to store all the intermediate images, consuming unnecessary memory, and map them to some unique ID (which may not be a filename) so that you can assign them to their matching sf::Texture.

First of all, you do not consume unnecessary memory: you consume the same amount of memory when doing loadFromFile(), just for a shorter time. When loading images in parallel and finalizing the textures in an update() step, yes, the memory is held for longer – some milliseconds maybe. That's definitely not an issue.

Also, you do not need to map anything; the details are hidden in your class design. In FlexWorld, for example, as soon as a texture is requested, a pointer to the same object is handed out – whether or not the texture is initialized yet. When it gets ready, it's used. Until then, you just see nothing. But as this lasts only a couple of milliseconds, it's not noticeable.

That of course depends on your OOP skills. If you use only SFML's classes, then yes, you have to keep mappings somewhere. But for anything bigger than Pacman or other small projects, you'll likely have some sort of external resource management that takes care of such things.
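A minimal sketch of such a proxy, using standard C++ only (TextureProxy and requestTexture are hypothetical names, not FlexWorld's actual code): callers get a pointer immediately and simply skip drawing until the texture is ready.

```cpp
#include <atomic>
#include <cassert>
#include <memory>
#include <string>

// Hypothetical proxy: callers hold a pointer right away; the real texture
// is filled in later by the finalize step on the main thread.
struct TextureProxy {
    std::string       file;
    std::atomic<bool> ready{false};

    explicit TextureProxy(std::string f) : file(std::move(f)) {}

    // Drawing code: use the texture if ready, otherwise draw nothing.
    bool drawable() const { return ready.load(); }

    // Called once the GL upload is done.
    void markReady() { ready.store(true); }
};

// Returns immediately, even though the texture isn't loaded yet.
std::shared_ptr<TextureProxy> requestTexture(const std::string& file) {
    return std::make_shared<TextureProxy>(file);
}
```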

Quote
And avoid a big freeze of the application when you do it.

Again: you can't avoid freezes that happen due to texture preparation (unless you're using pixel buffers, but that's another topic). Freezes are mostly the result of time spent on hard-disk access and decompression. So if that's the bottleneck, that's what I have to avoid – not the upload of textures to the GPU, which can't be made faster.

Quote
Seriously, am I the kind of guy to implement such a big feature if it is useless?

Normally not, which is why I'm surprised you want to keep it even if it means having issues. ;-) Seriously, I just searched the forums for "multiple windows" and found some threads. Not a single one revealed what multiple windows are used for, beyond "Can I do it?".

Besides that: multiple windows are of course still possible; the critical part is the shared context. If each window has its own isolated context, everything's fine, no issues. So what I'm really asking for is dropping the shared context; sorry for being unclear.

Quote
By the way... can you remind me how disabling shared contexts and multiple windows would help to solve the initial problem? Why are we discussing these details?

I rechecked a recent stacktrace of an SFGUI crash. Take a look at it:

Code:
ntdll.dll!77d9a2ce()    
    [Frames below may be incorrect and/or missing, no symbols loaded for ntdll.dll]    
    Label.exe!sf::priv::MutexImpl::Lock()  Line 52 + 0xc bytes    C++
    Label.exe!sf::Mutex::Lock()  Line 62    C++
    Label.exe!sf::Lock::Lock(sf::Mutex & mutex)  Line 39    C++
    Label.exe!sf::priv::WglContext::CreateContext(sf::priv::WglContext * shared, unsigned int bitsPerPixel, const sf::ContextSettings & settings)  Line 315 + 0x10 bytes    C++
    Label.exe!sf::priv::WglContext::WglContext(sf::priv::WglContext * shared)  Line 59    C++
    Label.exe!sf::priv::GlContext::New()  Line 142 + 0x28 bytes    C++
    Label.exe!`anonymous namespace'::GetInternalContext()  Line 87 + 0x5 bytes    C++
    Label.exe!sf::priv::GlContext::EnsureContext()  Line 135 + 0x7 bytes    C++
    Label.exe!sf::GlResource::EnsureGlContext()  Line 83    C++
    Label.exe!sf::Texture::~Texture()  Line 98    C++
    Label.exe!sfg::Renderer::~Renderer()  Line 45 + 0x36 bytes    C++
    Label.exe!sfg::Renderer::`scalar deleting destructor'()  + 0x16 bytes    C++
    Label.exe!sfg::ReferenceCount<sfg::Renderer>::Dispose()  Line 61 + 0x1f bytes    C++
    Label.exe!sfg::ReferenceCountBase::RemoveStrongReference()  Line 34 + 0xf bytes    C++
    Label.exe!sfg::StrongReferenceCount::~StrongReferenceCount()  Line 84    C++
    Label.exe!sfg::SharedPtr<sfg::Renderer>::~SharedPtr<sfg::Renderer>()  + 0x19 bytes    C++
    Label.exe!`dynamic atexit destructor for 'sfg::Renderer::m_instance''()  + 0xd bytes    C++
    Label.exe!doexit(int code, int quick, int retcaller)  Line 567    C
    Label.exe!exit(int code)  Line 393 + 0xd bytes    C
    Label.exe!__tmainCRTStartup()  Line 284    C
    Label.exe!mainCRTStartup()  Line 189    C
    kernel32.dll!76cb339a()    
    ntdll.dll!77d79ef2()    
    ntdll.dll!77d79ec5()


You could of course also try to work around the resource destruction problem by checking whether there's a valid context and, if not, skipping the cleanup (Texture::~Texture). But imho the context management is somewhat messed up. ;)
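That workaround can be sketched with a hypothetical context-alive flag (g_contextAlive and deleteTexture() are stand-ins, not SFML internals): the destructor skips GL cleanup when no context remains, e.g. after main() has returned.

```cpp
#include <cassert>

// Hypothetical GL layer: a flag set by context creation/destruction, and a
// counter standing in for glDeleteTextures.
namespace gl {
    bool g_contextAlive     = false;
    int  g_deletedTextures  = 0;
    void deleteTexture() { ++g_deletedTextures; }
}

struct Texture {
    ~Texture() {
        if (gl::g_contextAlive)      // guard: no valid context, no GL call
            gl::deleteTexture();
    }
};

// Destroy a texture with or without a live context and report how many
// GL delete calls were actually made.
int destroyWith(bool contextAlive) {
    gl::g_contextAlive    = contextAlive;
    gl::g_deletedTextures = 0;
    { Texture t; }                   // destructor runs at end of scope
    return gl::g_deletedTextures;
}
```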

Laurent

« Reply #20 on: March 14, 2012, 04:53:37 pm »
Quote
That's right, it's generally multi-threading and shared contexts with OpenGL

... with crappy drivers. The specification of OpenGL itself is very clear about multi-threading.
I insist that problems are caused by the drivers, not by the programmer, their design choices, or the APIs they use.

Quote
Here you go: http://pastebin.com/bMeGSRpa

Really? The thread instance is local to the if{} block, therefore the main thread blocks until it's finished ;)
And you don't need to explicitly create sf::Context instances anymore in SFML 2; you can remove that from your load_texture function.
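The scoping problem can be sketched with a RAII thread that joins in its destructor, which is effectively what sf::Thread does (JoiningThread is a hypothetical stand-in): the "asynchronous" load has already finished by the time the block ends, so nothing actually ran in parallel with the rest of the program.

```cpp
#include <cassert>
#include <thread>

// RAII thread that waits for completion in its destructor, like sf::Thread.
struct JoiningThread {
    std::thread t;
    template <class F> explicit JoiningThread(F f) : t(f) {}
    ~JoiningThread() { if (t.joinable()) t.join(); }   // blocks here!
};

int g_workDone = 0;

int brokenVersion() {
    if (true) {
        JoiningThread loader([] { g_workDone = 1; });  // "async" load
    }   // <- destructor joins: the main thread waited after all
    return g_workDone;                                 // always 1 at this point
}
```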

And what were the errors with this code? Crash, or wrong results?

Quote
First of all, you do not consume unnecessary memory: you consume the same amount of memory when doing loadFromFile(), just for a shorter time. When loading images in parallel and finalizing the textures in an update() step, yes, the memory is held for longer – some milliseconds maybe. That's definitely not an issue.

It could be an issue. In the threaded version, images are loaded in memory one after the other. With your solution, it's all at the same time.
But I agree that in the real world it's unlikely to be a problem, since there will always be more system RAM than video RAM.

Quote
In FlexWorld, for example, as soon as a texture is requested, a pointer to the same object is handed out – whether or not the texture is initialized yet. When it gets ready, it's used. Until then, you just see nothing. But as this lasts only a couple of milliseconds, it's not noticeable.

So you had to write a dedicated class just for that?

Quote
You can't avoid freezes that happen due to texture preparation

I prefer to have 1000 1ms freezes, rather than a single 1sec freeze.

Quote
Normally not, which is why I'm surprised you want to keep it even if it means having issues.

The feature of allowing multiple windows itself doesn't create new issues. It's context sharing which does, which is an orthogonal concept (I can disable it while keeping multiple windows).
(ok, I just saw that you say it below)

Quote
Seriously, I just searched the forums for "multiple windows" and found some threads. Not a single one revealed what multiple windows are used for, beyond "Can I do it?"

Arf, I guess I'll have to spend some time to find something relevant then.

Quote
I rechecked a recent stacktrace of an SFGUI crash. Take a look at it

It would crash without the sharing thing. And even if I totally remove my (totally) messy context management, creating a context after main() still crashes on ATI drivers, so the only thing that would really solve the problem is to make sure that everything's destroyed before main() returns. So, basically, adding a sf::cleanup() function or similar. But then I would have other issues (in the .Net binding for example).

binary1248

  • SFML Team
  • Hero Member
  • *****
  • Posts: 1405
  • I am awesome.
    • View Profile
    • The server that really shouldn't be running
« Reply #21 on: March 14, 2012, 05:15:45 pm »
Quote from: "Tank"
Quote
And avoid a big freeze of the application when you do it.

Again: you can't avoid freezes that happen due to texture preparation (unless you're using pixel buffers, but that's another topic). Freezes are mostly the result of time spent on hard-disk access and decompression. So if that's the bottleneck, that's what I have to avoid – not the upload of textures to the GPU, which can't be made faster.


Quote
I prefer to have 1000 1ms freezes, rather than a single 1sec freeze.


Not to mention, nobody said multiple threads all get the same amount of execution time from the OS scheduler. Sure, the ideal would be a perfectly fair scheduling strategy, but as we all know, operating systems have become very complex and scheduling is a science in itself; you really can't count on multiple threads in the same process getting the same priority.

Say, for example, you try to play your favorite game at maximum detail to fully load the system. You then start compressing something that takes a while and also uses all CPU cores. You would expect the FPS in the game to drop and stay stable at some reduced value while the compression takes place. What you instead observe is that it fluctuates noticeably, maybe even producing a bit of stutter in the game for a while. Sure, this example is about scheduling across multiple processes, but the same applies to threads within a single process.

Quote
It would crash without the sharing thing. And even if I totally remove my (totally) messy context management, creating a context after main() still crashes on ATI drivers, so the only thing that would really solve the problem is to make sure that everything's destroyed before main() returns. So, basically, adding a sf::cleanup() function or similar. But then I would have other issues (in the .Net binding for example).


The problem now is that the contexts are being played with even after main() returns (this even includes creating a new one). The context(s) can be created before or, preferably, during the execution of main(), and should only be created when absolutely necessary (in almost all cases, when the first GL resource is created). Constructing a new context after main() returns is just asking for trouble and is the root of all the issues being discussed here. That, in turn, according to our understanding, is due to the management of these shared contexts going wrong somewhere. Microsoft and Khronos might give the illusion that using shared contexts is well defined, as well as the whole deinitialization-order thing. But in reality, we all know that you can't rely purely on standards to get something working.

Sure, you can wait for AMD to fix their driver, or for Microsoft to come up with a new way to do old things (which is never going to happen; they hate OpenGL, don't they?), but what if AMD doesn't want to fix this issue? Are you going to let programs crash forever, or proactively look for a workaround for this known "behavior"?
SFGUI # SFNUL # GLS # Wyrm <- Why do I waste my time on such a useless project? Because I am awesome (first meaning).

Laurent

« Reply #22 on: March 14, 2012, 06:16:21 pm »
Quote
Not to mention, nobody said multiple threads all get the same amount of execution time from the OS scheduler. Sure, the ideal would be a perfectly fair scheduling strategy, but as we all know, operating systems have become very complex and scheduling is a science in itself; you really can't count on multiple threads in the same process getting the same priority.

Yeah, but basically if you set your loading thread to a low priority you can expect it to run without disturbing the main thread.

What's the point of all these technical discussions? Loading textures in threads works as expected, many users do it and are happy with it. All other solutions are more or less worse, in my opinion. And there's nothing to fix there, so where do we go? ;)

Quote
proactively look for a solution

I constantly do, and sometimes I find a solution that improves the global behaviour. But I haven't found the perfect solution yet, so I'll continue to work on it, yes.

Tank

« Reply #23 on: March 14, 2012, 06:44:49 pm »
Quote
Really? The thread instance is local to the if{} block, therefore the main thread blocks until it's finished

:) I wrote it up from memory without compiling it. You get the idea.

Quote
And what were the errors with this code? Crash, or wrong results?

Both. One user reported not seeing anything with those loaded textures (and that one was really hard to tackle; a glDisable( GL_TEXTURE_2D ) revealed the *bug*); the other one experienced crashes in the driver.

Quote
It could be an issue. In the threaded version, images are loaded in memory one after the other. With your solution, it's all at the same time.

The amount of memory is still the same. I don't think you'll run out of memory just because you're fetching some images in one cycle.

Quote
So you had to write a dedicated class just for that?

Yes, or rather a proxy class for managing resources in general (to provide garbage collection and to load things safely from separate threads – textures, but also 3D models that get uploaded into VBOs).

Quote
creating a context after main() still crashes on ATI drivers, so the only thing that would really solve the problem is to make sure that everything's destroyed before main() returns. So, basically, adding a sf::cleanup() function or similar. But then I would have other issues (in the .Net binding for example).

cleanup() isn't necessary imho, look at the proxy pattern I mentioned some posts before (it's just one possible solution of course, but it makes resource management automatic and you don't need ugly init/cleanup functions).

What kind of issues do you get in the .NET binding?

Quote
What's the point of all these technical discussions? Loading textures in threads works as expected, many users do it and are happy with it.

I think it points out some flaws. I already told you 2-3 times that loading textures in threads does not always work. Many users may do it, but how many of them distribute their software on a larger scale? And what percentage of their users will report the error instead of trying something else?

I don't report the crash or multi-threading issues for fun; they're real show-stoppers on my end. I can avoid some of them, primarily those that are my responsibility.

You know the problems (and some solutions), so I guess it's up to you to decide if you're going to fix the problems or live with crashes on some machines. I can still add a "cleanup()" workaround to make everything work. ;-)

Laurent

« Reply #24 on: March 14, 2012, 07:14:52 pm »
Quote
I wrote it up from memory without compiling it. You get the idea.

Yeah, but I'd prefer to work on the exact code that produces the errors.

Quote
The amount of memory is still the same.

?
Anyway let's not waste too much time on this. This is a non-issue in real life.

Quote
cleanup() isn't necessary imho, look at the proxy pattern I mentioned some posts before

Yep, I'll definitely give it a try later. But I'm already pretty sure that it's not 100% perfect.

Quote
What kind of issues do you get in the .NET binding?

Some objects are destroyed by the garbage collector after main(), and in a separate thread (the main thread has already terminated!). And I have absolutely no control over this behaviour.

Quote
I already told you 2-3 times that loading textures in threads does not always work

Since it globally works for me, I prefer to consider that as bugs that need to be fixed, rather than a global technical issue that must be worked around. So I'll first test your small code, probably asking other people to test it as well, and see what we get from there :)

Quote
it's up to you to decide if you're going to fix the problems or live with crashes on some machines

Of course I'll try to fix things. It's just that it's complicated, and takes time.

Quote
I can still add a "cleanup()" workaround to make everything work.

Yeah, you should try to find a workaround on your side, because:
- nobody knows when I'll have the time to work on this issue and find a solution
- the solutions that you find for your own code might help me ;)

Tank

« Reply #25 on: March 14, 2012, 07:25:45 pm »
Quote
the solutions that you find for your own code might help me

I see. ;)

Quote
Yeah, but I'd prefer to work on the exact code that produces the errors.

Just put the thread variable in a more persistent scope, that's it.

Thanks for taking the time.

Laurent

« Reply #26 on: March 14, 2012, 08:50:41 pm »
I've found a potential issue with this code in SFML, the fix is easy but I need to check something first. After that I'll create a new topic to make people test :)

By the way, I totally forgot about this project: sfeMovie

It shows two interesting things:
- loading textures in a separate thread is totally mandatory in this use case
- it seems to work fine too, however I don't know how many people use it

binary1248

« Reply #27 on: March 14, 2012, 09:14:09 pm »
Quote from: "Laurent"

It shows two interesting things:
- loading textures in a separate thread is totally mandatory in this use case
- it seems to work fine too, however I don't know how many people use it


When loading the texture in a separate thread, I agree with you that it would be wise to do all the CPU-driven tasks in that thread. But ultimately, when you call glTexSubImage2D or any similar GL function, the driver queues the command in the same queue as if you were making all GL calls from a single thread.

The question is: who does the concurrency management? The driver, which as you know is buggy as hell, and WGL, which is so "awesomely" documented – or you yourself, who can make sure everything gets done as you expect, with no overhead where it isn't necessary?

I can't imagine this being too hard a task: simply wrap your context in a class that interfaces to OpenGL, and lock a mutex around the context whenever a thread wants to perform some GL magic. As it is now, I imagine the same thing happens anyway – just not in SFML, but somewhere in the operating system.

In the case of sfeMovie I personally would have made use of some advanced GL buffer objects instead of constantly updating a texture all the time, but that's just my preference.

Laurent

« Reply #28 on: March 14, 2012, 10:56:24 pm »
Since OpenGL contexts are thread-local resources, no concurrency can happen until the calls are processed by the driver, right before being sent to the graphics card. So I don't know what you mean by "[...] lock the context with a mutex when a thread wants to perform some GL magic"; can you elaborate on this?

Quote
In the case of sfeMovie I personally would have made use of some advanced GL buffer objects

For sure it would be better with PBOs, but I don't know if the difference would be that significant.

Quote
instead of constantly updating a texture all the time, but that's just my preference.

Whatever solution you choose, you have to constantly upload the pixel data to a texture anyway. Or am I missing an awesome OpenGL technique? :D

binary1248

« Reply #29 on: March 15, 2012, 12:44:31 am »
Quote from: "Laurent"
Since OpenGL contexts are thread-local resources, no concurrency can happen until the calls are processed by the driver, right before being sent to the graphics card. So I don't know what you mean by "[...] lock the context with a mutex when a thread wants to perform some GL magic"; can you elaborate on this?


The idea is to have only a single context in your application and do the synchronization between threads yourself. You would have to make sure that only one thread has the context set as active at a time, and therefore lock a mutex every time you want to make it current somewhere. After the GL call is done, you can make it non-current and release the lock. This is the multiple-threads-single-context model.

As shown in the PDF Tank linked to, single-thread-single-context is also possible and even recommended by many professionals. In that case you queue your GL commands for a single "GL worker thread" to process. Since a queue is FIFO, it has implicit fairness built in, and there's no worry about one thread starving another of execution time. This is probably also how commands are queued and sent to the GPU.

Since the second solution is a bit more complicated, the first one would be a nice middle ground between what SFML uses now and the optimal case.
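A minimal sketch of that single-thread-single-context model, assuming a generic command queue (GlWorker is a hypothetical name; real code would submit actual GL calls as the commands): one dedicated thread owns the context and executes queued commands in FIFO order.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical "GL worker": every command is queued and executed by one
// dedicated thread, so a context would never have to change threads.
class GlWorker {
public:
    GlWorker() : m_thread([this] { run(); }) {}

    ~GlWorker() {
        // The quit command is processed after everything queued before it.
        submit([this] { m_quit = true; });
        m_thread.join();
    }

    void submit(std::function<void()> cmd) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_queue.push(std::move(cmd));
        m_cv.notify_one();
    }

private:
    void run() {                                  // the only "GL thread"
        while (!m_quit) {
            std::function<void()> cmd;
            {
                std::unique_lock<std::mutex> lock(m_mutex);
                m_cv.wait(lock, [this] { return !m_queue.empty(); });
                cmd = std::move(m_queue.front());
                m_queue.pop();
            }
            cmd();                                // FIFO: implicit fairness
        }
    }

    std::queue<std::function<void()>> m_queue;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    bool m_quit = false;                          // touched only by the worker
    std::thread m_thread;                         // started last
};
```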

Quote from: "Laurent"
Whatever solution you choose, you have to constantly upload the pixel data to a texture anyway. Or am I missing an awesome OpenGL technique? :D


Once again Mr. Song has written an excellent article about this matter: http://www.songho.ca/opengl/gl_pbo.html

Note also that I was referring to the SFML sf::Texture, not the OpenGL texture object.
SFGUI # SFNUL # GLS # Wyrm <- Why do I waste my time on such a useless project? Because I am awesome (first meaning).