SFML community forums

General => Feature requests => Topic started by: kfriddile on October 30, 2008, 10:19:32 pm

Title: sfml equivalent of GetMessage()
Post by: kfriddile on October 30, 2008, 10:19:32 pm
I would really like to see support for a blocking call to get events, similar to the Win32 API's GetMessage() call.  It would be nice to be able to run sfml's message pump in a separate thread using this blocking call.  This would allow the app to idle nicely when there's nothing to do, without any of this hacky Sleep(0) silliness.  I've implemented a Win32-only windowing abstraction that does this, but I'd rather use something like sfml to easily make it cross-platform.  Does polling make anyone else feel dirty inside?
Title: sfml equivalent of GetMessage()
Post by: Laurent on October 31, 2008, 08:22:41 am
It has already been discussed. There are two main reasons why I won't implement it:
- It's almost useless in this context (SFML graphics is meant for real time)
- It's implementable with 3 lines of code (the hacky Sleep(0))

But the discussion is still open, and feel free to try to convince me if you think it's worth it :)

By the way, I'm curious to know how you managed to call GetMessage in a thread other than the one which created the window? MSDN says this is technically impossible ;)
More generally, multi-threading with windows is another potential issue, which makes a good third reason not to implement what you request.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on October 31, 2008, 03:33:05 pm
Quote from: "Laurent"

- It's almost useless in this context (SFML graphics is meant for real time)


Real-time in no way requires polling (wasting CPU by continually asking if something has happened instead of just being told when something happens).  For example, the windowing abstraction I mentioned earlier asynchronously calls subscribed methods when an event has occurred.  I've had no trouble writing a real-time OpenGL app using this solution.  Those particular apps may use a lot of CPU, but I know all of that CPU is being consumed by something that actually needs it (the update/render loop)...not by continually asking if there are window events.  It also drastically reduces the coupling of the window object with the rest of the app.  I can just create a 'Window' on the stack, and if an event occurs that some part of my code has subscribed to, that code gets called.  My app doesn't have to concern itself with constantly checking up on the window to see if anything has happened, only to be disappointed when nothing has.  That functionality is the window's responsibility.

Quote from: "Laurent"

- It's implementable with 3 lines of code (the hacky Sleep(0))


That isn't implementing the same thing at all.  It's still fundamentally different because it's still polling.  It's been discussed all over the internet why polling and the Sleep(0) "fix" is bad, so I'll just provide one of the better links:

http://blogs.msdn.com/oldnewthing/archive/2005/10/04/476847.aspx

Also, Microsoft themselves have this to say about using PeekMessage() (polling):

"PeekMessage shouldn't be needed in modern, well-written applications."

http://msdn.microsoft.com/en-us/library/ms644928(VS.85).aspx
(toward the bottom...it is under community content, but that statement has been there for years, so I'm interpreting that as an endorsement from Microsoft)

Quote from: "Laurent"

By the way, I'm curious to know how you managed to call GetMessage in a thread other than the one which created the window? MSDN says this is technically impossible ;)


You are correct in that it isn't possible, but I never said I was calling it in a thread other than the one that created the window :)  My window abstraction owns and spawns a thread that creates the window and then starts the blocking message pump.
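The design described here, a window object that owns the thread running a blocking pump, can be sketched portably with standard C++ threads and a condition variable standing in for the Win32 primitives. This is a hypothetical illustration; EventPump, post, and the string events are invented names, not kfriddile's actual code nor any SFML or Win32 API:

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// A window-like object that owns its pump thread. The pump blocks in
// cv_.wait() when the queue is empty, so it consumes no CPU while idle,
// which is the behaviour GetMessage() provides on Win32.
class EventPump {
public:
    using Handler = std::function<void(const std::string&)>;

    explicit EventPump(Handler handler)
        : handler_(std::move(handler)),
          pump_([this] { run(); }) {}   // thread member declared last, so it starts last

    ~EventPump() {
        post("quit");                   // wake the pump and ask it to stop
        pump_.join();
    }

    // Producer side: called by whatever generates events.
    void post(std::string event) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            events_.push(std::move(event));
        }
        cv_.notify_one();               // wake the sleeping pump; no polling
    }

private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lock(mutex_);
            cv_.wait(lock, [this] { return !events_.empty(); }); // blocks here
            std::string event = std::move(events_.front());
            events_.pop();
            lock.unlock();
            if (event == "quit") return;
            handler_(event);            // dispatch to the subscriber
        }
    }

    Handler handler_;
    std::mutex mutex_;
    std::condition_variable cv_;
    std::queue<std::string> events_;
    std::thread pump_;
};
```

Subscribers receive their callbacks on the pump thread, in posting order; while nothing happens, the pump thread is descheduled by the OS rather than spinning.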
Title: sfml equivalent of GetMessage()
Post by: Laurent on October 31, 2008, 10:47:16 pm
You're right and I completely agree with you, actually ;)

Except for one thing: calling a function that does nothing if no event happened doesn't waste CPU at all. But I agree it's much less elegant than a signal / slot system. The problem is that this kind of stuff is not yet part of the C++ standard, and libraries providing it are too heavy to be used by SFML (it really requires a lot of code to implement). Moreover, it's confusing for beginners and doesn't mix well with C, which is required to write bindings. One more thing: calling an event handler from a separate thread is too dangerous to be the default behaviour (requiring a basic application to be thread-safe is too much to ask, especially for beginners who might not even know what threads are).

The conclusion to this discussion (in my opinion) is that SFML won't change the way it handles events, for the reasons I said previously, but it's up to users to implement a more elegant event handling on top of that. Polling is far from elegant, but it works and can be used to implement a clean signal / slot system.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 01, 2008, 12:09:13 am
Quote from: "Laurent"

calling a function that does nothing if no event happened doesn't waste CPU at all.


This is simply false.  Calling a non-blocking function to check for events ( such as PeekMessage() ) in a loop uses 100% of the CPU core that thread is running on.  I'm sure laptop users running on battery would consider that "wasteful".  Even with a Sleep(0) to relinquish the rest of your time slice, you're still making at least one function call which, by definition, uses CPU.
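The contrast can be made concrete with a small standard-C++ sketch (a hypothetical illustration, not SFML code). The polling thread keeps re-running its check-and-yield loop for as long as nothing has happened, while the blocking thread is descheduled inside a condition-variable wait until it is signalled:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <thread>

// Two ways to wait for the same flag. run_polling() burns its time slice
// asking "did anything happen yet?"; run_blocking() sleeps until notified.
struct Demo {
    std::atomic<bool> ready{false};
    std::mutex m;
    std::condition_variable cv;

    // PeekMessage-style: spin, yielding the rest of the slice each pass.
    long run_polling() {
        long iterations = 0;
        while (!ready.load()) {
            ++iterations;                  // one wasted check per pass
            std::this_thread::yield();     // the Sleep(0)-style "fix"
        }
        return iterations;
    }

    // GetMessage-style: one wake-up when the event actually occurs.
    void run_blocking() {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [this] { return ready.load(); });
    }
};
```

On an idle machine the polling thread can accumulate thousands of iterations per millisecond of waiting, each a scheduled, executing check, while the blocking thread costs nothing between the wait and the notify.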

Quote from: "Laurent"

The problem is that this kind of stuff is not yet part of the C++ standard


What does that have to do with anything?

Quote from: "Laurent"

libraries providing it are too heavy to be used by SFML


I would love to see some profiling or benchmark data to support this claim.

Quote from: "Laurent"

Moreover, it's confusing for beginners


I'm sure C++ was confusing for them at first too.

Quote from: "Laurent"

doesn't mix well with C, which is required to write bindings.


If you're talking about the name-mangling differences between C and C++, I think that is solved by simply declaring your C++ functions extern "C".  If you're talking about something else I'm not aware of, then you may have a point.  I will admit that bindings for other languages aren't important to my particular use case.
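For illustration, the usual shape of such a binding is an opaque handle plus extern "C" free functions; every name below is invented for the sake of the example, not the actual SFML C binding:

```cpp
// A C++ class exposed to C: C callers see only an opaque pointer and
// plain functions with C linkage, so no C++ name mangling is involved.
class Window {
public:
    bool isOpen() const { return open_; }
    void close() { open_ = false; }
private:
    bool open_ = true;
};

extern "C" {

typedef struct sfWindow sfWindow;       // opaque to C code

sfWindow* sfWindow_create() {
    return reinterpret_cast<sfWindow*>(new Window);
}

int sfWindow_isOpen(sfWindow* w) {      // C89 has no bool, so return int
    return reinterpret_cast<Window*>(w)->isOpen() ? 1 : 0;
}

void sfWindow_close(sfWindow* w) {
    reinterpret_cast<Window*>(w)->close();
}

void sfWindow_destroy(sfWindow* w) {
    delete reinterpret_cast<Window*>(w);
}

} // extern "C"
```

This covers plain functions easily; the harder case Laurent is presumably alluding to is signal / slot objects, which have no direct C equivalent beyond a function pointer plus a void* user-data argument.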

Quote from: "Laurent"

calling an event handler from a separate thread is too dangerous to be the default behaviour (requiring a basic application to be thread-safe is too much to ask, especially for beginners who might not even know what threads are).


I couldn't disagree more.  EVERYONE needs to be conscious of thread safety issues, because single-threaded applications won't be an option for much longer, especially for resource-intensive real-time applications like your library seems to be designed for.  Individual cores aren't getting faster at the rate that they used to (in some cases they are even getting slower).  Instead, additional cores are being added.  If your application's performance can't scale with the addition of more cores via multithreading, then your application won't be viable in the very near future.  Furthermore, if you, as a library developer, don't realize this and design your library to be safely usable in such an environment, then nobody will be able to write viable applications with your library.

Anyways, it's also come to my attention that sfml contains intentional memory leaks, so it can't be used in any production-quality code anyways.  It seems odd to me that you would point to the C++ standard above while at the same time relying on the operating system to clean up your memory leaks...behavior which is obviously not guaranteed by the standard.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 07:41:26 am
1/ I'm drawing a distinction between using CPU and wasting CPU. Of course a function call uses CPU, but it certainly doesn't waste it.

2/ I'm talking about dependencies, not about performance. Libraries providing clean signal / slot features are just too big; the other solution is to wait for their integration into the C++ standard, but that's not happening soon.

3/ SFML must be 100% bindable to C. Signals / slots are not.

4/ I think SFML is appreciated because I care about beginners. Trust me. And believe me, thread safety is not beginner-friendly at all.

5/ Regarding the memory leaks... it's a long story ;)
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 05:22:45 pm
Quote from: "Laurent"
4/ I think SFML is appreciated because I care about beginners. Trust me. And believe me, thread safety is not beginner-friendly at all.

Is SFML supposed to be used by beginners only? (I admit this would somewhat disappoint me)
I think the point is: do you prefer to focus on the library's popularity or the library's quality?

Quote from: "Laurent"
5/ Regarding the memory leaks... it's a long story ;)

Why don't you just explain to him that there is only ONE leak, which happens only ONCE in the whole program execution?
Or are there other leaks under Windows?

Quote from: "kfriddile"
Anyways, it's also come to my attention that sfml contains intentional memory leaks, so it can't be used in any production-quality code anyways.

As the Mac OS X developer for SFML, I can tell you I noticed no leak in the OS-dependent part (at least, that's what the "leaks" tool told me, and the memory used by SFML apps does not increase over time).

Now about this...
Quote from: "kfriddile"
I would really like to see support for a blocking call to get events, similar to the Win32 API's GetMessage() call.

I've not thought about this for a long time, but even if it were easy for me to support blocking calls for events, that would also block display updates, because I manually have to tell the OpenGL context when to swap the back and front buffers (which is done from the polling loop). Is this what you wish?
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 06:41:59 pm
Quote
Is SFML supposed to be used by beginners only? (I admit this would somewhat disappoint me)
I think the point is: do you prefer to focus on the library's popularity or the library's quality?

SFML is for beginners (most users are), as well as for experts. I don't see why a library which is beginner-friendly couldn't also be expert-friendly. My goal is to make quality available to everybody. Too many libraries are either scary (i.e. for experts only) or too limited (i.e. for beginners only).

Quote
Why don't you just explain to him there is only ONE leak that happens only ONCE in the whole program execution ?

Because I had to leave when I wrote my last post ;)
Ok, more details: the leak is indeed a small and controlled one, and its purpose is to enable a very important feature of SFML. Some people would even say it's not a leak; a leak is something which is not controlled and makes memory consumption grow and grow. Actually, some implementations of the STL or of popular libraries can't free all the memory they use at program exit, and this is perfectly alright.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 07:41:37 pm
Quote from: "Laurent"
SFML is for beginners (most users are), as well as for experts. I don't see why a library which is beginner-friendly couldn't also be expert-friendly. My goal is to make quality available to everybody. Too many libraries are either scary (i.e. for experts only) or too limited (i.e. for beginners only).

Then what about the threads ?
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 07:55:27 pm
Quote
Then what about the threads ?

What's the problem with threads? As I said, you can add whatever you want on top of SFML.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 08:09:36 pm
I meant thread safeness (safety ?).

But... I don't know much about this topic. Doesn't the main thread need to be thread-safe in order to allow the use of threads "on top of SFML" ?

And by the way, any Mac OS X application already uses multi-threading for the event handling. Do I need to take care of this ?
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 08:46:32 pm
The most critical parts of SFML are thread-safe (mainly the OpenGL context stuff), but I'm sure there are still many others that would need to be improved. The point is that only a few functions of SFML are usually used in separate threads, so I get almost no feedback oriented toward multi-threading.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 09:42:59 pm
This sounds somewhat approximate to me...
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 09:51:17 pm
That's basically what I meant ;)
But it's impossible for me to test every single combination of possible multi-threaded situations. So I need feedback, but I get only very little of it.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 10:00:22 pm
Quote from: "Laurent"
That's basically what I meant ;)
But it's impossible for me to test every single combination of possible multi-threaded situations. So I need feedback, but I get only very little of it.

Then why not protect everything you can ?
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 10:16:25 pm
Because it adds complexity and decreases performance...
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 10:19:53 pm
But on the other side you can't wait for crashes...
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 01, 2008, 11:06:13 pm
Hum... are you actually an expert in multi-threading, posting relevant suggestions? Or are you just trying to guess some possible questions that might make sense? :P

I'm always open (and happy) to discussing multi-threading stuff, but only if it makes sense. This is a complex but important part of this kind of library.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 01, 2008, 11:27:06 pm
Waiting for crashes in order to fix the problems just does not seem to be a good idea to me.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 02, 2008, 09:44:11 am
It's much more complex than just "is there a chance this line crashes in a multithread architecture ? ok, I put a mutex and that will be fine".

And, as far as I remember, there is no reported issue which is purely related to thread-safety in SFML so far.

But if you still think it's not enough, then I'll be glad to wait for your analysis regarding possible multithreading issues :)
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 02, 2008, 11:28:36 am
Hm.. yeah, as soon as I know a bit more about the topic :lol: .
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 03, 2008, 04:52:41 am
Quote from: "Laurent"

1/ I'm drawing a distinction between using CPU and wasting CPU. Of course a function call uses CPU, but it certainly doesn't waste it.


Using something when it doesn't need to be used is the definition of the word "waste".

Quote from: "Laurent"

2/ I'm talking about dependencies, not about performance. Libraries providing clean signal / slot features are just too big; the other solution is to wait for their integration into the C++ standard, but that's not happening soon.


I never asked you to implement any kind of signal/slot or callback system.  I just asked for a blocking GetMessage() type call so I could implement my own.

Quote from: "Laurent"

3/ SFML must be 100% bindable to C. Signals / slots are not.


See response to 2 above.

Quote from: "Laurent"

I think SFML is appreciated because I care about beginners. Trust me. And believe me, thread safety is not beginner-friendly at all.


I think if you cared about beginners, you wouldn't hold them back with antiquated architecture.

Quote from: "Laurent"

5/ Regarding the memory leaks... it's a long story ;)


Yes, it was a long read.

Quote from: "Ceylo"

Is SFML supposed to be used by beginners only? (I admit this would somewhat disappoint me)
I think the point is: do you prefer to focus on the library's popularity or the library's quality?


Popularity and quality should be the same thing (unless one desires to only be popular among users who don't know any better...I wouldn't if it was my library).

Quote from: "Ceylo"

I've not thought about this for a long time, but even if it were easy for me to support blocking calls for events, that would also block display updates, because I manually have to tell the OpenGL context when to swap the back and front buffers (which is done from the polling loop). Is this what you wish?


If window events are that coupled to rendering (who says your current render-target has to be the back buffer?), then your architecture is flawed.  If such a blocking call existed, then it should be possible to call it on a separate thread than the one swapping the frame buffers.  There is no "polling loop" in that scenario.  "Polling" is akin to constantly asking the question "did anything happen yet?", while a rendering loop is constantly stating "draw this".

Quote from: "Laurent"

Ok, more details: the leak is indeed a small and controlled one, and its purpose is to enable a very important feature of SFML. Some people would even say it's not a leak; a leak is something which is not controlled and makes memory consumption grow and grow. Actually, some implementations of the STL or of popular libraries can't free all the memory they use at program exit, and this is perfectly alright.


Your definition of "leak" is narrow-sighted.  A one-time allocation that is never freed is obviously a leak, and it won't cause memory consumption to grow over time.  Leaks don't even have to refer to memory, since you can leak all kinds of other resources (device contexts, etc.).  Any STL implementation that leaks memory either isn't widely used or has been fixed not to leak.  It is ALWAYS possible for a program to free memory it has allocated.  Statements like "memory leaks are perfectly alright" are why I've decided not to use anything you've written.  There is absolutely no excuse for allowing any kind of resource leak.  I don't care if leaking one byte would allow your software to wash the dishes while creating world peace at the same time.  The idea of someone knowing about something as heinous as a leak and then rationalizing it instead of fixing it blows my mind.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 03, 2008, 09:19:50 am
Quote
Using something when it doesn't need to be used is the definition of the word "waste".

Ok, fine for the definition. The point is, are we talking about the definition of "waste" or about the impact of such a design on SFML ? I don't think the main point between blocking wait and polling is to save or not a function call, it's much more a design issue. I think you'll agree with me, so let's not waste time with such considerations and focus on the design stuff ;)

Quote
I never asked you to implement any kind of signal/slot or callback system. I just asked for a blocking GetMessage() type call so I could implement my own.

You're right, sorry.

Quote
I think if you cared about beginners, you wouldn't hold them back with antiquated architecture.

You think not providing a blocking GetEvent is antiquated architecture? I don't, but indeed I could easily add it. The only reason why I don't is exactly why you want it: the only use I can see for such a function would be to implement an asynchronous signal / slot (or callback) system, and that's just idiotic compared to the same system made synchronous. I agree it's less "elegant", but wasting a function call versus introducing multi-threading and all its potential issues... my choice is made. Imagine I just want to move a sprite in a keypress event; with your solution I would already have to care about concurrent accesses and go with mutexes to protect all accesses to my sprite. Not to mention the fact that you could end up drawing your sprite in a different position than the one it had when you computed collisions / AI / ... in its update() function. That's just crazy, especially for beginners who are not aware of all this stuff. And that costs much more than just polling.

Quote
Your definition of "leak" is narrow-sighted

That's wrong, YOUR definition is narrow-sighted. Mine is flexible and adapted to real situations, while yours is a "perfect world" definition. But we're not in a perfect world. Would you sacrifice an important feature of your library for the sake of "perfection" ?
Of course if you can tell me how to free this memory while keeping the feature, it would be great ;)

It's too bad we're fighting rather than trying to find solutions. It might not be obvious to you, but clean and well-designed code is one of my main goals too; I'm not the kind of programmer who just writes "code that works". So if you're ok with it, I'd be really glad to talk more about the benefits and drawbacks of multithreaded event handling, and see what should or shouldn't be added to SFML.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 03, 2008, 01:23:36 pm
Quote from: "kfriddile"
Popularity and quality should be the same thing (unless one desires to only be popular among users who don't know any better...I wouldn't if it was my library).

Nope, because if you care about everyone, you won't force users to take care of thread safety (that may indeed improve performance).


Quote from: "kfriddile"
If window events are that coupled to rendering (who says your current render-target has to be the back buffer?), then your architecture is flawed.  If such a blocking call existed, then it should be possible to call it on a separate thread than the one swapping the frame buffers.  There is no "polling loop" in that scenario.  "Polling" is akin to constantly asking the question "did anything happen yet?", while a rendering loop is constantly stating "draw this".


Okay. I was only thinking of the common event and drawing loop:
Code: [Select]

while(running) {
    // get all events
    // draw
}

This would prevent drawing if you use blocking events.
Now... I do not know enough about multi-threading issues to tell whether you can safely call for buffer swapping from a separate thread.


Quote from: "kfriddile"
[...]  It is ALWAYS possible for a program to free memory it has allocated.  [...]

But what for? If it was to be freed, it would be freed just before the program exits. Therefore it is useless.
Title: sfml equivalent of GetMessage()
Post by: bullno1 on November 03, 2008, 01:48:11 pm
Quote
Your definition of "leak" is narrow-sighted.

If I'm not wrong, modern OSes can free all the memory that a program uses after it exits, and even after it crashes. Memory leaks only matter if they grow memory consumption over time. It may be a leak, but it's harmless if it's just a one-time leak. Moreover, while RAM is getting cheaper and people's average age decreases, why should you waste your precious life on such a thing?

Btw, where's the leak that you two are talking about? I might try to look into it.

About the blocking thing, IMO, you could consider creating two versions of the function (one polling and one blocking). I did not look into the implementation, and I'm also Linux- and Mac-impaired, so I don't know if it's possible to handle events in such ways. Just my 2 cents.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 03, 2008, 02:27:16 pm
Quote
Btw, where's the leak that you two are talking about? I might try to look into it.

There's a global context created in sfml-window's Context class (in current sources, not 1.3), which can never be destroyed because it may be used until the very last instruction of the program. The trickiest issue requiring this kind of stuff is probably the .Net binding, because the GC can still run and free resources needing a context after the main thread has ended.

Quote
About the blocking thing, IMO, you could consider creating two versions of the function (one polling and one blocking)

Technically, there's absolutely no problem for any of the 3 OSes SFML supports. It's really just a design question.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 03, 2008, 05:52:09 pm
Ok, at this point I'm losing interest in the original feature request of this thread.  I'm starting to feel a moral obligation to set you guys straight about resource leaks.  This is the first time I have EVER heard ANYONE say a resource leak is "ok".  This is software 101 people.  Any competent C/C++ programmer will tell you that resource leaks are right up there with crashes on the bug scale.

Quote from: "Ceylo"

Quote from: "kfriddile"
[...]  It is ALWAYS possible for a program to free memory it has allocated.  [...]

But what for? If it was to be freed, it would be freed just before the program exits. Therefore it is useless.


It is in no way "useless".  First of all, there is absolutely no guarantee that the operating system will clean up your mess (show me in the C++ standard where it says it will and I'll buy you each a car).  If there were, you wouldn't see languages like Java implementing automatic garbage collection.  Second, you may need that memory back BEFORE the program exits.  This is especially true of a middleware product like sfml, where you can't anticipate the resource needs of the user.

Quote from: "bullno1"

If I'm not wrong, modern OSes can free all the memory that a program uses after it exits, and even after it crashes. Memory leaks only matter if they grow memory consumption over time. It may be a leak, but it's harmless if it's just a one-time leak. Moreover, while RAM is getting cheaper and people's average age decreases, why should you waste your precious life on such a thing?


IF an OS frees resources leaked by a program, it does so as a fail-safe against programmer error, not as something to be relied upon.  What you just said is the same thing as a pilot saying he can just jump out of his plane instead of landing it safely because his parachute will keep him from falling to his death.

Quote from: "Laurent"

Ok, fine for the definition. The point is, are we talking about the definition of "waste" or about the impact of such a design on SFML ? I don't think the main point between blocking wait and polling is to save or not a function call, it's much more a design issue. I think you'll agree with me, so let's not waste time with such considerations and focus on the design stuff ;)


It's both a design and efficiency issue (they are almost always intertwined).  Simply put, my design proposition allows for a more efficient program because it doesn't needlessly gobble up CPU cycles.

Quote from: "Laurent"

You think not providing a blocking GetEvent is antiquated architecture? I don't, but indeed I could easily add it. The only reason why I don't is exactly why you want it: the only use I can see for such a function would be to implement an asynchronous signal / slot (or callback) system, and that's just idiotic compared to the same system made synchronous. I agree it's less "elegant", but wasting a function call versus introducing multi-threading and all its potential issues... my choice is made. Imagine I just want to move a sprite in a keypress event; with your solution I would already have to care about concurrent accesses and go with mutexes to protect all accesses to my sprite. Not to mention the fact that you could end up drawing your sprite in a different position than the one it had when you computed collisions / AI / ... in its update() function. That's just crazy, especially for beginners who are not aware of all this stuff. And that costs much more than just polling.


Yes, I do think it's antiquated.  So does Microsoft, so does Intel, etc.  "Elegant" design and robustness/efficiency go hand in hand.  For example, coupling with the rest of the app is greatly reduced when the "Window" object is capable of waiting for events itself without dictating the rest of the program's architecture.  Communication purely through callbacks is about as low as coupling gets.  Reduced coupling means more flexibility, reusability, and testability.  Anyways, about your sprite example...there are no concurrency issues involved.  It's just a simple example of the consumer/producer idiom.  There is one producer (the thread pumping the messages) and 0-N consumers.  It's not exactly how I would do it, but the simplest example would be to store key-pressed events in a bool array as 'true' and key-releases as 'false'.  That array would obviously have only one writer and 0-N readers (a sprite for example) so there are no concurrency issues whatsoever.
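The single-writer key-state table described in the paragraph above might be sketched as follows. This is a hypothetical illustration; std::atomic is used so the cross-thread sharing is well-defined under the C++ memory model, since a plain bool written by one thread and read by another is formally a data race even with a single writer:

```cpp
#include <array>
#include <atomic>

// One writer (the event/pump thread), any number of readers.
constexpr int kKeyCount = 256;
std::array<std::atomic<bool>, kKeyCount> g_keyDown{}; // all start false

// Producer: called only from the thread pumping the messages.
void onKeyEvent(int key, bool pressed) {
    g_keyDown[key].store(pressed, std::memory_order_release);
}

// Consumers: sprites, the update loop, etc. sample the current state.
bool isKeyDown(int key) {
    return g_keyDown[key].load(std::memory_order_acquire);
}
```

Readers never block the writer and no mutex is needed; the cost is that a reader sees the most recent state rather than a per-event history.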

Quote from: "Laurent"

That's wrong, YOUR definition is narrow-sighted. Mine is flexible and adapted to real situations, while yours is a "perfect world" definition. But we're not in a perfect world. Would you sacrifice an important feature of your library for the sake of "perfection" ?
Of course if you can tell me how to free this memory while keeping the feature, it would be great ;)


Definitions are, by definition (har har), inflexible.  If we were allowed to bend definitions at will to make things more convenient for us, they would be useless as identifiers for ideas (which is what they are supposed to be).  If I were writing a library to be used as black-box middleware by trusting users, then yes, I would ensure to the best of my abilities that no bugs, such as resource leaks, exist.  I think my coworker offered some suggestions for plugging the leak in your email conversation with him, but, at the very worst, could it not be plugged inside an atexit callback?  I shudder to suggest such a hack, but it's far better than the leak.
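The atexit idea could look roughly like this; Context here is a stand-in for SFML's internal class, not its real code:

```cpp
#include <cstdlib>

// Stand-in for SFML's internal global OpenGL context.
struct Context { /* ... GL state ... */ };

static Context* g_context = nullptr;

static void destroyContext() {
    delete g_context;        // freed at normal program termination
    g_context = nullptr;
}

// Lazily create the global context; register its cleanup exactly once.
Context& globalContext() {
    if (!g_context) {
        g_context = new Context;
        std::atexit(destroyContext);  // runs after main() returns
    }
    return *g_context;
}
```

Note that Laurent's caveat from later in the thread still applies: atexit handlers run during normal C++ termination, which may be too early for clients (such as a .NET GC finalizer) that still need the context after the main thread has ended.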

Quote from: "Laurent"

It's too bad we're fighting rather than trying to find solutions. It might not be obvious to you, but clean and well-designed code is one of my main goals too; I'm not the kind of programmer who just writes "code that works". So if you're ok with it, I'd be really glad to talk more about the benefits and drawbacks of multithreaded event handling, and see what should or shouldn't be added to SFML.


I don't feel like we're fighting since nothing off-topic or personal has been said.  I'm just not the type to acquiesce when I know I'm right.  I get the impression that you view "elegant", "perfect" designs and code as things that are done for fun because one enjoys programming, and that compromises against those ideals are ok when working on an assignment or some other, more practical, software project.  I (and many others) think that mentality is exactly opposite of the truth.  When working on some prototypical piece of code, it's acceptable to hack and kludge a little bit, because the purpose of such code is just to prove that the problem is solvable.  It may not be.  Once the problem is known to be solvable, and it comes time to solve it in a production environment, it's time to find the OPTIMAL solution to the problem.  By "optimal", I mean the best design, taking into account things like coupling with other components, flexibility, portability, future viability, and performance.
Title: sfml equivalent of GetMessage()
Post by: Wizzard on November 03, 2008, 07:36:28 pm
Is it really a memory leak if the leak is being collected by all the supported operating systems? Not to mention that even if it were actually leaking, it is only a couple hundred bytes. On even the oldest computers still being used today for multimedia, you'd have to run the program over a trillion times for that much memory to begin to affect the computer in a noticeable way. It's not like the memory leak is in a loop or anything. Besides, Laurent wouldn't have intentionally put a memory leak into SFML unless it greatly simplified the implementation and API. So overall, I don't mind that my debugger says I have a memory leak.
Title: sfml equivalent of GetMessage()
Post by: Ceylo on November 03, 2008, 07:58:42 pm
Quote from: "kfriddile"
First of all, there is absolutely no guarantee that the operating system will clean up your mess (show me in the C++ standard where it says it will and I'll buy you each a car).

Yes there is, for any modern operating system. And it does not depend on a programming language.


Quote from: "kfriddile"
If there was, you wouldn't see languages like Java implementing automatic garbage collection.

Except if it was to allow lazy programmers not to take care of memory handling.

Quote from: "kfriddile"
What you just said is the same thing as a pilot saying he can just jump out of his plane instead of landing it safely because his parachute will keep him from falling to his death.

But here there is no plane crashing :].

Quote from: "kfriddile"
Definitions are, by definition (har har), inflexible.

But interpretations are not. That is the point here.
You have your way of seeing things, which does not always mean it is the right one.

Quote from: "kfriddile"
I don't feel like we're fighting since nothing off-topic or personal has been said.  I'm just not the type to acquiesce when I know I'm right.

Lol :D . *sorry for the useless comment*
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 03, 2008, 10:08:42 pm
Quote
but, at the very worst, could it not be plugged inside an atexit callback?

I'm not sure it would work in every situation (like, as I said, in the C# binding where the main thread terminates before resources are freed), but I've found some good articles about global destruction in "Modern C++ design", I'll take a look at it.

Quote
It's not exactly how I would do it, but the simplest example would be to store key-pressed events in a bool array as 'true' and key-releases as 'false'. That array would obviously have only one writer and 0-N readers (a sprite for example) so there are no concurrency issues whatsoever

And then what ? Polling the array of booleans ? ... I really don't get it.
My point is that operations in a real-time program (not to say a game) have to be sequential (you can't move an entity while computing its collisions or drawing it, it has to be done at a specific place in the game loop). Decoupling event handling from the rest of the application just breaks this rule. I'm really curious to see how you would write a robust game architecture with multithreaded event handling and no polling.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 03, 2008, 11:03:43 pm
Quote from: "Ceylo"

Yes there is, for any modern operating system. And it does not depends on a programming language.


No there isn't, and yes it does.

Quote from: "Ceylo"

Except if it was to allow lazy programmers not to take care of memory handling.


Yes, it's a crutch for people who either don't grasp the importance of proper memory management, or who aren't skilled enough to deal with it, and therefore a testament to the importance of proper memory management as well.  C++ doesn't provide such a crutch, so it's YOUR responsibility.  Just because the particular OS your code was compiled and run on THIS TIME is willing to clean up your mess (yes, it is a mess), doesn't mean that will always be the case.  If you are writing C++ code that you wish to be portable or reusable both now and in the future, then you should strive to adhere as closely to the standard as possible (if anyone here doesn't have a copy of the standard, I will be happy to provide one in PDF form).  Shirking memory management duties automatically makes your code non-portable, since, again, the standard makes NO promise that the OS will clean up after you.  Why limit the portability of your code when it's so easy to manage memory properly?

Quote from: "Ceylo"

But here there is no plane crashing :].


How do you know?  How does the pilot know if his plane will land harmlessly in a lake, or right on your head?

Quote from: "Ceylo"

You have your way of seeing the things, which does not always mean it is the good one.


It's always good to be skeptical, but it's far worse to be stubborn in the face of obvious truth.  I've tried to provide examples and logical explanations for everything I've said, but I haven't heard any in return that were able to stand up under scrutiny.  I'm sorry if I sound angry, but the neglect of such fundamental and widely-acknowledged best practices is very alarming to me.  You might as well be trying to convince me that up is down.

Quote from: "Laurent"

I'm not sure it would work in every situation (like, as I said, in the C# binding where the main thread terminates before resources are freed), but I've found some good articles about global destruction in "Modern C++ design", I'll take a look at it.


That is an excellent book.  I would recommend "C++ Template Metaprogramming" if you plan on actually using metaprogramming.  It is a very practical introduction to the Boost MPL library...something you don't want to write metaprograms without.  I'd also like to point out that you are experiencing another common symptom of using anything globally.  There's a reason that experienced developers will tell you that globals are bad.  List of books every C++ programmer should own:

Modern C++ Design
Exceptional C++
More Exceptional C++
C++ Template Metaprogramming
Design Patterns (aka, the Gang of Four book...just ignore the Singleton pattern)
Effective C++
More Effective C++
Effective STL
Beyond the C++ Standard Library: An Introduction to Boost
Refactoring: Improving the Design of Existing Code
Intel Threading Building Blocks (this library is going to be a lifesaver in the near future)
there are more...

Quote from: "Laurent"

And then what ? Polling the array of booleans ? ... I really don't get it.
My point is that operations in a real-time program (not to say a game) have to be sequential (you can't move an entity while computing its collisions or drawing it, it has to be done at a specific place in the game loop). Decoupling event handling from the rest of the application just breaks this rule. I'm really curious to see how you would write a robust game architecture with multithreaded event handling and no polling.


I know you don't get it, and I think it's my fault; it's difficult to convey certain things via typing.  Yes, querying an array of booleans would be one possibility, and it is very similar to polling (but not the same by any means), but now I have that CHOICE when designing my architecture.  I also said it probably wouldn't be my solution of choice.  The point is that the input source is no longer dictating my architecture.  I could do all sorts of things instead, such as synchronizing access to the sprite's position to allow me to work with copies in Sprite::Update().  I know you're probably already thinking that locking to do such synchronization would be slow, but I say you are being prematurely pessimistic; locking with a tbb::spin_mutex, for example, would be negligible.  The possibilities don't stop there.  Say I'm working with some entity that has just a single attribute that needs to be synchronized.  Maybe it could be stored in an atomic variable abstraction, making locks unnecessary?  It's already becoming clear that I have many more options when approaching a problem than I would with your architecture.  Note that I still have the option to do something very similar to what you force people to do.

Quote from: "Laurent"

My point is that operations in a real-time program (not to say a game) have to be sequential


This isn't entirely true either.  There are vast amounts of parallelism to be had in real-time applications, including games, but that discussion would be very lengthy and I'd rather not type it.  I think it can be left as an "exercise for the reader" :)  The main point I wanted to make was about architecture, and how reduced coupling introduces choice and flexibility.
Title: sfml equivalent of GetMessage()
Post by: MrDoomMaster on November 04, 2008, 01:26:13 am
Another great book he should get is Patterns for Parallel Programming
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 04, 2008, 08:35:33 am
I think we agree about the memory leak. It's bad, and any good programmer should do their best to get rid of such issues. I did, but couldn't find a 100% safe way to remove it, so I kept it because it was ridiculously small compared to the feature it made possible. Now, unless you tell me that it's cleaner to remove it and break SFML's behaviour, I think we can focus on the solution itself ;)
But please don't say you're the only one who's right, a lot of people always discuss whether controlled leaks are actually leaks or not. And you haven't experienced every single situation to say that 100% of leaks can be removed.

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (are you, actually?). I've been making games (including commercial ones) and reading game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or just prove to me it's possible.

Regarding the "decoupling" stuff, once you've wrapped event handling in a callback / signal system it's all the same (and don't tell me about the CPU wasted in a function call, that's ridiculous), it's just a matter of being synchronous or not.

Quote
This isn't entirely true either. There are vast amounts of parallelism to be had in real-time applications, including games

I was just talking about the top-level logical flow. Of course there are tons of things which have to be parallelized, especially with today's multi-cores and consoles architectures.
Title: sfml equivalent of GetMessage()
Post by: Jaeger on November 04, 2008, 07:18:13 pm
Quote from: "Laurent"

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (are you, actually?). I've been making games (including commercial ones) and reading game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or just prove to me it's possible.



We use a similar mechanism in our current commercial project. In part we chose it because of Amdahl's law. The smaller we make the serial sections of our application the better we'll scale to many core systems. Window and system events come in asynchronously and if the application is in a state where it cannot handle them we block or queue as appropriate for performance. Even if we queue we don't require polling the queue explicitly. Instead when we transition out of our blocking state we can check the queue and if it is not empty we transition into a state that processes the queue.

However the main reason we chose this architecture is the first pillar of concurrency.
http://www.ddj.com/hpc-high-performance-computing/200001985?pgno=2
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 04, 2008, 08:52:11 pm
Quote from: "Laurent"

I think we agree about the memory leak. It's bad, and any good programmer should do their best to get rid of such issues. I did, but couldn't find a 100% safe way to remove it, so I kept it because it was ridiculously small compared to the feature it made possible. Now, unless you tell me that it's cleaner to remove it and break SFML's behaviour, I think we can focus on the solution itself ;)


I'm not at all suggesting that sfml should lose any functionality.  You can fix the leak without losing anything useful.  In the email conversation with my coworker you offered a few use cases to justify the leak:

- requesting the multisampling extension to OpenGL before creating the
first window (and the first OpenGL context)
- loading a texture, a shader or whatever graphical resource before having any window
- having all the OpenGL resources and states not destroyed between
destruction and re-creation of a window

Now, you have discovered that implementing something to support these exactly as written requires creating another bug in the form of a memory leak.  Situations like this come up a lot in the design stage, and are a strong indicator of a design deficiency.  You know it's possible to do what you want, so now let's find an acceptable, optimal solution.  A common thread between the three cases above is that they are all special cases of a more general use-case: they all want to do these things before a "Window" exists.  The fact that this has to be explicitly stated indicates an inherent dependency between a render context and a window.  In fact, I believe you hacked around this by creating a dummy "window" just to create a "global" render context.  Globals are also a strong clue that a better design probably exists.  So, obviously a window is a prerequisite to a render context.  This isn't a limitation; it's something that makes perfect conceptual sense (which means that circumventing it is conceptually wrong and confusing to the logical user).  Lose the global, enforce the prerequisite, and users are still able to do everything they could before, just through a more logical path instead of magically pulling information from the global ether.  I would recommend having "Window" be a construction parameter of "RenderContext" to decouple the two concepts somewhat and allow for multiple contexts for the same window.  Your leak is gone.

Quote from: "Laurent"

And you haven't experienced every single situation to say that 100% of leaks can be removed.


Experience doesn't enter into it, just logic.  Anything you create, you can destroy.

Quote from: "Laurent"

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (are you, actually?). I've been making games (including commercial ones) and reading game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or just prove to me it's possible.


I'm not saying that there aren't things that need to happen in a particular order, but the smaller these sections are, the better.  It's still completely possible to ensure proper ordering if one so desires.  It's just a more-flexible, less-invasive architecture.

Quote from: "Laurent"

Regarding the "decoupling" stuff, once you've wrapped event handling in a callback / signal system it's all the same (and don't tell me about the CPU wasted in a function call, that's ridiculous), it's just a matter of being synchronous or not.


Um...synchronous and asynchronous aren't the same at all with regard to coupling issues.  One requires client code to explicitly check for events and one doesn't.  That's also where the waste happens, because 99.999% of the time there isn't going to be an event.
Title: sfml equivalent of GetMessage()
Post by: MrDoomMaster on November 04, 2008, 10:09:06 pm
I have what I believe is a fairly solid argument in regards to the issue of PeekMessage() vs GetMessage().

Let's assume we have a specific goal: When the user minimizes the game, we want the game to consume 0% CPU. By 0% I mean time spent in our application/game code. This does not include the processing time the operating system spends managing our process. Let's keep it simple.

Below I've outlined 2 scenarios. Scenario 1 doesn't reach our goal at all, however it reflects the current design that SFML imposes on the user. Scenario 1 is being presented because I want to express how SFML could not possibly fulfill this very simple but very important design goal in its current state (architecture).

Scenario 2 will indeed solve the problem, however it utilizes an architectural design that is completely different from, and incompatible with, SFML. This is basically the design that kfriddile is pushing for.


Scenario 1


Suppose the following game loop implementation (Forgive/Ignore any over-simplifications, subtle bugs, or other anomalies, this code has not been compiled):
Code: [Select]

int main()
{
    MSG msg;

    while( true )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
            {
                break;
            }

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        TickGame();
        DrawGame();
    }

    return 0;
}


The above code represents an over-simplified version of your typical "main game loop". This is the thing that feeds your entire game and provides it continuous processing. When the user minimizes the application, there is no way to suspend it: the while( true ) loop above will never end except when the application is terminated by the user.

Because this while loop never ends except under the previously noted circumstances, the game will always push to consume as much CPU as possible regardless of the state of the application, such as being minimized.

You may say, "Well let's just do this:"
Code: [Select]

int main()
{
    MSG msg;

    while( true )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
            {
                break;
            }

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        if( bMinimized )
        {
            Sleep( 1 );
        }
        else
        {
            TickGame();
            DrawGame();
        }
    }

    return 0;
}


I would then proceed to say you're evil. This does not solve the problem. Sleeping here doesn't help, since you have no idea how long the user will keep the application minimized. For the entire duration the application is minimized, there should be absolutely 0 iterations of this while loop. No code that we control in the application should be getting processed; any processing happening at application level is wasteful.


Scenario 2


The code for this can get fairly extensive, so I'll only cover the most fundamental and important parts. In one thread (Thread #1), you would have this continuously running:
Code: [Select]
MSG msg;
while( GetMessage( &msg, 0, 0, 0 ) > 0 )
{
    TranslateMessage( &msg );
    DispatchMessage( &msg );
}


Obviously the window would have been constructed in the same thread processing the above loop. As far as where the message procedure is, let's also assume it is in the same thread for the purposes of this example.

In a completely different thread (Thread #2) you would have the following loop running:
Code: [Select]

while( true )
{
    WaitForSingleObject( .... ); // This would suspend the thread if game processing is not currently needed.

    TickGame();
    DrawGame();
}


Again, I do apologize for the over-simplifications. Bear with me. This is mainly pseudo-code. The above code continues to process the game normally until a request from another thread comes in to tell it to PAUSE or RESUME (hence the WaitForSingleObject() call). If this thread is told to PAUSE, no game processing will occur until a matching RESUME request is given.

So let's tie all of this together. Typically I would use a sequence diagram to properly document the flow of all of this, so once again do bear with me while I try to use a bullet point list to describe the sequence of the application:

[list=1]
Title: sfml equivalent of GetMessage()
Post by: dabo on November 05, 2008, 11:00:52 am
Does the average user really care how this is handled? SDL uses the same approach as SFML, doesn't it?

Interesting read though.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 05, 2008, 04:34:41 pm
Quote from: "dabo"
Does the average user really care how this is handled? SDL uses the same approach as SFML or?

Interesting read though.


I guess that depends on what your definition of "average" is.  Still, the two options are different enough that it isn't just a matter of "caring" which one is used.  You are correct that most existing real-time applications, and middleware for creating those applications, promote a polling approach.  The arguments put forth by proponents of that design are usually "anything else is too slow" or "anything involving threads and concurrency is too complex and hard".  Well, I can tell you that the asynchronous design is certainly not "too slow".  As far as concurrency being "too hard"...anyone who wants to continue being a useful, competitive programmer needs to get over that right now.  Individual cores aren't getting faster, they're just adding more of them.  Concurrency is going to be the only way to make your programs scale with the hardware.

The discussion has obviously strayed a bit from the original feature request.  All I originally asked for was the addition of a function call that would allow me to choose between the two designs above.  I wasn't suggesting that sfml itself issue asynchronous events.  Then, after posting my request, I became aware of other problems that would prevent sfml from being used in most production environments anyways (most projects' coding standards disallow resource leaks).
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 05, 2008, 10:17:44 pm
I admit you couldn't have found a better example than the "inactive application" to demonstrate the drawbacks of polling. I'm still not convinced by this architecture on a global scale (but I'll probably experiment with it next time I write a small real-time application), but anyway what I'm seeing here is that a few experienced users are writing really big posts to convince me, and I appreciate that. So I'll add a task for a WaitMessage function to the roadmap, and try to find free time after my relocation to implement it ;)

Regarding the leak, it's much more than a design concept of having a window to get a rendering context. First, this rule has been confusing people for years; every graphics library inherits this behavior, and people always end up spamming the forums with "why does my initialization code fail??" posts. To me it's purely technical, and I'll never let my public interface suffer from a technical limitation. As a layer on top of raw 3D APIs, I can be smarter and do what is necessary to provide extra flexibility to users.
Anyway, it's not my main concern. My main concern is the tons of issues which arise from this limitation. One of them is managed languages crashing because the GC collects variables after the main thread has ended. One other is the rendering context being lost when I re-create a window, thus invalidating every graphical resource. etc...
Anyway I'm going to fix the leak. It was not my priority (I have many more important features to implement), but I can't ignore this discussion and it's now my top priority. Too bad for people waiting for render-to-image or rendering masks... ;)
Title: sfml equivalent of GetMessage()
Post by: Wizzard on November 05, 2008, 11:53:54 pm
Couldn't you create a sf::Exit() function that closes the graphics context and destructs everything related to it?
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 06, 2008, 02:00:08 am
Quote from: "Wizzard"
Couldn't you create a sf::Exit() function that closes the graphics context and destructs everything related to it?


Please don't do it that way.  Some sort of RAII/scoped initialization mechanism would be preferable if a global render context has to exist (easy exception safety, etc).

Quote from: "Laurent"

I admit you couldn't have found a better example than the "inactive application" to demonstrate the drawbacks of polling. I'm still not convinced by this architecture on a global scale (but I'll probably experiment with it next time I write a small real-time application), but anyway what I'm seeing here is that a few experienced users are writing really big posts to convince me, and I appreciate that. So I'll add a task for a WaitMessage function to the roadmap, and try to find free time after my relocation to implement it


I'm glad that reading someone else's claims on the internet isn't enough to convince you of something you've never tried.  I would never want to use something created by anyone that impressionable ;)

Quote from: "Laurent"

Regarding the leak, it's much more than a design concept of having a window to get a rendering context. First, this rule has been confusing people for years; every graphics library inherits this behavior, and people always end up spamming the forums with "why does my initialization code fail??" posts. To me it's purely technical, and I'll never let my public interface suffer from a technical limitation. As a layer on top of raw 3D APIs, I can be smarter and do what is necessary to provide extra flexibility to users.
Anyway, it's not my main concern. My main concern is the tons of issues which arise from this limitation. One of them is managed languages crashing because the GC collects variables after the main thread has ended. One other is the rendering context being lost when I re-create a window, thus invalidating every graphical resource. etc...
Anyway I'm going to fix the leak. It was not my priority (I have many more important features to implement), but I can't ignore this discussion and it's now my top priority. Too bad for people waiting for render-to-image or rendering masks...


I agree that compromising the public interface because of technical limitations should be avoided if possible.  I guess I just see the window prerequisite as more of a logical limitation than a technical one.  Can you elaborate on the issue with managed languages?  I try to avoid them like the plague, so that's a bit out of my area of expertise.  I'm familiar with the issue of losing all graphical resources, etc. when a render context is destroyed, and I guess I just don't see it as an issue.  If those resources are loaded via that context, it makes sense for them to go away when the context does (i.e. they are "local" to that context).  The solution is simply not to destroy the context until it doesn't make sense for your application to have it anymore.

Anyways, once WaitMessage() is implemented and there aren't any more resource leaks, I'll definitely look at substituting sfml for Win32 in my current design for instant cross-platform support.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 06, 2008, 08:21:22 am
Quote
Can you elaborate on the issue with managed languages?

Sure.

Managed languages have two main drawbacks: destruction of variables isn't deterministic (i.e. can happen at any time, in any order) and destruction of variables doesn't always happen in the main thread; it might even happen after the main thread has ended. Unfortunately, this stuff mixes very badly with windowing and rendering contexts, which have strict rules regarding multi-threading and order of destruction. I could of course enforce the scope of graphics variables (manually freeing them), but that's not how things should be done in a managed language.

So, the best solution I've found so far is to have a rendering context which can still be active in the GC thread, after the main one has terminated. I'm not saying this is the only solution, but it will be really tricky and take some time to find a more elegant one.

Quote
Anyways, once WaitMessage() is implemented and there aren't any more resource leaks, I'll definitely look at substituting sfml for Win32 in my current design for instant cross-platform support

I'm glad to see that ;)
Don't hesitate to give more feedback like this once you're using SFML.
Title: sfml equivalent of GetMessage()
Post by: bullno1 on November 06, 2008, 10:11:58 am
Quote
Too bad for people waiting for render-to-image or rendering masks... Wink

I'm one of them :( . Nvm, currently I only need render-to-texture for motion blur effect so I can live without it.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 06, 2008, 11:54:06 am
If your motion blur is on the whole screen, you can use the new sf::Image::CopyScreen function.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 06, 2008, 05:33:34 pm
Quote from: "Laurent"
Quote
Can you elaborate on the issue with managed languages?

Sure.

Managed languages have two main drawbacks: destruction of variables isn't deterministic (i.e. can happen at any time, in any order) and destruction of variables doesn't always happen in the main thread; it might even happen after the main thread has ended. Unfortunately, this stuff mixes very badly with windowing and rendering contexts, which have strict rules regarding multi-threading and order of destruction. I could of course enforce the scope of graphics variables (manually freeing them), but that's not how things should be done in a managed language.

So, the best solution I've found so far is to have a rendering context which can still be active in the GC thread, after the main one has terminated. I'm not saying this is the only solution, but it will be really tricky and take some time to find a more elegant one.


It sounds to me like this is yet another problem that could be easily solved by requiring rendering contexts to be created from, and associated with, a window.  I can see two solutions off the top of my head...the second one is my favorite of the two.  First, 'Window' could act as a factory for render contexts to itself, dispensing references to contexts that it owns.  That way, those contexts are destroyed when the window is destroyed, ensuring proper destruction order.  Second, 'RenderContext' would take a 'Window' as a construction parameter.  Since the architecture makes it obvious that a RenderContext requires a Window, it is completely valid to expect the user to destroy their RenderContext objects before destroying the associated Window.  It's kind of the same thing as expecting someone not to create dangling references.
Title: sfml equivalent of GetMessage()
Post by: Wizzard on November 06, 2008, 09:39:37 pm
I think Laurent's main issue is that he doesn't want the context tied to the window. This way, users can load resources before having a window as well as after a window has been destructed. More importantly, they don't have to re-initialize their resources after changing their video mode. So, your first solution is out of the question. I think your second solution is out of the question too, but maybe I misunderstood you. If the context has to be destroyed before the window, it's no good. If you changed it to have the window take a render context, it may be a viable solution. That way, contexts can linger regardless of a window's scope and new windows can use old rendering contexts. Perhaps a reference count could be used, that way no change to the public interface has to be done. A context will only be deleted when everything using it is destroyed.
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 06, 2008, 11:30:11 pm
Quote from: "Wizzard"
I think Laurent's main issue is that he doesn't want the context tied to the window. This way, users can load resources before having a window as well as after a window has been destructed. More importantly, they don't have to re-initialize their resources after changing their video mode. So, your first solution is out of the question. I think your second solution is out of the question too, but maybe I misunderstood you. If the context has to be destroyed before the window, it's no good. If you changed it to have the window take a render context, it may be a viable solution. That way, contexts can linger regardless of a window's scope and new windows can use old rendering contexts. Perhaps a reference count could be used, that way no change to the public interface has to be done. A context will only be deleted when everything using it is destroyed.


I'm not even sure how to respond to this.  Render contexts are inherently tied to windows.  There is no practical use case where you would have to load graphical resources before having a window or after destroying one (where would you display those resources?).  What's wrong or confusing about having to reload things that were essentially in a container you destroyed?  How does it make sense to have a Window construct from a RenderContext?  That implies that all windows have render contexts, which is completely wrong, conceptually.  I realize that these suggestions would require changing sfml's public interface, but that's ok, because I'm also asserting that the existing interface is wrong.  Besides failing to impose logical dependencies, the existing interface has already caused run-time bugs, as detailed earlier in this thread.  I'm not actually asking for these changes as a feature request (since they don't add or remove any useful functionality), just discussing how I would go about solving these problems.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 07, 2008, 08:26:04 am
Quote
What's wrong or confusing about having to reload things that were essentially in a container you destroyed?

Graphical resources are not owned by contexts or windows. You can create a sprite and display it in any existing window. Otherwise it would mean loading every resource once for every window, which is quite stupid actually ;)

Same remark for re-creating a window: it can happen when you just want to change the video mode, and requiring every existing resource to be reloaded in that case wouldn't make sense. I mean, for someone who's not aware of the technical details behind it, why would it make sense?

The strong coupling between windows / contexts and resources is a limitation of every 3D API, and thus of almost any derived graphics library. The point is that it doesn't make any sense to the vast majority of users, who aren't aware of all the technical details of the underlying 3D API. I don't want to make just another graphics library that is a big wrapper around a 3D API and inherits its limitations; actually, I shouldn't even take the 3D API into account when designing my library; it's just an implementation detail.
Title: sfml equivalent of GetMessage()
Post by: Imbue on November 07, 2008, 09:12:29 am
Quote from: "Laurent"
actually, I shouldn't even take the 3D API into account when designing my library; it's just an implementation detail.
Right on, Laurent! :D Seriously, don't ever sacrifice your ideals.

As for the "memory leak", fix it or don't. It doesn't really matter to any reasonable person, as long as you're aware of it and it doesn't grow. Wikipedia (http://en.wikipedia.org/wiki/Memory_leak) calls a memory leak "where the program fails to release memory when no longer needed." Since you're using this memory the entire time it's allocated, is it really even a "memory leak" at all? If you stopped using the memory at some point and didn't free it, then I'd agree that it's totally unacceptable, but that's not the case.

On the GetMessage()/WaitMessage()/Whatever() topic: If this was added I would use it when my game is paused/minimized. If you added it, I bet a lot of others would use it for the same reason. Other than that, I don't personally have any immediate use for it.

In any case, SFML is already an awesome library. Just avoid taking steps backwards (like forcing the user to reload resources when changing video mode) and it'll be an awesome library for some time to come.

Just my 2 cents.

Thanks!
Title: sfml equivalent of GetMessage()
Post by: Imbue on November 07, 2008, 09:25:08 am
Quote from: "kfriddile"
Quote from: "Laurent"

And you haven't experienced every single situation to say that 100% of leaks can be removed.


Experience doesn't enter into it, just logic.  Anything you create, you can destroy.
If your program is completely sandboxed, then yes. In that case logic says that you can free anything you allocate, no matter how convoluted your design becomes.

However, SFML isn't working in a sandboxed atmosphere. It's calling third party libraries. A third party library could be designed so that you could not free everything you allocate. I'm not saying that's the case here (in fact I very much doubt it is), but that is a possibility (I believe logic should lead you to agree).

Thanks! :D
Title: sfml equivalent of GetMessage()
Post by: kfriddile on November 07, 2008, 04:06:48 pm
Quote from: "Laurent"

Graphical resources are not owned by contexts or windows. You can create a sprite and display it in any existing window. Otherwise it would mean loading every resource once for every window, which is quite stupid actually ;)

Same remark for re-creating a window: it can happen when you just want to change the video mode, and requiring every existing resource to be reloaded in that case wouldn't make sense. I mean, for someone who's not aware of the technical details behind it, why would it make sense?

The strong coupling between windows / contexts and resources is a limitation of every 3D API, and thus of almost any derived graphics library. The point is that it doesn't make any sense to the vast majority of users, who aren't aware of all the technical details of the underlying 3D API. I don't want to make just another graphics library that is a big wrapper around a 3D API and inherits its limitations; actually, I shouldn't even take the 3D API into account when designing my library; it's just an implementation detail.


Ok, when we're saying "graphical resource" are we talking about something like simple image data, or something like a texture?  Obviously the first has no logical relation to a rendering context, but the second sure as hell does.  Yes, creating render contexts from a window would mean loading duplicate resources if two windows wanted to display the same texture, and I don't see a problem with that at all.  Why should two windows have the nasty hidden implicit coupling of sharing a context?  It's the same reason globals are bad.

As far as the context recreation not being obvious to someone when changing video modes, you're right.  Under your current design, they wouldn't expect that to happen.  If it was designed the way I've suggested, they WOULD expect it because the interface makes it obvious.  They still don't have to be aware of what's going on under the hood at all.

You should absolutely prevent technical limitations of an underlying API from being passed on to the user (or decide not to use that API in the analysis stage...which is why I'm not using sfml yet :) ).  BUT, there is a reason that every API has that dependency between windows and contexts.  It's not any kind of technical limitation, it's just a logical dependency that's enforced by the API's public interface.

Quote from: "Imbue"
Right on, Laurent! :D Seriously, don't ever sacrifice your ideals.

As for the "memory leak", fix it or don't. It doesn't really matter to any reasonable person, as long as you're aware of it and it doesn't grow. Wikipedia (http://en.wikipedia.org/wiki/Memory_leak) calls a memory leak "where the program fails to release memory when no longer needed." Since you're using this memory the entire time it's allocated, is it really even a "memory leak" at all? If you stopped using the memory at some point and didn't free it, then I'd agree that it's totally unacceptable, but that's not the case.

On the GetMessage()/WaitMessage()/Whatever() topic: If this was added I would use it when my game is paused/minimized. If you added it, I bet a lot of others would use it for the same reason. Other than that, I don't personally have any immediate use for it.

In any case, SFML is already an awesome library. Just avoid taking steps backwards (like forcing the user to reload resources when changing video mode) and it'll be an awesome library for some time to come.

Just my 2 cents.

Thanks!


Correct, don't sacrifice your ideals as long as you can still convince yourself they're valid in the face of scrutiny.  Keeping them when you can't is called being stubborn :P

I would say that memory leaks only make sense to unreasonable people.  Wikipedia's definition is pretty much ok, but your assessment of sfml's handling of the leak is wrong.  He does stop using the memory at some point without freeing it (the end of the program).

Quote from: "Imbue"


If your program is completely sandboxed, then yes. In that case logic says that you can free anything you allocate, no matter how convoluted your design becomes.

However, SFML isn't working in a sandboxed atmosphere. It's calling third party libraries. A third party library could be designed so that you could not free everything you allocate. I'm not saying that's the case here (in fact I very much doubt it is), but that is a possibility (I believe logic should lead you to agree).

Thanks! :D


Haha, you've just described why I decided not to use sfml.  It is a third party library that would force my program to have memory leaks.  The choice is still yours whether or not to use such a library, so control over your program's memory usage is still ultimately yours.
Title: sfml equivalent of GetMessage()
Post by: Imbue on November 07, 2008, 07:40:15 pm
kfriddile, how would the current "memory leak" affect an SFML user at all? The memory is freed as soon as the application quits. The memory is used until then. The memory usage doesn't grow over time. Why does it make any difference to you whatsoever?

I think it only has one disadvantage: it shows up as a false positive in tools looking for real memory errors (the kind that accumulate early and often).

Is there any other disadvantage at all?

Quote from: "kfriddile"
Why should two windows have the nasty hidden implicit coupling of sharing a context? It's the same reason globals are bad.
Do you think that MDI (http://en.wikipedia.org/wiki/Multiple_document_interface) is always bad then?
Title: sfml equivalent of GetMessage()
Post by: zarka on November 07, 2008, 09:32:36 pm
Quote from: "kfriddile"

Haha, you've just described why I decided not to use sfml.  It is a third party library that would force my program to have memory leaks.  The choice is still yours whether or not to use such a library, so control over your program's memory usage is still ultimately yours.


Or, since SFML uses the zlib license, take the parts of SFML you like. Or even better, fix the memory leak and use that ;) ... and earn brownie points by submitting a patch :) ... it's open source for a reason!
Title: sfml equivalent of GetMessage()
Post by: Imbue on November 07, 2008, 11:27:34 pm
I added a question on stack overflow about the memory leak (http://stackoverflow.com/questions/273209/are-memory-leaks-ever-ok) problem, for what it's worth. Looks like half the answerers didn't read the question, and there is a huge variety of opinion.
Title: sfml equivalent of GetMessage()
Post by: Laurent on November 08, 2008, 09:15:36 am
Quote
Yes, creating render contexts from a window would mean loading duplicate resources if two windows wanted to display the same texture, and I don't see a problem with that at all. Why should two windows have the nasty hidden implicit coupling of sharing a context? It's the same reason globals are bad.

As far as the context recreation not being obvious to someone when changing video modes, you're right. Under your current design, they wouldn't expect that to happen. If it was designed the way I've suggested, they WOULD expect it because the interface makes it obvious. They still don't have to be aware of what's going on under the hood at all.

Hum... and what about the practical / user point of view? Having a consistent design is one thing, but thinking about features that would be useful in real life also makes sense. I feel like your only concern is design, and that you don't actually care whether the library is easy to use or not. Why should I impose so many limitations when I can just remove them? One of the properties of OpenGL contexts is that they can be shared; I didn't invent this feature, you know ;)
Title: sfml equivalent of GetMessage()
Post by: Moomin on January 06, 2009, 12:54:14 pm
For the memory leak, why not use atexit? http://www.cplusplus.com/reference/clibrary/cstdlib/atexit.html

It will destroy the context in the right place.
Title: sfml equivalent of GetMessage()
Post by: Laurent on January 06, 2009, 01:08:33 pm
atexit is no better than using the destructor of a global object, i.e. we can't make sure it will happen at the right time.
Title: sfml equivalent of GetMessage()
Post by: Moomin on January 06, 2009, 01:48:08 pm
Ok, well, a less elegant solution would be to control the order of construction of global static objects using compiler directives:

Title: sfml equivalent of GetMessage()
Post by: Laurent on January 06, 2009, 02:28:06 pm
Still not a solution: init_priority is too limited, as it only controls the order of destruction of a finite set of objects inside a single translation unit.
Title: sfml equivalent of GetMessage()
Post by: Moomin on January 06, 2009, 02:43:43 pm
Um, sorry, I think you read it wrong. The link states that in standard C++, objects defined at namespace scope are initialized in the order of their definitions within a single translation unit, but using the init_priority attribute overcomes this.
Title: sfml equivalent of GetMessage()
Post by: Laurent on January 06, 2009, 04:00:18 pm
Oops, sorry :)

But that's still not powerful enough. The trickiest case is the .NET binding, which requires this variable to exist even after the main thread has ended.
Title: sfml equivalent of GetMessage()
Post by: klusark on January 07, 2009, 10:26:18 pm
Boost smart pointers? (http://www.boost.org/doc/libs/1_37_0/libs/smart_ptr/smart_ptr.htm) It would destroy the object when it is not needed anymore.