Ok, at this point I'm losing interest in the original feature request of this thread. I'm starting to feel a moral obligation to set you guys straight about resource leaks. This is the first time I have EVER heard ANYONE say a resource leak is "ok". This is software 101, people. Any competent C/C++ programmer will tell you that resource leaks are right up there with crashes on the bug scale.
[...] It is ALWAYS possible for a program to free memory it has allocated. [...]
But what for? If it were to be freed, it would be freed just before the program exits. Therefore it is useless.
It is in no way "useless". First of all, there is absolutely no guarantee that the operating system will clean up your mess (show me where the C++ standard says it will and I'll buy you each a car). If there were, you wouldn't see languages like Java implementing automatic garbage collection. Second, you may need that memory back BEFORE the program exits. This is especially true of a middleware product like SFML, where you can't anticipate the resource needs of the user.
If I'm not wrong, modern OSes free all the memory that a program uses after it exits, and even after it crashes. Memory leaks only matter if memory consumption grows over time. It may be a leak, but it's harmless if it's a one-time leak. Moreover, while RAM keeps getting cheaper and our lifetimes don't get any longer, why should you waste your precious life on such a thing?
IF an OS frees resources leaked by a program, it does so as a fail-safe against programmer error, not as something to be relied upon. What you just said is the same as a pilot saying he can jump out of his plane instead of landing it safely, because his parachute will keep him from falling to his death.
Ok, fine for the definition. The point is, are we talking about the definition of "waste", or about the impact of such a design on SFML? I don't think the main point of blocking wait versus polling is whether or not a function call is saved; it's much more a design issue. I think you'll agree with me, so let's not waste time on such considerations and focus on the design stuff ;)
It's both a design and an efficiency issue (the two are almost always intertwined). Simply put, my proposed design allows for a more efficient program because it doesn't needlessly gobble up CPU cycles.
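To illustrate the efficiency point with a generic sketch (not SFML code, and using C++11 primitives that postdate this thread): a blocking consumer sleeps inside the wait and costs nothing while idle, whereas a polling loop keeps re-checking an empty queue.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

std::queue<int>         events; // stand-in for the OS event queue
std::mutex              m;
std::condition_variable cv;

// Blocking consumer: the thread sleeps inside wait() and burns no CPU
// until the producer signals that work has arrived.
int waitForEvent()
{
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return !events.empty(); });
    int e = events.front();
    events.pop();
    return e;
}

// Producer side: push an event and wake exactly one sleeping consumer.
void pushEvent(int e)
{
    {
        std::lock_guard<std::mutex> lock(m);
        events.push(e);
    }
    cv.notify_one();
}
```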
You think not providing a blocking GetEvent is antiquated architecture? I don't, but indeed I could easily add it. The only reason I don't is exactly why you want it: the only use I can see for such a function would be to implement an asynchronous signal/slot (or callback) system, and that's just idiotic compared to the same system made synchronous. I agree it's less "elegant", but wasting a function call versus introducing multi-threading and all its potential issues... my choice is made. Imagine I just want to move a sprite in a key-press event: with your solution I would already have to care about concurrent accesses and use mutexes to protect every access to my sprite. Not to mention that you could end up drawing your sprite in a different position than the one it had when you computed collisions / AI / ... in its update() function. That's just crazy, especially for beginners who are not aware of all this stuff. And it costs much more than just polling.
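To make that concrete, here is roughly what the synchronous approach looks like with the actual SFML 1.x API; a minimal sketch, with the image loading for the sprite omitted:

```cpp
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(640, 480), "Polling example");
    sf::Sprite sprite; // assume an image has been loaded and bound

    while (window.IsOpened())
    {
        sf::Event event;
        while (window.GetEvent(event)) // non-blocking poll
        {
            if (event.Type == sf::Event::Closed)
                window.Close();

            // Moving the sprite right here is safe: events, updates and
            // drawing all happen on the same thread, so no mutex is needed
            if (event.Type == sf::Event::KeyPressed &&
                event.Key.Code == sf::Key::Right)
                sprite.Move(5.f, 0.f);
        }

        window.Clear();
        window.Draw(sprite);
        window.Display();
    }
}
```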
Yes, I do think it's antiquated. So does Microsoft, so does Intel, etc. "Elegant" design and robustness/efficiency go hand in hand. For example, coupling with the rest of the app is greatly reduced when the Window object is capable of waiting for events itself without dictating the rest of the program's architecture. Communication purely through callbacks is about as low as coupling gets. Reduced coupling means more flexibility, reusability, and testability. Anyway, about your sprite example... there are no concurrency issues involved. It's just a simple instance of the producer/consumer idiom. There is one producer (the thread pumping the messages) and 0-N consumers. It's not exactly how I would do it, but the simplest example would be to store key-pressed events in a bool array as 'true' and key-releases as 'false'. That array would have only one writer and 0-N readers (a sprite, for example), so there are no concurrency issues whatsoever.
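A minimal sketch of that bool-array scheme (illustrative only; std::atomic, which postdates this discussion, is used to make the one-writer/many-readers contract explicit):

```cpp
#include <atomic>
#include <cstddef>

const std::size_t KeyCount = 256;

// One writer (the event-pumping thread) flips these flags; any number of
// readers (sprites, game logic, ...) may sample them at any time.
std::atomic<bool> keyDown[KeyCount];

// Called only from the single event-pumping thread
void onKeyPressed(std::size_t code)  { keyDown[code] = true;  }
void onKeyReleased(std::size_t code) { keyDown[code] = false; }

// Called from any consumer, e.g. a sprite's update()
bool isKeyDown(std::size_t code)     { return keyDown[code].load(); }
```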
That's wrong; YOUR definition is short-sighted. Mine is flexible and adapted to real situations, while yours is a "perfect world" definition. But we're not in a perfect world. Would you sacrifice an important feature of your library for the sake of "perfection"?
Of course, if you can tell me how to free this memory while keeping the feature, that would be great ;)
Definitions are, by definition (har har), inflexible. If we were allowed to bend definitions at will to make things more convenient for us, they would be useless as identifiers for ideas (which is what they are supposed to be). If I were writing a library to be used as black-box middleware by trusting users, then yes, I would ensure to the best of my ability that no bugs, such as resource leaks, existed. I think my coworker offered some suggestions for plugging the leak in your email conversation with him, but, at the very worst, could it not be plugged inside an atexit callback? I shudder to suggest such a hack, but it's far better than the leak.
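For what it's worth, that atexit hack would look something like this; a sketch in which g_sharedBuffer is a hypothetical stand-in for whatever resource is actually leaked:

```cpp
#include <cstdlib>

// Stand-in for the leaked resource; the name is hypothetical
static char* g_sharedBuffer = NULL;

static void cleanup()
{
    std::free(g_sharedBuffer); // runs at normal program termination
    g_sharedBuffer = NULL;
}

void initSharedBuffer()
{
    g_sharedBuffer = static_cast<char*>(std::malloc(1024));
    std::atexit(cleanup);      // register the release exactly once
}
```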
It's too bad we're fighting rather than trying to find solutions. It might not be obvious to you, but clean and well-designed code is one of my main goals too; I'm not the kind of programmer who just writes "code that works". So, if you're ok with it, I'd be really glad to talk more about the benefits and drawbacks of multithreaded event handling, and to see what does or doesn't have to be added to SFML.
I don't feel like we're fighting, since nothing off-topic or personal has been said. I'm just not the type to acquiesce when I know I'm right. I get the impression that you view "elegant", "perfect" designs and code as things done for fun because one enjoys programming, and that compromises against those ideals are ok when working on an assignment or some other, more practical, software project. I (and many others) think that mentality is exactly the opposite of the truth. When working on some prototypical piece of code, it's acceptable to hack and kludge a little bit, because the purpose of such code is just to prove that the problem is solvable. It may not be. Once the problem is known to be solvable, and it comes time to solve it in a production environment, it's time to find the OPTIMAL solution to the problem. By "optimal", I mean the best design, taking into account things like coupling with other components, flexibility, portability, future viability, and performance.