
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - kfriddile

1
Feature requests / sfml equivalent of GetMessage()
« on: November 07, 2008, 04:06:48 pm »
Quote from: "Laurent"

Graphical resources are not owned by contexts or windows. You can create a sprite and display it in any existing window. Otherwise it would mean loading every resource once for every window, which is quite stupid actually ;)

Same remark for re-creating a window: it can happen when you just want to change the video mode; requiring the user to reload every existing resource in this case wouldn't make sense. I mean, for someone who's not aware of the technical details behind it, why would it make sense?

The strong coupling between windows / contexts and resources is a limitation of every 3D API, and thus of almost every derived graphics library. And the point is that it doesn't make any sense to the average user, who isn't aware of all the technical details involved in the underlying 3D API. I don't want to just make another graphics library which is a big wrapper around a 3D API and which inherits its limitations; actually I shouldn't even take the 3D API into account when designing my library, it's just an implementation detail.


Ok, when we're saying "graphical resource" are we talking about something like simple image data, or something like a texture?  Obviously the first has no logical relation to a rendering context, but the second sure as hell does.  Yes, creating render contexts from a window would mean loading duplicate resources if two windows wanted to display the same texture, and I don't see a problem with that at all.  Why should two windows have the nasty hidden implicit coupling of sharing a context?  It's the same reason globals are bad.

As far as the context recreation not being obvious to someone when changing video modes, you're right.  Under your current design, they wouldn't expect that to happen.  If it was designed the way I've suggested, they WOULD expect it because the interface makes it obvious.  They still don't have to be aware of what's going on under the hood at all.

You should absolutely prevent technical limitations of an underlying API from being passed on to the user (or decide not to use that API in the analysis stage...which is why I'm not using sfml yet :) ).  BUT, there is a reason that every API has that dependency between windows and contexts.  It's not any kind of technical limitation, it's just a logical dependency that's enforced by the API's public interface.

Quote from: "Imbue"
Right on, Laurent! :D Seriously, don't ever sacrifice your ideals.

As for the "memory leak", fix it or don't. It doesn't really matter to any reasonable person, as long as you're aware of it and it doesn't grow. Wikipedia calls a memory leak "where the program fails to release memory when no longer needed." Since you're using this memory the entire time it's allocated, is it really even a "memory leak" at all? If you stopped using the memory at some point and didn't free it, then I'd agree that it's totally unacceptable, but that's not the case.

On the GetMessage()/WaitMessage()/Whatever() topic: If this was added I would use it when my game is paused/minimized. If you added it, I bet a lot of others would use it for the same reason. Other than that, I don't personally have any immediate use for it.

In any case, SFML is already an awesome library. Just avoid taking steps backwards (like forcing the user to reload resources when changing video mode) and it'll be an awesome library for some time to come.

Just my 2 cents.

Thanks!


Correct: don't sacrifice your ideals, as long as you can still convince yourself they're valid in the face of scrutiny.  Failing to do the latter is called being stubborn :P

I would say that memory leaks only seem acceptable to unreasonable people.  Wikipedia's definition is pretty much fine, but your assessment of sfml's handling of the leak is wrong.  He does stop using the memory at some point without freeing it: the end of the program.

Quote from: "Imbue"


If your program is completely sandboxed, then yes. In that case logic says that you can free anything you allocate, no matter how convoluted your design becomes.

However, SFML isn't working in a sandboxed environment. It's calling third-party libraries. A third-party library could be designed so that you could not free everything you allocate. I'm not saying that's the case here (in fact I very much doubt it is), but it is a possibility (I believe logic should lead you to agree).

Thanks! :D


Haha, you've just described why I decided not to use sfml.  It is a third party library that would force my program to have memory leaks.  The choice is still yours whether or not to use such a library, so control over your program's memory usage is still ultimately yours.

2
Feature requests / sfml equivalent of GetMessage()
« on: November 06, 2008, 11:30:11 pm »
Quote from: "Wizzard"
I think Laurent's main issue is that he doesn't want the context tied to the window. This way, users can load resources before having a window as well as after a window has been destructed. More importantly, they don't have to re-initialize their resources after changing their video mode. So, your first solution is out of the question. I think your second solution is out of the question too, but maybe I misunderstood you. If the context has to be destroyed before the window, it's no good. If you changed it to have the window take a render context, it may be a viable solution. That way, contexts can linger regardless of a window's scope and new windows can use old rendering contexts. Perhaps a reference count could be used, that way no change to the public interface has to be done. A context will only be deleted when everything using it is destroyed.


I'm not even sure how to respond to this.  Render contexts are inherently tied to windows.  There is no practical use-case where you would have to load graphical resources before having, or after destroying, a window (where would you display these resources?).  What's wrong or confusing about having to reload things that were essentially in a container you destroyed?  How does it make sense to have a Window constructed from a RenderContext?  This implies that all windows have render contexts, which is completely wrong, conceptually.  I realize that these suggestions would require changing sfml's public interface, but that's ok, because I'm also asserting that the existing interface is wrong.  Besides failing to impose logical dependencies, the existing interface has already caused run-time bugs, as detailed earlier in this thread.  I'm not actually asking for these changes as a feature request (since they don't add or remove any useful functionality), just discussing how I would go about solving these problems.

3
Feature requests / sfml equivalent of GetMessage()
« on: November 06, 2008, 05:33:34 pm »
Quote from: "Laurent"
Quote
Can you elaborate on the issue with managed languages?

Sure.

Managed languages have two main drawbacks: destruction of variables isn't deterministic (i.e. can happen at any time, in any order) and destruction of variables doesn't always happen in the main thread; it might even happen after the main thread has ended. Unfortunately, this stuff mixes very badly with windowing and rendering contexts, which have strict rules regarding multi-threading and order of destruction. I could of course enforce the scope of graphics variables (manually freeing them), but that's not how things should be done in a managed language.

So, the best solution I've found so far is to have a rendering context which can still be active in the GC thread, after the main one has terminated. I'm not saying this is the only solution, but it will be really tricky and time-consuming to find a more elegant one.


It sounds to me like this is yet another problem that could be easily solved by requiring rendering contexts to be created from, and associated with, a window.  I can see two solutions off the top of my head...the second one is my favorite of the two.  First, 'Window' could act as a factory for render contexts to itself, dispensing references to contexts that it owns.  That way, those contexts are destroyed when the window is destroyed, ensuring proper destruction order.  Second, 'RenderContext' would take a 'Window' as a construction parameter.  Since the architecture makes it obvious that a RenderContext requires a Window, it is completely valid to expect the user to destroy their RenderContext objects before destroying the associated Window.  It's kind of the same thing as expecting someone not to create dangling references.

4
Feature requests / sfml equivalent of GetMessage()
« on: November 06, 2008, 02:00:08 am »
Quote from: "Wizzard"
Couldn't you create a sf::Exit() function that closes the graphics context and destructs everything related to it?


Please don't do it that way.  Some sort of RAII/scoped initialization mechanism would be preferable if a global render context has to exist (easy exception safety, etc).

Quote from: "Laurent"

I admit you couldn't find a better example than the "inactive application" to demonstrate the drawbacks of polling. I'm still not convinced by this architecture on a global scale (but I'll probably experiment with it next time I write a small real-time application), but anyway what I'm seeing here is that a few experienced users are writing really big posts to convince me, and I appreciate that. So I'll add a task for a WaitMessage function in the roadmap, and try to find free time after my relocation to implement it.


I'm glad that reading someone else's claims on the internet isn't enough to convince you of something you've never tried.  I would never want to use something created by anyone that impressionable ;)

Quote from: "Laurent"

Regarding the leak, it's much more than a design concept of having a window to get a rendering context. First, this rule has been confusing people for years; every graphics library inherits this behavior and people always end up spamming the forums with "why do my initialization code fail??" posts. To me it's purely technical, and I'll never let my public interface suffer from any technical limitation. As a layer on top of raw 3D APIs, I can be smarter and do what is necessary to provide extra flexibility to users.
Anyway, it's not my main concern. My main concern is the tons of issues which arise from this limitation. One of them is managed languages crashing because the GC collects variables after the main thread has ended. One other is the rendering context being lost when I re-create a window, thus invalidating every graphical resource. etc...
Anyway I'm going to fix the leak. It was not my priority (I have many more important features to implement), but I can't ignore this discussion and it's now my top priority. Too bad for people waiting for render-to-image or rendering masks...


I agree that compromising the public interface because of technical limitations should be avoided if possible.  I guess I just see the window prerequisite as more of a logical limitation than a technical one.  Can you elaborate on the issue with managed languages?  I try to avoid them like the plague, so that's a bit out of my area of expertise.  I'm familiar with the issue of losing all graphical resources, etc when a render context is destroyed, and I guess I just don't see it as an issue.  If those resources are loaded via that context, it makes sense for them to go away when the context does (ie: they are "local" to that context).  The solution is to simply not destroy the context until it doesn't make sense for your application to have it anymore.

Anyways, once WaitMessage() is implemented and there aren't any more resource leaks, I'll definitely look at substituting sfml for Win32 in my current design for instant cross-platform support.

5
Feature requests / sfml equivalent of GetMessage()
« on: November 05, 2008, 04:34:41 pm »
Quote from: "dabo"
Does the average user really care how this is handled? SDL uses the same approach as SFML or?

Interesting read though.


I guess that depends on what your definition of "average" is.  Still, the two options are different enough that it isn't just a matter of "caring" which one is used.  You are correct that most existing real-time applications, and middleware for creating those applications, promote a polling approach.  The arguments put forth by proponents of that design are usually "anything else is too slow" or "anything involving threads and concurrency is too complex and hard".  Well, I can tell you that the asynchronous design is certainly not "too slow".  As for concurrency being "too hard"...anyone who wants to remain a useful, competitive programmer needs to get over that right now.  Individual cores aren't getting faster; manufacturers are just adding more of them.  Concurrency is going to be the only way to make your programs scale with the hardware.

The discussion has obviously strayed a bit from the original feature request.  All I originally asked for was the addition of a function call that would allow me to choose between the two designs above.  I wasn't suggesting that sfml itself issue asynchronous events.  Then, after posting my request, I became aware of other problems that would prevent sfml from being used in most production environments anyways (most projects' coding standards disallow resource leaks).

6
Feature requests / sfml equivalent of GetMessage()
« on: November 04, 2008, 08:52:11 pm »
Quote from: "Laurent"

I think we agree about the memory leak. It's bad, and any good programmer should do his best to get rid of such issues. I did, but couldn't find a 100% safe way to remove it, so I kept it, because the leak was ridiculously small compared to the feature it made possible. Now, unless you tell me that it's cleaner to remove it and break SFML's behaviour, I think we can focus on the solution itself ;)


I'm not at all suggesting that sfml should lose any functionality.  You can fix the leak without losing anything useful.  In the email conversation with my coworker you offered a few use cases to justify the leak:

- requesting the multisampling extension to OpenGL before creating the first window (and the first OpenGL context)
- loading a texture, a shader or whatever graphical resource before having any window
- having all the OpenGL resources and states not destroyed between destruction and re-creation of a window

Now, you have discovered that implementing something to support these exactly as written requires creating another bug in the form of a memory leak.  Situations like this come up a lot in the design stage, and are a strong indicator of a design deficiency.  You know it's possible to do what you want; now let's find an acceptable, optimal solution.

The common thread between the three cases above is that they are all special cases of a more general use-case.  The specialization is that they all want to do these things before a "Window" exists.  The fact that this has to be explicitly stated indicates an inherent dependency between a render context and a window.  In fact, I believe you hacked around this by creating a dummy "window" just to create a "global" render context.  Globals are also a strong clue that a better design probably exists.

So, obviously a window is a prerequisite to a render context.  This isn't a limitation; it's something that makes perfect conceptual sense (which means that circumventing it is conceptually wrong and confusing to the logical user).  Lose the global, enforce the prerequisite, and users are still able to do everything they could before, just through a more logical path instead of magically pulling information from the global ether.  I would recommend having "Window" be a construction parameter of "RenderContext" to decouple the two concepts somewhat and allow for multiple contexts for the same window.  Your leak is gone.

Quote from: "Laurent"

And you haven't experienced every single situation to say that 100% of leaks can be removed.


Experience doesn't enter into it, just logic.  Anything you create, you can destroy.

Quote from: "Laurent"

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (do you, actually?). I've been making games (including commercial ones) and reading game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and that must keep a consistent state across its game loop, including update, physics, AI and drawing. Or just prove to me that it's possible.


I'm not saying that there aren't things that need to happen in a particular order, but the smaller these sections are, the better.  It's still completely possible to ensure proper ordering if one so desires.  It's just a more-flexible, less-invasive architecture.

Quote from: "Laurent"

Regarding the "decoupling" stuff, once you've wrapped event handling in a callback / signal system it's all the same (and don't tell me about the CPU wasted in a function call, that's ridiculous), it's just a matter of being synchronous or not.


Um...synchronous and asynchronous aren't the same at all with regard to coupling issues.  One requires client code to explicitly check for events and one doesn't.  That's also where the waste happens, because 99.999% of the time there isn't going to be an event.

7
Feature requests / sfml equivalent of GetMessage()
« on: November 03, 2008, 11:03:43 pm »
Quote from: "Ceylo"

Yes there is, for any modern operating system. And it does not depend on the programming language.


No there isn't, and yes it does.

Quote from: "Ceylo"

Except if it was to allow lazy programmers not to take care of memory handling.


Yes, it's a crutch for people who either don't grasp the importance of proper memory management, or who aren't skilled enough to deal with it, and therefore a testament to the importance of proper memory management as well.  C++ doesn't provide such a crutch, so it's YOUR responsibility.  Just because the particular OS your code was compiled and run on THIS TIME is willing to clean up your mess (yes, it is a mess), doesn't mean that will always be the case.  If you are writing C++ code that you wish to be portable or reusable both now and in the future, then you should strive to adhere as closely to the standard as possible (if anyone here doesn't have a copy of the standard, I will be happy to provide one in PDF form).  Shirking memory management duties automatically makes your code non-portable, since, again, the standard makes NO promise that the OS will clean up after you.  Why limit the portability of your code when it's so easy to manage memory properly?

Quote from: "Ceylo"

But here there is no plane crashing :].


How do you know?  How does the pilot know if his plane will land harmlessly in a lake, or right on your head?

Quote from: "Ceylo"

You have your way of seeing things, which does not always mean it is the right one.


It's always good to be skeptical, but it's far worse to be stubborn in the face of obvious truth.  I've tried to provide examples and logical explanations for everything I've said, but I haven't heard any in return that were able to stand up under scrutiny.  I'm sorry if I sound angry, but the neglect of such fundamental and widely-acknowledged best practices is very alarming to me.  You might as well be trying to convince me that up is down.

Quote from: "Laurent"

I'm not sure it would work in every situation (like, as I said, in the C# binding where the main thread terminates before resources are freed), but I've found some good articles about global destruction in "Modern C++ design", I'll take a look at it.


That is an excellent book.  I would recommend "C++ Template Metaprogramming" if you plan on actually using metaprogramming.  It is a very practical introduction to the Boost MPL library...something you don't want to write metaprograms without.  I'd also like to point out that you are experiencing another common symptom of using anything globally.  There's a reason that experienced developers will tell you that globals are bad.  List of books every C++ programmer should own:

Modern C++ Design
Exceptional C++
More Exceptional C++
C++ Template Metaprogramming
Design Patterns (aka, the Gang of Four book...just ignore the Singleton pattern)
Effective C++
More Effective C++
Effective STL
Beyond the C++ Standard Library: An Introduction to Boost
Refactoring: Improving the Design of Existing Code
Intel Threading Building Blocks (this library is going to be a lifesaver in the near future)
there are more...

Quote from: "Laurent"

And then what ? Polling the array of booleans ? ... I really don't get it.
My point is that operations in a real-time program (not to say a game) have to be sequential (you can't move an entity while computing its collisions or drawing it, it has to be done at a specific place in the game loop). Decoupling event handling from the rest of the application just breaks this rule. I'm really curious to see how you would write a robust game architecture with multithreaded event handling and no polling.


I know you don't get it, and I think it's my fault.  It's difficult to convey certain things via typing.  Yes, querying an array of booleans would be one possibility, and it's very similar to polling (but not the same by any means), but now I have that CHOICE when designing my architecture.  I also said it probably wouldn't be my solution of choice.  The point is that the input source is no longer dictating my architecture.  I could do all sorts of things instead, such as synchronizing access to the sprite's position to allow me to work with copies in Sprite::Update().  I know you're probably already thinking that locking to do such synchronization would be slow, but I say you are being prematurely pessimistic: locking with a tbb::spin_mutex, for example, would be negligible.  The possibilities don't stop there.  Say I'm working with some entity that has just a single attribute that needs to be synchronized.  Maybe it could be stored in an atomic variable abstraction, making locks unnecessary?  It's already becoming clear that I have many more options when approaching a problem than I would with your architecture.  Note that I still have the option to do something very similar to what you force people to do.

Quote from: "Laurent"

My point is that operations in a real-time program (not to say a game) have to be sequential


This isn't entirely true either.  There are vast amounts of parallelism to be had in real-time applications, including games, but that discussion would be very lengthy and I'd rather not type it.  I think it can be left as an "exercise for the reader" :)  The main point I wanted to make was about architecture, and how reduced coupling introduces choice and flexibility.

8
Feature requests / sfml equivalent of GetMessage()
« on: November 03, 2008, 05:52:09 pm »
Ok, at this point I'm losing interest in the original feature request of this thread.  I'm starting to feel a moral obligation to set you guys straight about resource leaks.  This is the first time I have EVER heard ANYONE say a resource leak is "ok".  This is software 101 people.  Any competent C/C++ programmer will tell you that resource leaks are right up there with crashes on the bug scale.

Quote from: "Ceylo"

Quote from: "kfriddile"
[...]  It is ALWAYS possible for a program to free memory it has allocated.  [...]

But what for? If it was to be freed, it would be just before the program exits. Therefore it is useless.


It is in no way "useless".  First of all, there is absolutely no guarantee that the operating system will clean up your mess (show me where the C++ standard says it will and I'll buy each of you a car).  If there were, you wouldn't see languages like Java implementing automatic garbage collection.  Second, you may need that memory back BEFORE the program exits.  This is especially true of a middleware product like sfml, where you can't anticipate the resource needs of the user.

Quote from: "Ceylo"

If I'm not wrong, modern OSes can free all the memory that a program uses after it exits, and even after it crashes. Memory leaks only matter if they grow memory consumption over time. It may be a leak, but it's harmless if it's just a one-time leak. Moreover, while RAM is getting cheaper and people's average age decreases, why should you waste your precious life on such a thing?


IF an OS frees resources leaked by a program, it does so as a fail-safe in case of programmer error, not something to be relied upon.  What you just said is the same thing as a pilot saying he can just jump out of his plane instead of landing it safely because his parachute will keep him from falling to his death.

Quote from: "Laurent"

Ok, fine for the definition. The point is, are we talking about the definition of "waste" or about the impact of such a design on SFML? I don't think the main point of blocking wait versus polling is whether a function call is saved; it's much more a design issue. I think you'll agree with me, so let's not waste time with such considerations and focus on the design stuff ;)


It's both a design and efficiency issue (they are almost always intertwined).  Simply put, my design proposition allows for a more efficient program because it doesn't needlessly gobble up CPU cycles.

Quote from: "Laurent"

You think not providing a blocking GetEvent is antiquated architecture? I don't, but indeed I could easily add it. The only reason why I don't is exactly why you want it: the only use I can see for such a function would be to implement an asynchronous signal / slot (or callback) system, and that's just idiotic compared to the same system made synchronous. I agree it's less "elegant", but wasting a function call versus introducing multi-threading and all its potential issues... my choice is made. Imagine I just want to move a sprite in a keypress event; with your solution I would already have to care about concurrent accesses and go with mutexes to protect all accesses to my sprite. Not to mention the fact that you could end up drawing your sprite in a different position than the one it had when you computed collisions / AI / ... in its update() function. That's just crazy, especially for beginners who are not aware of all this stuff. And that costs much more than just polling.


Yes, I do think it's antiquated.  So does Microsoft, so does Intel, etc.  "Elegant" design and robustness/efficiency go hand in hand.  For example, coupling with the rest of the app is greatly reduced when the "Window" object is capable of waiting for events itself without dictating the rest of the program's architecture.  Communication purely through callbacks is about as low as coupling gets.  Reduced coupling means more flexibility, reusability, and testability.  Anyways, about your sprite example...there are no concurrency issues involved.  It's just a simple example of the consumer/producer idiom.  There is one producer (the thread pumping the messages) and 0-N consumers.  It's not exactly how I would do it, but the simplest example would be to store key-pressed events in a bool array as 'true' and key-releases as 'false'.  That array would obviously have only one writer and 0-N readers (a sprite for example) so there are no concurrency issues whatsoever.

Quote from: "Laurent"

That's wrong, YOUR definition is narrow-sighted. Mine is flexible and adapted to real situations, while yours is a "perfect world" definition. But we're not in a perfect world. Would you sacrifice an important feature of your library for the sake of "perfection"?
Of course if you can tell me how to free this memory while keeping the feature, it would be great ;)


Definitions are, by definition (har har), inflexible.  If we were allowed to bend definitions at will to make things more convenient for us, they would be useless as identifiers for ideas (which is what they are supposed to be).  If I were writing a library to be used as black-box middleware by trusting users, then yes, I would ensure to the best of my abilities that no bugs, such as resource leaks, exist.  I think my coworker offered some suggestions to plug the leak in your email conversation with him, but, at the very worst, couldn't it be plugged inside an atexit callback?  I shudder to suggest such a hack, but it's far better than the leak.

Quote from: "Laurent"

It's too bad we're fighting rather than trying to find solutions. It might not be obvious to you, but clean and well-designed code is one of my main goals too, I'm not that kind of programmer who just writes "code that works". So if you're ok I'd be really glad to talk more about the benefits and drawbacks of multithreaded event handling, and see what has or not to be added to SFML.


I don't feel like we're fighting, since nothing off-topic or personal has been said.  I'm just not the type to acquiesce when I know I'm right.  I get the impression that you view "elegant", "perfect" designs and code as things that are done for fun because one enjoys programming, and that compromises against those ideals are ok when working on an assignment or some other, more practical, software project.  I (and many others) think that mentality is exactly the opposite of the truth.  When working on some prototypical piece of code, it's acceptable to hack and kludge a little bit, because the purpose of such code is just to prove that the problem is solvable.  It may not be.  Once the problem is known to be solvable, and it comes time to solve it in a production environment, it's time to find the OPTIMAL solution to the problem.  By "optimal", I mean the best design, taking into account things like coupling with other components, flexibility, portability, future viability, and performance.

9
Feature requests / sfml equivalent of GetMessage()
« on: November 03, 2008, 04:52:41 am »
Quote from: "Laurent"

1/ I'm making a difference between using CPU and wasting CPU. Of course a function call uses CPU, but for sure doesn't waste it.


Using something when it doesn't need to be used is the definition of the word "waste".

Quote from: "Laurent"

2/ I'm talking about dependencies, not about performances. Libraries providing clean signal / slot features are just too big; the other solution is to wait for their integration into the C++ standard but it's not for now.


I never asked you to implement any kind of signal/slot or callback system.  I just asked for a blocking GetMessage() type call so I could implement my own.

Quote from: "Laurent"

3/ SFML must be 100% bindable to C. Signals / slots are not.


See response to 2 above.

Quote from: "Laurent"

I think SFML is appreciated because I care about beginners. Trust me. And believe me, thread safety is not beginner-friendly at all.


I think if you cared about beginners, you wouldn't hold them back with antiquated architecture.

Quote from: "Laurent"

5/ Regarding the memory leaks... it's a long story ;)


Yes, it was a long read.

Quote from: "Ceylo"

Is SFML supposed to be used by beginners only? (I admit this would somewhat disappoint me)
I think the point is : do you prefer to focus on the library popularity or the library quality ?


Popularity and quality should be the same thing (unless one desires to only be popular among users who don't know any better...I wouldn't if it was my library).

Quote from: "Ceylo"

I've not thought of this for a long time, but while it would be easy for me to support blocking calls for events, that would also block display updates, because I manually have to tell the OpenGL context when to swap the back and front buffers (which is done from the polling loop). Is this what you wish?


If window events are that coupled to rendering (who says your current render target has to be the back buffer?), then your architecture is flawed.  If such a blocking call existed, then it should be possible to call it on a separate thread from the one swapping the frame buffers.  There is no "polling loop" in that scenario.  "Polling" is akin to constantly asking the question "did anything happen yet?", while a rendering loop is constantly stating "draw this".

Quote from: "Laurent"

Ok, more details : the leak is indeed a small and controlled one, and its purpose is to enable a very important feature of SFML. Some people would even say it's not a leak; a leak is something which is not controlled and makes the memory consumption grow up and grow up. Actually, some implementations of STL or popular libraries can't free all the memory they use at program exit, and this is perfectly alright.


Your definition of "leak" is short-sighted.  A one-time allocation that is never freed is still obviously a leak, even though it won't cause memory consumption to grow over time.  Leaks don't even have to refer to memory, since you can leak all kinds of other resources (device contexts, etc.).  Any STL implementation that leaks memory either isn't widely used or has been fixed to not leak.  It is ALWAYS possible for a program to free memory it has allocated.  Statements like saying memory leaks are "perfectly alright" are why I've decided not to use anything you've written.  There is absolutely no excuse for allowing any kind of resource leak.  I don't care if leaking one byte will allow your software to wash the dishes while creating world peace at the same time.  The idea of someone knowing about something as heinous as a leak and then rationalizing it instead of fixing it blows my mind.

10
Feature requests / sfml equivalent of GetMessage()
« on: November 01, 2008, 12:09:13 am »
Quote from: "Laurent"

calling a function that does nothing if no event happened, doesn't waste CPU at all.


This is simply false.  Calling a non-blocking function to check for events ( such as PeekMessage() ) in a loop uses 100% of the CPU core that thread is running on.  I'm sure laptop users running on battery would consider that "wasteful".  Even with a Sleep(0) to relinquish the rest of your time slice, you're still making at least one function call which, by definition, uses CPU.

Quote from: "Laurent"

The problem is that this kind of stuff is not yet part of the C++ standard


What does that have to do with anything?

Quote from: "Laurent"

libraries providing it are too heavy to be used by SFML


I would love to see some profiling or benchmark data to support this claim.

Quote from: "Laurent"

Moreover, it's confusing for beginners


I'm sure C++ was confusing for them at first too.

Quote from: "Laurent"

doesn't mix well with C, which is required to write bindings.


If you're talking about the name-mangling differences between C and C++, that is solved by simply declaring your exported C++ functions extern "C".  If you're talking about something else I'm not aware of, then you may have a point.  I will admit that bindings for other languages aren't important to my particular use case.
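For reference, this is the pattern I mean (the wrapper name and implementation here are made up, not an actual sfml binding):

```cpp
#include <cassert>

// C++ implementation detail, free to use classes, templates, etc.
namespace detail {
    int CreateWindowImpl(int width, int height) {
        return (width > 0 && height > 0) ? 1 : 0;  // stand-in for real work
    }
}

// extern "C" suppresses C++ name mangling, so this symbol is linkable
// from C (and from other languages' FFIs that speak the C ABI).
extern "C" int sfml_create_window(int width, int height) {
    return detail::CreateWindowImpl(width, height);
}
```

A C header would then just declare `int sfml_create_window(int, int);` and link against the C++ library.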

Quote from: "Laurent"

calling an event handler from a separate thread is too much dangerous to be the default behaviour (requiring to be thread safe for a basic application is too much to ask, especially for beginners who might not even know what threads are).


I couldn't disagree more.  EVERYONE needs to be conscious of thread safety issues, because single-threaded applications won't be an option for much longer, especially for resource-intensive real-time applications like your library seems to be designed for.  Individual cores aren't getting faster at the rate that they used to (in some cases they are even getting slower).  Instead, additional cores are being added.  If your application's performance can't scale with the addition of more cores via multithreading, then your application won't be viable in the very near future.  Furthermore, if you, as a library developer, don't realize this and design your library to be safely usable in such an environment, then nobody will be able to write viable applications with your library.

Anyway, it's also come to my attention that sfml contains intentional memory leaks, so it can't be used in any production-quality code.  It seems odd to me that you would point to the C++ standard above, while at the same time relying on the operating system to clean up your memory leaks...behavior which is obviously not guaranteed by the standard.

11
Feature requests / sfml equivalent of GetMessage()
« on: October 31, 2008, 03:33:05 pm »
Quote from: "Laurent"

- It's almost useless in this context (SFML graphics is meant for real time)


Real-time in no way requires polling (wasting CPU by continually asking if something has happened instead of just being told when something happened).  For example, the windowing abstraction I wrote that I mentioned earlier asynchronously calls subscribed methods when an event has occurred.  I've had no trouble writing a real-time OpenGL app using this solution.  Those particular apps may use a lot of CPU, but I know all of that CPU is being consumed by something that actually needs it (the update/render loop)...not by continually asking if there are window events.  It also drastically reduces the coupling of the window object with the rest of the app.  I can just create a 'Window' on the stack and if an event occurs that some part of my code has subscribed for, it gets called.  My app doesn't have to concern itself with constantly checking up on the window to see if anything has happened only to be disappointed when nothing has.  That functionality is the window's responsibility.
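To make the subscription idea concrete, here's a stripped-down sketch of the pattern (illustrative names, not my actual code): interested code registers a callback per event type, and the window invokes those callbacks when its internal pump sees the event.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <vector>

enum class EventType { Resized, KeyPressed, Closed };

class Window {
public:
    using Handler = std::function<void(int payload)>;

    // Subscribers register interest once, up front.
    void Subscribe(EventType type, Handler handler) {
        handlers_[type].push_back(std::move(handler));
    }

    // Called from the window's own pump when the OS reports an event;
    // the rest of the app never has to ask "did anything happen yet?".
    void Publish(EventType type, int payload) {
        for (const auto& h : handlers_[type]) h(payload);
    }

private:
    std::map<EventType, std::vector<Handler>> handlers_;
};
```

The app's update/render loop never touches the window's event machinery at all; dispatch is entirely the window's responsibility.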

Quote from: "Laurent"

- It's implementable with 3 lines of code (the hacky Sleep(0))


That isn't implementing the same thing at all.  It's still fundamentally different because it's still polling.  It's been discussed all over the internet why polling and the Sleep(0) "fix" is bad, so I'll just provide one of the better links:

http://blogs.msdn.com/oldnewthing/archive/2005/10/04/476847.aspx

Also, Microsoft itself has this to say about using PeekMessage() (polling):

"PeekMessage shouldn't be needed in modern, well-written applications."

http://msdn.microsoft.com/en-us/library/ms644928(VS.85).aspx
(toward the bottom...it is under community content, but that statement has been there for years, so I'm interpreting that as an endorsement from Microsoft)

Quote from: "Laurent"

By the way, I'm curious to know how you achieved to call GetMessage in another thread than the one which created the window ? MSDN says this is technically impossible ;)


You are correct in that it isn't possible, but I never said I was calling it in a thread other than the one that created the window :)  My window abstraction owns and spawns a thread that creates the window and then starts the blocking message pump.
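Roughly, the shape is this (a sketch only — a condition-variable queue stands in for the real CreateWindow()/GetMessage() calls, and the names are hypothetical): the abstraction's constructor spawns the thread, that thread "creates the window" and then blocks waiting for messages, and the destructor posts a quit sentinel and joins.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class PumpedWindow {
public:
    explicit PumpedWindow(std::function<void(int)> onEvent)
        : onEvent_(std::move(onEvent)),
          pump_([this] { Run(); }) {}  // pump thread owns the "window"

    ~PumpedWindow() {
        Post(kQuit);   // sentinel wakes the pump and ends the loop
        pump_.join();
    }

    // Any thread may post; only the pump thread ever waits.
    void Post(int msg) {
        { std::lock_guard<std::mutex> lock(m_); q_.push(msg); }
        cv_.notify_one();
    }

private:
    static constexpr int kQuit = -1;

    void Run() {
        // Real code would call CreateWindow() here, then loop on
        // GetMessage(); cv_.wait() plays that blocking role below.
        for (;;) {
            std::unique_lock<std::mutex> lock(m_);
            cv_.wait(lock, [this] { return !q_.empty(); });
            int msg = q_.front();
            q_.pop();
            lock.unlock();
            if (msg == kQuit) return;
            onEvent_(msg);  // asynchronous dispatch to subscribers
        }
    }

    std::function<void(int)> onEvent_;
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<int> q_;
    std::thread pump_;  // declared last so Run() sees initialized members
};
```

The blocking wait happens on the same thread that created and owns the window, which is exactly what Win32 requires of GetMessage().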

12
Feature requests / sfml equivalent of GetMessage()
« on: October 30, 2008, 10:19:32 pm »
I would really like to see support for a blocking call to get events, similar to the Win32 API's GetMessage() call.  It would be nice to be able to run sfml's message pump in a separate thread using this blocking call.  This would allow the app to idle nicely when there's nothing to do, without any of this hacky Sleep(0) silliness.  I've implemented a Win32-only windowing abstraction that does this, but I'd rather use something like sfml to easily make it cross-platform.  Does polling make anyone else feel dirty inside?
