
Author Topic: sfml equivalent of GetMessage()  (Read 37660 times)


kfriddile

sfml equivalent of GetMessage()
« Reply #30 on: November 03, 2008, 11:03:43 pm »
Quote from: "Ceylo"

Yes there is, for any modern operating system. And it does not depend on a programming language.


No there isn't, and yes it does.

Quote from: "Ceylo"

Except if it was to allow lazy programmers not to take care of memory handling.


Yes, it's a crutch for people who either don't grasp the importance of proper memory management, or who aren't skilled enough to deal with it, and therefore a testament to the importance of proper memory management as well.  C++ doesn't provide such a crutch, so it's YOUR responsibility.  Just because the particular OS your code was compiled and run on THIS TIME is willing to clean up your mess (yes, it is a mess), doesn't mean that will always be the case.  If you are writing C++ code that you wish to be portable or reusable both now and in the future, then you should strive to adhere as closely to the standard as possible (if anyone here doesn't have a copy of the standard, I will be happy to provide one in PDF form).  Shirking memory management duties automatically makes your code non-portable, since, again, the standard makes NO promise that the OS will clean up after you.  Why limit the portability of your code when it's so easy to manage memory properly?
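Since it really is easy, here is a minimal RAII sketch (Texture and ScopedTexture are made-up names for illustration, not sfml types): ownership is tied to a scope, so the resource is released deterministically on every exit path, with no reliance on the OS cleaning up at process exit.

Code:

struct Texture { /* some resource that must be explicitly released */ };

class ScopedTexture
{
public:
    ScopedTexture() : myTexture(new Texture) {}   // acquire in the constructor
    ~ScopedTexture() { delete myTexture; }        // release in the destructor, even when exceptions unwind
    Texture& Get() { return *myTexture; }

private:
    ScopedTexture(const ScopedTexture&);          // non-copyable (pre-C++11 idiom)
    ScopedTexture& operator=(const ScopedTexture&);

    Texture* myTexture;
};

int main()
{
    ScopedTexture texture;   // created here...
    // ... use texture.Get() ...
    return 0;                // ...destroyed here, leak-free and standard-conforming
}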

Quote from: "Ceylo"

But here there is no plane crashing :].


How do you know?  How does the pilot know if his plane will land harmlessly in a lake, or right on your head?

Quote from: "Ceylo"

You have your way of seeing things, which does not always mean it is the good one.


It's always good to be skeptical, but it's far worse to be stubborn in the face of obvious truth.  I've tried to provide examples and logical explanations for everything I've said, but I haven't heard any in return that were able to stand up under scrutiny.  I'm sorry if I sound angry, but the neglect of such fundamental and widely-acknowledged best practices is very alarming to me.  You might as well be trying to convince me that up is down.

Quote from: "Laurent"

I'm not sure it would work in every situation (like, as I said, in the C# binding where the main thread terminates before resources are freed), but I've found some good articles about global destruction in "Modern C++ design", I'll take a look at it.


That is an excellent book.  I would recommend "C++ Template Metaprogramming" if you plan on actually using metaprogramming.  It is a very practical introduction to the Boost MPL library...something you don't want to write metaprograms without.  I'd also like to point out that you are experiencing another common symptom of using anything globally.  There's a reason that experienced developers will tell you that globals are bad.  List of books every C++ programmer should own:

Modern C++ Design
Exceptional C++
More Exceptional C++
C++ Template Metaprogramming
Design Patterns (aka, the Gang of Four book...just ignore the Singleton pattern)
Effective C++
More Effective C++
Effective STL
Beyond the C++ Standard Library: An Introduction to Boost
Refactoring: Improving the Design of Existing Code
Intel Threading Building Blocks (this library is going to be a lifesaver in the near future)
there are more...

Quote from: "Laurent"

And then what ? Polling the array of booleans ? ... I really don't get it.
My point is that operations in a real-time program (not to mention a game) have to be sequential (you can't move an entity while computing its collisions or drawing it, it has to be done at a specific place in the game loop). Decoupling event handling from the rest of the application just breaks this rule. I'm really curious to see how you would write a robust game architecture with multithreaded event handling and no polling.


I know you don't get it, and I think it's my fault.  It's difficult to convey certain things via typing.  Yes, querying an array of booleans would be one possibility and is very similar to polling (but not the same by any means), but now I have that CHOICE when designing my architecture.  I also said it probably wouldn't be my solution of choice.  The point is that the input source is no longer dictating my architecture.  I could do all sorts of things instead, such as synchronizing access to the sprite's position to allow me to work with copies in Sprite::Update().  I know you're probably already thinking that locking to do such synchronization would be slow, but I say you are being prematurely pessimistic.  Locking with a tbb::spin_mutex, for example, would have negligible cost.  The possibilities don't stop there.  Say I'm working with some entity that has just a single attribute that needs to be synchronized.  Maybe it could be stored in an atomic variable abstraction, making locks unnecessary?  It's already becoming clear that I have many more options when approaching a problem than I would with your architecture.  Note that I still have the option to do something very similar to what you force people to do.
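For instance, a rough sketch of that spin-mutex idea (this Sprite is a made-up illustration, not sfml's class, and it assumes Intel TBB is available):

Code:

#include "tbb/spin_mutex.h"

// Position may be written by the event/input thread and read by the
// update/render thread; the critical sections are tiny, so a spin lock is cheap.
class Sprite
{
public:
    Sprite() : myX(0), myY(0) {}

    void SetPosition(float x, float y)
    {
        tbb::spin_mutex::scoped_lock lock(myMutex);
        myX = x;
        myY = y;
    }

    void GetPosition(float& x, float& y)
    {
        tbb::spin_mutex::scoped_lock lock(myMutex);
        x = myX;
        y = myY;
    }

private:
    tbb::spin_mutex myMutex;
    float myX, myY;
};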

Quote from: "Laurent"

My point is that operations in a real-time program (not to mention a game) have to be sequential


This isn't entirely true either.  There are vast amounts of parallelism to be had in real-time applications, including games, but that discussion would be very lengthy and I'd rather not type it.  I think it can be left as an "exercise for the reader" :)  The main point I wanted to make was about architecture, and how reduced coupling introduces choice and flexibility.

MrDoomMaster

sfml equivalent of GetMessage()
« Reply #31 on: November 04, 2008, 01:26:13 am »
Another great book he should get is Patterns for Parallel Programming

Laurent

sfml equivalent of GetMessage()
« Reply #32 on: November 04, 2008, 08:35:33 am »
I think we agree about the memory leak. It's bad, and any good programmer should do their best to get rid of such issues. I did, but I couldn't find a 100% safe way to remove it, so I kept it because it seemed trivial compared to the feature it made possible. Now, unless you tell me that it's cleaner to remove it and break SFML's behaviour, I think we can focus on the solution itself ;)
But please don't say you're the only one who's right; people still debate whether controlled leaks are actually leaks or not. And you haven't experienced every single situation, so you can't say that 100% of leaks can be removed.

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (do you, actually?). I've been making games (including commercial ones) and watching game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or prove to me that it's possible.

Regarding the "decoupling" stuff, once you've wrapped event handling in a callback / signal system it's all the same (and don't tell me about the CPU wasted in a function call, that's ridiculous), it's just a matter of being synchronous or not.

Quote
This isn't entirely true either. There are vast amounts of parallelism to be had in real-time applications, including games

I was just talking about the top-level logical flow. Of course there are tons of things which have to be parallelized, especially with today's multi-core and console architectures.
Laurent Gomila - SFML developer

Jaeger

sfml equivalent of GetMessage()
« Reply #33 on: November 04, 2008, 07:18:13 pm »
Quote from: "Laurent"

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (do you, actually?). I've been making games (including commercial ones) and watching game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or prove to me that it's possible.



We use a similar mechanism in our current commercial project. In part we chose it because of Amdahl's law: the smaller we make the serial sections of our application, the better we'll scale to many-core systems. Window and system events come in asynchronously, and if the application is in a state where it cannot handle them, we block or queue as appropriate for performance. Even if we queue, we don't require polling the queue explicitly. Instead, when we transition out of our blocking state, we check the queue, and if it is not empty we transition into a state that processes the queue.

However the main reason we chose this architecture is the first pillar of concurrency.
http://www.ddj.com/hpc-high-performance-computing/200001985?pgno=2
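A minimal sketch of that "block or queue, then drain on transition" idea (Event and EventQueue are illustrative names, not our actual code; std::mutex and std::condition_variable are used here purely for brevity):

Code:

#include <deque>
#include <mutex>
#include <condition_variable>

struct Event { int type; };

class EventQueue
{
public:
    void Push(const Event& e)
    {
        {
            std::lock_guard<std::mutex> lock(myMutex);
            myEvents.push_back(e);
        }
        myCondition.notify_one();            // wake a consumer blocked in WaitAndPop
    }

    Event WaitAndPop()                       // blocks, consuming no CPU, until an event arrives
    {
        std::unique_lock<std::mutex> lock(myMutex);
        while (myEvents.empty())
            myCondition.wait(lock);
        Event e = myEvents.front();
        myEvents.pop_front();
        return e;
    }

    bool TryPop(Event& e)                    // non-blocking drain, used when leaving a blocking state
    {
        std::lock_guard<std::mutex> lock(myMutex);
        if (myEvents.empty())
            return false;
        e = myEvents.front();
        myEvents.pop_front();
        return true;
    }

private:
    std::deque<Event> myEvents;
    std::mutex myMutex;
    std::condition_variable myCondition;
};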

kfriddile

sfml equivalent of GetMessage()
« Reply #34 on: November 04, 2008, 08:52:11 pm »
Quote from: "Laurent"

I think we agree about the memory leak. It's bad, and any good programmer should do their best to get rid of such issues. I did, but I couldn't find a 100% safe way to remove it, so I kept it because it seemed trivial compared to the feature it made possible. Now, unless you tell me that it's cleaner to remove it and break SFML's behaviour, I think we can focus on the solution itself ;)


I'm not at all suggesting that sfml should lose any functionality.  You can fix the leak without losing anything useful.  In the email conversation with my coworker you offered a few use cases to justify the leak:

- requesting the multisampling OpenGL extension before creating the first window (and the first OpenGL context)
- loading a texture, a shader, or any other graphical resource before having any window
- keeping all the OpenGL resources and states from being destroyed between the destruction and re-creation of a window

Now, you have discovered that implementing something to support these exactly as written requires creating another bug in the form of a memory leak.  Situations like this come up a lot in the design stage, and are a strong indicator of a design deficiency.  You know it's possible to do what you want, so now let's find an acceptable, optimal solution.  In the three cases above, a common thread is that they are all special cases of a more general use case.  The specialization is that they all want to do these things before a "Window" exists.  The fact that this has to be explicitly stated indicates an inherent dependency between a render context and a window.  In fact, I believe you hacked around this by creating a dummy "window" just to create a "global" render context.  Globals are also a strong clue that a better design probably exists.  So, obviously a window is a prerequisite to a render context.  This isn't a limitation; it's something that makes perfect conceptual sense (which means that circumventing it is conceptually wrong and confusing to the logical user).  Lose the global, enforce the prerequisite, and users are still able to do everything they could before, just through a more logical path instead of magically pulling information from the global ether.  I would recommend having "Window" be a construction parameter of "RenderContext" to decouple the two concepts somewhat and allow for multiple contexts for the same window.  Your leak is gone.
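A sketch of what I mean (names are illustrative, not a proposal for sfml's exact API):

Code:

class Window
{
    // ... platform window handle, event handling, etc.
};

class RenderContext
{
public:
    explicit RenderContext(Window& window) : myWindow(window)
    {
        // create the OpenGL context against the window's handle here
    }

private:
    Window& myWindow;
};

// Usage: the prerequisite is visible and enforced at compile time,
// and nothing stops you from creating several contexts for one window.
//
//     Window window;
//     RenderContext context(window);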

Quote from: "Laurent"

And you haven't experienced every single situation to say that 100% of leaks can be removed.


Experience doesn't enter into it, just logic.  Anything you create, you can destroy.

Quote from: "Laurent"

Regarding the asynchronous architecture, I still believe you're not doing such things in real life (do you, actually?). I've been making games (including commercial ones) and watching game engines' sources for years, and I've never seen such a design. Why? Because it involves too many issues. Your example can work fine, but you can't apply this strategy to a whole game which is processing hundreds of events and millions of entities, and which must keep a consistent state across its game loop, including update, physics, AI and drawing. Or prove to me that it's possible.


I'm not saying that there aren't things that need to happen in a particular order, but the smaller these sections are, the better.  It's still completely possible to ensure proper ordering if one so desires.  It's just a more-flexible, less-invasive architecture.

Quote from: "Laurent"

Regarding the "decoupling" stuff, once you've wrapped event handling in a callback / signal system it's all the same (and don't tell me about the CPU wasted in a function call, that's ridiculous), it's just a matter of being synchronous or not.


Um...synchronous and asynchronous aren't the same at all with regard to coupling issues.  One requires client code to explicitly check for events and one doesn't.  That's also where the waste happens, because 99.999% of the time there isn't going to be an event.
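A toy illustration of that coupling difference (nothing here is sfml's API; it only shows where the client-side waste goes away):

Code:

#include <iostream>

struct Event { int key; };

typedef void (*EventHandler)(const Event&);

class EventSource
{
public:
    EventSource() : myHandler(0) {}

    // Synchronous style: the client must call this over and over,
    // and the overwhelming majority of calls find nothing.
    bool Poll(Event&) { return false; }

    // Asynchronous style: the client registers once and is only
    // called when an event actually exists.
    void SetHandler(EventHandler handler) { myHandler = handler; }
    void Dispatch(const Event& e) { if (myHandler) myHandler(e); }

private:
    EventHandler myHandler;
};

void OnKey(const Event& e) { std::cout << "key " << e.key << "\n"; }

int main()
{
    EventSource source;
    source.SetHandler(&OnKey);   // no per-frame check in client code
    Event e = { 42 };
    source.Dispatch(e);          // client code runs only when something happened
    return 0;
}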

MrDoomMaster

sfml equivalent of GetMessage()
« Reply #35 on: November 04, 2008, 10:09:06 pm »
I have what I believe is a fairly solid argument regarding the issue of PeekMessage() vs GetMessage().

Let's assume we have a specific goal: when the user minimizes the game, we want the game to consume 0% CPU. By 0% I mean zero time spent in our own application/game code; this does not include the processing time the operating system spends managing our process. Let's keep it simple.

Below I've outlined 2 scenarios. Scenario 1 doesn't reach our goal at all; however, it reflects the current design that SFML imposes on the user. Scenario 1 is being presented because I want to show how SFML could not possibly fulfill this very simple but very important design goal in its current state (architecture).

Scenario 2 will indeed solve the problem; however, it utilizes an architectural design that is completely different from, and incompatible with, SFML. This is basically the design that kfriddile is pushing for.


Scenario 1


Suppose the following game loop implementation (Forgive/Ignore any over-simplifications, subtle bugs, or other anomalies, this code has not been compiled):
Code: [Select]

int main()
{
    MSG msg;

    while( true )
    {
        // PM_REMOVE takes the message off the queue; without it the same
        // message would be peeked forever.
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
            {
                break;
            }

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        TickGame();
        DrawGame();
    }

    return 0;
}


The above code represents an over-simplified version of your typical "main game loop". This is basically the thing that feeds your entire game and provides it continuous processing. When the user minimizes the application, there is no way to suspend it. In other words, the while( true ) loop above will never end except when the application is terminated by the user.

Because this while loop never ends except under the previously noted circumstances, the game will always push to consume as much CPU as possible regardless of the state of the application, such as being minimized.

You may say, "Well let's just do this:"
Code: [Select]

bool bMinimized = false; // assumed to be updated by the window procedure on minimize/restore

int main()
{
    MSG msg;

    while( true )
    {
        if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            if( msg.message == WM_QUIT )
            {
                break;
            }

            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }

        if( bMinimized )
        {
            Sleep( 1 );
        }
        else
        {
            TickGame();
            DrawGame();
        }
    }

    return 0;
}


I would then proceed to say you're evil. This is not solving the problem. Sleeping here does not solve the problem, since you have no idea how long the user will keep the application in the minimized state. For the entire duration the application is minimized, there should be absolutely 0 iterations of this while loop. No code that we control in the application should be executing. Any processing happening in application-level code is wasteful.


Scenario 2


The code for this can get fairly extensive, so I'll only cover the most fundamental and important parts. In one thread (Thread #1), you would have this continuously running:
Code: [Select]
MSG msg;
while( GetMessage( &msg, 0, 0, 0 ) > 0 )
{
    TranslateMessage( &msg );
    DispatchMessage( &msg );
}


Obviously the window would have been constructed in the same thread processing the above loop. As far as where the message procedure is, let's also assume it is in the same thread for the purposes of this example.

In a completely different thread (Thread #2) you would have the following loop running:
Code: [Select]

while( true )
{
    WaitForSingleObject( .... ); // This would suspend the thread if game processing is not currently needed.

    TickGame();
    DrawGame();
}


Again, I do apologize for the over-simplifications. Bear with me; this is mainly pseudo-code. The above code continues to process the game normally until a request from another thread comes in to tell it to PAUSE or RESUME (hence the WaitForSingleObject() call). If this thread is told to PAUSE, no game processing will occur until a matching RESUME request is given (one possible way to wire this up is sketched after the sequence list below).

So let's tie all of this together. Typically I would use a sequence diagram to properly document the flow of all of this, so once again do bear with me while I try to use a bullet point list to describe the sequence of the application:

1. The application starts and Thread #1 is executed, which results in the message pump being processed.
2. When the user minimizes the application, a minimize notification arrives (WM_SIZE with SIZE_MINIMIZED), which Thread #1 handles by atomically telling Thread #2 to PAUSE.
3. During this time, the application sits in the minimized state consuming 0% CPU, because the game loop in Thread #2 is not running, nor is Thread #1 continuously spamming calls to PeekMessage().
4. When the user restores/maximizes the window, the respective message is handled and results in a RESUME request being sent to Thread #2, which causes the game loop to continue processing.
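One possible way to wire up that PAUSE/RESUME signal is a manual-reset Win32 event (again, just a sketch; TickGame()/DrawGame() are the stand-ins from the pseudo-code above):

Code:

#include <windows.h>

void TickGame();
void DrawGame();

// Created once at startup, manual-reset and initially signaled:
//     g_hRunning = CreateEvent( NULL, TRUE, TRUE, NULL );
HANDLE g_hRunning = NULL;

// Thread #1 (the window/message thread), inside its window procedure:
//
//     case WM_SIZE:
//         if( wParam == SIZE_MINIMIZED ) ResetEvent( g_hRunning ); // PAUSE
//         else                           SetEvent( g_hRunning );   // RESUME
//         break;

// Thread #2 (the game thread):
DWORD WINAPI GameThread( LPVOID )
{
    while( true )
    {
        // Blocks here, consuming no CPU, for as long as the event is non-signaled.
        WaitForSingleObject( g_hRunning, INFINITE );

        TickGame();
        DrawGame();
    }

    return 0;
}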


Conclusion


As kfriddile has been saying all along, the design SFML utilizes (Scenario 1) is antiquated. I won't go over his arguments, since they were well put. I'm simply giving a detailed side-by-side example to make his points a bit more concrete. There is no sense in justifying memory leaks; many have already told you this is just plain EVIL in every sense of the word. There is also no sense in justifying polling, as it prevents the application from being conservative with CPU time, as I've just explained in detail.

dabo

sfml equivalent of GetMessage()
« Reply #36 on: November 05, 2008, 11:00:52 am »
Does the average user really care how this is handled? SDL uses the same approach as SFML, doesn't it?

Interesting read though.

kfriddile

sfml equivalent of GetMessage()
« Reply #37 on: November 05, 2008, 04:34:41 pm »
Quote from: "dabo"
Does the average user really care how this is handled? SDL uses the same approach as SFML, doesn't it?

Interesting read though.


I guess that depends on what your definition of "average" is.  Still, the two options are different enough that it isn't just a matter of "caring" which one is used.  You are correct that most existing real-time applications, and middleware for creating those applications, promote a polling approach.  The arguments put forth by proponents of that design are usually "anything else is too slow" or "anything involving threads and concurrency is too complex and hard".  Well, I can tell you that the asynchronous design is certainly not "too slow".  As far as concurrency being "too hard"...anyone who wants to continue being a useful, competitive programmer needs to get over that right now.  Individual cores aren't getting faster; vendors are just adding more of them.  Concurrency is going to be the only way to make your programs scale with the hardware.

The discussion has obviously strayed a bit from the original feature request.  All I originally asked for was the addition of a function call that would allow me to choose between the two designs above.  I wasn't suggesting that sfml itself issue asynchronous events.  Then, after posting my request, I became aware of other problems that would prevent sfml from being used in most production environments anyway (most projects' coding standards disallow resource leaks).
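For reference, here is roughly what I had in mind, sketched against sfml's existing 1.x polling API (the blocking call in the comment is hypothetical; only GetEvent exists today):

Code:

#include <SFML/Window.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "Example");

    sf::Event event;
    while (window.IsOpened())
    {
        // Proposed: block here, consuming no CPU, until an event arrives.
        //     window.WaitEvent(event);   // hypothetical, does not exist yet

        // Existing: poll, returning immediately whether or not there is an event.
        while (window.GetEvent(event))
        {
            if (event.Type == sf::Event::Closed)
                window.Close();
        }

        window.Display();
    }

    return 0;
}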

Laurent

sfml equivalent of GetMessage()
« Reply #38 on: November 05, 2008, 10:17:44 pm »
I admit you couldn't have found a better example than the "inactive application" to demonstrate the drawbacks of polling. I'm still not convinced by this architecture on a global scale (but I'll probably experiment with it next time I write a small real-time application), but anyway what I'm seeing here is that a few experienced users are writing really big posts to convince me, and I appreciate that. So I'll add a task for a WaitMessage function to the roadmap, and try to find free time after my relocation to implement it ;)

Regarding the leak, it's much more than a design concept of having a window to get a rendering context. First, this rule has been confusing people for years; every graphics library inherits this behavior, and people always end up spamming the forums with "why does my initialization code fail??" posts. To me it's purely technical, and I'll never let my public interface suffer from any technical limitation. As a layer on top of raw 3D APIs, I can be smarter and do what is necessary to provide extra flexibility to users.
Anyway, it's not my main concern. My main concern is the tons of issues which arise from this limitation. One of them is managed languages crashing because the GC collects variables after the main thread has ended. Another is the rendering context being lost when I re-create a window, thus invalidating every graphical resource. And so on...
Anyway I'm going to fix the leak. It was not my priority (I have many more important features to implement), but I can't ignore this discussion and it's now my top priority. Too bad for people waiting for render-to-image or rendering masks... ;)
Laurent Gomila - SFML developer

Wizzard

sfml equivalent of GetMessage()
« Reply #39 on: November 05, 2008, 11:53:54 pm »
Couldn't you create a sf::Exit() function that closes the graphics context and destructs everything related to it?

kfriddile

sfml equivalent of GetMessage()
« Reply #40 on: November 06, 2008, 02:00:08 am »
Quote from: "Wizzard"
Couldn't you create a sf::Exit() function that closes the graphics context and destructs everything related to it?


Please don't do it that way.  Some sort of RAII/scoped initialization mechanism would be preferable if a global render context has to exist (easy exception safety, etc).
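Something along these lines, for example (GraphicsScope is a hypothetical name, not an actual sfml class):

Code:

namespace sf
{
    class GraphicsScope
    {
    public:
        GraphicsScope()  { /* create the shared rendering context */ }
        ~GraphicsScope() { /* destroy it, leak-free */ }

    private:
        GraphicsScope(const GraphicsScope&);             // non-copyable
        GraphicsScope& operator=(const GraphicsScope&);
    };
}

int main()
{
    sf::GraphicsScope graphics;   // the context lives exactly as long as this scope

    // ... create windows, load images, run the game ...

    return 0;                     // torn down here automatically, even if an exception unwinds
}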

Quote from: "Laurent"

I admit you couldn't have found a better example than the "inactive application" to demonstrate the drawbacks of polling. I'm still not convinced by this architecture on a global scale (but I'll probably experiment with it next time I write a small real-time application), but anyway what I'm seeing here is that a few experienced users are writing really big posts to convince me, and I appreciate that. So I'll add a task for a WaitMessage function to the roadmap, and try to find free time after my relocation to implement it


I'm glad that reading someone else's claims on the internet isn't enough to convince you of something you've never tried.  I would never want to use something created by anyone that impressionable ;)

Quote from: "Laurent"

Regarding the leak, it's much more than a design concept of having a window to get a rendering context. First, this rule has been confusing people for years; every graphics library inherits this behavior, and people always end up spamming the forums with "why does my initialization code fail??" posts. To me it's purely technical, and I'll never let my public interface suffer from any technical limitation. As a layer on top of raw 3D APIs, I can be smarter and do what is necessary to provide extra flexibility to users.
Anyway, it's not my main concern. My main concern is the tons of issues which arise from this limitation. One of them is managed languages crashing because the GC collects variables after the main thread has ended. Another is the rendering context being lost when I re-create a window, thus invalidating every graphical resource. And so on...
Anyway I'm going to fix the leak. It was not my priority (I have many more important features to implement), but I can't ignore this discussion and it's now my top priority. Too bad for people waiting for render-to-image or rendering masks...


I agree that compromising the public interface because of technical limitations should be avoided if possible.  I guess I just see the window prerequisite as more of a logical limitation than a technical one.  Can you elaborate on the issue with managed languages?  I try to avoid them like the plague, so that's a bit out of my area of expertise.  I'm familiar with the issue of losing all graphical resources, etc., when a render context is destroyed, and I guess I just don't see it as an issue.  If those resources are loaded via that context, it makes sense for them to go away when the context does (i.e., they are "local" to that context).  The solution is simply not to destroy the context until it doesn't make sense for your application to have it anymore.

Anyway, once WaitMessage() is implemented and there aren't any more resource leaks, I'll definitely look at substituting sfml for Win32 in my current design for instant cross-platform support.

Laurent

sfml equivalent of GetMessage()
« Reply #41 on: November 06, 2008, 08:21:22 am »
Quote
Can you elaborate on the issue with managed languages?

Sure.

Managed languages have two main drawbacks: destruction of variables isn't deterministic (i.e. can happen at any time, in any order) and destruction of variables doesn't always happen in the main thread; it might even happen after the main thread has ended. Unfortunately, this stuff mixes very badly with windowing and rendering contexts, which have strict rules regarding multi-threading and order of destruction. I could of course enforce the scope of graphics variables (manually freeing them), but that's not how things should be done in a managed language.

So, the best solution I've found so far is to have a rendering context which can still be active in the GC thread, after the main one has terminated. I'm not saying this is the only solution, but it will be really tricky and will take some time to find a more elegant one.

Quote
Anyway, once WaitMessage() is implemented and there aren't any more resource leaks, I'll definitely look at substituting sfml for Win32 in my current design for instant cross-platform support

I'm glad to see that ;)
Don't hesitate to give more feedback like this once you're using SFML.
Laurent Gomila - SFML developer

bullno1

sfml equivalent of GetMessage()
« Reply #42 on: November 06, 2008, 10:11:58 am »
Quote
Too bad for people waiting for render-to-image or rendering masks... ;)

I'm one of them :( . Never mind, currently I only need render-to-texture for a motion blur effect, so I can live without it.

Laurent

sfml equivalent of GetMessage()
« Reply #43 on: November 06, 2008, 11:54:06 am »
If your motion blur is on the whole screen, you can use the new sf::Image::CopyScreen function.
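A quick usage sketch (my assumption of the 1.x behaviour, where CopyScreen captures the window's current contents into an sf::Image; both steps are shown in one function for brevity, although in practice the capture happens at the end of one frame and the ghost draw at the start of the next):

Code:

#include <SFML/Graphics.hpp>

void ApplyMotionBlur(sf::RenderWindow& window, sf::Image& lastFrame)
{
    // Grab what was just drawn...
    lastFrame.CopyScreen(window);

    // ...and draw it back half-transparent over the next frame.
    sf::Sprite ghost(lastFrame);
    ghost.SetColor(sf::Color(255, 255, 255, 128));
    window.Draw(ghost);
}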
Laurent Gomila - SFML developer

kfriddile

sfml equivalent of GetMessage()
« Reply #44 on: November 06, 2008, 05:33:34 pm »
Quote from: "Laurent"
Quote
Can you elaborate on the issue with managed languages?

Sure.

Managed languages have two main drawbacks: destruction of variables isn't deterministic (i.e. can happen at any time, in any order) and destruction of variables doesn't always happen in the main thread; it might even happen after the main thread has ended. Unfortunately, this stuff mixes very badly with windowing and rendering contexts, which have strict rules regarding multi-threading and order of destruction. I could of course enforce the scope of graphics variables (manually freeing them), but that's not how things should be done in a managed language.

So, the best solution I've found so far is to have a rendering context which can still be active in the GC thread, after the main one has terminated. I'm not saying this is the only solution, but it will be really tricky and will take some time to find a more elegant one.


It sounds to me like this is yet another problem that could be easily solved by requiring rendering contexts to be created from, and associated with, a window.  I can see two solutions off the top of my head...the second one is my favorite of the two.  First, 'Window' could act as a factory for its own render contexts, dispensing references to contexts that it owns.  That way, those contexts are destroyed when the window is destroyed, ensuring proper destruction order.  Second, 'RenderContext' would take a 'Window' as a construction parameter.  Since the architecture makes it obvious that a RenderContext requires a Window, it is completely valid to expect the user to destroy their RenderContext objects before destroying the associated Window.  It's kind of the same thing as expecting someone not to create dangling references.
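A sketch of the first option, with hypothetical names (the second option is just the RenderContext-takes-a-Window constructor mentioned earlier in the thread):

Code:

#include <vector>
#include <cstddef>

class Window;

class RenderContext
{
public:
    explicit RenderContext(Window& window) : myWindow(window) { /* create GL context */ }
    ~RenderContext() { /* destroy GL context */ }

private:
    Window& myWindow;
};

class Window
{
public:
    RenderContext& CreateContext()
    {
        myContexts.push_back(new RenderContext(*this));
        return *myContexts.back();
    }

    ~Window()
    {
        // owned contexts are destroyed before the window itself goes away,
        // so the destruction order is always correct
        for (std::size_t i = 0; i < myContexts.size(); ++i)
            delete myContexts[i];
    }

private:
    std::vector<RenderContext*> myContexts;
};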

 
