General / No executable ?
« on: April 15, 2010, 06:36:34 pm »
It should be in your bin/debug or bin/release folder.
As for the benefits, I was thinking of the benefits of using XNA rather than of making a port for Xbox 360, even if both are linked. So, OK, as far as the benefits of an Xbox 360 port go, even if I don't know whether it's achievable.
Quote from: "Ashenwraith"I thought an XNA port of SFML would be cool.
I don't think "cool" is a good reason for Laurent to implement an XNA backend :lol: . Moreover, it would be a huge amount of work, it isn't portable, and the benefits still need to be demonstrated. So Laurent will probably keep using OpenGL.
Quote from: "Ashenwraith"but isn't an array essentially an indexed list of pointers?
No. Actually, arrays are independent of pointers. The reason why those two concepts are often confused is that there is an implicit conversion from an array to a pointer to its first element. That's why you can write
Code: [Select]
T array[size];
T* ptr = array;
Quote from: "Ashenwraith"And also, aren't arrays much faster than vectors, especially for linear iterations?
Static, stack-based arrays are faster, but they're not flexible, because the size must be known at compile time and cannot change. For static arrays, the class std::tr1::array is very useful, as it doesn't have the problems of C arrays such as unchecked out-of-range accesses, non-copyability, or the lack of a generic container interface.
In practice, you often need dynamically allocated arrays and other data structures. One way is to use new[] and delete[], but in many cases, there are better alternatives. Here, the STL containers (std::vector is one of them) come into play. Some of their advantages are:
- Automatic memory management. You don't need to call delete[] or worry about memory leaks, and your code becomes exception-safe.
- A lot of useful functions (e.g. to erase/insert elements or to get the size). When you use new[], you have to store the number of elements separately and you need tedious loops when inserting an element.
- Uniform interface inside the STL. When you decide to switch from a linear vector to a doubly-linked list, just typedef the container type and change a single identifier at one place in the code. Imagine the equivalent refactoring with new[] and delete[].
- Support for debugging. Most STL implementations perform runtime checks in debug mode, which means errors like invalid indices are detected immediately.
- Zero abstraction overhead. Thanks to this C++ philosophy, the STL containers are not slower than manual new[] and delete[] in the vast majority of cases. Why should they be? They use the same functionality in a generic, encapsulated design (classes and templates). The STL may even be faster, since you can 1. choose the best-fitting data structure, 2. provide specific allocation strategies, 3. make use of optimizations such as pre-allocation with std::vector.
Quote from: "Ashenwraith"
Speaking of performance, how is xna/directx working for 2d?
XNA actually has surprisingly good support for 2D. It's not as simple as SFML, but one big thing it has going for it is the SpriteBatch class. There are 5 sorting modes:
1. Immediate: Draw whenever the state (texture) changes.
2. Texture: Draw everything at once, ordered by texture (for when stuff doesn't overlap).
3. Deferred: Draw all at once in the same order called (same as Immediate, but delayed, I believe; it's so you can build up multiple batches at once without messing up the drawing).
4. Front-to-back: Deferred, but sorted by depth.
5. Back-to-front: Deferred, but sorted by depth (other direction).
I personally only really used Immediate, since I find it easy enough to handle the ordering myself. I assume this is pretty much like the batching Laurent added to SFML 2.0: if a draw call uses the same state as the last call, you queue it into the batch; if the state changes, you flush the queue, clear it, and start a new batch with the newest call. Combined with texture atlases, I was able to render my whole scene in under 30 or so batches.
However, with XNA you are in a world of hurt if you try to do things your own way. One reason I switched from XNA to SFML for my project was that I didn't like how XNA's ContentManager worked. Switching to SFML, I very quickly had a robust content manager that supports lazy loading, loads only one instance of each asset, and reloads assets automatically (if you request an asset that has been disposed, it reloads it from file transparently). Trying to do the same with XNA, I gave up after about a full day of work and tons of hacking with reflection.
In my project, since I just replaced XNA with SFML, things are used in pretty much the same way. From what I can see, the only real performance hit comes from the fact that I no longer have any batching. But even so, I am hitting 60 FPS, no sweat. I think the SFML version runs about 50%* slower while doing the exact same stuff (even still using texture atlases), but again, we're comparing no batching to batching, I am still using SFML 1.5, and my rendering is still written with XNA in mind, not SFML.
*This is a complete guess from memory based purely on approximate numbers I remember when looking at the Task Manager. In no way have I EVER actually measured it.
As far as simplicity goes, SFML beats XNA hands-down for 2D. It's almost scary how simple SFML is. And when you don't like how something works in SFML, you can at least roll your own without a fight.
But... I mean, I think you're on the wrong track: if you sync the event manager with the frame rate, then at a low frame rate you will get less responsiveness from the event manager.
PS: Sorry for my bad English; I'm not from an English-speaking country.