As I understand from your posts, you apply ortho-to-iso conversion to all tiles and objects. That's the wrong approach, IMHO.
Just to objects, but I don't see why this is wrong from your point of view. Yeah, physics should... but physics doesn't know anything about the actual representation. The point of decoupling systems is precisely to isolate domain-specific tasks.
The main problem is the object iteration, not the math applied per object transformation... if 1K objects move, it doesn't matter whether you do two or four math operations each... the problem is the batch of 1000 objects, and you cannot avoid it.
Well, without the transformation everything runs smoothly, even over those 1k objects. Each one is picked, each one's dirty flag is checked, and then nothing is done (if transformations are disabled in code). And because no optimization is applied, the compiler won't optimize any of that away.
Yes, now the typical "profile your code and you'll see that you're wrong!" posts might occur.
But indeed, profiling with -pg gives me no clue about the bottleneck. So anyway, measuring each system's elapsed time can't be that bad after all ^^
By the way: the physics system iterates over the same number of objects in the same way (contiguous array) but with less math... Guess how much time it is consuming xD
I have some years of experience with text MUD games, where it is quite normal to have many objects. The solution is to keep living objects to a minimum. I can give you some tips if you want.
Well, of course large object numbers might be common. But large numbers of moving objects are not - at least in the roleplaying genre. Of course there might be lots of objects: chests, torches, enemies' corpses... but none of them are moving. They stay/lie where they are, don't rotate or do anything else. Having lots of non-moving objects inside my system doesn't slow it down the way lots of moving objects do.
So I think the best way might be reducing the number of moving objects - as well as looking for optimization possibilities.
/EDIT: But profiling gave me at least one clue: pushing back to a vector which is already large enough to hold those additional objects seems slow (at least from the profiling output's point of view). Changing this helped a bit, but not much. ... Hey, but I've tried.
Iteration with matrix math is quite tough, anyway ^^