I recently added a tiled background to my project. It was a straightforward implementation: I parse the information from a .tmx file (Tiled map editor) and then loop through each layer and tile. I have two layers and my map is 50 tiles by 50 tiles, so to draw the background I loop 5,000 times per frame (2 layers x 50 tiles wide x 50 tiles high).
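To make the structure concrete, here is a minimal sketch of what that loop looks like. The names (Tile, Layer, drawSprite, TILE_SIZE) are stand-ins for whatever the engine actually provides, not my real code:

```cpp
#include <vector>

constexpr int MAP_W = 50, MAP_H = 50, TILE_SIZE = 32; // 32 px tile size is an assumption

struct Tile { int gid; };                  // tile index parsed from the .tmx file
struct Layer { std::vector<Tile> tiles; }; // MAP_W * MAP_H tiles, row-major

// Stand-in for the engine's per-sprite draw call.
void drawSprite(int gid, int px, int py) { /* engine-specific */ }

void drawBackground(const std::vector<Layer>& layers) {
    for (const Layer& layer : layers) {             // 2 layers
        for (int y = 0; y < MAP_H; ++y) {           // 50 rows
            for (int x = 0; x < MAP_W; ++x) {       // 50 columns
                const Tile& t = layer.tiles[y * MAP_W + x];
                if (t.gid != 0)                     // gid 0 means "empty" in Tiled
                    drawSprite(t.gid, x * TILE_SIZE, y * TILE_SIZE);
            }
        }
    }
}

int main() {
    std::vector<Layer> layers(2, Layer{std::vector<Tile>(MAP_W * MAP_H, Tile{1})});
    drawBackground(layers); // 2 * 50 * 50 = 5,000 draw calls per frame
}
```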
Before implementing this, both the debug and release builds were running at 680 FPS. If I loop through the logic but don't call the draw method, the release build holds at 680 FPS, but the debug build plunges to 40 FPS! Next, if I loop through all the logic and call the draw method, my debug FPS drops to 15 and my release FPS drops to 330.
First, the performance hit from merely looping through the logic in debug mode strikes me as really strange. That's a 17x decrease (680 FPS down to 40 FPS). At this rate my game would very quickly become unplayable in debug mode.
Next, my release build is running at 330 FPS with one character and a background. That seems fast enough, but I wish I had something to compare it to. Does this seem like a reasonable baseline to build a simple 2D game on?
Finally, I can think of a number of optimizations. The simplest would be to not redraw the character and background every single frame. I mention the character and background together because I center the camera on the character, so they are linked. Is there a "recommended" number of redraws per second?
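For reference, this is roughly the kind of throttling I have in mind: run the game logic every iteration, but only redraw when enough time has passed to hit a target rate. It's only a sketch; nowMs, updateLogic, and redrawScene are hypothetical stand-ins, and 60 redraws per second is just a guess at a sensible cap:

```cpp
#include <chrono>

// Monotonic wall-clock time in milliseconds.
long long nowMs() {
    using namespace std::chrono;
    return duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
}

void updateLogic() { /* input, movement, camera follows the character */ }
void redrawScene() { /* draw the background tiles and the character */ }

int main() {
    const long long frameMs = 1000 / 60;    // target: ~60 redraws per second
    long long lastDraw = nowMs();
    bool running = true;                    // cleared by a quit event in a real loop
    while (running) {
        updateLogic();                      // logic still runs every iteration
        if (nowMs() - lastDraw >= frameMs) {
            redrawScene();                  // redraws are capped, logic is not
            lastDraw = nowMs();
        }
    }
}
```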