
Author Topic: Microbenchmarks


Kojay

  • Full Member
  • Posts: 104
Microbenchmarks
« on: November 04, 2015, 08:23:25 pm »
Inspired by Carruth's talk, I was thinking SFML could have a suite of microbenchmarks.

I have made a prototype at https://github.com/Kojirion/SFML/tree/benchmarks
It uses Google Benchmark as a submodule.
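To give an idea of the shape, each benchmark is a small function registered with the library, roughly like this (the class and names here are my own illustration, not necessarily what is in the branch):

#include <benchmark/benchmark.h>
#include <SFML/System/Vector2.hpp>

// Illustrative only: times the construction of an sf::Vector2f.
// The volatile write keeps the result alive (more on that below).
static void Vector2fConstruction(benchmark::State& state)
{
    while (state.KeepRunning())
    {
        volatile float x = sf::Vector2f(3.f, 4.f).x;
        (void)x; // silence unused-variable warnings
    }
}
BENCHMARK(Vector2fConstruction);

BENCHMARK_MAIN();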



These are of course only a few of SFML's classes. It will be far simpler to have these benchmarks if/when there are unit tests. These benchmarks:
  • highlight the difference between getters that return a value stored in memory and those that perform a calculation
  • highlight the difference between flipping an image vertically and horizontally (see the sketch after this list)
  • serve as an indication of how performance is affected if these functions are refactored
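For instance, the image flip comparison could look like this (image size and benchmark names are assumptions):

#include <benchmark/benchmark.h>
#include <SFML/Graphics/Image.hpp>

// A vertical flip can move whole rows at a time, while a horizontal
// flip touches every pixel, so the two can show different timings.
static void ImageFlipVertically(benchmark::State& state)
{
    sf::Image image;
    image.create(512, 512, sf::Color::Red);
    while (state.KeepRunning())
        image.flipVertically();
}
BENCHMARK(ImageFlipVertically);

static void ImageFlipHorizontally(benchmark::State& state)
{
    sf::Image image;
    image.create(512, 512, sf::Color::Red);
    while (state.KeepRunning())
        image.flipHorizontally();
}
BENCHMARK(ImageFlipHorizontally);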

There are some differences from what Carruth did in the talk. He goes on to use perf to determine exactly what is being benchmarked, and he uses an assembly trick to prevent the optimizer from optimizing away the things he's interested in.
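For reference, his assembly trick boils down to two empty helpers (GCC/Clang inline assembly, reproduced from memory):

// escape() makes the optimizer believe the pointed-to data is read and
// written somewhere it cannot see; clobber() does the same for all memory.
static void escape(void* p)
{
    asm volatile("" : : "g"(p) : "memory");
}

static void clobber()
{
    asm volatile("" : : : "memory");
}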

I have only gone as far as using volatile for the return values of functions. This is a different strategy (he gets a question about it towards the end of the talk) and it seems sufficient for the moment: even the functions which do not return a value do not appear to be optimized away (note how he had to put the std::vector inside the loop, while in what I have written so far the variables can be defined outside it).
But I have not gone through perf/assembly to verify this.
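Concretely, the volatile strategy amounts to this (class and getter picked purely for illustration):

#include <benchmark/benchmark.h>
#include <SFML/Graphics/RectangleShape.hpp>

// The volatile write forces the call's result to be stored on every
// iteration, so the getter itself cannot be discarded.
static void RectangleShapeGetSize(benchmark::State& state)
{
    sf::RectangleShape shape(sf::Vector2f(100.f, 50.f)); // can live outside the loop
    while (state.KeepRunning())
    {
        volatile float width = shape.getSize().x;
        (void)width;
    }
}
BENCHMARK(RectangleShapeGetSize);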

And it surely looks pretty.
« Last Edit: November 04, 2015, 10:24:57 pm by Kojay »

Nexus

  • SFML Team
  • Hero Member
  • Posts: 6287
  • Thor Developer
Re: Microbenchmarks
« Reply #1 on: November 05, 2015, 10:49:02 am »
That's interesting, although I'm not sure how meaningful it is to time single functions without considering the application's context. But it may be useful to compare different functions with each other.

What's the point of hindering the optimizer? Of course, often-called functions can contribute massively to the overall processing time if they're not inlined... That's why such optimizations exist in the first place; disabling them just creates a distorted view of realistic conditions. Good profilers are able to measure the time for inlined functions.

Kojay

  • Full Member
  • Posts: 104
Re: Microbenchmarks
« Reply #2 on: November 05, 2015, 11:02:10 am »
For a start, if you don't hinder the optimizer at all, it will determine that the return values of the functions are not used and so optimize them away entirely. That makes for a superfast benchmark that measures doing nothing  :D

volatile prevents that. It is still possible that the compiler will determine the result of a function at compile time and simply write that to the variable, hence Carruth's more involved tricks. Basically, you still compile with optimizations, because you want the benchmark as close as possible to the production environment, but you don't want the optimizer to get rid of the code you're trying to measure.
Inlining is not prevented, and the inlined instructions are still there to be measured.
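For what it's worth, Google Benchmark itself ships a helper built on the same inline-assembly idea, which could eventually replace the volatile approach; a sketch (not what the branch currently does):

#include <benchmark/benchmark.h>
#include <SFML/Graphics/RectangleShape.hpp>

static void RectangleShapeGetSize(benchmark::State& state)
{
    sf::RectangleShape shape(sf::Vector2f(100.f, 50.f));
    while (state.KeepRunning())
    {
        // Tells the optimizer the value is used, without a volatile store
        benchmark::DoNotOptimize(shape.getSize());
    }
}
BENCHMARK(RectangleShapeGetSize);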

These 'microbenchmarks' are a complement to profiling, not a replacement for it.
« Last Edit: November 05, 2015, 11:39:59 am by Kojay »