The problem is the way the best visual is selected: GlContext::evaluateFormat looks for the best-fitting context, but it can produce unwanted results. My Intel chipset can do antialiasing, so that isn't the problem. Example:
Good case:
Want: 32bpp, 16 bits depth, 8 bits stencil, x4 antialias
Have: 32bpp, 16 bits depth, 8 bits stencil, x4 antialias
0 + 0 + 0 + 0 = 0, the best possible score, everything is OK
Bad case:
Want: 32bpp, 8 bits depth, 8 bits stencil, x2 antialias
Have: 32bpp, 8 bits depth, 8 bits stencil, x0 antialias
0 + 0 + 0 + 2 = 2, so this one gets chosen, but I want antialiasing
Also available: 32bpp, 16 bits depth, 8 bits stencil, x2 antialias
0 + 8 + 0 + 0 = 8, so it is rejected even though it does have antialiasing
Even if those values seem illogical to you, I appear to be in the bad case (I don't have the actual values, though).
This behavior comes from the std::abs calls in GlContext::evaluateFormat, which make the rating worse even when the available setting is actually better. That is both a good and a bad thing: it picks the closest context settings, but better isn't necessarily worse.
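For illustration, here is a minimal sketch of that abs-difference rating, assuming it just sums the absolute deviation of each attribute (the names and signature here are my own, not SFML's actual code); it reproduces the numbers above:

```cpp
#include <cstdlib>
#include <iostream>

// Hypothetical stand-in for the attributes being compared.
struct Format
{
    int bitsPerPixel;
    int depthBits;
    int stencilBits;
    int antialiasingLevel;
};

// Sketch of the abs-difference rating: lower is "better", but any
// deviation -- even an improvement -- raises the score.
int rateFormat(const Format& wanted, const Format& available)
{
    return std::abs(wanted.bitsPerPixel      - available.bitsPerPixel)
         + std::abs(wanted.depthBits         - available.depthBits)
         + std::abs(wanted.stencilBits       - available.stencilBits)
         + std::abs(wanted.antialiasingLevel - available.antialiasingLevel);
}

int main()
{
    Format wanted  {32, 8,  8, 2};
    Format noAA    {32, 8,  8, 0}; // bad case: no antialiasing
    Format deeperAA{32, 16, 8, 2}; // has antialiasing, but more depth

    std::cout << rateFormat(wanted, noAA)     << '\n'; // 2 -> gets picked
    std::cout << rateFormat(wanted, deeperAA) << '\n'; // 8 -> rejected
}
```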
As I said, the OpenGL 3/4 init code does work for 1.x/2.x contexts, and even if it fails, the fallback is still there to get the closest possible settings. So I think it would be better to try creating a context with the exact requested options first and only then fall back, regardless of the requested OpenGL version.
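Roughly what I mean, as a C++ sketch (the types and the two helpers are made-up placeholders, not SFML's real internals; only the control flow matters):

```cpp
#include <optional>

// Hypothetical stand-ins, not SFML's real types.
struct ContextSettings { int depthBits; int stencilBits; int antialiasingLevel; };
struct Context {};

// Placeholder: would try to create a context whose attributes match exactly,
// returning nothing on failure.
std::optional<Context> tryCreateExact(const ContextSettings&) { return std::nullopt; }

// Placeholder: the current behavior, picking the closest-rated format.
Context createClosest(const ContextSettings&) { return Context{}; }

// Proposed flow: always attempt the exact settings first, regardless of the
// requested OpenGL version, and only fall back when that attempt fails.
Context createContext(const ContextSettings& settings)
{
    if (auto exact = tryCreateExact(settings))
        return *exact;
    return createClosest(settings);
}

int main()
{
    ContextSettings wanted{8, 8, 2};
    Context context = createContext(wanted);
    (void)context;
}
```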