SFML community forums
Help => Graphics => Topic started by: Tank on July 17, 2009, 12:56:12 pm
-
Hi guys,
I've recently discovered a very disturbing "bug" -- if it is one at all. I'll try to describe the problem:
I'm using two threads in my application: the first polls SFML events, handles them and then calls another class (called "State") to render the scene. The second thread lives inside that State class. It's launched when the State is created, waits a second, then creates an sf::Image and loads an image into it.
While the image gets loaded, it's possible that the first thread is just rendering the scene/clearing the window/displaying the contents.
So, my first question is: Is SFML threadsafe enough so that I don't have to manage that myself?
What happens is: when you start the program often enough (depends on luck, I guess...), it freezes. Backtrace:
#0 0x00007f8d9c74e1f4 in __lll_lock_wait () from /lib/libpthread.so.0
#1 0x00007f8d9c749acb in _L_lock_312 () from /lib/libpthread.so.0
#2 0x00007f8d9c7494d1 in pthread_mutex_lock () from /lib/libpthread.so.0
#3 0x00007f8d9dbff3c9 in ?? () from /usr/lib/libGL.so.1
#4 0x00007f8d9ac71d98 in ?? () from /usr/lib/libGLcore.so.1
#5 0x00007f8d9ac2caaa in ?? () from /usr/lib/libGLcore.so.1
#6 0x00007f8d9a8f133b in ?? () from /usr/lib/libGLcore.so.1
#7 0x00007f8d9d4a215e in sf::RenderTarget::Clear () from /usr/lib/libsfml-graphics.so.1.6
#8 0x000000000040354f in Tilechat::Run ()
#9 0x0000000000402f3b in main ()
So the last call before entering the system libraries is sf::RenderTarget::Clear(). Then it dives into OpenGL, where an infinite lock happens. That *normally* suggests a bug in OpenGL. But since I don't know the SFML source code well enough, that'd also be a possibility. ;)
In the worst case, that "bug" even kills my X-Server! The log tells me this:
Backtrace:
560 0: /usr/bin/X(xf86SigHandler+0x6a) [0x47898a]
561 1: /lib/libc.so.6 [0x7f4c034470f0]
562 2: /usr/lib/xorg/modules/drivers//nvidia_drv.so(_nv001853X+0x6) [0x7f4bffbdf756]
563 3: /usr/lib/xorg/modules/drivers//nvidia_drv.so [0x7f4bffd5561c]
564 4: /usr/lib/xorg/modules/drivers//nvidia_drv.so [0x7f4bffd6c499]
565 5: /usr/bin/X(Dispatch+0x342) [0x44f7e2]
566 6: /usr/bin/X(main+0x4a5) [0x436bd5]
567 7: /lib/libc.so.6(__libc_start_main+0xe6) [0x7f4c034335a6]
568 8: /usr/bin/X [0x435e99]
nvidia_drv.so, that really sounds bad. But again, I don't know in which context that happens. It's of course possible that the bug starts much earlier and only kicks in at driver level.
So far, so good. Here's a ZIP containing source code for a minimal reproduction setup: *click* (http://pitload.org/get/1717) . Please keep in mind that the sources come from a rather full-blown game. ;) I've stripped them down as much as possible to isolate the bug.
Okay, then something else, which may have something to do with the bug described above:
In my full game, I've also got one critical bug: *sometimes* (not always!) the program crashes with a GLXBadContext error code. That ALWAYS happens when 1. using sf::Image::LoadFromFile() OR 2. doing something with the loaded image afterwards.
If that helps, I can post more source code, but since it's a big project, it would take some time to understand. So I thought I'd just point you in the right direction first, to see if you know anything that could help.
My system setup is:
Debian GNU/Linux on a 64 Bit AMD CPU
SFML version: SVN r1177 (latest)
Graphics driver: nvidia, 185.18.14 (latest)
Kernel: 2.6.26
If you need any information, please let me know. It's important for me to get that thing fixed, since it stops my programming entirely.
-
You should try the sfml2 branch; the OpenGL context handling has been totally rewritten, so this kind of problem should have disappeared.
If you're using the graphics module in a secondary thread in SFML 2, you'll have to explicitly instantiate a context and keep it alive as long as you're making graphics calls. See the documentation (header) of the sf::Context class for an example.
-
Good to know that there's a solution to this, thank you very much.
I'll switch over to SFML2 and see if the problem disappears. Is there any way to automate that context behaviour in the future?
-
Is there any way to automate that context behaviour in the future?
I've been thinking about this issue for a very (very very very) long time, and this is the simplest solution I found. Properly handling OpenGL contexts across threads is a pain in the ass ;)
But it's not a big deal:
void ThreadedFunc(void*)
{
    sf::Context context;

    // your code
}
-
Okay, I switched to SFML2 and now get some other errors related to sf::Context. In total there are 3 possible errors:
BadIDChoice
X Error of failed request: BadIDChoice (invalid resource ID chosen for this connection)
Major opcode of failed request: 1 (X_CreateWindow)
Resource id in failed request: 0x2600010
Serial number of failed request: 104
Current serial number in output stream: 104
Backtrace from gdb:
#0 0x00007f6f0398f469 in glDeleteTextures () from /usr/lib/libGL.so.1
#1 0x00007f6f030bb232 in sf::Image::DestroyTexture () from /usr/lib/libsfml-graphics.so.2.0
#2 0x00007f6f030bb4d6 in sf::Image::~Image () from /usr/lib/libsfml-graphics.so.2.0
#3 0x00000000004179ec in ~Font (this=0x7f6f0339d940) at /usr/include/SFML/Graphics/Font.hpp:55
#4 0x00007f6f019957dd in exit () from /lib/libc.so.6
#5 0x00007f6f0145033d in _XDefaultError () from /usr/lib/libX11.so.6
#6 0x00007f6f0145040c in _XError () from /usr/lib/libX11.so.6
#7 0x00007f6f01457e93 in _XReply () from /usr/lib/libX11.so.6
#8 0x00007f6f012039df in ?? () from /usr/lib/libXrandr.so.2
#9 0x00007f6f01203d88 in XRRGetScreenInfo () from /usr/lib/libXrandr.so.2
#10 0x00007f6f033b348a in sf::priv::VideoModeSupport::GetDesktopVideoMode () from /usr/lib/libsfml-window.so.2.0
#11 0x00007f6f033ae8a9 in sf::VideoMode::GetDesktopMode () from /usr/lib/libsfml-window.so.2.0
#12 0x00007f6f033b27af in sf::priv::ContextGLX::ContextGLX () from /usr/lib/libsfml-window.so.2.0
#13 0x00007f6f033ae0da in sf::priv::ContextGL::New () from /usr/lib/libsfml-window.so.2.0
#14 0x00007f6f033add19 in sf::Context::Context () from /usr/lib/libsfml-window.so.2.0
#15 0x0000000000416e2e in DebugState::ThreadFunc (arg=0x184a6d0) at client/src/debugstate.cpp:17
#16 0x00007f6f035c0cb9 in sf::priv::ThreadImpl::EntryPoint () from /usr/lib/libsfml-system.so.2.0
#17 0x00007f6f0174afaa in start_thread () from /lib/libpthread.so.0
#18 0x00007f6f01a2d29d in clone () from /lib/libc.so.6
#19 0x0000000000000000 in ?? ()
BadLength
X Error of failed request: BadLength (poly request too large or internal Xlib length error)
Major opcode of failed request: 62 (X_CopyArea)
Serial number of failed request: 105
Current serial number in output stream: 106
Backtrace from gdb:
#0 0x00007f4b0a922469 in glDeleteTextures () from /usr/lib/libGL.so.1
#1 0x00007f4b0a04e232 in sf::Image::DestroyTexture () from /usr/lib/libsfml-graphics.so.2.0
#2 0x00007f4b0a04e4d6 in sf::Image::~Image () from /usr/lib/libsfml-graphics.so.2.0
#3 0x00000000004179ec in ~Font (this=0x7f4b0a330940) at /usr/include/SFML/Graphics/Font.hpp:55
#4 0x00007f4b089287dd in exit () from /lib/libc.so.6
#5 0x00007f4b083e333d in _XDefaultError () from /usr/lib/libX11.so.6
#6 0x00007f4b083e340c in _XError () from /usr/lib/libX11.so.6
#7 0x00007f4b083ea719 in ?? () from /usr/lib/libX11.so.6
#8 0x00007f4b083eae00 in _XReply () from /usr/lib/libX11.so.6
#9 0x00007f4b083c8d29 in _XGetWindowAttributes () from /usr/lib/libX11.so.6
#10 0x00007f4b083c8ed8 in XGetWindowAttributes () from /usr/lib/libX11.so.6
#11 0x00007f4b0a344cd2 in sf::priv::ContextGLX::CreateContext () from /usr/lib/libsfml-window.so.2.0
#12 0x00007f4b0a3457d1 in sf::priv::ContextGLX::ContextGLX () from /usr/lib/libsfml-window.so.2.0
#13 0x00007f4b0a3410da in sf::priv::ContextGL::New () from /usr/lib/libsfml-window.so.2.0
#14 0x00007f4b0a340d19 in sf::Context::Context () from /usr/lib/libsfml-window.so.2.0
#15 0x0000000000416e2e in DebugState::ThreadFunc (arg=0x1c447d0) at client/src/debugstate.cpp:17
#16 0x00007f4b0a553cb9 in sf::priv::ThreadImpl::EntryPoint () from /usr/lib/libsfml-system.so.2.0
#17 0x00007f4b086ddfaa in start_thread () from /lib/libpthread.so.0
#18 0x00007f4b089c029d in clone () from /lib/libc.so.6
#19 0x0000000000000000 in ?? ()
Freeze
Explained in the 1st posting.
Works!
Aaaand sometimes, it just works. ;)
Edit: BTW, I just added the line "sf::Context context;" to the thread function, as explained in Context.hpp.
-
Could you post the complete code that reproduces these errors?
-
Sure: Testsuite (http://pitload.org/get/1718)
And btw, another error came along: BadWindow. ;)
-
It seems to run fine on Windows (no error, got black screen with white text showing FPS).
I'll now try on Linux.
-
That's good. It at least shows that it's probably a problem with OpenGL or the driver under Linux.
-
Ok, same errors on Linux.
By the way, this is a minimal example ;)
#include <iostream>
#include <SFML/Graphics.hpp>

sf::Image image;

void ThreadFunc(void*)
{
    sf::Context context;
    sf::Sleep(1.f);
    image.LoadFromFile("rock.png");
    std::cout << "OK!" << std::endl;
}

int main()
{
    sf::RenderWindow window(sf::VideoMode(1024, 768, 32), "Testsuite");
    sf::Thread thread(&ThreadFunc);
    thread.Launch();

    while (window.IsOpened())
    {
        sf::Event event;
        while (window.GetEvent(event))
        {
            if (event.Type == sf::Event::KeyPressed && event.Key.Code == sf::Key::Escape)
                window.Close();
        }

        window.Clear();
        window.Draw(sf::String("blah..."));
        window.Display();
    }

    return 0;
}
-
Yep, you're right. I just moved as few lines of code as possible from my actual project into a testsuite environment -- too lazy to write something from scratch, sorry. ;)
Do you already have an idea why that happens? Could it be the driver or the underlying OpenGL libraries?
-
Actually, nothing is working, even the samples. And it has nothing to do with threading.
The only thing I did was install the proprietary nVidia driver; I checked with the free driver and it works fine. It's surprising that nobody reported this error to me before (is everyone using the free driver??).
It's not a bug in the driver because SFML 1.5 works fine.
:evil:
-
So something that got in trunk (or SFML2) lately?
-
Actually SFML 1.5 is not working. So it may be a modification I made recently.
-
Hopefully an obvious change that can be fixed quickly. ;) Let me know when there's something to test.
-
Just wanted to ask if there's any update on this issue yet.
-
No, sorry. It doesn't seem to be related to a recent modification: I tried an old revision and the bug is still there. I'm confused because I'm sure that it was working after I updated my graphics driver (the first things I tried were the P-Buffer implementation and post-effects).
So:
- it's not the driver (although it works with the free one)
- it's not SFML 2
- it's not a recent modification
But it worked a few days ago...
-
Did you do a system upgrade, maybe? The only thing I remember changing on my machine in the last few days is some packages. Too bad I don't remember which ones they were.
-
If I remember correctly, there was a new version of the kernel (2.6.30).
-
I'm still on 2.6.26, so that can't be the cause of the failure. But I also got an updated kernel version (from 2.6.26-1 to 2.6.26-2). I really don't have any clue why this happens. :/
Edit: Your minimal example also crashed on a friend's machine running ArchLinux, kernel 2.6.30, with an *Intel graphics card*. So I think we can really rule out the driver.
Edit: I'm always getting "GLXBadContextTag" error messages, so it may be something related to context switching. The relevant source code for that part can be read here: https://dev.mobileread.com/svn/iliados/upstream/xserver/GL/glx/glxext.c
-
I've tried with another computer: same system, same kernel, same updates, same nVidia driver, same revision of SFML, and it works perfectly.
I'm so confused :shock:
-
That's indeed strange. I don't have a clue either what crashes where. And since it's an irregular crash, it's hard to find the line of code causing it.
What I'm wondering about is that nobody else has reported this bug yet. It would be nice if some more developers with Linux systems could take the minimal example, compile it and test it out.
-
I was able to isolate the bug, finally!
WORKING source:
#include <SFML/Graphics.hpp>

void ThreadFunc( void *arg ) {
    sf::Context context;
}

int main() {
    sf::RenderWindow window( sf::VideoMode( 800, 600, 32 ), "Context test" );
    sf::Thread thread( ThreadFunc, 0 );
    sf::Event event;

    thread.Launch();

    while( window.IsOpened() ) {
        //window.GetEvent( event );
        window.Clear();
        window.Display();
    }

    thread.Wait();
    return 0;
}
Compile that and you won't get any failures. Now, uncomment the line "window.GetEvent( event )" and start again: crashes.
A friend of mine pointed me to this page: http://www.linuxquestions.org/questions/programming-9/xnextevent-select-409355/ . I think it's interesting that threading with X seems to be a mess. An Xlib resource can only be used by one thread at a time, so maybe that's where the problem lies.
I really hope this can be solved. I'd do it myself, but I highly suck at X programming.
Edit:
By the way, this is a backtrace I was able to get:
#12 0x00007fc8d6edfd88 in XRRGetScreenInfo () from /usr/lib/libXrandr.so.2
#13 0x00007fc8d847248a in sf::priv::VideoModeSupport::GetDesktopVideoMode () from /usr/lib/libsfml-window.so.2.0
#14 0x00007fc8d846d8a9 in sf::VideoMode::GetDesktopMode () from /usr/lib/libsfml-window.so.2.0
#15 0x00007fc8d84717af in sf::priv::ContextGLX::ContextGLX () from /usr/lib/libsfml-window.so.2.0
#16 0x00007fc8d846d0da in sf::priv::ContextGL::New () from /usr/lib/libsfml-window.so.2.0
#17 0x00007fc8d846cd19 in sf::Context::Context () from /usr/lib/libsfml-window.so.2.0
#18 0x000000000040108f in ThreadFunc (arg=0x0) at context.cpp:4
Could it be possible that, by using GetEvent() and creating the Context, the screen you fetch in GetDesktopVideoMode() is accessed twice at the same time?
-
Well, like I said it now happens with every SFML program, even the SDK's samples where there's no threading and no use of sf::Context.
However if there's a bug related to Xlib not being thread friendly, there is a quick fix I was planning to do this week ;)
-
That'd be absolutely great. Hopefully that will fix all that disturbing behaviour. I'll keep an eye on the SVN log. ;)
-
Well, like I said it now happens with every SFML program, even the SDK's samples where there's no threading and no use of sf::Context.
I just checked that, and no crashes happen at all with any of the sample applications. I've tested the latest trunk as well as the SFML2 samples.
The only error I still get is when constructing an sf::Context while an event loop is running. If I remove either of the two, the application runs fine (as in my example code above).
-
Are there any news on this issue? ;) Don't get me wrong, I don't want to disturb you, I'm just curious about it. A "Shut up, I'm working on it" would be fine, too! ;)
-
It's ok, you can ask as often as you want ;)
I did a new dist-upgrade at home, there were new xserver-xorg updates, and... it still doesn't work at all (I mean, even the samples in 1.5). So this one seems to be related to my system only.
Regarding the fix for Xlib and multi-threading, I haven't had time to do it yet. I'll keep you informed as soon as I do it, so that you can test it and see if that solves the problem.
-
Thank you very much for the reply. Yeah, the bug happening with the samples seems to be system-related (but it's also strange).
-
I've finally solved the problem on my machine: as the proprietary nVidia driver in the official repositories is broken, I had to compile it myself; unfortunately I forgot to recompile it when I upgraded to the new kernel version, which is why everything was failing (I found it after realizing that even glxgears was failing to create an OpenGL context :D).
So now I'll be able to work seriously on your issue, and on the fix I told you about ;)
-
Thank god. It's really getting boring here without being able to continue my work. Thanks for the status report. ;)
-
I made the first fix. I still get an error from time to time (otherwise it works fine), but at least it should no longer be related to VideoMode, as in the backtrace from one of your previous posts.
That's the first step, now it's going to be more difficult if X is really unable to handle multi-threading properly, as all the windows and contexts in SFML are shared and thus use the same X display :?
I guess you can work around that by using mutexes, so that you can continue to work until I fix all this stuff.
-
Hey look at this:
http://www.sfml-dev.org/forum/viewtopic.php?t=1188
:lol:
-
I made another fix, this one seems to solve your problem at 100% (sf::Context instances now have their own connection to the X display).
There's just one fix to be done (for using multiple windows in different threads) and everything should be ok :)
-
I committed the last fix. Enjoy :)
(and pleeeeease confirm that everything works fine now)
-
Thank you very much for your work! I'll check later whether it fixes my problems, but I strongly suspect it will, since you fixed exactly what I thought the issue was.
Awesome!
-
You did a great job, my application is running flawlessly now. :) Okay, not completely: I still get a memory access violation when the program is about to close (i.e. when objects get destructed). But since I don't know if that's caused by SFML, I'll have to investigate. Too bad I'll be away for a week (holiday trip).
Again, thank you very much for the fix. You saved my semester break. ;)
-
I'm glad to hear that :)
-
Back from holiday, tested application, very happy. ;) All bugs are gone now, thanks again for your good work.
-
I don't know if it's related to the fix you just made, but I get a "GLXBadContext" error whenever I run an SFML app. The opengl sample doesn't work (the only one I tested), but glxgears does, so I believe it comes from SFML.
-
Can you give more information about your configuration? Which version / revision of SFML are you using?
-
kreeg@deian:~/files/soft/sfml/branches/sfml2/samples/bin$ ./opengl
X Error of failed request: GLXBadContext
Major opcode of failed request: 143 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 24
Current serial number in output stream: 27
kreeg@deian:~/files/soft/sfml/branches/sfml2/samples/bin$
I use revision 1205 and run a 32 bit Linux laptop with an Intel IGP.
-
Did you try a complete rebuild (make clean)?
-
Yes. Here's my uname -a.
Linux deian 2.6.26-1-686 #1 SMP Sat Jan 10 18:29:31 UTC 2009 i686 GNU/Linux
-
Did it work with SFML 1.5?
-
I have the same problem :? with an ati card (radeon xpress 1100), 2.6.28 kernel, and SFML2 r1205 (I haven't tried any other revision).
X Error of failed request: GLXBadContext
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 30
Current serial number in output stream: 33
Everything works fine with SFML 1.x.
Edit: the problem is the same with compiz on or off.
-
I'll try...
EDIT: it works here, too.
-
UP
-
UP
-
I have the same problem :? with an ati card (radeon xpress 1100), 2.6.28 kernel, and SFML2 r1205 (I haven't tried any other revision).
X Error of failed request: GLXBadContext
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 30
Current serial number in output stream: 33
Everything works fine with SFML 1.x.
Edit : the problem is the same with compiz on or off.
Exactly the same error message here with the latest revision.
EDIT: Ubuntu 9.04, ATI Radeon Xpress 1100
-
It's still working flawlessly on my side with the latest SFML2 revision and Nvidia graphics driver.
-
I get the same error with all the sample applications.
-
Is there any resource on the web that says when the next nvidia driver will be integrated into the repositories? I can't get the one from nvidia.com to work properly, and the ones from the repository cause the errors above.
-
I can reproduce the problem on my laptop with an Intel graphics chipset:
X Error of failed request: GLXBadContext
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 22
Current serial number in output stream: 26
All other OpenGL programs on my system are working as expected. This bug doesn't occur on another system with an Nvidia GTX260.
-
How is it now? Did the problem disappear with the latest revision?
-
Seems like it's working now, thanks ;)
-
Ok, great :)
There seems to be a problem with the implementation of OpenGL 3 contexts, so I deactivated them for now.
-
Does that mean no more contexts in threads?
-
No, it just means OGL 1.x contexts instead of OGL 3.x contexts for cards that support it.
OpenGL 3 contexts were implemented in SFML to allow people to play with the new features of OpenGL 3 without having to create their contexts manually.
But anyway, I'll fix that soon ;)
-
There is still a GLX error for me with the new revision (1223)... I have already removed all other SFML traces.
X Error of failed request: GLXBadContext
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 28
Current serial number in output stream: 31
-
No, it just means OGL 1.x contexts instead of OGL 3.x context for cards supporting it.
Okay, thanks for clarification.
By the way, pixel-perfect rendering works flawlessly. :)
-
Sadly I still have this problem on Arch Linux, MSI Wind 120. I feel... alone :D
PS: The one with the GLX context message.
-
You are not alone =)
Debian 2.6.26
ATI Radeon Mobility HD 2600 Series
-
I narrowed the error down to the sf::Window constructor. The error also appears with RenderWindow.
The code below triggers the error.
#include <SFML/Window.hpp>

int main()
{
    sf::Window window;
}
-
Thanks for your help.
I have a clue (two actually), but I'll have to do more tests to figure it out.
-
With valgrind, I get this:
==5604== Invalid read of size 8
==5604== at 0x45C6188: (within /usr/lib/libGL.so.1.2)
==5604== Address 0x6578058 is 3,096 bytes inside a block of size 3,102 alloc'd
==5604== at 0x4023D6E: malloc (vg_replace_malloc.c:207)
==5604== by 0x458C1BA: __glXGetClientGLExtensionString (in /usr/lib/libGL.so.1.2)
X Error of failed request: GLXBadContext
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 28
Current serial number in output stream: 31
==5604==
Maybe it helps?
-
I still have the same error on my Laptop (ATI Radeon Xpress 1100). However, on my PC (Nvidia) it works.
-
I tried the latest ATI Radeon drivers (9.9) but the error still occurs.
-
I've still got the problem on my laptop, too:
X Error of failed request: GLXBadContext
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 22
Current serial number in output stream: 26
Hardware:
00:02.1 Display controller: Intel Corporation Mobile 945GM/GMS/GME, 943/940GML Express Integrated Graphics Controller (rev 03)
-
Any news? I haven't found an answer on my side.
-
I didn't have time for that yet, but I'll try to look into it soon.
-
Don't know if this helps, but the application seems to crash at line 133 in the file src/SFML/Window/Linux/ContextGLX.cpp:
return glXMakeCurrent(myDisplay, myWindow, myContext) != 0;
The error code says "GLXBadContext".
-
That's weird; the error you previously reported was about X_GLXCreateContext. Is this a different one?
-
Nope, it's the same. "X_GLXCreateContext" is the minor code. Maybe the given context (myContext) is just invalid. But that would normally be detected earlier by checking return values (which you do in the source, of course).
-
Hey Laurent!
By now I'm using Ubuntu instead of Windows Vista. But I have problems with the new OS.
When I downloaded SFML2 from the SVN repository, I compiled it without any problems. Then I wrote a very short application and compiled it without problems, too. But when I wanted to run it, these errors appeared in my terminal:
X Error of failed request: GLXBadContext
Major opcode of failed request: 154 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 30
Current serial number in output stream: 33
I found out that it's because of my graphics card. I ran a command in the terminal and found out that this is my card:
01:00.0 VGA compatible controller: ATI Technologies Inc RV280 [Radeon 9200 PRO] (rev 01)
Tank told me that there are problems with ATI and the newest SFML2. Please, can you fix this error? It's so boring not being able to develop with SFML2! :wink:
Thank you in advance!
Paul
PS.: Sorry for my (bad) English! :lol:
-
I know, this issue is very annoying. It's hard for me to fix it as I can't reproduce the error with my nVidia graphics card. I'll probably need help from you guys ;)
-
I'll probably need help from you guys ;)
What do you want us to do, send you an ATI card? :D
-
What do you want us to do, send you an ATI card?
Testing my modifications and reporting success / failures should be enough :)
-
What do you want us to do, send you an ATI card?
Testing my modifications and reporting success / failures should be enough :)
Ok! I'm ready to help you if you want! ;)
-
The first thing you can test is an empty program linked against sfml-window
#include <SFML/Window.hpp>

int main()
{
    return 0;
}
And one with a sf::Window declared
#include <SFML/Window.hpp>

int main()
{
    sf::Window window(sf::VideoMode(800, 600), "test");
    return 0;
}
-
Something else that you can test: someone told me that the problem appeared at revision 1191; 1190 should work fine.
-
The first thing you can test is an empty program linked against sfml-window
#include <SFML/Window.hpp>
int main()
{
return 0;
}
This works fine. No problems.
And one with a sf::Window declared
#include <SFML/Window.hpp>
int main()
{
sf::Window window(sf::VideoMode(800, 600), "test");
return 0;
}
But here the error appeared again.
Something else that you can test: someone told me that the problem appeared at revision 1191; 1190 should work fine.
Sorry, but what do you mean by this?
-
Thanks for your help.
Sorry, but what do you mean by this?
I'm talking about SVN revisions. Revision 1191 is supposed to be the one where the problem appeared.
-
Revision 1190 works fine for me.
-
Revision 1190 works fine for me.
And what about 1191?
-
Revision 1190 works fine for me.
And what about 1191?
GLX error ;)
-
Great :)
I have an idea, I'll try to submit a fix tomorrow.
-
Ok, 1190 succeeded and 1191 failed for me too ;)
-
Ok, 1190 succeeded and 1191 failed for me too ;)
Same problem for me.
-
New test:
// test.cpp
// --------
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/gl.h>
#include <GL/glxext.h>
#include <iostream>
class Context
{
public :

    Context(bool sameDisplay = true, const Context* shared = NULL)
    {
        if (shared && sameDisplay)
        {
            display = shared->display;
            sharedDisplay = true;
        }
        else
        {
            display = XOpenDisplay(NULL);
            sharedDisplay = false;
        }

        window = XCreateWindow(display,
                               RootWindow(display, DefaultScreen(display)),
                               0, 0,
                               1, 1,
                               0,
                               DefaultDepth(display, DefaultScreen(display)),
                               InputOutput,
                               DefaultVisual(display, DefaultScreen(display)),
                               0, NULL);

        XVisualInfo tpl;
        tpl.depth    = DefaultDepth(display, DefaultScreen(display));
        tpl.visualid = XVisualIDFromVisual(DefaultVisual(display, DefaultScreen(display)));
        tpl.screen   = DefaultScreen(display);
        int nbVisuals = 0;
        XVisualInfo* visuals = XGetVisualInfo(display, VisualDepthMask | VisualIDMask | VisualScreenMask, &tpl, &nbVisuals);
        if (!visuals || (nbVisuals == 0))
        {
            if (visuals)
                XFree(visuals);
            std::cout << "There is no valid visual for the selected screen" << std::endl;
            return;
        }

        context = glXCreateContext(display, &visuals[0], shared ? shared->context : NULL, true);
        if (!context)
        {
            std::cout << "Failed to create an OpenGL context for this window" << std::endl;
            return;
        }

        Window root = RootWindow(display, DefaultScreen(display));
        Colormap colorMap = XCreateColormap(display, root, visuals[0].visual, AllocNone);
        XSetWindowColormap(display, window, colorMap);
        XFree(visuals);

        glXMakeCurrent(display, window, context);

        std::cout << "Succeeded to create context!" << std::endl;
    }

    ~Context()
    {
        glXMakeCurrent(display, None, NULL);
        glXDestroyContext(display, context);
        XDestroyWindow(display, window);
        if (!sharedDisplay)
            XCloseDisplay(display);
    }

private :

    Display*   display;
    Window     window;
    GLXContext context;
    bool       sharedDisplay;
};

int main()
{
    std::cout << "Creating the first context..." << std::endl;
    Context c1;

    std::cout << "Creating a shared context on the same display..." << std::endl;
    Context c2(true, &c1);

    std::cout << "Creating a shared context on a new display..." << std::endl;
    Context c3(false, &c1);

    return 0;
}
g++ test.cpp -o test -lX11 -lGL
./test
If I'm right, the first two contexts should succeed, and the last one should fail (for people having the GLX issue).
-
Creating the first context...
Succeeded to create context!
Creating a shared context on the same display...
Succeeded to create context!
Creating a shared context on a new display...
Succeeded to create context!
:?
EDIT: I'm such an idiot, I compiled it on my PC :D I redo it on my Laptop now.
EDIT2: Same output on my Laptop, no GLX error.
-
Same issue for me...
-
Thanks for this fast feedback :)
* The good news: it seems that I can share OpenGL contexts across different X connections; if not, I would have had to revert to using a single X display for every context and window, which would have brought back the old multithreading issues.
* The bad news: I was 90% sure it was that... now I'm really confused.
-
Sharing the context is a good point. I was able to successfully open a window with some changes to line 48 of src/SFML/Window/Linux/ContextGLX.cpp:
Instead of calling XOpenDisplay() for every new context, I called it once and passed the pointer to all following contexts (for debugging I used a static variable and some poor man's reference counting ;)).
Good news: the window opens (thus eliminating the GLXBadContext error) and Clear() does its job very well.
Bad news: Drawable::Draw() doesn't do anything.
I decided to stop here because you're a lot more experienced in this area. But I hope this gives a hint in the right direction.
Edit: Funny, I just read your modified code, where you seem to be doing exactly what I tried. Too late. :)
-
I was able to successfully open a window with some changes to line 48 of src/SFML/Window/Linux/ContextGLX.cpp:
Instead of calling XOpenDisplay() for every new context, I did call it once and passed the pointer to all following contexts
I'm lost. The test above clearly shows that multiple X displays are not involved in the error, but your modification demonstrates the exact opposite.
By the way, did you run the test?
-
Here is another test. Same code as above, with this main() instead:
// Globals
Context c1;
Context c2(false, &c1);

int main()
{
    std::cout << "Creating a shared context..." << std::endl;
    Context c3(false, &c1);
    return 0;
}
If it fails, try to pass "true" to the constructor of c2.
-
Hehe.
Your test code passes (all contexts get created and activated). Looks like side effects...
Edit: This is for the first test code, I'll test the new one now.
Edit2: It succeeds for all possible combinations. ;)
-
What about this one?
#include <SFML/Window.hpp>

int main()
{
    sf::Context context;
    return 0;
}
-
GLXBadContext.
-
The exact same from scratch (i.e. without SFML):
#include <X11/Xlib.h>
#include <GL/glx.h>
#include <GL/gl.h>
#include <GL/glxext.h>
#include <stdlib.h>
#include <iostream>
class Context
{
public :
Context(Context* shared = NULL)
{
myDisplay = XOpenDisplay(NULL);
int screen = DefaultScreen(myDisplay);
myWindow = XCreateWindow(myDisplay,
RootWindow(myDisplay, screen),
0, 0,
1, 1,
0,
DefaultDepth(myDisplay, screen),
InputOutput,
DefaultVisual(myDisplay, screen),
0, NULL);
CreateContext(shared);
if (shared)
glXMakeCurrent(myDisplay, myWindow, myContext);
std::cout << "Succeeded to create context!" << std::endl;
}
~Context()
{
if (glXGetCurrentContext() == myContext)
glXMakeCurrent(myDisplay, None, NULL);
glXDestroyContext(myDisplay, myContext);
XDestroyWindow(myDisplay, myWindow);
XCloseDisplay(myDisplay);
}
private :
void CreateContext(Context* shared)
{
XWindowAttributes windowAttributes;
if (XGetWindowAttributes(myDisplay, myWindow, &windowAttributes) == 0)
{
std::cerr << "Failed to get the window attributes" << std::endl;
return;
}
XVisualInfo tpl;
tpl.depth = windowAttributes.depth;
tpl.visualid = XVisualIDFromVisual(windowAttributes.visual);
tpl.screen = DefaultScreen(myDisplay);
int nbVisuals = 0;
XVisualInfo* visuals = XGetVisualInfo(myDisplay, VisualDepthMask | VisualIDMask | VisualScreenMask, &tpl, &nbVisuals);
if (!visuals || (nbVisuals == 0))
{
if (visuals)
XFree(visuals);
std::cerr << "There is no valid visual for the selected screen" << std::endl;
return;
}
int bestScore = 0xFFFF;
XVisualInfo* bestVisual = NULL;
while (!bestVisual)
{
for (int i = 0; i < nbVisuals; ++i)
{
int RGBA, doubleBuffer, red, green, blue, alpha, depth, stencil, multiSampling, samples;
glXGetConfig(myDisplay, &visuals[i], GLX_RGBA, &RGBA);
glXGetConfig(myDisplay, &visuals[i], GLX_DOUBLEBUFFER, &doubleBuffer);
glXGetConfig(myDisplay, &visuals[i], GLX_RED_SIZE, &red);
glXGetConfig(myDisplay, &visuals[i], GLX_GREEN_SIZE, &green);
glXGetConfig(myDisplay, &visuals[i], GLX_BLUE_SIZE, &blue);
glXGetConfig(myDisplay, &visuals[i], GLX_ALPHA_SIZE, &alpha);
glXGetConfig(myDisplay, &visuals[i], GLX_DEPTH_SIZE, &depth);
glXGetConfig(myDisplay, &visuals[i], GLX_STENCIL_SIZE, &stencil);
glXGetConfig(myDisplay, &visuals[i], GLX_SAMPLE_BUFFERS_ARB, &multiSampling);
glXGetConfig(myDisplay, &visuals[i], GLX_SAMPLES_ARB, &samples);
if ((RGBA == 0) || (doubleBuffer == 0))
continue;
int color = red + green + blue + alpha;
int score = abs(static_cast<int>(32 - color)) +
abs(static_cast<int>( 0 - depth)) +
abs(static_cast<int>( 0 - stencil)) +
abs(static_cast<int>( 0 - 0));
if (score < bestScore)
{
bestScore = score;
bestVisual = &visuals[i];
}
}
// Guard against spinning forever: nothing in the loop body changes
// between iterations, so bail out if no visual matched on the first pass
if (!bestVisual)
{
std::cerr << "Failed to find a suitable visual" << std::endl;
XFree(visuals);
return;
}
}
GLXContext toShare = shared ? shared->myContext : NULL;
myContext = glXCreateContext(myDisplay, bestVisual, toShare, true);
if (!myContext)
{
std::cerr << "Failed to create an OpenGL context for this window" << std::endl;
return;
}
Window root = RootWindow(myDisplay, DefaultScreen(myDisplay));
Colormap colorMap = XCreateColormap(myDisplay, root, bestVisual->visual, AllocNone);
XSetWindowColormap(myDisplay, myWindow, colorMap);
XFree(visuals);
}
Display* myDisplay;
Window myWindow;
GLXContext myContext;
};
Context c1;
Context c2(&c1);
int main()
{
Context c3(&c1);
return 0;
}
g++ test.cpp -o test -lX11 -lGL
./test
-
Ok, so the first context succeeds.
But with a new one (global or not), the GLX error occurs (specifically on the glXMakeCurrent call).
I looked at http://cours.logti.etsmtl.ca/log750/share/GL/glx/xmakecurrent.html
and found:
Because glXMakeCurrent always replaces the current rendering
context with ctx, there can be only one current context per
thread.
Maybe it can help you... ;)
-
I may have found a solution:
/*
if (shared)
{
glXMakeCurrent(myDisplay, myWindow, myContext);
}
*/
glXMakeCurrent(myDisplay, myWindow, myContext);
Can anyone test this patch?
-
Ok, so the first context succeeds.
But with a new one (global or not), the GLX error occurs
You mean c3? Or a fourth one?
the GLX error occurs (specifically on the glXMakeCurrent call)
The exact same error happens on glXMakeCurrent?
I maybe found a solution
Actually, by doing this you removed a fix ;)
A context that is shared cannot be active at the time it is shared. Thus I only activate a context by default if it is not the one that will be shared.
Did doing this make the GLX error disappear for you?
-
You mean c3? Or a fourth one?
Context1 has been created, but Context2 and Context3 (contexts shared with c1) raise a GLX error.
And yes, the error then disappeared.
-
Context1 has been created, but Context2 and Context3 (contexts shared with c1) raise a GLX error
Ok. So the problem seems to be caused by the activation (or not) of the first context. Can you try the first test (which succeeded) with the same condition on activation (if (shared) ...)?
-
test1.cpp
Context c1;
Context c2(&c1);
int main()
{
Context c3(&c1);
return 0;
}
With the glXMakeCurrent(...) shared condition, the test below confirms the error on a second context:
Succeeded to create context!
X Error of failed request: GLXBadContext
Major opcode of failed request: 144 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Serial number of failed request: 26
Current serial number in output stream: 29
Without the condition :
Succeeded to create context!
Succeeded to create context!
Creating a shared context...
Succeeded to create context!
test2.cpp
// Globals
Context c1;
Context c2(false, &c1);
int main()
{
std::cout << "Creating a shared context..." << std::endl;
Context c3(false, &c1);
return 0;
}
And for that test too, the condition made the shared context creation fail
:?
-
I've committed the fix (removing the if (shared) condition); you can test whether it's working or not.
I've also re-enabled the OpenGL 3 contexts (K-Bal will be able to test that on his PC ;)).
-
The window opens, Clear() and Display() work, but I get a segmentation fault with this minimal example:
#include <SFML/Graphics.hpp>
int main() {
sf::RenderWindow wnd( sf::VideoMode( 800, 600, 32 ), "Test" );
sf::Event event;
sf::String str( "Foo" );
while( wnd.IsOpened() ) {
while( wnd.GetEvent( event ) ) {
if( event.Type == sf::Event::Closed ) {
wnd.Close();
}
}
wnd.Clear();
wnd.Display();
}
return 0;
}
Backtrace:
#0 0xb7ea955a in std::less<unsigned int>::operator() (this=0xb7fa023c, __x=@0x10, __y=@0xbfffe034) at /usr/include/c++/4.3/bits/stl_function.h:230
#1 0xb7eabcdf in std::_Rb_tree<unsigned int, std::pair<unsigned int const, sf::Glyph>, std::_Select1st<std::pair<unsigned int const, sf::Glyph> >, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, sf::Glyph> > >::_M_insert_unique_ (this=0xb7fa023c, __position=..., __v=...)
at /usr/include/c++/4.3/bits/stl_tree.h:1183
#2 0xb7eac09d in std::map<unsigned int, sf::Glyph, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, sf::Glyph> > >::insert (
this=0xb7fa023c, __position=..., __x=...) at /usr/include/c++/4.3/bits/stl_map.h:496
#3 0xb7eac198 in std::map<unsigned int, sf::Glyph, std::less<unsigned int>, std::allocator<std::pair<unsigned int const, sf::Glyph> > >::operator[] (
this=0xb7fa023c, __k=@0x871f67c) at /usr/include/c++/4.3/bits/stl_map.h:419
#4 0xb7ea844d in sf::priv::FontLoader::CreateBitmapFont (this=0xb7fa02c8, face=0x8a5a4a8, charSize=30, charset=..., font=...) at FontLoader.cpp:228
#5 0xb7ea8bf4 in sf::priv::FontLoader::LoadFontFromMemory (this=0xb7fa02c8, data=0xb7f29100 "", sizeInBytes=367112, charSize=30, charset=..., font=...)
at FontLoader.cpp:150
#6 0xb7ea52b3 in sf::Font::LoadFromMemory(char const*, unsigned int, unsigned int, sf::Unicode::Text const&) () from /usr/local/lib/libsfml-graphics.so.2.0
#7 0xb7ea54d3 in sf::Font::GetDefaultFont() () from /usr/local/lib/libsfml-graphics.so.2.0
#8 0x0804a7ce in main ()
Reported error:
Program received signal SIGSEGV, Segmentation fault.
0xb7ea955a in std::less<unsigned int>::operator() (this=0xb7fa023c, __x=@0x10, __y=@0xbfffd4e4) at /usr/include/c++/4.3/bits/stl_function.h:230
230 { return __x < __y; }
When I comment out the line with sf::String, everything works. I wonder if this really belongs to the context stuff. "__x=@0x10" looks weird, like an invalid pointer.
Edit: Stop! A clean rebuild fixed that issue. ;) Doing some more tests now..
-
Okay, so the fix is partially working. ;)
The following code works:
#include <SFML/Graphics.hpp>
int main() {
sf::RenderWindow wnd( sf::VideoMode( 800, 600, 32 ), "Test" );
sf::Event event;
wnd.SetActive( true );
sf::String str( "Foo" );
while( wnd.IsOpened() ) {
while( wnd.GetEvent( event ) ) {
if( event.Type == sf::Event::Closed ) {
wnd.Close();
}
}
wnd.Clear();
wnd.Draw( str );
wnd.Display();
}
return 0;
}
If I don't add "wnd.SetActive( true )", the string is not visible.
When exiting the application, I get a segmentation fault with the following backtrace:
#0 0x00000000 in ?? ()
#1 0xb76c33d7 in intel_region_release () from /usr/lib/dri/i915_dri.so
#2 0xb76c6bb5 in intel_miptree_release () from /usr/lib/dri/i915_dri.so
#3 0xb76ca2be in ?? () from /usr/lib/dri/i915_dri.so
#4 0xb77881ea in _mesa_delete_texture_image () from /usr/lib/dri/i915_dri.so
#5 0xb778efbd in _mesa_delete_texture_object () from /usr/lib/dri/i915_dri.so
#6 0xb76ca348 in ?? () from /usr/lib/dri/i915_dri.so
#7 0xb778e421 in _mesa_reference_texobj () from /usr/lib/dri/i915_dri.so
#8 0xb778fb07 in _mesa_DeleteTextures () from /usr/lib/dri/i915_dri.so
#9 0xb7e8ccf0 in sf::Image::DestroyTexture (this=0xb7f9a458) at Image.cpp:802
#10 0xb7e8fc43 in ~Image (this=0xb7f9a458, __in_chrg=<value optimized out>) at Image.cpp:127
#11 0xb7e82ce2 in ~Font (this=0xb7f9a440, __in_chrg=<value optimized out>) at ../../../include/SFML/Graphics/Font.hpp:55
#12 0xb7bc6589 in exit () from /lib/i686/cmov/libc.so.6
#13 0xb7bac7ad in __libc_start_main () from /lib/i686/cmov/libc.so.6
#14 0x0804a631 in _start () at ../sysdeps/i386/elf/start.S:119
Seems like some context switching issue or similar.
-
The SVN update resolved the bug for me :)
I used Tank's sample code for testing, and the string is visible (with or without SetActive(...))
And without a segfault xD
-
Then it still seems to fail for Intel graphics cards. I'll do some more tests later.
-
The current revision fails on my PC with my nvidia card again :? Don't know about my Laptop with the ATI chip.
X Error of failed request: BadAlloc (insufficient resources for operation)
Major opcode of failed request: 128 (GLX)
Minor opcode of failed request: 34 ()
Serial number of failed request: 26
Current serial number in output stream: 29
-
The current revision fails on my PC with my nvidia card again
Ok. So the OpenGL 3 issue was not related to this one. I'll disable the code again.
-
The newest revision works on my Laptop (ATI) :D
-
Still issues with Intel GPUs. Direct OpenGL is working flawlessly, but SFML stuff isn't.
OpenGL sample:
(http://www.abload.de/thumb/opengl0jng.png) (http://www.abload.de/image.php?img=opengl0jng.png)
Pong sample:
(http://www.abload.de/thumb/pongn8vb.png) (http://www.abload.de/image.php?img=pongn8vb.png)
When exiting the samples, I also get a memory access violation. Backtrace in pong sample:
#0 0x00000000 in ?? ()
#1 0xb74233d7 in intel_region_release () from /usr/lib/dri/i915_dri.so
#2 0xb7426bb5 in intel_miptree_release () from /usr/lib/dri/i915_dri.so
#3 0xb742a2be in ?? () from /usr/lib/dri/i915_dri.so
#4 0xb74e81ea in _mesa_delete_texture_image () from /usr/lib/dri/i915_dri.so
#5 0xb74eefbd in _mesa_delete_texture_object () from /usr/lib/dri/i915_dri.so
#6 0xb742a348 in ?? () from /usr/lib/dri/i915_dri.so
#7 0xb74ee421 in _mesa_reference_texobj () from /usr/lib/dri/i915_dri.so
#8 0xb74efb07 in _mesa_DeleteTextures () from /usr/lib/dri/i915_dri.so
#9 0xb7edf677 in sf::Image::DestroyTexture() () from /usr/local/lib/libsfml-graphics.so.2.0
#10 0xb7edf9e5 in sf::Image::~Image() () from /usr/local/lib/libsfml-graphics.so.2.0
#11 0x0804a835 in main ()
-
By the way, did you notice that the default install directory has changed? It is now /usr/local instead of /usr. It can cause weird stuff if you still have an old version of SFML in /usr :)
-
Sure. I was one of those who appreciated that change. :)
And by the way, you can see that in the backtrace.
-
Ok. The same question applies for all others who still have problems ;)
-
I made some changes to the OpenGL contexts code. Does it change anything for those who still had problems?
-
I'll try to test that as soon as possible. Hopefully this eliminates that disturbing bug. :)
-
Very good work. SFML is working perfectly now on Linux with Intel graphics chipsets. Thanks!
May I ask what the problem was?
-
I have absolutely no idea :lol:
I removed my last modification, which was to disable windows' contexts by default. I also cleaned up the corresponding code, but I don't think that changed anything regarding this problem.