
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - 7krs

1
Java / Handling Individual Keys
« on: January 19, 2014, 05:59:02 am »
Edit2: figured it out:

... KeyEvent keyev = event.asKeyEvent();
switch (keyev.key)
{
    case B: ...
}

Edit: I found it here http://jsfml.org/javadoc/org/jsfml/window/Keyboard.Key.html, but I still wonder how I can use it in a switch.

I've been reading the input section and some of the Javadocs on input handling. I am looking for key-specific input handling like the kind implemented in SFML.

The examples (here: https://github.com/pdinklag/JSFML/wiki/Input) print out the name of an individual key; but how can I construct a switch-case structure to check for individual keys coming from KEY_PRESSED events?
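For comparison, this is the pattern I am after in C++ SFML (a minimal sketch, assuming SFML 2.x; the key choice and window size are arbitrary):

#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(300, 200), "Keys");
    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();

            if (event.type == sf::Event::KeyPressed)
            {
                switch (event.key.code) // sf::Keyboard::Key, analogous to JSFML's Keyboard.Key
                {
                    case sf::Keyboard::B:
                        // react to the B key
                        break;
                    default:
                        break;
                }
            }
        }
        window.clear();
        window.display();
    }
    return 0;
}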

2
Window / Re: [PROBABLY-SOLVED] I Think I Found A Memory Leak
« on: April 22, 2013, 10:23:13 pm »
Decided that a lib should adapt to me, not the other way around. So I started messing around...
I know this is an old thread, but I had to bump it because I've found what I believe to be a very good fix that may help anyone with the same problem. (We all want improvement, right? :-X)

In sf::priv::GlContext we define the static method "clean":

// Notifies SFML that a thread is ending and that its internal context needs to be deleted.
void GlContext::clean()
{
    sf::Lock lock(internalContextsMutex);

    // Remove the pointer from the set before deleting the context it points to.
    GlContext* ptr = getInternalContext();
    internalContexts.erase(ptr);
    delete ptr;

//  std::cout << "The following contexts are present:\n";
//  for (std::set<GlContext*>::iterator it = internalContexts.begin(); it != internalContexts.end(); ++it)
//  {
//      std::cout << "\t" << *it << std::endl;
//  }
}

The std::couts are for checking if there are garbage contexts left in the internalContexts set.
In GlResource we define a static member function that simply forwards to priv::GlContext::clean() (sketched below).
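A minimal sketch of that wrapper, assuming it is the releaseThreadResource() function used in the test code below (declaration in GlResource.hpp, definition in GlResource.cpp):

static void releaseThreadResource();
---
void GlResource::releaseThreadResource()
{
    // Forward to the private helper that deletes the calling thread's context.
    priv::GlContext::clean();
}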


When sf::GlResource::releaseThreadResource(); is called at the end of a thread, the resources are cleared.
Here's my test code:

#include <SFML/Graphics.hpp>
#include <thread>
#include <cstdio>
#include <X11/Xlib.h>


int main()
{
    XInitThreads();
    sf::RenderWindow wnd(sf::VideoMode(800, 600, 32), "Title", sf::Style::Close);
    while (true)
    {
        std::getchar();

        std::thread([&wnd]()
                    {
                        wnd.setActive(true);
                        wnd.clear(sf::Color::Blue);
                        wnd.display();
                        wnd.setActive(false);

                        // The new call: release this thread's internal context before the thread ends.
                        sf::GlResource::releaseThreadResource();
                    }).join();

        std::getchar();

        wnd.setActive(true);
        wnd.clear(sf::Color::Green);
        wnd.display();
        wnd.setActive(false);

    }
    return 0;
}
 

SFML snapshot: LaurentGomila-SFML-86897a8, downloaded 22nd of April 2013.
OS: Linux Mint 14 (MATE desktop)
Compiler: GNU GCC
IDE: Code::Blocks
Compiler flags: -std=c++0x -Wextra -Wall -g
Linker: -lpthread /usr/lib/libsfml-audio-d.so /usr/lib/libsfml-graphics-d.so /usr/lib/libsfml-window-d.so /usr/lib/libsfml-system-d.so

Please correct me if/where I am wrong so I can try and fix it.

3
General / Re: Can't even do the simplest thing right
« on: January 20, 2013, 02:05:59 am »
PS. BONUS: just for testing and learning, is there any simple way to make them bounce off the borders of the screen?

Fortunately, I had some code I implemented for testing my own screen refreshes:

{
    std::vector<int> x, y;
    for (int i = 0; i < 50; i++)
    {
        rs.emplace_back(new sf::RectangleShape(sf::Vector2f(10, 10)));
        rs[i]->setPosition((std::rand() % 79) * 10, (std::rand() % 59) * 10);
        rs[i]->setFillColor(sf::Color(std::rand() % 255, std::rand() % 255, std::rand() % 255));
        x.emplace_back(1);
        y.emplace_back(1);
    }

    ready_rsMover.wait();
    rs_ready.notify();
    while (keep_drawing)
    {
        for (int i = 0; i < 50; i++)
        {
            if (rs[i]->getPosition().x >= wnd->getSize().x - 10)
                x[i] = -(std::rand() % 2 + 1); // Hit the right edge: move left.
            else if (rs[i]->getPosition().x <= 0)
                x[i] = (std::rand() % 2 + 1); // Hit the left edge: move right.
            if (rs[i]->getPosition().y >= wnd->getSize().y - 10)
                y[i] = -(std::rand() % 2 + 1); // Hit the bottom edge: move up.
            else if (rs[i]->getPosition().y <= 0)
                y[i] = (std::rand() % 2 + 1); // Hit the top edge: move down.
            rs[i]->move(x[i], y[i]);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
        drawer_flare.notify();
    }
}
 

Focus on the body of the while (keep_drawing) loop; ignore drawer_flare and replace it with your own drawing mechanism (simply iterate over all the rectangle shapes, draw them, then display). A self-contained sketch of the same idea follows below.
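For reference, here is a single-threaded sketch of the same bouncing logic (assuming SFML 2.x; the window size, shape count, and speeds are arbitrary):

#include <SFML/Graphics.hpp>
#include <cstdlib>
#include <vector>

int main()
{
    sf::RenderWindow wnd(sf::VideoMode(800, 600), "Bouncing");
    wnd.setFramerateLimit(60);

    std::vector<sf::RectangleShape> rects(50, sf::RectangleShape(sf::Vector2f(10, 10)));
    std::vector<int> dx(50, 1), dy(50, 1);
    for (std::size_t i = 0; i < rects.size(); ++i)
    {
        rects[i].setPosition((std::rand() % 79) * 10, (std::rand() % 59) * 10);
        rects[i].setFillColor(sf::Color(std::rand() % 256, std::rand() % 256, std::rand() % 256));
    }

    while (wnd.isOpen())
    {
        sf::Event event;
        while (wnd.pollEvent(event))
            if (event.type == sf::Event::Closed)
                wnd.close();

        for (std::size_t i = 0; i < rects.size(); ++i)
        {
            // Reverse the direction when a shape reaches a window border.
            if (rects[i].getPosition().x >= wnd.getSize().x - 10)
                dx[i] = -(std::rand() % 2 + 1);
            else if (rects[i].getPosition().x <= 0)
                dx[i] = (std::rand() % 2 + 1);
            if (rects[i].getPosition().y >= wnd.getSize().y - 10)
                dy[i] = -(std::rand() % 2 + 1);
            else if (rects[i].getPosition().y <= 0)
                dy[i] = (std::rand() % 2 + 1);
            rects[i].move(dx[i], dy[i]);
        }

        wnd.clear();
        for (std::size_t i = 0; i < rects.size(); ++i)
            wnd.draw(rects[i]);
        wnd.display();
    }
    return 0;
}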

4
Window / Re: I Think I Found A Memory Leak
« on: January 18, 2013, 08:45:18 pm »
This is because Laurent inserted ensureGLContext(); checks everywhere and new GLContexts are created every time they are needed and no valid one exists.

I didn't know that function was called on each draw call; that explains a lot, thanks.
I figure that this solves my issue quite well, so I'm done here.

Thanks for the hints Laurent!

5
Window / Re: I Think I Found A Memory Leak
« on: January 18, 2013, 03:10:33 pm »
You delete all the internal contexts, even those which might be in use in other threads.

I realize this, but trying to delete only the thread-local context does not seem to work. What surprised me is that all the other drawers stayed valid: they drew to their render-textures and updated them successfully, and afterwards the screen_drawer rendered the sprites of those textures successfully.

Getting rid of all internal GLContexts every time a thread ends isn't a clean solution. If it weren't for context sharing (which I despise with passion) your OpenGL resources would also disappear every time a thread ends. Not only that, GLContext creation is relatively expensive and doing it more times than needed will significantly impact the performance of your application. If you are content with this solution for your own needs (where I assume memory consumption is more important than application execution speed) then this is an acceptable solution. However most users don't share this requirement and your code probably will not be used in SFML as it is not a clean solution.

I am not constantly switching states, if that is what you think; I'm aiming for different states that each last a long time, and I do not want to create many shared drawables and a single drawer thread. The GlContexts are basically created upon entering a new major state in my program, so deleting all contexts works for my situation, but I would agree (if that is what you are implying) that it would be cleaner to let every thread kill its own context.

6
SFML website / Re: Promote SFML 2 instead of SFML 1.6
« on: January 18, 2013, 10:43:49 am »
I agree with OP, but ALL standard tutorials should be uploaded first.

7
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 10:37:30 pm »
I'm really sorry, I don't have time to write and test a solution for you.

I think you should:
- create a public function named "releaseThreadContext"
- inside this function, you remove internalContext from the internalContexts array and then you delete it

But this will work only if you do nothing after executing this function (i.e. not calling setActive(false) or destroying a render-window/render-texture).

#ifdef __linux__
#include <X11/Xlib.h>
#endif


#include <SFML/Graphics.hpp>
#include <thread>

int main()
{
    #ifdef __linux__
    XInitThreads(); // Required on Linux to handle windowing from multiple threads.
    #endif


    // The drawer thread, active during the entire program's lifetime, constant drawing ////////////
    std::thread([&]()
                {
                    sf::RenderWindow wnd(sf::VideoMode(300, 200, 32), "Title");
                    wnd.setFramerateLimit(1);
                    while (true)
                    {
                        wnd.clear();
                        wnd.display();
                    }
                }
                ).detach();
    //////////////////////////////////////////////////////////////////////////////////////////////


    // Simulation of switching states ////////////////////////////////////////////////////////////
    while (true)
    {
        std::thread([&]()
                    {
                        sf::RenderTexture *rt = new sf::RenderTexture;
                        delete rt;
                        sf::GlResource::releaseThreadContext();
                        std::this_thread::sleep_for(std::chrono::seconds(1));
                    } // rt should be destroyed here, but its effect on the memory remains.
                    ).join();
    }
    //////////////////////////////////////////////////////////////////////////////////////////////
    return 0;
}
 


I gave sf::GlResource the following public static member function:
static void releaseThreadContext();
---
void GlResource::releaseThreadContext()
{
        priv::GlContext::releaseThreadContext();
}
 

I thus gave sf::priv::GlContext the following public static member function:
static void releaseThreadContext();
---
void GlContext::releaseThreadContext()
{
        sf::Lock lock(internalContextsMutex);
        internalContexts.erase(internalContext);
        delete internalContext;
}
 


However, no resources are freed in my testing scenario. I wonder what I am doing horribly wrong.
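One guess on my part (an assumption, not something I have verified): internalContext is a thread-local pointer, and the version above never resets it, so hasInternalContext() keeps reporting true and a later getInternalContext() call would hand back the dangling pointer instead of creating a fresh context. A sketch of that variant:

void GlContext::releaseThreadContext()
{
    sf::Lock lock(internalContextsMutex);
    internalContexts.erase(internalContext);
    delete internalContext;
    internalContext = NULL; // reset the thread-local pointer so the context
                            // is recreated instead of reused after deletion
}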


EDIT:
I replaced the body of GlContext::releaseThreadContext() with the following:

sf::Lock lock(internalContextsMutex);
for (std::set<GlContext*>::iterator it = internalContexts.begin(); it != internalContexts.end(); ++it)
    delete *it;
internalContexts.clear();
 
And it seems to work even when I have already-existing sprites being drawn, among other things. I'll now test it in my program to see if it works just as well there.

EDIT2: TOTALLY AWESOME! IT WORKS! Aaaah, finally, after all these hours (so happy). I basically call sf::GlResource::releaseThreadContext(); after my drawing thread finishes, so another drawing thread can start drawing. This works perfectly: no segfaults, no error messages from OpenGL, nothing. Absolutely wonderful. :) But now it needs to be tested thoroughly. I really hope this stays stable in the long run. It would also be a nice feature to add officially, to make SFML more thread-friendly... Aaah, now I can rest :D
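To sum up the usage pattern, a minimal sketch (assuming an SFML build patched with the releaseThreadContext() function above; runState is just an illustrative name):

#ifdef __linux__
#include <X11/Xlib.h> // XInitThreads(), needed when several threads touch X11
#endif

#include <SFML/Graphics.hpp>
#include <functional>
#include <thread>

// Each "major state" runs in its own thread and frees its internal
// context right before the thread exits.
void runState(sf::RenderWindow& wnd, sf::Color color)
{
    wnd.setActive(true);
    wnd.clear(color);
    wnd.display();
    wnd.setActive(false);

    sf::GlResource::releaseThreadContext(); // last call before the thread ends
}

int main()
{
    #ifdef __linux__
    XInitThreads();
    #endif

    sf::RenderWindow wnd(sf::VideoMode(800, 600, 32), "States");
    std::thread(runState, std::ref(wnd), sf::Color::Blue).join();
    std::thread(runState, std::ref(wnd), sf::Color::Green).join();
    return 0;
}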

8
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 07:40:19 pm »
(there's a global reference counter which triggers the globalCleanup() function).

Found it. I also found this:

    sf::priv::GlContext* getInternalContext()
    {
        if (!hasInternalContext())
        {
            internalContext = sf::priv::GlContext::create();
            sf::Lock lock(internalContextsMutex);
            internalContexts.insert(internalContext);
        }

        return internalContext;
    }

You seem to delete the entire set only when there are 0 contexts left. I'm busy writing a function that deletes a single context and removes its pointer from the set, taking the Texture as its only parameter. I've tried several times now; it's kind of like walking into a maze...

9
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 05:10:55 pm »
There is. Play with the private functions in GlContext.cpp and make public what you need.

I looked at it, and it's an abstract base class with only one private function and two protected functions.

protected :
    GlContext();
    static int evaluateFormat(unsigned int bitsPerPixel, const ContextSettings& settings, int colorBits, int depthBits, int stencilBits, int antialiasing);
    ContextSettings m_settings; ///< Creation settings of the context

private:
    void initialize();
 

On Linux the implementation is GlxContext, so I opened GlxContext.hpp and .cpp. I'm focusing on the destructor, since I'm using this testing code:

#ifdef DEBUG
#include <iostream>
#endif

#include <SFML/Graphics.hpp>
#include <thread>

int main()
{
    // The drawer thread, active during the entire program's lifetime, constant drawing ////////////
    std::thread([&]()
                {
                    sf::RenderWindow wnd(sf::VideoMode(300, 200, 32), "Title");
                    wnd.setFramerateLimit(1);
                    while (true)
                    {
                        wnd.clear();
                        wnd.display();
                    }
                }
                ).detach();
    //////////////////////////////////////////////////////////////////////////////////////////////


    // Simulation of switching states ////////////////////////////////////////////////////////////
    while (true)
    {
        std::thread([&]()
                    {
                        sf::RenderTexture rt;
                        std::this_thread::sleep_for(std::chrono::seconds(1));
                    } // rt should be destroyed here, but its effect on the memory remains.
                    ).join();
    }
    //////////////////////////////////////////////////////////////////////////////////////////////

    return 0;
}
 

And whenever a window is active, the RenderTexture destructor is NOT properly called! Try commenting out the window thread: it doesn't leak. With the window, it does leak! If we un-thread the state switcher, it also does not leak. So I decided to take a closer look at the destructor:

GlxContext::~GlxContext()
{
    // Destroy the context
    if (m_context)
    {
        if (glXGetCurrentContext() == m_context)
            glXMakeCurrent(m_display, None, NULL);
        glXDestroyContext(m_display, m_context);
    }
   
    // Destroy the window if we own it
    if (m_window && m_ownsWindow)
    {
        XDestroyWindow(m_display, m_window);
        XFlush(m_display);
    }

    // Close the connection with the X server
    CloseDisplay(m_display);
}


Seems alright. Then we have the base class's virtual destructor:

GlContext::~GlContext()
{
    // Deactivate the context before killing it, unless we're inside Cleanup()
    if (sharedContext)
        setActive(false);
}
 

An old foe? I showed earlier that setActive causes memory growth. Now even when the object is destroyed, memory grows. Does it have something to do with this? What about the calls in the GlxContext destructor? Is XFlush skipped when the window is not owned? Am I even on the right track here?

Anyway, I removed the contents of ~GlContext(); however, when rebuilding my testing scenario I now get an error in WindowStyle.hpp, line 40. What should I do?

10
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 09:37:57 am »
I haven't tried it in the exact form because I do not have Arch Linux, but I downloaded the latest drivers and even restarted. It does not help my issue.

Edit: Judging by internet searches about fglrx, this seems to be a common problem (mostly among gamers and programmers). Very annoying, though. I'll see if I can change my design a bit so the drawing thread just transfers control to a different location instead of being re-created.

Edit2: Is there no way to force some kind of cleanup? I'm going to try to edit some of the SFML sources...

11
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 09:10:51 am »
Most definitely, I even rebuilt them just now. I link to the -d.so files, which were produced by the project CMake generates when the build type is set to Debug.

However, I think this may be an ATI / AMD driver problem:
https://bbs.archlinux.org/viewtopic.php?pid=401468

Edit:
Installing fglrx updates (using Synaptic)...

12
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 08:46:37 am »
I did, identical call stack.

13
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 08:39:17 am »
What about the following code, which causes a segmentation fault? Without the thread, it does not segfault. Why?

#include <SFML/Graphics.hpp>
#include <thread>

int main()
{
    sf::RenderWindow* ptr;
    while (true)
    {
        ptr = new sf::RenderWindow(sf::VideoMode(800, 600, 32), "title");
        ptr->setActive(false);
        std::thread([=]()
                    {
                        ptr->setActive(true);
                        ptr->setActive(false);
                    }
                    ).join();
        ptr->setActive(true);
        ptr->close();
        std::this_thread::sleep_for(std::chrono::seconds(1));
        delete ptr;
    }
    return 0;
}

 

#0 0x7ffff1749239   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#1 0x7ffff17b040e   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#2 0x7ffff18878c2   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#3 0x7ffff17a0384   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#4 0x7ffff2444295   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#5 0x7ffff243fa73   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#6 0x7ffff2440cb6   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#7 0x7ffff244fa68   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#8 0x7ffff244ff86   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#9 0x7ffff3078e83   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#10 0x7ffff2ee8dc9   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#11 0x7ffff1743112   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#12 (   0x00007fffffffd6c0 in ??() (??:??)
#13 0x7ffff30e0891   ??() (/usr/lib/fglrx/dri/fglrx_dri.so:??)
#14 (   0x0000000000000026 in ??() (??:??)
#15 0x7ffff7de992d   ??() (/lib64/ld-linux-x86-64.so.2:??)

14
Window / Re: I Think I Found A Memory Leak
« on: January 17, 2013, 07:40:21 am »
I just downloaded the latest repository snapshot (LaurentGomila-SFML-9fac5d7) and built both Release and Debug modes using CMake. The error persists.

15
Window / Re: I Think I Found A Memory Leak
« on: January 16, 2013, 11:24:18 pm »
...

My exact version appears to be "SFML-2.0-rc-102-g044eb85" and was downloaded from this website (the github link) on the 10th of November 2012. I check memory usage with the "System Monitor" provided by Linux Mint 13 x64.


#include <SFML/Graphics.hpp>
#include <thread>

int main()
{
    sf::RenderWindow* ptr = new sf::RenderWindow(sf::VideoMode(800, 600, 32), "title");
    ptr->setActive(false);
    while (true)
    {
        std::thread([=]()
                    {
                        ptr->setActive(true);
                        ptr->setActive(false);
                    }
                    ).join();
    }
    return 0;
}
 
