
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Topics - Jabberwocky

1
Graphics / sprite scaling: local-space vs. world-space
« on: April 04, 2017, 12:56:46 pm »
This seems like it should be a remarkably easy thing to solve.  But for some reason it's stumping me.

I have a sprite.  This sprite might be scaled, rotated, or otherwise transformed in any way.
Now, I want to further stretch (scale) this sprite along the x axis of the screen (i.e. stretch left and right) regardless of the sprite's orientation.  I do not want to stretch along the x axis of the sprite itself.

Simply calling this:
sf::Sprite::scale(scaleFactor, 1.f)
doesn't work, because it only stretches in the proper direction if the sprite is not rotated.  For example, if I turn the sprite 90 degrees, the stretching now happens along the screen y axis.  I always want the stretching on the screen x axis.

In other words, the sf::Sprite::scale function works as a local-space scaling factor.  I need a world-space scaling factor.

Is there something simple I can do with an sf::Transform here?  It's no problem if I have to perform a per-frame function call to accomplish this, to account for a rotating sprite.
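
To make it concrete, here's the kind of thing I'm imagining (untested sketch; sprite, window and scaleFactor are just my own names):

#include <SFML/Graphics.hpp>

void drawStretchedOnScreenX(sf::RenderWindow& window, const sf::Sprite& sprite, float scaleFactor)
{
    // Build a screen-space scale that stretches along the window's x axis,
    // centered on the sprite's position so the sprite doesn't drift.
    sf::Transform screenScale;
    screenScale.scale(scaleFactor, 1.f, sprite.getPosition().x, sprite.getPosition().y);

    // Pass it via sf::RenderStates so it is applied *after* the sprite's own
    // transform (rotation, local scale, etc.).
    sf::RenderStates states;
    states.transform = screenScale;
    window.draw(sprite, states);
}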

It seems like I'm missing something obvious.
Thanks!

2
Graphics / SFML graphics perf analysis
« on: February 20, 2017, 04:29:16 pm »
Hello SFML people,

I was doing some CPU perf testing on my game.  As expected, graphics-related stuff takes up quite a bit of the overall CPU usage.  But I did find some interesting hot spots in SFML I wanted to discuss.

Some upfront info:

1.  I am using SFML 2.3 on a new Windows 10 laptop with an NVIDIA card.

2.  My game is fairly graphically intense by typical SFML standards.  For example, I use a lot of shaders, several render textures which are updated per frame, and I draw a lot of geometry (using VertexArrays where possible).

So,
sf::RenderTarget::draw(const Vertex* vertices, ...)
is a hotspot, as you might expect.  But what I didn't expect was the following (these are all lines of code from this function):

This line takes up about 20% of the CPU work done by RenderTarget::draw:
    if (activate(true))
... which is because of a call to WglContext::makeCurrent()
Is this something which needs to be done every time draw is called?
Perhaps with the most recent context changes to SFML 2.4 this is no longer an issue?
Or perhaps this is a symptom of the fact that I update several different RenderTextures each frame?  (I make sure to batch up all the operations on a single RenderTexture before moving on to a different one.)
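
To give a concrete picture of what I mean by batching, here's roughly what my per-frame render texture work looks like (simplified; rtA/rtB and the scene drawables are placeholders):

#include <SFML/Graphics.hpp>

void updateRenderTextures(sf::RenderTexture& rtA, const sf::Drawable& sceneA,
                          sf::RenderTexture& rtB, const sf::Drawable& sceneB)
{
    // Everything destined for rtA is drawn before touching rtB,
    // so I'd expect at most one context switch per render texture.
    rtA.clear();
    rtA.draw(sceneA);   // ... plus everything else that goes into rtA
    rtA.display();

    rtB.clear();
    rtB.draw(sceneB);   // ... plus everything else that goes into rtB
    rtB.display();
}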

These lines take up over 20% of the CPU work done by RenderTarget::draw:
   applyShader(states.shader);
   applyShader(NULL);

The expensive aspects of these applyShader calls are because of:
1.  Shader::isAvailable is called every time, which takes a mutex lock.  This seems very wasteful for each draw call.

2.  GLEXT_glUseProgramObject is called first on the shader program, then on NULL for every call.  This is perhaps wasteful for a program which reuses the same shader across many draw calls.  Would it be possible to cache the last used shader, and only call GLEXT_glUseProgramObject if the shader has changed?
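
For illustration, here's the kind of caching I have in mind (just a sketch using the public API, not actual SFML internals; applyShaderCached and lastShader are made-up names):

#include <SFML/Graphics.hpp>

void applyShaderCached(const sf::Shader* shader)
{
    static const sf::Shader* lastShader = NULL;

    // Only touch the GL program binding when the shader actually changes,
    // instead of binding and unbinding around every single draw call.
    if (shader != lastShader)
    {
        sf::Shader::bind(shader);   // bind(NULL) unbinds
        lastShader = shader;
    }
}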

This line takes up most of the remaining CPU (~55%), which I would expect:
   glCheck(glDrawArrays(mode, 0, vertexCount));


Thanks for any thoughts you have to share.

3
General / Texture coordinate interpolation
« on: February 28, 2016, 08:11:01 am »
Hi all,

A question for any OpenGL gurus among you:

Does OpenGL expose the functions it uses to interpolate a texture coordinate across a primitive?

I'm looking for a function which would do something like this:

sf::Vector2f InterpolateTexCoord(const std::vector<sf::Vertex>& primitive, sf::Vector2f p)
{
   // Examine the texture coordinates of primitive, and calculate the
   // interpolated texture coordinate for point p
}
 

... in other words, exactly what OpenGL does internally when calculating the color of a fragment from a texture.  Or when OpenGL passes the interpolated uv coordinate to a fragment shader.

Ideally, the function could handle either quads or triangles.  But I could get by with just one or the other, if necessary.
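
For what it's worth, here's my own attempt at the triangle case using barycentric coordinates (no perspective correction - this is me guessing at what the rasterizer does, not anything taken from OpenGL):

#include <SFML/Graphics.hpp>
#include <vector>

sf::Vector2f InterpolateTexCoord(const std::vector<sf::Vertex>& primitive, sf::Vector2f p)
{
    // Triangle case only: primitive must contain exactly 3 vertices.
    const sf::Vector2f& a = primitive[0].position;
    const sf::Vector2f& b = primitive[1].position;
    const sf::Vector2f& c = primitive[2].position;

    sf::Vector2f v0 = b - a;
    sf::Vector2f v1 = c - a;
    sf::Vector2f v2 = p - a;

    // Solve for the barycentric coordinates (u, v, w) of p.
    float d00 = v0.x * v0.x + v0.y * v0.y;
    float d01 = v0.x * v1.x + v0.y * v1.y;
    float d11 = v1.x * v1.x + v1.y * v1.y;
    float d20 = v2.x * v0.x + v2.y * v0.y;
    float d21 = v2.x * v1.x + v2.y * v1.y;

    float denom = d00 * d11 - d01 * d01;
    float v = (d11 * d20 - d01 * d21) / denom;
    float w = (d00 * d21 - d01 * d20) / denom;
    float u = 1.f - v - w;

    // Weight each vertex's texture coordinate by its barycentric coordinate.
    return u * primitive[0].texCoords
         + v * primitive[1].texCoords
         + w * primitive[2].texCoords;
}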

Thanks!

4
General / Yet another RAII discussion
« on: February 10, 2016, 01:21:43 pm »
Quote
To be honest if you stick to modern C++ with RAII memory leak tracking will become obsolete pretty quickly.

Alternatively, memory leak tracking solves the same problem as RAII does, but does not involve the extra overhead and ugly syntax of smart pointers. 

Except that memory leak tracking alone is sufficient to find any leaks.

RAII alone is not sufficient to prevent memory leaks (although it certainly helps), since it relies on there being no programming errors, in either your code or 3rd-party library code.

So, if you're choosing between the two, memory leak tracking is a better choice.

Just another opinion.  :P

5
General discussions / SFML in Gamasutra Article
« on: October 29, 2015, 09:41:00 am »
I just ran across a mention of SFML in a Gamasutra article.

I figured I'd post a link here, in case people here are either interested in who is talking about SFML, or interested in the article itself.

Link:  Writing a Game Engine from Scratch - Part 1: Messaging

Here's the quote:
Quote
How then should our Draw Framework be designed? Simply put, like our own little API. SFML is a great example of this.

Quick note - I don't fully agree with all the author's ideas on messaging in a game engine.  But it's a good read regardless of whether you agree or disagree.

6
Hiya,

I just thought I'd share a problem I encountered incorporating a 3rd party opengl-based UI library into my game.

There were some SFML functions which, if called, would cause problems with the UI library, such as corrupt or missing graphics.  It turns out the problem was that these SFML functions create a new sf::Context.  Example:

Texture.cpp
    unsigned int checkMaximumTextureSize()
    {
        // Create a temporary context in case the user queries
        // the size before a GlResource is created, thus
        // initializing the shared context
        sf::Context context;

        GLint size;
        glCheck(glGetIntegerv(GL_MAX_TEXTURE_SIZE, &size));

        return static_cast<unsigned int>(size);
    }
 

The code comments make it obvious why this is done, and that makes sense.  However, in my case it led to quite a bit of debugging while incorporating the UI library.  Granted, I am quite clueless about low-level OpenGL code, and I'm sure an expert would have discovered the problem more quickly.  And the solution is simple, once you understand the problem - just make sure to call setActive again on the RenderWindow (or whatever target you're rendering to).
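
For anyone who runs into the same thing, here's roughly what my workaround looks like (sketch; queryMaxTextureSize is just a made-up name for illustration):

#include <SFML/Graphics.hpp>

unsigned int queryMaxTextureSize(sf::RenderWindow& window)
{
    // This call ends up in checkMaximumTextureSize above, which creates
    // a temporary sf::Context internally.
    unsigned int maxTextureSize = sf::Texture::getMaximumSize();

    // Hand the GL context back to my own render target before the
    // third-party UI library does any drawing.
    window.setActive(true);

    return maxTextureSize;
}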

Regardless, to save others the trouble in the future, perhaps it would be worth checking for an existing context first, and only creating a new one if none is already present?

If there are good reasons not to do this, no problem.  I'm just sharing a user experience with SFML here.  It's one of the few times it's caused me some trouble.

7
System / Single or separate axis for Xbox controller
« on: May 07, 2015, 06:57:53 am »
Does anyone know if SFML treats the xbox controller triggers as a single axis, or two separate axes?
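
In case it helps, here's the quick test I've been poking at it with (assumes the controller shows up as joystick 0, and that sf::Joystick::update has been called, e.g. by the window's event loop):

#include <SFML/Window.hpp>
#include <iostream>

void printTriggerAxes()
{
    float z = sf::Joystick::getAxisPosition(0, sf::Joystick::Z);
    float r = sf::Joystick::getAxisPosition(0, sf::Joystick::R);
    std::cout << "Z: " << z << "   R: " << r << std::endl;

    // If pulling either trigger only moves Z (in opposite directions),
    // the triggers are being exposed as a single shared axis on this setup.
}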

8
Graphics / GL_MAX_TEXTURE_IMAGE_UNITS?
« on: April 11, 2015, 08:57:53 pm »
Hi,

From SFML 2.2, Shader.cpp:

    GLint checkMaxTextureUnits()
    {
        GLint maxUnits = 0;

        glCheck(glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &maxUnits));

        return maxUnits;
    }
 

This function is invoked when determining how many texture parameters I can bind to a shader.  Here's my question, though: shouldn't this function be checking GL_MAX_TEXTURE_IMAGE_UNITS instead?  I believe that number is usually higher than GL_MAX_TEXTURE_COORDS_ARB, so this function is artificially limiting the number of textures I can send to my shader.
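
In other words, something along these lines (I'm guessing at the exact constant name SFML's extension headers would use here):

    GLint checkMaxTextureUnits()
    {
        GLint maxUnits = 0;

        // GL_MAX_TEXTURE_IMAGE_UNITS_ARB: the number of texture image units
        // (samplers) available to a fragment shader, which is usually higher
        // than GL_MAX_TEXTURE_COORDS_ARB.
        glCheck(glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS_ARB, &maxUnits));

        return maxUnits;
    }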

9
General discussions / GPL Discussion
« on: January 23, 2015, 09:26:55 pm »
Edit (by eXpl0it3r): Split the topic from here.

GPL is a bad license, as it infects the whole project. 
If someone uses your library, then their entire codebase must also become GPL, which pretty much nobody wants to do.

http://en.wikipedia.org/wiki/GNU_General_Public_License#Linking_and_derived_works

10
Graphics / stuttering on some hardware, not others
« on: October 02, 2014, 09:23:51 am »
Hi,

I have noticed my SFML game experiences stuttering or frame loss on some graphics cards, but not others.

Here's a video which shows the problem, although the video compression makes it appear worse than it actually is.  In the non-compressed AVI captured using FRAPS (264 MB download here), the motion is smooth except when the stuttering occurs.

I noticed the stuttering on my laptop, which has 2 graphics cards:
1.  Intel HD Graphics 4600 (integrated)
2.  NVIDIA GeForce GT 750M

Interestingly, the stuttering occurs only on the more powerful GeForce, but not on the Intel.

I also tested the game on my desktop, which has an AMD Radeon 7700 card.  The game runs perfectly smooth there.  So:
  • NVidia (Win8.1 laptop):  Significant stuttering.  Other (non-OpenGL) games run fine on this card.
  • Intel (Win8.1 laptop):  No stuttering
  • AMD (Win7 desktop):  No stuttering

The game was tested in fullscreen, with vsync on.  All drivers are up to date.  I am using an SFML 2.1 stable release which I downloaded probably more than 6 months ago.

Given the different behaviour on different cards, maybe it is an NVidia OpenGL driver problem.

Questions:
1.  Has anyone else experienced this kind of issue with SFML?
2.  Is NVidia known to have crappy OpenGL drivers?
3.  Have there been any recent changes to SFML that might address this issue?
4.  It appears to be an OpenGL-specific problem.  Is it possible it is because SFML uses old/legacy OpenGL?
5.  Any ideas on how to fix it?

My biggest concern is that this problem will affect all, or even a significant portion of, NVidia users.  I really like SFML, and have already written a lot of code around it.  But *if* the problem is OpenGL-specific, and there is no workaround, I may have to consider migrating to DirectX-based graphics middleware.  Unless maybe SFML plans to support a DirectX back-end.

Thank you very much for your time and help.

11
Graphics / Guaranteed order of drawing quads within a VertexArray?
« on: September 01, 2014, 11:29:31 am »
Hi,

Here's a simple use-case to set the stage for my question.

I'm making a top-down view game.  I'm rendering a table top, with a plate on the table, and an apple on the plate.  I want to draw this with a single VertexArray (using a texture atlas) for performance / batch count reasons.  The table, plate, and apple are separate images within the atlas (and they must be, so that different tables can have different items on top).

So I make a VertexArray with the Quads primitive type.  The VertexArray has 3 quads (12 vertices).
  • The first quad is the table texture
  • The second quad is the plate texture
  • The third quad is the apple texture
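
In code, it looks roughly like this (simplified; appendQuad and the rect parameters are just placeholders for my real atlas lookups):

#include <SFML/Graphics.hpp>

void appendQuad(sf::VertexArray& va, const sf::FloatRect& pos, const sf::FloatRect& tex)
{
    va.append(sf::Vertex(sf::Vector2f(pos.left,             pos.top),              sf::Vector2f(tex.left,             tex.top)));
    va.append(sf::Vertex(sf::Vector2f(pos.left + pos.width, pos.top),              sf::Vector2f(tex.left + tex.width, tex.top)));
    va.append(sf::Vertex(sf::Vector2f(pos.left + pos.width, pos.top + pos.height), sf::Vector2f(tex.left + tex.width, tex.top + tex.height)));
    va.append(sf::Vertex(sf::Vector2f(pos.left,             pos.top + pos.height), sf::Vector2f(tex.left,             tex.top + tex.height)));
}

void buildTableScene(sf::VertexArray& scene,
                     const sf::FloatRect& tablePos, const sf::FloatRect& tableTex,
                     const sf::FloatRect& platePos, const sf::FloatRect& plateTex,
                     const sf::FloatRect& applePos, const sf::FloatRect& appleTex)
{
    scene.setPrimitiveType(sf::Quads);
    scene.clear();

    appendQuad(scene, tablePos, tableTex);   // appended first (bottom)
    appendQuad(scene, platePos, plateTex);   // then the plate
    appendQuad(scene, applePos, appleTex);   // then the apple (should end up on top)
}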

Question:  Am I guaranteed that the apple will always be drawn on top of the plate, and the plate on top of the table?  Or is there no guarantee of the order in which the quads within a VertexArray will be drawn?

Thanks for the help.
