
### Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

### Messages - SteelGiant

##### SFML wiki / Re: Tutorial: Building SFML for Android on Windows
« on: April 25, 2018, 09:43:44 pm »
And one more thing: since I was using C++11 features, in order to get my own test to build for Android I had to add

Code:
LOCAL_CPPFLAGS := -std=c++1y
Then I had to manually edit one of the include files in the old Android NDK,
Code:
path\to\android-ndk-r12b\sources\cxx-stl\llvm-libc++\libcxx\include\tuple
and comment out two lines (291 and 357, the const versions) that were apparently incompatible with C++11.

Aside from that it built fairly painlessly.

Shaders unfortunately didn't work, and I haven't had a chance to look at why.

##### SFML wiki / Re: Tutorial: Building SFML for Android on Windows
« on: April 25, 2018, 12:51:34 am »
Having failed to get the most recent version of the SFML example (the one that uses Gradle) to build, I just went through the Windows tutorial using the old toolchain throughout.

It was absolutely miserable to get it going in the end, as many of the steps have been obsoleted and the old tools are hard to get hold of. Just in case anyone else goes through the same nightmare between now and when the Gradle version "just works", I thought I would share my notes here:

Get a version of the SFML repo from when the tutorial was written. I don't know if this is strictly necessary, but after days of fighting with the latest build I was taking no chances:
Code:
git checkout ffd9c94381ad4739ce56428f10dda3daba376f30
Downgrade the SDK tools to 25.2.5 or lower, because the build commands used in the tutorial have since been removed.
You'll probably have to delete the tools and build-tools folders in the SDK dir, because of course it won't have cleaned them out even after unchecking all the SDK tools.
Make extra sure that no newer tools are checked in any menu in Android Studio if you're using it; there are separate checkboxes for the latest versions well away from all the others. Just delete the folders in the SDK dir - Android Studio can't be trusted.
Even the URL to get the required version of the tools is hidden: http://dl-ssl.google.com/android/repository/tools_r25.2.5-windows.zip
You have to manually put the tools folder from inside the zip into the SDK dir as "tools".
Then the "android update ..." command will work.

Of course, despite having the JDK installed, it tried to look at the JRE for some reason. Fixed (replace the path as appropriate) at the console where you're trying to build with:
Code:
set "JAVA_HOME=C:\Program Files\Java\jdk1.8.0_172"
Then ant fails because the platform-tools aren't installed.
In the SDK/tools folder, run the following (with extreme care), making sure that only the box for the specific platform tools you need is checked (by default it tried to stick 12 other packages in for me, many of which would have broken everything else):
Code:
android update sdk
The only option that apparently needs to be installed is SDK Platform-Tools 27.0.1; this seems to work.

Despite having my phone in developer mode and plugged in, "ant debug install" failed, but the apk was created successfully.

Actually builds:
Code:
ant debug
Presumably replacing debug with release above would build a release version

Transfer the apk file manually

It finally works

##### Graphics / Re: Blur shader confusion (Update: full source provided)
« on: March 12, 2018, 11:06:39 pm »
After looking into it for a while I'm not sure. But I found that any number can be put in that 255's place and it'll affect the length of the tail, and that the calculation needs to be done only on the alpha. I didn't look into the math that would cause this any further though, and I don't think I will, because it's so much work for so little gain.

Interesting, thanks for taking another look at it. Good to confirm that it was only alpha that needed to be transformed.

It is especially strange that the problems are solved by adding any amount of rounding. I had tried rounding down low alpha values, and scaling all alpha values down more aggressively than this rounding would, and neither fixed it.

Hopefully I'll have a chance to look at it more in the near future.

##### Graphics / Re: Blur shader confusion (Update: full source provided)
« on: March 11, 2018, 02:25:48 pm »
I have found the discrepancy between the CPU and GPU versions. It seems that if in the GPU version I convert everything to int and back as is done in the CPU version, then they match up exactly, so this is "solved".

colour.a = float(int(colour.a * 255)) / 255.0;
colour.r = float(int(colour.r * 255)) / 255.0;
colour.g = float(int(colour.g * 255)) / 255.0;
colour.b = float(int(colour.b * 255)) / 255.0;

I'm still very confused as to how, without the rounding, the trail was maintaining alpha values of over 0.5 when they should have been decaying exponentially.

Along the way in my tests I wrote a shader that just translated the image by a couple of pixels, and that worked perfectly, showing that the RenderTextures weren't doing anything strange, so the problem had to be in the shader. I then wrote another shader that coloured the image differently depending purely on alpha values, so I could verify that regions were somehow maintaining high alpha values without decaying.

If anyone can figure out how the rounding fixes things then I would be glad to hear it. Having done the mathematical calculations by hand I can't see a way for it to work though...

##### Graphics / Re: Blur shader confusion (Update: full source provided)
« on: March 10, 2018, 01:17:46 pm »
Ok, here is the full source for a minimal example that shows both the (broken) GPU and the (working) CPU versions side by side. The full cpp code is only about 100 lines for each implementation, and I have annotated which blocks of code are for the CPU and which for the GPU. There is also a flag, cpu_enable, at the top that controls whether the CPU version is run - I have made no effort to make the CPU implementation efficient, so it is incredibly slow.

The cpp code:

#include "stdafx.h"
#include <SFML/Graphics.hpp>

int main(int argc, char *argv[])
{

unsigned int defaultSize = 160;

bool cpu_enable = true;

//Window
//GPU:
sf::RenderWindow window(sf::VideoMode(defaultSize, defaultSize), "SFML test");

//CPU:
sf::RenderWindow cpu_Window(sf::VideoMode(defaultSize, defaultSize), "CPU");

window.setFramerateLimit(60);

//Draw layer
//GPU
sf::RenderTexture drawRenderTexture;
drawRenderTexture.create(window.getSize().x, window.getSize().y);

drawRenderTexture.clear(sf::Color::Transparent);

sf::Sprite drawSprite;
drawSprite.setTexture(drawRenderTexture.getTexture());

sf::RenderTexture blurRenderTexture;
blurRenderTexture.create(drawRenderTexture.getSize().x, drawRenderTexture.getSize().y);

blurRenderTexture.clear(sf::Color::Transparent);

sf::Sprite blurSprite;
blurSprite.setTexture(blurRenderTexture.getTexture());

//Shader
sf::Shader drawBlur;
//(the shader loading line was lost in the paste; loading the fragment source below from "blur.frag" is assumed)
if (!drawBlur.loadFromFile("blur.frag", sf::Shader::Fragment)) {
return EXIT_FAILURE;
}
if (!drawBlur.isAvailable()) {
return EXIT_FAILURE;
}
drawBlur.setUniform("xResolution", float(1.0f / float(drawRenderTexture.getSize().x)));
drawBlur.setUniform("yResolution", float(1.0f / float(drawRenderTexture.getSize().y)));

//CPU
sf::Image cpu_drawImage;
cpu_drawImage.create(cpu_Window.getSize().x, cpu_Window.getSize().y, sf::Color::Transparent);

sf::Image cpu_blurImage;
cpu_blurImage.create(cpu_drawImage.getSize().x, cpu_drawImage.getSize().y, sf::Color::Transparent);

sf::Texture cpu_texture;
cpu_texture.create(cpu_blurImage.getSize().x, cpu_blurImage.getSize().y);

sf::Sprite cpu_drawSprite;
cpu_drawSprite.setTexture(cpu_texture);

//Main loop:
int iFrame = 0;

while (window.isOpen())
{

//Input:
sf::Event event;
while (window.pollEvent(event))
{
if (event.type == sf::Event::Closed) {
window.close();
}
if (event.type == sf::Event::Resized) {
window.setView(sf::View(sf::FloatRect(0.f, 0.f, window.getSize().x, window.getSize().y)));
}
}

//Logic:
//Draw circling trail
double radius = 0.7 * double(window.getSize().x) / 2.0;
double prec = 1.0;
double angle = double(iFrame % int(360 * prec)) / prec;

double degreesToRadians = std::acos(-1) / 180.0;

sf::Vertex drawVertices[1000];
int nVertices = 0;
for (int iX = 0; iX < 10; ++iX) {
for (int iY = 0; iY < 10; ++iY) {
sf::Color drawColour = sf::Color::White;

float xLoc = window.getSize().x / 2 + radius * cos(angle * degreesToRadians) + iX;
float yLoc = window.getSize().y / 2 + radius * sin(angle * degreesToRadians) + iY;

//GPU:
drawVertices[nVertices++] = sf::Vertex(sf::Vector2f(xLoc, yLoc), drawColour);

//CPU:
cpu_drawImage.setPixel((unsigned int)(xLoc), (unsigned int)(yLoc), drawColour);
}
}

//GPU:
drawRenderTexture.draw(drawVertices, nVertices, sf::PrimitiveType::Points);
drawRenderTexture.display();

blurRenderTexture.clear(sf::Color::Transparent);//Don't need to clear, as when using sf::BlendNone the original is entirely overdrawn
blurRenderTexture.draw(drawSprite, sf::RenderStates(sf::BlendNone, sf::Transform(), NULL, &drawBlur));
blurRenderTexture.display();

//Cycle texture back to image:
drawRenderTexture.clear(sf::Color::Transparent);//Don't need to clear, as when using sf::BlendNone the original is entirely overdrawn
drawRenderTexture.draw(blurSprite, sf::RenderStates(sf::BlendNone));
//TODO: Why doesn't it dissipate? Seems to get to a certain distance and minimal density, then neither get more spread out, nor get less dense...??

//CPU:
//Draw blurred version of the image to the blur texture:
if (cpu_enable) {
for (unsigned int x = 0; x < cpu_drawImage.getSize().x; ++x) {
for (unsigned int y = 0; y < cpu_drawImage.getSize().y; ++y) {

float weightCentre = 1.0;//TODO: Could it be an issue of float precision when numbers are summed together and then divided at the end?
float weightAdj = 0.5;
float weightDiag = 0.25;

float weights[3] = {weightCentre, weightAdj, weightDiag};

float alphaWeight = 0.0;
float colourWeight = 0.0;

float colour[4] = {0.0, 0.0, 0.0, 0.0};

for (int dx = -1; dx <= 1; ++dx) {
for (int dy = -1; dy <= 1; ++dy) {

int tX = int(x) + dx;
int tY = int(y) + dy;

sf::Color inColour(0, 0, 0, 0);
if (tX < 0 || tX >= int(cpu_drawImage.getSize().x) || tY < 0 || tY >= int(cpu_drawImage.getSize().y)) {
//Out of bounds defaults to transparent black (signed ints are needed here: with an unsigned tX, tX < 0 could never be true)
} else {
inColour = cpu_drawImage.getPixel(tX, tY);
}

float texColour[4] = {float(inColour.r) / float(255.0), float(inColour.g) / float(255.0), float(inColour.b) / float(255.0), float(inColour.a) / float(255.0)};

int ds = dx*dx + dy*dy;//This works nicely because numbers have magnitude 0 or 1, but won't stretch further
float weight = weights[ds];

float effectiveWeight = texColour[3] * weight;

colour[3] += effectiveWeight;
alphaWeight += weight;

colour[0] += texColour[0] * effectiveWeight;
colour[1] += texColour[1] * effectiveWeight;
colour[2] += texColour[2] * effectiveWeight;
colourWeight += effectiveWeight;

}
}

//Make certain we never divide by zero
colour[3] = alphaWeight > 0.0 ? colour[3] / alphaWeight : 0.0;
colour[0] = colourWeight > 0.0 ? colour[0] / colourWeight : colour[0];
colour[1] = colourWeight > 0.0 ? colour[1] / colourWeight : colour[1];
colour[2] = colourWeight > 0.0 ? colour[2] / colourWeight : colour[2];

sf::Color outColour(sf::Uint8(colour[0] * 255), sf::Uint8(colour[1] * 255), sf::Uint8(colour[2] * 255), sf::Uint8(colour[3] * 255));

//Apply to blur image:
cpu_blurImage.setPixel(x, y, outColour);
}
}

//copy the blurred version back to the original image:
//(could probably get double the performance if we alternated which image we drew to/from rather than doing a copy, but performance doesn't matter here: only correctness)
for (unsigned int x = 0; x < cpu_drawImage.getSize().x; ++x) {
for (unsigned int y = 0; y < cpu_drawImage.getSize().y; ++y) {
cpu_drawImage.setPixel(x, y, cpu_blurImage.getPixel(x, y));
}
}

//Push blurred image to renderable texture:
cpu_texture.update(cpu_blurImage);

//Render the cpu blurred image:
cpu_Window.clear();
cpu_Window.draw(cpu_drawSprite);
cpu_Window.display();
} else {
//Don't render the CPU blurred image:
cpu_Window.clear(sf::Color::Red);
cpu_Window.display();
}

//Rendering:
window.clear();
window.draw(blurSprite);
window.display();

if (iFrame % 60 == 0) {
printf("%d\n", iFrame);
}

++iFrame;
}

return EXIT_SUCCESS;
}

The fragment shader code:

uniform sampler2D texture;
uniform float xResolution;
uniform float yResolution;

void main()
{
vec2 offx = vec2(xResolution, 0.0);
vec2 offy = vec2(0.0, yResolution);

float weightCentre = 1.0;//TODO: Could it be an issue of float precision when numbers are summed together and then divided at the end?
float weightAdj    = 0.5;
float weightDiag   = 0.25;

float weights[3];//the GLSL version SFML uses by default doesn't allow C-style brace initialisers for arrays
weights[0] = weightCentre;
weights[1] = weightAdj;
weights[2] = weightDiag;

//Alpha weighted colour blending: (maybe there is some built in way to achieve what I'm doing here trivially)
float alphaWeight = 0.0;
float colourWeight = 0.0;

vec4 colour = vec4(0.0, 0.0, 0.0, 0.0);

for(int dx = -1; dx <= 1; ++dx) {
for(int dy = -1; dy <= 1; ++dy) {

vec4 texColour = texture2D(texture, gl_TexCoord[0].xy + float(dx) * offx + float(dy) * offy);

int ds = dx*dx + dy*dy;//This works nicely because numbers have magnitude 0 or 1, but won't stretch further
float weight = weights[ds];

float effectiveWeight = texColour.a * weight;

colour.a += effectiveWeight;
alphaWeight += weight;

colour.rgb += texColour.rgb * effectiveWeight;
colourWeight += effectiveWeight;

}
}

//Make certain we never divide by zero
colour.a = alphaWeight > 0.0 ? colour.a / alphaWeight : 0.0;
colour.rgb = colourWeight > 0.0 ? colour.rgb / colourWeight : colour.rgb;

gl_FragColor = colour;
}

##### Graphics / Re: Blur shader confusion
« on: March 10, 2018, 12:29:57 am »
Is this really what you expect on the CPU?
Maybe I understood you wrong, but when you just draw a square or any other shape and then apply the blur shader, the original shape (with maximum "intensity") should always be in the center, fading out around it. To me it looks like your CPU version has the same problem as the GPU version, just less strongly, maybe because it runs much slower.

Looking at the image, you definitely forgot to clear something somewhere. Can you provide a complete minimal example? Probably you will find the error yourself while preparing it. Does the tail fade out after some time, or does it stay forever?

Alternatively, start from the SFML example again. Reading your initial post, I think you simply want to apply the blur shader multiple times to get it to fade out more than with only one pass?

I'm actually rendering the CPU and GPU versions in the same program, doing all the calculations in lockstep and rendering to two separate windows. That screenshot is how the windows look after about 100 frames.

The slug trail in the GPU version essentially never goes away.

I'm not just rendering a blurred image once: I render an image with a blur, then render the resulting blurred image with a blur, and so on. If you just had an initial image, it would dissolve into nothingness (or into uniform intensity, depending on whether your boundary is lossy or not). If you draw a small shape moving, it should leave a diffuse trail that dissipates over time. I have implemented this in another language in the past.

I'll post a minimal example tomorrow. Unfortunately my test code is already quite minimal; I only have a 40-line shader and about 100 lines of main loop.

##### Graphics / Re: Blur shader confusion
« on: March 09, 2018, 10:02:04 pm »
Just blurring something doesn't mean it must become more transparent or spread out infinitely. It really depends on the shader used to do this, and I'm not sure what the linked shader does exactly (I don't know GLSL very well).

If something doesn't work the way you assumed, then more often than not the assumption was likely wrong. For images to dissipate over time, you'll have to reduce the alpha value over time. And for something to spread out, you'd have to move pixels out further and further.

Many blur filters will simply change the pixels around a certain pixel to a mixed value, but that way it won't really spread out, and the center will never really vanish as pixels aren't weakened enough.

I totally agree with you, somehow, somewhere one of my assumptions is wrong.

I tried something else just now: I completely reimplemented the effect I'm trying to do without using a shader and doing all the work manually on the CPU drawing between sf::Images.

As you can see in the attached image, the CPU/Image version does exactly what I would expect, while the shader/RenderTexture version does not. Mathematically and logically everything works as it should; some problem is coming in somewhere in my implementation via RenderTextures and shaders.

So somehow there is something that is managing to store a ghost image somewhere.

I have a window and two RenderTextures: these are all cleared every frame. Then I have two sprites (one for each RenderTexture) and a shader. As far as I know there is no information stored in the shader or the sprites directly, so nothing to clear there. What could possibly be causing this?

##### Graphics / Blur shader confusion (Update: full source provided)
« on: March 09, 2018, 01:57:29 am »
I'm trying to repeatedly blur an image to get it to dissolve. I'm doing this by drawing into one sf::RenderTexture and then drawing from that into another sf::RenderTexture with a blur fragment shader, then drawing that image back to the first RenderTexture (with no shader) ready to draw again.

//drawRenderTexture starts off clear at the start of the program

drawRenderTexture.draw(drawVertices, nVertices, sf::PrimitiveType::Points);
drawRenderTexture.display();

blurRenderTexture.clear(sf::Color::Transparent);//Don't need to clear, as when using sf::BlendNone the original is entirely overdrawn
blurRenderTexture.draw(drawSprite, sf::RenderStates(sf::BlendNone, sf::Transform(), NULL, &SFMLBlur));
blurRenderTexture.display();

//Cycle texture back to image:
drawRenderTexture.clear(sf::Color::Transparent);//Don't need to clear, as when using sf::BlendNone the original is entirely overdrawn
drawRenderTexture.draw(blurSprite, sf::RenderStates(sf::BlendNone));
//TODO: Why doesn't it dissipate? Seems to get to a certain distance and minimal density, then neither get more spread out, nor get less dense...??

//Rendering:
window.clear();

window.draw(blurSprite);

window.display();

The blur fragment shader here is the one from the SFML shader examples https://github.com/SFML/SFML/blob/master/examples/shader/resources/blur.frag

I have also tried this with a shader I wrote myself, which also behaves the same way.

What I had expected was that if I drew a white square and then repeatedly applied a blur shader every frame, the square would spread out and become more transparent until it disappeared.

What I'm actually seeing is that the shape spreads out a bit and then stops spreading out, which really confuses me. Similarly if I draw something every frame moving around the screen, the trail the object leaves only spreads out a finite amount, then stops spreading. Even quite high alpha bits of the trail don't seem to be getting blurred.

This seems very strange, as I would expect either for the blur not to work at all, so that no trail is created, or for it to dissipate the image successfully. This strange in-between, where it manages to blur but only for a while, seems insane behaviour.

If in the shader I deliberately set alpha values of less than about 0.02 to zero, then the trail disappears, although it doesn't look great as it doesn't fade out at the edges. But choosing a lower threshold doesn't seem to eliminate any pixels at all (if there were a threshold, I would expect it to be at about 1.0/256.0 or something).

It feels like I have forgotten to clear some buffer somewhere, yet as far as I can see I am clearing them all; and if that were the case, then chopping low alpha values in the shader would not eliminate things...

Attached png shows two successive screenshots several hundred frames apart. Note how the trail is somehow the same size and intact hundreds of frames later.

Anyone have any ideas?

##### Graphics / Re: Continually drawing to and blurring an image, with good performance
« on: March 02, 2018, 05:54:55 pm »
Ah, I thought you applied the blurriness on the CPU via image. So what do you do with the image exactly? If you just copy it from GPU to CPU and to GPU again, you're wasting massive performance for nothing. Just keep it as texture/render texture. If you don't want things to be removed, then don't call clear on the render texture.

Yes, my apologies, for some reason I didn't see that I could do the kind of drawing I wanted directly to a RenderTexture... I see now that in the docs there is

sf::RenderTarget::draw(const Vertex*        vertices,
                       std::size_t          vertexCount,
                       PrimitiveType        type,
                       const RenderStates&  states = RenderStates::Default)

this is probably exactly what I need for this purpose. I'll give it a go this weekend and see if it works out.

EDIT: Yes, that does what I wanted and works nicely.

##### Graphics / Re: Continually drawing to and blurring an image, with good performance
« on: March 02, 2018, 01:10:18 pm »
Why do you need to copy the render texture back to the image? Can you not just keep the render texture and draw to it again next frame?

rough code:

sf::RenderTexture rendertexture;
sf::Sprite sprite(rendertexture.getTexture());
while(window.isOpen())
{
rendertexture.clear();
rendertexture.draw(<blurred image>);
rendertexture.display();

window.clear();
window.draw(sprite);
window.display();
}

Thanks for the reply, Jonny.

I want to copy the RenderTexture back to the image so I can draw more things there each frame, alongside the (now blurry) things that were already there. The idea with this effect is that the constant blurring dissolves things that are drawn over time, and as they are dissolving things can merge together.

I'm currently drawing to the image using setPixel() to draw arbitrary pixel information. Is there a way to do something similar directly inside a RenderTexture?

eXpl0it3r, sorry if I didn't make it clear, but I am already using a shader to achieve the blur effect, and this step does indeed have good performance.

Do you mean there is some way to combine the new pixel drawing information into a shader?

##### Graphics / Continually drawing to and blurring an image, with good performance
« on: March 02, 2018, 01:12:29 am »
The general idea is to achieve an effect by drawing to an image and then blurring it every frame.

At the moment I have the effect working, but I'm not getting the performance I had hoped for.

The way it works at the moment is to have a cycle of 5 objects that the image goes through:

sf::Image drawImage;
drawImage.create(window.getSize().x, window.getSize().y, sf::Color::Transparent);

sf::Texture drawBufferTexture;
drawBufferTexture.create(drawImage.getSize().x, drawImage.getSize().y);

sf::Sprite drawBufferSprite;
drawBufferSprite.setTexture(drawBufferTexture);

sf::RenderTexture drawRenderTexture;
drawRenderTexture.create(drawImage.getSize().x, drawImage.getSize().y);

sf::Sprite drawSprite;
drawSprite.setTexture(drawRenderTexture.getTexture());

The drawing is done into drawImage; the image is then rendered into a buffered RenderTexture with a shader to do the blurring, and the blurred result is fed back from the texture into the original image:

drawBufferTexture.update(drawImage);

drawRenderTexture.clear(sf::Color::Transparent);
drawRenderTexture.draw(drawBufferSprite, &drawBlur);
drawRenderTexture.display();

//Cycle texture back to image:
drawImage = drawRenderTexture.getTexture().copyToImage();

and finally the sprite that now contains the blurred image is rendered:

window.draw(drawSprite);

The slow bit seems to be, as expected, the cycling from the RenderTexture back into the image. I need to cycle the blurred image back so that more can be drawn to it next frame, so this needs to be achieved somehow.

Is there something I'm missing here? Is there a much faster way to do this without the copyToImage() step? Is there some way to do this all within one RenderTexture using a shader?

Typically, I would only expect to be drawing into (significantly) less than 10 percent of the image each frame if that helps.

I'm currently managing to get about 70 fps on a 1080px x 1080px image out of this using a GTX 980, and I had hoped to be able to push it to higher resolutions at higher fps. Without the copyToImage step I can easily maintain 144 fps at 2160x2160.

I managed to do this at 800x800 resolution @60fps in Flash on my laptop back in ~2012, so I know that in principle this effect must be doable efficiently with full access to the GPU; I'm just not sure how to achieve it.
