Hi,
So I'm trying to do a fair number of renders into various buffers to get my fog-of-war working, along the lines of:
+ load texture BG @ 7680,4320, RGBA
+ load sprite V with vision-fog @ 512,512, RGBA. See attachment for what this looks like.
+ create 'screen' renderTexture S @ 1920,1080, RGBA
+ create 'copy' renderTexture C @ 512,512, RGBA
// Copy the current background map to an offscreen render-texture
+ create sprite from BG call it sBG
+ set texture rect of sBG to be (x,y,1920,1080)
* draw sBG to S
// Render a mask (V) into the alpha-component of the 'screen' texture, and also render a capped-to-0x80
// version of the mask into the actual map of the area. That way we permanently know where we've been
// with the "shaded area" being explored-but-not-visible, and the "bright area" being currently-visible.
- [n times]
+ copy area of S around player [pc.x, pc.y, 512, 512] to C
- render V into S @ (pc.x,pc.y,512,512)
  - use shader: resulting alpha = MAX(V.a, C.a), resulting RGB = C.rgb
- render V into BG @ (pc.x,pc.y,512,512)
  - use shader: resulting alpha = MAX(MIN(V.a, 0x80), C.a), resulting RGB = C.rgb (cap the mask at 0x80 first, so unexplored areas stay at alpha 0)
+ finally render S to screen.
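In case it helps anyone sanity-check the shader logic, the per-pixel maths I'm after is roughly this — plain C++ rather than GLSL, and the Pixel struct is just for illustration; note the map pass caps the mask with MIN before the MAX, since a straight MAX(V.a, C.a, 0x80) would floor even unexplored areas at 0x80:

```cpp
#include <algorithm>
#include <cstdint>

// Illustrative pixel type only; the real thing would be GLSL vec4s.
struct Pixel { std::uint8_t r, g, b, a; };

// Pass 1: render V into S — keep the scene's colour, take the brighter alpha.
Pixel blendScreen(Pixel v, Pixel c) {
    return { c.r, c.g, c.b, std::max(v.a, c.a) };
}

// Pass 2: render V into BG — cap the mask at 0x80 so explored-but-not-visible
// areas stay half-shaded, and never lose alpha already accumulated.
Pixel blendMap(Pixel v, Pixel c) {
    std::uint8_t capped = std::min<std::uint8_t>(v.a, 0x80);
    return { c.r, c.g, c.b, std::max(capped, c.a) };
}
```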
The '-' bits I still have to do, the '+' bits I've done, and the '*' bit is drawing inverted! My fingers are twitching with the desire to just scrap this and do it in OpenCL instead, but it seems to me that OpenGL ought to be faster at drawing images, so I'm still here... Any clues as to why it's inverted, anyone? As far as I can tell, I'm not setting any matrix on the render-texture or its sprite...
Should I be calling move(0,1)/scale(1,-1) on the sprite? Are those co-ordinates in normalised space, or measured in pixels?
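To make that question concrete: scaling by (1,-1) about the origin sends row y to -y, so it takes a translation of the full height h to bring the image back on-screen, at h - y. If the co-ordinates are pixels rather than normalised, I'd presumably need move(0,h) rather than move(0,1). Plain arithmetic sketch, nothing framework-specific:

```cpp
// A vertical flip about the origin (scale (1,-1)) followed by a translation
// of the full height h maps row y to h - y — i.e. top edge to bottom edge.
// So if move()/scale() work in pixels, the call would be move(0, h), not move(0, 1).
double flipY(double y, double h) {
    return h - y;
}
```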
Cheers