
Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - Ray1184

Pages: [1]
1
Graphics / Re: [SOLVED] VertexBuffer caching for chunks rendering
« on: April 08, 2024, 10:12:07 am »
Mine was a bit different, since my world data for each world chunk had several forms. The main data was a collection of 15 million road connections between GPS points. I loaded that for all of Australia at once (since I needed to do A* path finding on it). It was less than 200MB of data for the whole country.
If I stored that area as small tiles I'd probably stream it in. :)

Ah, OK, I understand. In my case tiles are 16px (with 4/5x upscaling to recreate an old-school pixelated FX), so big maps could reach several GB. In this case I'll have to think about some sort of chunk streaming, or something else such as map splitting.

2
Graphics / Re: VertexBuffer caching for chunks rendering
« on: April 07, 2024, 08:27:34 pm »
OK, I solved it; the problem was related to a wrong chunk search. With the latest implementation I don't see any performance problems, even with 16k maps. Thanks to everyone!

3
Graphics / Re: VertexBuffer caching for chunks rendering
« on: April 07, 2024, 01:40:14 pm »
I made a project a while ago (using SFML) that needed to display all of Australia (in enough detail to watch a car driving).
I used a pool of 9 render texture tiles, each as big as the screen. As the view centre crossed into a new tile, I reused 3 of the tiles to generate the next 3 tiles. If you don't cross the boundary, it's just rendering at most 4 sprites to do the ground.
For smaller tiles without a larger landblock, you could progressively render tiles into the render texture as you approach the edge, so it's not rendering all of them in one hit (frame time spikes).

So, as I understand it, you render the screen tile and the 8 nearest quads; if you move left, for example, you discard the 3 tiles on the right and prepare 3 more tiles on the left.
But in your case, rendering issues aside, I assume that you don't load the entire map, but load it in separate steps (a sort of "streaming"?)

I also found out that my performance problem was not a rendering issue, but the structure of the chunks. Currently I check for each chunk whether it fits into the screen, but with millions of chunks that performs very poorly, so I will opt for a better structure such as a quadtree.
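The quadtree idea, roughly: store the chunk centers, and let a view query descend only into nodes that overlap the view rectangle, so the cost no longer scales with the total chunk count. A self-contained sketch (all names are hypothetical, not the actual engine code):

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <vector>

// Axis-aligned rectangle in world units (hypothetical helper type).
struct Rect
{
    float x, y, w, h;
    bool Intersects(const Rect& o) const
    {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
    bool Contains(float px, float py) const
    {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

// Minimal point quadtree over chunk centers: a query only visits nodes
// overlapping the view, instead of testing every chunk every frame.
class Quadtree
{
public:
    explicit Quadtree(const Rect& bounds, std::size_t capacity = 8)
        : mBounds(bounds), mCapacity(capacity) {}

    void Insert(float px, float py, int chunkId)
    {
        if (!mBounds.Contains(px, py)) return;
        if (!mDivided && mPoints.size() < mCapacity)
        {
            mPoints.push_back({px, py, chunkId});
            return;
        }
        if (!mDivided) Subdivide();
        for (auto& child : mChildren) child->Insert(px, py, chunkId);
    }

    // Collect the ids of all chunks whose center lies inside the view.
    void Query(const Rect& view, std::vector<int>& out) const
    {
        if (!mBounds.Intersects(view)) return;
        for (const auto& p : mPoints)
            if (view.Contains(p.x, p.y)) out.push_back(p.id);
        if (mDivided)
            for (const auto& child : mChildren) child->Query(view, out);
    }

private:
    struct Point { float x, y; int id; };

    void Subdivide()
    {
        const float hw = mBounds.w / 2.f, hh = mBounds.h / 2.f;
        mChildren[0] = std::make_unique<Quadtree>(Rect{mBounds.x,      mBounds.y,      hw, hh}, mCapacity);
        mChildren[1] = std::make_unique<Quadtree>(Rect{mBounds.x + hw, mBounds.y,      hw, hh}, mCapacity);
        mChildren[2] = std::make_unique<Quadtree>(Rect{mBounds.x,      mBounds.y + hh, hw, hh}, mCapacity);
        mChildren[3] = std::make_unique<Quadtree>(Rect{mBounds.x + hw, mBounds.y + hh, hw, hh}, mCapacity);
        mDivided = true;
        // Push the points already stored here down into the children.
        for (const auto& p : mPoints)
            for (auto& child : mChildren) child->Insert(p.x, p.y, p.id);
        mPoints.clear();
    }

    Rect mBounds;
    std::size_t mCapacity;
    bool mDivided = false;
    std::vector<Point> mPoints;
    std::unique_ptr<Quadtree> mChildren[4];
};
```

Note that for a perfectly uniform chunk grid the visible range can also be computed arithmetically from the view rectangle; the quadtree pays off when chunks are sparse or irregular.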

4
Graphics / Re: VertexBuffer caching for chunks rendering
« on: April 06, 2024, 06:16:16 pm »
Hi, thanks for your reply.
I generalized by writing "engine". I'm actually writing my own engine for my own game, not a general purpose engine.
In my game I will have some standard maps, where performance is not a problem, but I planned to add some procedurally generated open-world locations that could reach up to 16x16k tiles (16 million chunks).

Maybe my question was badly phrased. Sure, there isn't a magic number for VB caching; what I actually wanted to understand is whether VB caching could be a good optimization strategy and, if it is, whether there is some sort of "safety threshold" beyond which it would be better not to go.
In this case every VB will hold a chunk of 256 tiles, so 512 triangles.
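For reference, the per-chunk numbers check out like this (a sketch; 20 bytes per vertex is an assumption based on sf::Vertex's SFML 2 layout: two float positions, a 4-byte RGBA color, two float texCoords):

```cpp
#include <cstddef>

// Back-of-the-envelope sizing for one 16x16-tile chunk.
constexpr std::size_t kTilesPerChunk    = 16 * 16;                             // 256 tiles
constexpr std::size_t kTrianglesPerTile = 2;                                   // quad = 2 triangles
constexpr std::size_t kTriangles        = kTilesPerChunk * kTrianglesPerTile;  // 512 triangles
constexpr std::size_t kVertices         = kTriangles * 3;                      // 1536 vertices
constexpr std::size_t kBytesPerVertex   = 20;       // assumed sf::Vertex size
constexpr std::size_t kBytesPerChunk    = kVertices * kBytesPerVertex;         // ~30 KB
```

So each cached VB is only on the order of tens of kilobytes of vertex data.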

You're right about the dynamic_cast inside the rendering loop; in fact, I'm already thinking about an alternative solution (so far it doesn't seem to be the bottleneck, but I prefer to avoid it).

5
Graphics / [SOLVED] VertexBuffer caching for chunks rendering
« on: April 05, 2024, 05:03:36 pm »
Hi everyone,
I have some questions about VertexBuffer usage for my engine implementation.

My engine is basically a standard top-down tile-based game engine, where maps can be very large.
In order to keep performance good, in the init phase I split the map into N chunks of 16x16 tiles, and I render only the visible chunks (this way I can keep high FPS even on low-end devices).
To further improve performance, I use VertexBuffers for static chunks, uploading vertices only once.
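With a uniform grid of 16x16-tile chunks (16 px tiles, so 256 px per chunk side), the visible chunk range can be computed directly from the view rectangle instead of testing each chunk. A minimal sketch with hypothetical names:

```cpp
#include <algorithm>
#include <cassert>

// Inclusive range of chunk grid indices intersecting the view.
struct ChunkRange { int x0, y0, x1, y1; };

// View rectangle in pixels -> visible chunk indices, clamped to the map.
// chunkPixels = 16 tiles * 16 px = 256 px per chunk (an assumption here).
ChunkRange VisibleChunks(float viewLeft, float viewTop,
                         float viewWidth, float viewHeight,
                         int mapChunksX, int mapChunksY,
                         float chunkPixels = 256.f)
{
    ChunkRange r;
    r.x0 = std::max(0, static_cast<int>(viewLeft / chunkPixels));
    r.y0 = std::max(0, static_cast<int>(viewTop / chunkPixels));
    r.x1 = std::min(mapChunksX - 1,
                    static_cast<int>((viewLeft + viewWidth) / chunkPixels));
    r.y1 = std::min(mapChunksY - 1,
                    static_cast<int>((viewTop + viewHeight) / chunkPixels));
    return r;
}
```

The render loop then only touches chunks (x, y) with x0 <= x <= x1 and y0 <= y <= y1, regardless of how big the whole map is.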

I created a VertexBufferProvider class, which is actually a simple VertexBuffer cache, working in this way:

void hpms::TilesPoolRenderingWorkflow::Render(hpms::Window* window, hpms::Drawable* item)
{
    sf::VertexBuffer* vertexBuffer = hpms::VertexBufferProvider::GetVertexBuffer(item->GetId(), sf::PrimitiveType::Triangles, 0);

    if (item->IsUpdateVertices() || item->IsForceAll())
    {
        auto* chunk = dynamic_cast<hpms::TilesPool*>(item);
        // other code...
    }
}

If the VertexBuffer does not exist inside the cache, the VertexBufferProvider creates it and provides it; otherwise it just provides the cached one.
Currently the VertexBuffer cache doesn't have a size limit, so if I have 2000 chunks and the player walks all over the map, 2000 VBs will be created.
My idea was to use a FIFO queue, just to keep a reasonable amount of VBs. In this case, what would be a reasonable VB cache size, considering that each chunk contains 1536 vertices (16x16 tiles x 6 vertices each)?
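For scale: at roughly 20 bytes per sf::Vertex, 1536 vertices are only about 30 KB per chunk, so even a few thousand cached VBs are a modest amount of GPU memory. A minimal sketch of the FIFO idea (all names hypothetical; FakeBuffer stands in for sf::VertexBuffer so the sketch is self-contained):

```cpp
#include <cassert>
#include <cstddef>
#include <deque>
#include <string>
#include <unordered_map>

// Stand-in for sf::VertexBuffer; a real version would create/destroy the
// GPU buffer in the same places this struct is created/erased.
struct FakeBuffer { std::size_t vertexCount = 0; };

// Size-bounded cache keyed by chunk id, evicting in insertion order (FIFO).
class VertexBufferCache
{
public:
    explicit VertexBufferCache(std::size_t maxEntries) : mMax(maxEntries) {}

    // Return the cached buffer for `id`, creating it (and evicting the
    // oldest entry when full) if it is not present yet.
    FakeBuffer& Get(const std::string& id, std::size_t vertexCount)
    {
        auto it = mBuffers.find(id);
        if (it != mBuffers.end()) return it->second;   // cache hit

        if (mBuffers.size() >= mMax)                   // FIFO eviction
        {
            mBuffers.erase(mOrder.front());
            mOrder.pop_front();
        }
        mOrder.push_back(id);
        return mBuffers.emplace(id, FakeBuffer{vertexCount}).first->second;
    }

    std::size_t Size() const { return mBuffers.size(); }
    bool Contains(const std::string& id) const { return mBuffers.count(id) != 0; }

private:
    std::size_t mMax;
    std::deque<std::string> mOrder;                    // oldest id at the front
    std::unordered_map<std::string, FakeBuffer> mBuffers;
};
```

An LRU variant would additionally move an entry to the back of mOrder on every hit, which keeps frequently revisited chunks alive longer than plain FIFO.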

Thanks a lot

Ray

6
General / Re: VertexArray vs Instancing questions for 2D isometric project
« on: February 22, 2024, 08:59:00 pm »
Thank you so much for your answer; in this case I will go with the first solution, which is painless for sure.
I was also thinking of working at a lower level by implementing a sort of depth buffer in a shader, but at this point perhaps I'm complicating things unnecessarily, considering that my scene won't be that complicated.

7
General / VertexArray vs Instancing questions for 2D isometric project
Hello everyone,
I was not sure whether to post a new topic here or in the dedicated Graphics section, but this is more a design issue than a technical one.
I'm developing my own engine for a 2D isometric game.
In the game I have 2 layers for rendering:

- the first layer is static (floor, walls, etc...); there's no interaction and no depth sorting needed, so it will be a simple VertexArray holding all tile positions and texCoords
- the second layer is interactive and will contain movable objects and all types of objects that can change drawing order based on their position.

My doubts concern the second layer. My idea was to use sprites instead of a VertexArray, because then I can create more instances with the same sprite and different transformations. But in my case this is not so simple, because many of my objects are not a single image but a chunk of tiles (the base tile is 16x16px). Here the problem comes with depth sorting. Using a VertexArray allows me to sort the render order of all tiles and push all vertices into the VertexArray without going crazy (and I don't even have to think about how to solve the depth-sorting problem for convex figures), but unfortunately I cannot do instancing. If I need to render 100 trees, for example, I need to push 100xN vertices into the array, instead of using one tree with 100 different transformations.

In a nutshell:

Using a VertexArray for all interactive objects and actors (even player and NPCs)

PROS:
  • depth management is much simpler
  • only 1 draw call

CONS:
  • For each instance of my abstract entity, I need to upload vertices and texCoords to the GPU

Using chunks of sprites (or small VertexArrays) for each interactive object and actor (even player and NPCs)

PROS:
  • I can upload only one sprite/VA per instance and use transformations

CONS:
  • it's more difficult to determine whether a VertexArray must be rendered before or after another one
  • objects with convex shapes must be "sliced" into simpler objects

From these considerations I'm leaning towards the first solution, but I don't want to overload the GPU too much or lose the possibility of instancing.

Do you have any advice for a hybrid solution? Splitting convex objects is not a problem, but I need to understand how to compare different instances of the same VA/sprite chunk, and above all whether this approach could cause slowdowns.
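For reference, the first option boils down to something like this: sort the interactive objects by their base y coordinate each frame, then push two triangles per quad in that order. A self-contained sketch (hypothetical names; plain structs stand in for sf::Vertex and sf::VertexArray):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Vertex { float x, y, u, v; };   // position + texCoords, like sf::Vertex

struct Object
{
    float x, y;   // world position; y is the object's base ("feet") line
    float w, h;   // quad size in pixels
    float u, v;   // top-left texture coordinate in the atlas
};

// Painter's algorithm for a top-down/isometric scene: objects whose base is
// higher on screen (smaller y) are emitted first, so lower objects overdraw
// them. Correct for simple footprints; concave shapes need to be split.
std::vector<Vertex> BuildLayer(std::vector<Object> objects)
{
    std::sort(objects.begin(), objects.end(),
              [](const Object& a, const Object& b) { return a.y < b.y; });

    std::vector<Vertex> out;
    out.reserve(objects.size() * 6);
    for (const Object& o : objects)
    {
        // Two triangles per quad, anchored so (x, y) is the bottom edge.
        // Texture region is assumed to be the same size as the quad.
        const float top = o.y - o.h;
        out.push_back({o.x,       top,  o.u,       o.v});
        out.push_back({o.x + o.w, top,  o.u + o.w, o.v});
        out.push_back({o.x,       o.y,  o.u,       o.v + o.h});
        out.push_back({o.x + o.w, top,  o.u + o.w, o.v});
        out.push_back({o.x + o.w, o.y,  o.u + o.w, o.v + o.h});
        out.push_back({o.x,       o.y,  o.u,       o.v + o.h});
    }
    return out;
}
```

The whole vector then maps to one sf::VertexArray and one draw call; the trade-off from the PROS/CONS lists above is that every instance's vertices are re-pushed instead of reusing one quad with per-instance transforms.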

Thanks so much

Ray
