October 29, 2013

Deferred Lighting: The backdoor, Part 2


In the first part of this article, we covered a little of the theory behind this technique and laid the code foundations that start executing from main. Now it is time to explain the functional parts that actually make the magic happen.

When the Renderer class is allocated, its constructor is called and several preparations take place, in the following order: load every necessary CG shader, initialize the sphere lights (8) and place them across the heightmap, create the projection matrix and the root scene node used to render objects, load the splash screen as a mesh, and set some GCM state. Pay special attention to the "Sphere" mesh: this object is the one in charge of drawing the contribution of each light into the scene. In other words, a light only affects the area its sphere intersects and nothing else, which is a very different process from ray tracing. The code for what was just described is here:



Renderer::Renderer(void)
{
        rotation = 0.0f;

        // Load the vertex / fragment shader pairs used by the different passes.
        lightVert = new VertexShader(LIGHT_VERTEX_SHADER);
        lightFrag = new FragmentShader(LIGHT_FRAGMENT_SHADER);

        basicVert = new VertexShader(BASIC_VERTEX_SHADER);
        basicFrag = new FragmentShader(BASIC_FRAGMENT_SHADER);

        skyboxVert = new VertexShader(SKYBOX_VERTEX_SHADER);
        skyboxFrag = new FragmentShader(SKYBOX_FRAGMENT_SHADER);

        sceneVert = new VertexShader(SCENE_VERTEX_SHADER);
        sceneFrag = new FragmentShader(SCENE_FRAGMENT_SHADER);

        pointLightVert = new VertexShader(POINT_VERTEX_SHADER);
        pointLightFrag = new FragmentShader(POINT_FRAGMENT_SHADER);

        combineVert = new VertexShader(COMBINE_VERTEX_SHADER);
        combineFrag = new FragmentShader(COMBINE_FRAGMENT_SHADER);

        texturedVert = new VertexShader(TEXTURED_VERTEX_SHADER);
        texturedFrag = new FragmentShader(TEXTURED_FRAGMENT_SHADER);

        this->SetCurrentShader(*basicVert, *basicFrag);

        // Spread a LIGHTS x LIGHTS grid of point lights over the heightmap,
        // each with a random colour and a radius wide enough to overlap
        // its neighbours.
        light = new Light[LIGHTS * LIGHTS];
        for(int x = 0; x < LIGHTS; ++x)
        {
                for(int z = 0; z < LIGHTS; ++z)
                {
                        Light &l = light[(x * LIGHTS) + z];
                        float xPos = (RAW_WIDTH * HEIGHTMAP_X
                                     / (LIGHTS - 1)) * x;
                        float zPos = (RAW_HEIGHT * HEIGHTMAP_Z
                                     / (LIGHTS - 1)) * z;
                        l.position = Vector3(xPos, 200.0f, zPos);

                        float r = 0.5f + (float)(rand() % 129) / 128.0f;
                        float g = 0.5f + (float)(rand() % 129) / 128.0f;
                        float b = 0.5f + (float)(rand() % 129) / 128.0f;
                        l.colour = Vector4(r, g, b, 1.0f);
                        l.radius = (RAW_WIDTH * HEIGHTMAP_X / LIGHTS);
                }
        }

        // Perspective projection: pi / 4 radians (45 degrees) vertical FOV.
        projMatrix = Matrix4::perspective(
                     0.7853982, screen_ratio, 1.0f, 10000.0f);

        root = new SceneNode();
        root->SetTransform(Matrix4::identity());

        // Splash screen, loaded as a textured quad.
        initMesh = Mesh::GenerateQuad();
        CellGcmTexture *initScreen =
                        GCMRenderer::LoadTGA(INIT_SCREEN, false);
        initMesh->SetDefaultTexture(*initScreen);

        // Sphere mesh used to rasterise each point light's volume.
        sphere = new OBJMesh(SPHERE_OBJ);

        // Default GCM render state.
        cellGcmSetDepthTestEnable(CELL_GCM_TRUE);
        cellGcmSetCullFaceEnable(CELL_GCM_TRUE);
        cellGcmSetBlendEnable(CELL_GCM_TRUE);
        cellGcmSetDepthFunc(CELL_GCM_LESS);

        cellGcmSetDitherEnable(CELL_GCM_TRUE);
}
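
The Light type itself is not listed here; for reference, the constructor above only needs it to expose three members, roughly like this:

class Light   // simplified: only the members used by the constructor above
{
public:
        Vector3 position;   // world-space centre of the light volume
        Vector4 colour;     // RGBA colour accumulated by the point light pass
        float   radius;     // radius of the sphere mesh drawn for this light
};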

After initialization, the RenderScene method is executed once per frame to perform all the calculations. It looks simple, but there is some technique behind it. First, "set_viewport()" refreshes the aspect ratio and the screen-space viewport. Then all the buffers are cleared to zero, removing the information left over from the previous frame as well as any garbage data present after initialization. The "drawSkybox()" function draws a huge cube that encloses the scene and gives the illusion of a background, and the last call, "swap_buffers()", replaces the frame currently on screen with the newly rendered one. The three methods in the middle are the interesting ones (a sketch of the point light pass follows the RenderScene listing below):

- fillBuffers(): This is a simple geometry pass where every object is drawn as normal, with no lighting applied at all; the output of this pass is transformed into textures and consumed by the following stages.
- drawPointLights(): Here all the lights are drawn, taking into account per-light variables such as radius and specular contribution, as well as the normals and depths generated in the previous stage. Remember that each light only affects the area covered by its sphere mesh.
- combineBuffers(): The colours produced by the light pass and the scene colours captured in the first pass are combined to form the final screen image. Special care is taken with the matrices: they are set to identity because the input data is sampled as "textures", and the projection matrix is made orthographic so that everything is handled as screen-aligned, flat geometry.

void Renderer::RenderScene()
{
        set_viewport();
        clear_buffer();

        drawSkybox();

        fillBuffers();      // first pass: scene colours, normals and depth
        drawPointLights();  // second pass: accumulate each light's contribution
        combineBuffers();   // third pass: merge both results for the back buffer
        
        swap_buffers();
}
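
To make the flow of those three passes more concrete, here is a rough sketch of how drawPointLights can be structured around the pieces already shown (the point light shader pair, the light array and the sphere mesh). The helper names in it, such as setTextureSampler, setShaderVector4, setShaderFloat, updateShaderMatrices, Matrix4::translation / Matrix4::scale and the bufferNormalTex / bufferDepthTex members, are placeholders for whatever the class actually provides, so treat this as an outline rather than final code:

// Outline of the point light pass. Helper names are placeholders.
void Renderer::drawPointLights()
{
        this->SetCurrentShader(*pointLightVert, *pointLightFrag);

        // Bind the normal and depth textures produced by fillBuffers().
        setTextureSampler("normTex",  bufferNormalTex);
        setTextureSampler("depthTex", bufferDepthTex);

        for(int i = 0; i < LIGHTS * LIGHTS; ++i)
        {
                Light &l = light[i];

                // Scale and translate the unit sphere so it encloses
                // exactly the volume this light can affect.
                modelMatrix = Matrix4::translation(l.position) *
                              Matrix4::scale(Vector3(l.radius,
                                                     l.radius,
                                                     l.radius));

                // Per-light uniforms consumed by the point light shader.
                setShaderVector4("lightColour", l.colour);
                setShaderFloat("lightRadius",   l.radius);

                updateShaderMatrices();

                // Rasterising the sphere limits the fragment work to the
                // pixels this light actually covers.
                sphere->Draw();
        }
}

combineBuffers() follows the same pattern, except that the model and view matrices are left as identity and the projection becomes orthographic, so the textures produced by fillBuffers() and drawPointLights() can be sampled over a single screen-aligned quad.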

The method textureFromSurface deserves special mention. Because of how the PS3 manages its frame objects, this function turns a GCM surface (a frame buffer, in OpenGL terms) into a texture. As you can see, it is used in several places in the class to convert the output of the first pass into data that the second and third passes can sample.
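
For anyone more used to OpenGL, the closest equivalent of that idea is rendering into a texture attached to a framebuffer object and then sampling it in a later pass. The snippet below is just that OpenGL illustration (width and height being the render target size); it is not part of the GCM renderer:

// OpenGL illustration only: draw into a texture through an FBO,
// then bind that texture as input for a later pass.
GLuint colourTex, fbo;

glGenTextures(1, &colourTex);
glBindTexture(GL_TEXTURE_2D, colourTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colourTex, 0);

// ... first pass: render the scene into colourTex ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);        // back to the screen
glBindTexture(GL_TEXTURE_2D, colourTex);     // sample it in the next pass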
