Vertex Processing

Vertex processing is a fundamental stage in the DirectX graphics pipeline. It is responsible for transforming and preparing geometric vertices for rasterization and subsequent rendering. This stage primarily involves the Vertex Shader.

The Role of Vertex Processing

Before a 3D model can be drawn onto a 2D screen, its vertices must undergo several transformations. These transformations place each object within the scene, express the scene from the camera's point of view, and project the resulting 3D geometry onto the 2D render target.

Key Transformations

Vertex processing typically involves the following series of transformations:

  1. Model Transformation: Transforms vertices from the object's local space to world space. This positions and orients the object in the 3D scene.
  2. View Transformation: Transforms vertices from world space to camera (view) space. This positions the camera in the world and aligns everything relative to the camera's viewpoint.
  3. Projection Transformation: Transforms vertices from camera space to clip space. This applies a perspective (or orthographic) projection, defining the visible volume (the view frustum) and, in the perspective case, creating the illusion of depth.
  4. Viewport Transformation: Maps vertices to screen space. The hardware first performs the perspective divide on the clip-space positions to obtain normalized device coordinates, then the viewport transform maps those coordinates to the actual pixel coordinates on the render target. Unlike the first three steps, this one is applied by fixed-function hardware after the vertex shader runs. (A short HLSL sketch of the first three transformations follows this list.)
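
To make the sequence concrete, here is a minimal HLSL sketch of the programmable part of this chain. The constant buffer name (Transforms) and the separate World, View, and Projection matrices are illustrative assumptions; many applications pre-multiply them on the CPU and upload a single combined matrix, as the full example later in this section does. The perspective divide and viewport mapping are omitted because they are performed by fixed-function hardware after the vertex shader.

// Illustrative constant buffer layout (names are assumptions, not fixed API names).
cbuffer Transforms
{
    float4x4 World;       // object (local) space -> world space
    float4x4 View;        // world space          -> camera (view) space
    float4x4 Projection;  // camera space         -> homogeneous clip space
};

float4 TransformToClipSpace(float4 localPosition)
{
    float4 worldPos = mul(localPosition, World);
    float4 viewPos  = mul(worldPos, View);
    float4 clipPos  = mul(viewPos, Projection);
    // The perspective divide (clip space -> normalized device coordinates) and the
    // viewport mapping (NDC -> screen pixels) happen later, in fixed-function hardware.
    return clipPos;
}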

The Vertex Shader

The Vertex Shader is a programmable stage that executes once per vertex. It receives vertex data (such as position, normals, and texture coordinates) as input and outputs transformed vertex data. Common tasks performed by a vertex shader include transforming positions into clip space, transforming normals for lighting, animating vertices (for example, skeletal skinning), and passing attributes such as texture coordinates and colors through to later pipeline stages.

Example Vertex Shader (HLSL)

Here's a simplified example of a vertex shader written in High-Level Shading Language (HLSL). It transforms the incoming position by a combined world-view-projection matrix supplied through a constant buffer and passes the texture coordinates through unchanged:


// The combined world-view-projection matrix is supplied by the application
// through a constant buffer (the idiomatic approach for Direct3D 10 and later).
cbuffer PerObject
{
    float4x4 worldViewProjection;
};

struct VS_INPUT {
    float4 Position : POSITION;   // object-space position
    float2 TexCoord : TEXCOORD0;  // texture coordinates
};

struct VS_OUTPUT {
    float4 Position : SV_POSITION; // clip-space position (required output)
    float2 TexCoord : TEXCOORD0;   // passed through to the pixel shader
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    // mul(vector, matrix) treats the position as a row vector.
    output.Position = mul(input.Position, worldViewProjection);
    output.TexCoord = input.TexCoord;
    return output;
}
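
The matrix in the constant buffer above is filled in by the application (in Direct3D 11, for example, via ID3D11DeviceContext::VSSetConstantBuffers). One detail worth noting: mul(vector, matrix) treats the position as a row vector, and HLSL packs matrices in column-major order by default, so a row-major matrix built with DirectXMath is usually transposed (XMMatrixTranspose) before being copied into the constant buffer, or the shader is compiled with row-major matrix packing instead.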

Vertex Data Structures

The data passed into and out of the vertex shader is defined using structures. These structures use semantic names (e.g., POSITION, TEXCOORD0) to map data to specific pipeline inputs/outputs.

Semantic        Description
POSITION        The vertex position in object space. This is typically the primary input for transformations.
TEXCOORDn       Texture coordinates for accessing textures.
NORMAL          The vertex normal vector, used for lighting calculations.
COLOR           Per-vertex color information.
SV_POSITION     The final output position of the vertex in homogeneous clip space. This semantic is mandatory for the vertex shader output.
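
As an illustration of how several of these semantics can be used together, here is a hedged sketch of a vertex shader that also consumes NORMAL and COLOR and computes a simple per-vertex diffuse term. The constant buffer contents (World, WorldViewProjection, LightDirection) are assumptions made for this example rather than required names, and the lighting model is deliberately minimal.

cbuffer PerDraw
{
    float4x4 World;                // used to rotate normals (assumes uniform scaling)
    float4x4 WorldViewProjection;
    float3   LightDirection;       // directional light, pointing from the light toward the scene
};

struct VS_INPUT {
    float4 Position : POSITION;
    float3 Normal   : NORMAL;
    float4 Color    : COLOR;
    float2 TexCoord : TEXCOORD0;
};

struct VS_OUTPUT {
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
    float2 TexCoord : TEXCOORD0;
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    output.Position = mul(input.Position, WorldViewProjection);

    // Rotate the normal into world space and evaluate a simple Lambert term.
    float3 worldNormal = normalize(mul(input.Normal, (float3x3)World));
    float  diffuse     = saturate(dot(worldNormal, -LightDirection));

    output.Color    = float4(input.Color.rgb * diffuse, input.Color.a);
    output.TexCoord = input.TexCoord;
    return output;
}
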
Important: Vertex processing is the first programmable stage in modern DirectX versions (DirectX 10 and later). It offers significant flexibility in how geometry is manipulated before it's sent to the rest of the pipeline.

Next Steps

After vertex processing, the transformed vertices are passed to the Rasterizer stage, which determines which pixels on the screen correspond to the rendered primitives.