Vertex Processing
The vertex processing stage is a fundamental part of the modern graphics rendering pipeline. It is responsible for transforming and manipulating individual vertices that define geometric primitives (like triangles, lines, or points) before they are passed to subsequent stages for rasterization and shading. This stage is highly programmable, allowing developers to achieve a wide range of visual effects.
Figure: Conceptual diagram of the vertex processing stage.
Vertex Input
The pipeline begins by receiving vertex data. This data typically includes attributes such as:
- Position: The 3D coordinates (x, y, z) of the vertex in model space.
- Normal: A vector indicating the surface orientation at the vertex, used for lighting calculations.
- Texture Coordinates (UV): Coordinates used to sample textures.
- Color: Vertex color, which can be interpolated across the primitive.
- Tangent/Binormal: Vectors used in advanced shading techniques like normal mapping.
This data is organized into vertex buffers and is read as input by the vertex shader, which can transform it or derive new attributes from it.
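As an illustrative sketch, these attributes might be declared in an HLSL vertex input structure like the one below, using standard Direct3D vertex semantics (the struct name itself is hypothetical):
struct VS_INPUT_FULL
{
    float3 pos      : POSITION;   // model-space position (x, y, z)
    float3 normal   : NORMAL;     // surface orientation, used for lighting
    float2 tex      : TEXCOORD0;  // UV coordinates for texture sampling
    float4 color    : COLOR0;     // per-vertex color
    float3 tangent  : TANGENT;    // tangent vector for normal mapping
    float3 binormal : BINORMAL;   // binormal (bitangent) vector
};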
Vertex Shader
The vertex shader is a programmable unit that executes for each vertex. Its primary responsibilities include:
- Performing geometric transformations (explained below).
- Calculating lighting and other per-vertex effects.
- Generating or modifying vertex attributes that will be interpolated during rasterization (e.g., setting varying colors).
- Culling vertices indirectly, for example by moving them outside the clip volume so the primitives they form are removed during clipping (a vertex shader cannot discard a vertex outright).
A simplified example of a vertex shader written in HLSL might look like this:
cbuffer PerObject : register(b0)
{
    // Combined world-view-projection matrix, set by the application.
    float4x4 worldViewProjection;
};

struct VS_INPUT
{
    float4 pos    : POSITION;   // model-space position
    float3 normal : NORMAL;     // surface normal (unused in this example)
    float2 tex    : TEXCOORD0;  // texture coordinates
};

struct VS_OUTPUT
{
    float4 pos : SV_POSITION;   // clip-space position for the rasterizer
    float2 tex : TEXCOORD0;     // texture coordinates to be interpolated
};

VS_OUTPUT main(VS_INPUT input)
{
    VS_OUTPUT output;
    // Transform the vertex from model space directly to clip space.
    output.pos = mul(input.pos, worldViewProjection);
    // Pass the texture coordinates through unchanged.
    output.tex = input.tex;
    return output;
}
Transformations
A key task of the vertex shader is applying a series of transformations to the vertex's position. These transformations carry the vertex from its original model space to clip space, and ultimately to screen space. The common transformations include:
- Model Transformation: Translates, rotates, and scales the object, mapping its vertices from model space into world space.
- View Transformation: Maps world space into view (camera) space; it is the inverse of the camera's position and orientation in the world.
- Projection Transformation: Defines the viewing frustum (perspective or orthographic) and maps view-space coordinates into clip space, from which 2D screen coordinates are later derived.
These are often combined into a single World-View-Projection (WVP) matrix.
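As a minimal sketch of the sequence (the constant buffer and function names below are illustrative), the three stages could be applied separately in HLSL like this:
cbuffer Transforms
{
    float4x4 world;      // model space -> world space
    float4x4 view;       // world space -> view (camera) space
    float4x4 projection; // view space  -> clip space
};

float4 TransformToClipSpace(float4 modelPos)
{
    // Apply each transformation in turn, using HLSL's row-vector
    // convention mul(vector, matrix).
    float4 worldPos = mul(modelPos, world);
    float4 viewPos  = mul(worldPos, view);
    return mul(viewPos, projection);
}
In practice the three matrices are usually multiplied together once per object on the CPU, so the shader performs a single mul with the combined WVP matrix, as in the earlier example.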
Clipping
After transformation, geometry that lies outside the viewing frustum is discarded. This process, known as clipping, ensures that only potentially visible geometry continues down the pipeline; primitives that straddle a frustum boundary are cut, generating new vertices along the boundary. Clipping can be done efficiently in clip space (the homogeneous space produced by the projection transformation), where the frustum test reduces to simple comparisons against the w component.
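As a sketch of that test (using the Direct3D depth convention 0 <= z <= w; OpenGL uses -w <= z <= w instead, and the function name is illustrative):
// Returns true when a clip-space position lies inside the view frustum
// (Direct3D depth convention; assumes the point is in front of the
// camera, so w > 0).
bool IsInsideFrustum(float4 clipPos)
{
    return abs(clipPos.x) <= clipPos.w &&
           abs(clipPos.y) <= clipPos.w &&
           clipPos.z >= 0.0f &&
           clipPos.z <= clipPos.w;
}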
Rasterization
Once vertices are transformed and clipped, the pipeline proceeds to rasterization. This stage converts the geometric primitives (e.g., triangles) defined by the remaining vertices into a set of pixels on the screen. During rasterization, vertex attributes (such as texture coordinates, colors, and normals) are interpolated across the surface of the primitive.
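This interpolation is typically perspective-correct: each attribute is weighted by the reciprocal of its vertex's clip-space w so that attributes such as textures do not warp under perspective. The following sketch illustrates the idea for texture coordinates (the rasterizer performs this in fixed-function hardware, not in shader code, and the function name and parameters are hypothetical):
// Perspective-correct interpolation of a UV attribute across a triangle.
// bary holds the pixel's barycentric weights for the three vertices;
// invW holds 1/w for each vertex.
float2 InterpolateUV(float2 uv0, float2 uv1, float2 uv2,
                     float3 bary, float3 invW)
{
    // Weight each vertex by bary * (1/w), then renormalize so the
    // weights sum to one.
    float3 w = bary * invW;
    w /= (w.x + w.y + w.z);
    return uv0 * w.x + uv1 * w.y + uv2 * w.z;
}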
The interpolated values are then passed to the fragment shader (or pixel shader) for further processing.