Vertex Processing
Vertex processing is a crucial stage in the graphics rendering pipeline. It's where the individual vertices that define your 3D geometry are transformed, manipulated, and prepared for subsequent stages like rasterization. In DirectX, this is primarily handled by the Vertex Shader, a programmable stage that allows for highly customized vertex operations.
The Role of Vertex Processing
Before a 3D model can be drawn on a 2D screen, its vertices must undergo several transformations. Vertex processing accomplishes the following key tasks:
- Model Transformation: Translates, rotates, and scales the model from its local coordinate system to world space.
- View Transformation: Positions and orients the camera in world space, transforming vertices into view space (camera space).
- Projection Transformation: Projects the 3D scene onto a 2D viewing plane, typically using perspective or orthographic projection. This transforms vertices into clip space.
- Lighting Calculations: Applies lighting models to determine the color and intensity of light at each vertex based on material properties, light sources, and surface normals (see the sketch after this list).
- Other Vertex Attributes: Manipulates other vertex data such as texture coordinates, colors, and normals for use in later stages.
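Per-vertex (Gouraud) lighting of this kind is straightforward to express in a vertex shader. The following minimal HLSL sketch (HLSL itself is introduced in the next section) computes a Lambertian diffuse term once per vertex; the cbuffer names, register slots, semantics, and light parameters are illustrative assumptions, not a fixed API:

// A minimal sketch of per-vertex (Gouraud) diffuse lighting.
// The cbuffer layouts, names, and semantics below are assumptions.
cbuffer CBuffer_PerObject : register(b0) {
    matrix WorldViewProj;   // World * View * Projection, combined on the CPU
    matrix World;           // Needed separately to transform normals
};

cbuffer CBuffer_Lighting : register(b1) {
    float3 LightDir;        // Unit vector pointing toward the light, world space
    float3 LightColor;      // RGB light intensity
};

struct VS_LIT_INPUT {
    float4 Pos    : POSITION;
    float3 Normal : NORMAL;
};

struct VS_LIT_OUTPUT {
    float4 Pos   : SV_POSITION;
    float4 Color : COLOR;   // Per-vertex diffuse, interpolated during rasterization
};

VS_LIT_OUTPUT mainLit(VS_LIT_INPUT input) {
    VS_LIT_OUTPUT output;
    output.Pos = mul(input.Pos, WorldViewProj);

    // Lambertian (N dot L) diffuse term, evaluated once per vertex.
    // Assumes uniform scale; non-uniform scale needs the inverse transpose of World.
    float3 n = normalize(mul(input.Normal, (float3x3)World));
    float ndotl = saturate(dot(n, LightDir));
    output.Color = float4(LightColor * ndotl, 1.0f);
    return output;
}

Lighting computed per vertex is cheap but coarse; evaluating the same math per pixel gives smoother results at higher cost.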
The Vertex Shader
The Vertex Shader is a piece of code (typically written in the High-Level Shading Language, HLSL) that runs on the GPU for every vertex submitted to the graphics pipeline. Its primary input is a single vertex, and its primary output is a transformed vertex in clip space, ready for the fixed-function stages that follow (clipping, the perspective divide, and rasterization).
Input and Output
A typical vertex shader will receive:
- Vertex Data: Position, normal, color, texture coordinates, etc., as defined in your vertex buffer.
- Constant Buffers: Uniform variables that are the same for all vertices processed in a draw call, such as transformation matrices (world, view, projection), camera position, or lighting parameters.
The vertex shader must output at least:
- Position in Clip Space: A 4-component vector (SV_Position in HLSL) that represents the vertex's position after all transformations. This position is used by the clipping stage to determine which parts of the geometry are visible.
It can also output other per-vertex data (identified by semantics) that will be interpolated across the face of each primitive during rasterization and passed to the Pixel Shader (the Fragment Shader, in other APIs' terminology); a matching pixel shader sketch appears after the example below. Typical outputs include:
- World-space position for lighting calculations.
- Normals transformed to world or view space.
- Texture coordinates.
- Vertex colors.
Example HLSL Vertex Shader (Simplified)
// Define structures for input and output
struct VS_INPUT {
    float4 Pos    : POSITION;  // Vertex position (object space)
    float3 Normal : NORMAL;    // Vertex normal
    float2 Tex    : TEXCOORD;  // Texture coordinates
};

struct VS_OUTPUT {
    float4 Pos      : SV_POSITION; // Clip-space position (required)
    float3 WorldPos : POSITION_W;  // World-space position (for lighting)
    float3 Normal   : NORMAL;      // Transformed normal
    float2 Tex      : TEXCOORD;    // Texture coordinates
};

// Constant buffer for transformation matrices
cbuffer CBuffer_Matrix : register(b0) {
    matrix World;
    matrix View;
    matrix Projection;
};

VS_OUTPUT main(VS_INPUT input) {
    VS_OUTPUT output = (VS_OUTPUT)0;

    // Apply transformations: object -> world -> view -> clip space
    float4 worldPos = mul(input.Pos, World);
    float4 viewPos  = mul(worldPos, View);
    output.Pos = mul(viewPos, Projection);

    // Pass through other data, transforming it as needed
    output.WorldPos = worldPos.xyz;
    // Transform the normal to world space. This assumes a uniform scale;
    // with non-uniform scale, use the inverse transpose of World instead.
    output.Normal = normalize(mul(input.Normal, (float3x3)World));
    output.Tex = input.Tex;
    return output;
}
The SV_POSITION semantic is mandatory for the vertex shader's output position. Other semantic names (like POSITION, NORMAL, TEXCOORD) are conventions; the developer can choose the exact names and their order, as long as the vertex shader's output signature matches the pixel shader's input signature.
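To illustrate how the interpolated outputs are consumed downstream, here is a minimal matching pixel shader sketch. It reuses the VS_OUTPUT struct from the example above; the hard-coded light direction and grayscale output are assumptions for illustration:

// Minimal pixel shader consuming the interpolated VS_OUTPUT values.
float4 mainPS(VS_OUTPUT input) : SV_TARGET {
    // Interpolated normals lose unit length, so re-normalize per pixel.
    float3 n = normalize(input.Normal);
    float3 lightDir = normalize(float3(0.0f, 1.0f, -1.0f)); // assumed directional light
    float diffuse = saturate(dot(n, lightDir));
    return float4(diffuse, diffuse, diffuse, 1.0f); // grayscale diffuse for illustration
}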
Vertex Data and Attributes
The data that makes up a vertex is defined in your application and is stored in vertex buffers. Common vertex attributes include:
- Position: The 3D coordinates of the vertex in object space.
- Normal: A vector perpendicular to the surface at the vertex, used for lighting calculations.
- Color: A base color for the vertex.
- Texture Coordinates (UVs): 2D coordinates that map a point on a texture to the vertex.
These attributes are provided to the vertex shader as input. The shader can then process them, transform them, and pass them along to subsequent pipeline stages.
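As a sketch, a vertex carrying all four attributes above might be declared in HLSL as follows. The struct name and exact semantics are illustrative; the application-side input layout (D3D11_INPUT_ELEMENT_DESC in Direct3D 11) must declare matching semantics and formats:

// One possible vertex layout carrying all of the attributes listed above.
struct VS_FULL_INPUT {
    float3 Pos    : POSITION;  // Object-space position
    float3 Normal : NORMAL;    // Surface normal for lighting
    float4 Color  : COLOR;     // Per-vertex base color
    float2 Tex    : TEXCOORD0; // UV coordinates
};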
Transformations in Detail
The sequence of transformations applied to vertices is critical (a combined-matrix sketch follows this list):
- Model Space to World Space: The model matrix transforms the object from its local origin to its correct position, orientation, and scale in the 3D world.
- World Space to View Space: The view matrix (derived from the camera's position and orientation) transforms all world objects relative to the camera.
- View Space to Clip Space: The projection matrix (perspective or orthographic) transforms the view frustum into a canonical view volume, typically a cube. Vertices outside this volume are clipped.
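Because mul treats the vector operand as a row vector in this convention, the three matrices can be concatenated once (usually on the CPU, once per object) and applied with a single multiply. A hedged HLSL sketch; the function name is hypothetical:

// Equivalent formulations of the transform chain. With row vectors,
// matrices concatenate left to right in the order they are applied:
//   ((pos * World) * View) * Projection == pos * (World * View * Projection)
float4 TransformToClip(float4 modelPos, matrix world, matrix view, matrix projection) {
    matrix worldViewProj = mul(mul(world, view), projection); // usually precomputed on the CPU
    return mul(modelPos, worldViewProj);
}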
The output of the vertex shader, in clip space, is then passed to the clipping stage. Primitives entirely outside the clipping volume are discarded, and those crossing its boundary are clipped. Surviving vertices then undergo the perspective divide (dividing x, y, and z by w) to produce normalized device coordinates, and the viewport transform maps them to screen space, ready for the rasterization stage.
Next Steps
After vertex processing and clipping, the surviving geometry is rasterized into fragments. The next stages in the pipeline are Rasterization and Pixel Shading (often called fragment shading), where the color of each pixel is determined.