3D Without Canvas or WebGL
What this is: a small experiment showing how 3D rotation and projection map to browser rendering using only DOM elements, TypeScript, and CSS transforms. What this is not: a general-purpose 3D engine or a recommended production rendering pipeline.
Why DOM Instead of Canvas?
In this experiment, the cube is not drawn on a canvas. Each vertex is a div element, positioned using CSS transforms.
This is a deliberate constraint: no canvas, no WebGL, no engine; just enough code to make the math visible from matrix multiplication to motion on screen.
The Problem
A cube has eight vertices. Each vertex is a point in space: (x, y, z).
To rotate the cube, we need new coordinates for every vertex—coordinates that place each point in its rotated position.
Rotation matrices perform that transformation. In this demo, the math is done directly in TypeScript and the result is pushed into CSS transforms.
One Axis at a Time
Rotation around an axis leaves that axis unchanged. Rotating around the x-axis moves points in the yz-plane; rotating around the y-axis moves points in the xz-plane.
Each axis has its own matrix.
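The rotation functions below lean on a Vertex type and a transformPointsWithMatrix helper that the post doesn't show. A minimal sketch of what they might look like (assumed shapes, not the demo's actual definitions):

```typescript
// Assumed shape of a vertex: one point in 3D space.
interface Vertex {
  x: number
  y: number
  z: number
}

// Multiply a 3x3 matrix by one vertex (row-by-column dot products).
const matrixMultiplyVertex = (matrix: number[][], vertex: Vertex): Vertex => ({
  x: matrix[0][0] * vertex.x + matrix[0][1] * vertex.y + matrix[0][2] * vertex.z,
  y: matrix[1][0] * vertex.x + matrix[1][1] * vertex.y + matrix[1][2] * vertex.z,
  z: matrix[2][0] * vertex.x + matrix[2][1] * vertex.y + matrix[2][2] * vertex.z
})

// Apply the same matrix to every vertex in the array.
const transformPointsWithMatrix = (vertexArray: Vertex[], matrix: number[][]): Vertex[] =>
  vertexArray.map(vertex => matrixMultiplyVertex(matrix, vertex))
```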
Around X
The first row is (1, 0, 0). This preserves x:
const rotationX = (vertexArray: Vertex[], angle: number): Vertex[] => {
  const rotationMatrix = [
    [1, 0, 0],
    [0, Math.cos(angle), -Math.sin(angle)],
    [0, Math.sin(angle), Math.cos(angle)]
  ]
  return transformPointsWithMatrix(vertexArray, rotationMatrix)
}
Around Y
The middle row is (0, 1, 0). So rotation around the y-axis leaves y unchanged while rotating points in the xz-plane:
const rotationY = (vertexArray: Vertex[], angle: number): Vertex[] => {
  const rotationMatrix = [
    [Math.cos(angle), 0, Math.sin(angle)],
    [0, 1, 0],
    [-Math.sin(angle), 0, Math.cos(angle)]
  ]
  return transformPointsWithMatrix(vertexArray, rotationMatrix)
}
Around Z
The last row is (0, 0, 1). This preserves z:
const rotationZ = (vertexArray: Vertex[], angle: number): Vertex[] => {
  const rotationMatrix = [
    [Math.cos(angle), -Math.sin(angle), 0],
    [Math.sin(angle), Math.cos(angle), 0],
    [0, 0, 1]
  ]
  return transformPointsWithMatrix(vertexArray, rotationMatrix)
}
Applying a Rotation
To rotate a vertex, multiply the rotation matrix R by its coordinate vector v:

v' = R · v

The result v' is the rotated position. In this demo, that math is done in plain TypeScript:
const matrixMultiplyVertex = (matrix: number[][], vertex: Vertex): Vertex => {
  const result: number[][] = []
  for (let i = 0; i < matrix.length; i++) {
    let sum = 0
    sum += matrix[i][0] * vertex.x
    sum += matrix[i][1] * vertex.y
    sum += matrix[i][2] * vertex.z
    result[i] = [sum]
  }
  return { x: result[0][0], y: result[1][0], z: result[2][0] }
}
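As a quick sanity check (hypothetical values, not from the demo): rotating the point (1, 0, 0) by 90° around the z-axis should land it at (0, 1, 0), up to floating-point error. A self-contained sketch, assuming the Vertex shape used throughout:

```typescript
interface Vertex { x: number; y: number; z: number }

// Same row-by-column multiplication as matrixMultiplyVertex above.
const mul = (m: number[][], v: Vertex): Vertex => ({
  x: m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
  y: m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
  z: m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z
})

const angle = Math.PI / 2 // 90 degrees
const rotZ = [
  [Math.cos(angle), -Math.sin(angle), 0],
  [Math.sin(angle), Math.cos(angle), 0],
  [0, 0, 1]
]

// (1, 0, 0) rotated 90° around z ends up at approximately (0, 1, 0).
const rotated = mul(rotZ, { x: 1, y: 0, z: 0 })
```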
Combining Rotations
The cube above uses all three matrices in sequence: rotation around x, then y, then z.
Order matters. Rotating first around x and then around y gives a different result than y then x. Matrix multiplication is not commutative.
export const transformPoints = (vertexArray: Vertex[], angleX: number, angleY: number, angleZ: number, scale: number, distance: number) => {
  let rotatedPoints = rotationX(vertexArray, angleX)
  rotatedPoints = rotationY(rotatedPoints, angleY)
  rotatedPoints = rotationZ(rotatedPoints, angleZ)
  const scaledPoints = scaleXYZ(rotatedPoints, scale)
  const projectedPoints = projectPoints(scaledPoints, distance)
  return projectedPoints
}
The goal isn’t to build a general transform stack; it’s to keep the full pipeline visible: matrix multiplication in TypeScript, projection, and DOM updates in the browser.
From 3D to Screen
After rotation, the cube exists in three dimensions. Your screen has two.
A perspective projection maps 3D points to 2D. Points farther from the viewer appear smaller. The projection divides x and y by a factor that depends on z:

x' = x / (d - z), y' = y / (d - z)

where d is the distance from the viewer to the projection plane.
const projectPoints = (vertexArray: Vertex[], distance: number): Vertex[] => {
  const result: Vertex[] = []
  vertexArray.forEach(vertex => {
    const f = 1 / (distance - vertex.z)
    const projectionMatrix = [
      [f, 0, 0],
      [0, f, 0],
      [0, 0, 1]
    ]
    result.push(matrixMultiplyVertex(projectionMatrix, vertex))
  })
  return result
}
This simplified projection works for a small demo, though in a more robust renderer you’d guard against points crossing the projection plane.
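One way to add that guard (a sketch under an assumed epsilon clamp, not the demo's code): clamp the denominator so a vertex at or behind the projection plane can't divide by zero or flip sign:

```typescript
interface Vertex { x: number; y: number; z: number }

// Assumed minimum distance between a vertex and the projection plane.
const EPSILON = 1e-4

const projectPointsSafe = (vertexArray: Vertex[], distance: number): Vertex[] =>
  vertexArray.map(vertex => {
    // Clamp so (distance - z) never reaches zero or goes negative.
    const depth = Math.max(distance - vertex.z, EPSILON)
    const f = 1 / depth
    // Scale x and y by f, keep z, matching the matrix version above.
    return { x: vertex.x * f, y: vertex.y * f, z: vertex.z }
  })
```

A vertex sitting exactly on the projection plane then projects to a very large but finite coordinate instead of Infinity.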
Each frame, eight vertices become eight screen positions. Those positions render as the dots you see above.
Again: no canvas, no 3D engine—just math → DOM updates.
The Render Loop
React manages the vertices as state. When mouse movement triggers a recalculation, the component re-renders with new coordinates:
{vertices.map((vertex, index) => (
  <div
    key={index}
    className={styles.vertex}
    style={{
      // Template literal: backticks allow embedded expressions ${...}
      // to inject computed coordinates directly into the CSS string
      transform: `translate3d(
        ${vertex.x + center.x}px,
        ${vertex.y + center.y}px,
        ${vertex.z}px
      )`
    }}
  />
))}
The browser handles the actual positioning. We compute the math; CSS does the rendering.
DOM, GPU, and Seeing the Math
Using translate3d lets the browser composite vertices on the GPU. For eight tiny divs, that’s plenty: the GPU moves a few layers while the CPU focuses on the math.
This demo isn’t about “beating” WebGL. It’s about keeping the entire 3D pipeline visible, from rotation through projection, so you can read the math directly in TypeScript and see the result on screen.
Here's the full chain:
- Take a 3D point
- Apply rotation matrices
- Project to 2D
- Push the result into a CSS transform
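The steps above can be sketched end to end for a single point (hypothetical values, using the same f = 1 / (distance - z) factor as the demo):

```typescript
interface Vertex { x: number; y: number; z: number }

const mul = (m: number[][], v: Vertex): Vertex => ({
  x: m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
  y: m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
  z: m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z
})

// 1. Take a 3D point.
const point: Vertex = { x: 1, y: 0, z: 0 }

// 2. Apply a rotation matrix (here: 45 degrees around z).
const a = Math.PI / 4
const rotated = mul(
  [[Math.cos(a), -Math.sin(a), 0], [Math.sin(a), Math.cos(a), 0], [0, 0, 1]],
  point
)

// 3. Project to 2D: scale x and y by 1 / (distance - z).
const distance = 4
const f = 1 / (distance - rotated.z)
const projected = { x: rotated.x * f, y: rotated.y * f, z: rotated.z }

// 4. Push the result into a CSS transform string.
const transform = `translate3d(${projected.x}px, ${projected.y}px, ${projected.z}px)`
```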