High-level GPU Compute + Rendering for Browser & Node.js.
Designed and implemented a GPU-accelerated MNIST training engine entirely in WebGPU. Built complete forward and backward passes using 6 custom WGSL compute shaders:

- Dense layer forward kernels
- ReLU activation
- Softmax + cross-entropy backward
- Dense layer gradient kernels
- SGD optimizer

Implemented GPU memory layouts, buffer management, shader orchestration, and dispatch scaling. Achieved ~40% training accuracy after 20 epochs with a batch size of 64 in the browser. Demonstrates the ability to build GPU compute pipelines and AI runtimes from low-level operations.
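As a rough illustration of the dispatch pattern such an engine relies on, here is a minimal sketch of an SGD update kernel and its WebGPU plumbing. All names (sgdWGSL, weightsBuf, gradsBuf, lrBuf, paramCount), the 784×128 layer size, and the workgroup size of 64 are assumptions for illustration, not the project's actual code.

```js
// Illustrative sketch only, not the project's actual code.
const adapter = await navigator.gpu.requestAdapter();
const device = await adapter.requestDevice();

const sgdWGSL = /* wgsl */ `
  @group(0) @binding(0) var<storage, read_write> weights : array<f32>;
  @group(0) @binding(1) var<storage, read> grads : array<f32>;
  @group(0) @binding(2) var<uniform> lr : f32;

  @compute @workgroup_size(64)
  fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
    let i = gid.x;
    if (i < arrayLength(&weights)) {
      weights[i] = weights[i] - lr * grads[i]; // plain SGD step: w <- w - lr * dw
    }
  }
`;

const paramCount = 784 * 128; // assumed dense-layer size, for illustration
const storageUsage = GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST;
const weightsBuf = device.createBuffer({ size: paramCount * 4, usage: storageUsage });
const gradsBuf = device.createBuffer({ size: paramCount * 4, usage: storageUsage });
const lrBuf = device.createBuffer({
  size: 4,
  usage: GPUBufferUsage.UNIFORM | GPUBufferUsage.COPY_DST,
});
device.queue.writeBuffer(lrBuf, 0, new Float32Array([0.01])); // learning rate

const pipeline = device.createComputePipeline({
  layout: 'auto',
  compute: { module: device.createShaderModule({ code: sgdWGSL }), entryPoint: 'main' },
});
const bindGroup = device.createBindGroup({
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: { buffer: weightsBuf } },
    { binding: 1, resource: { buffer: gradsBuf } },
    { binding: 2, resource: { buffer: lrBuf } },
  ],
});

const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
// Dispatch scaling: one 64-wide workgroup per 64 parameters.
pass.dispatchWorkgroups(Math.ceil(paramCount / 64));
pass.end();
device.queue.submit([encoder.finish()]);
```

The same createShaderModule / createComputePipeline / createBindGroup / dispatchWorkgroups pattern would repeat for the forward, backward, and gradient kernels; only the WGSL body and the bound buffers change.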
This project is a small in-browser demonstration of key components of a transformer-style attention mechanism. It runs entirely in JavaScript using ES modules.
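To make the attention demo concrete, here is a minimal sketch of single-head scaled dot-product attention over plain nested arrays; the helper functions (softmax, matmul, transpose, attention) and the toy inputs are illustrative assumptions, not code from the project.

```js
// Illustrative sketch of single-head scaled dot-product attention.
function softmax(row) {
  const m = Math.max(...row);
  const exps = row.map((x) => Math.exp(x - m)); // subtract max for stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function matmul(A, B) {
  // A: [n][k], B: [k][m] -> [n][m]
  return A.map((row) =>
    B[0].map((_, j) => row.reduce((acc, a, k) => acc + a * B[k][j], 0))
  );
}

function transpose(M) {
  return M[0].map((_, j) => M.map((row) => row[j]));
}

export function attention(Q, K, V) {
  const dk = K[0].length;
  // Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
  const weights = matmul(Q, transpose(K)).map((row) =>
    softmax(row.map((s) => s / Math.sqrt(dk)))
  );
  return matmul(weights, V); // weighted sum of value vectors
}

// Toy usage: 2 query tokens, 3 key/value tokens, d_k = d_v = 2.
const Q = [[1, 0], [0, 1]];
const K = [[1, 0], [0, 1], [1, 1]];
const V = [[1, 2], [3, 4], [5, 6]];
console.log(attention(Q, K, V)); // 2x2 matrix of attention outputs
```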