I found myself needing a test bed to try out some shader stuff. Making a small 3D engine that can render some basic primitives is pretty useful for these tasks. It's also just a good way to learn how WebGL works if you don't already (and it's not straightforward), so hopefully I can overlap my little need with a tutorial about how I put it together. This probably won't be terribly noteworthy as a tutorial but hopefully it adds a little diversity to cross-reference. It'll probably also be multiple posts, since even the basics have a lot to cover.
Boilerplate
export class WcShaderCanvas extends HTMLElement {
	static observedAttributes = ["image", "height", "width"];
	#height = 720;
	#width = 1280;
	constructor() {
		super();
		this.bind(this);
	}
	bind(element) {
		element.attachEvents = element.attachEvents.bind(element);
		element.cacheDom = element.cacheDom.bind(element);
		element.createShadowDom = element.createShadowDom.bind(element);
		element.render = element.render.bind(element);
	}
	async connectedCallback() {
		this.createShadowDom();
		this.cacheDom();
		this.attachEvents();
		this.render();
	}
	createShadowDom() {
		this.shadow = this.attachShadow({ mode: "open" });
		this.shadow.innerHTML = `
			<style>
				:host { display: block; }
			</style>
			<canvas width="${this.#width}" height="${this.#height}"></canvas>
		`;
	}
	cacheDom() {
		this.dom = {};
		this.dom.canvas = this.shadow.querySelector("canvas");
	}
	attachEvents() {
	}
	render() {
	}
	attributeChangedCallback(name, oldValue, newValue) {
		if (newValue !== oldValue) {
			this[name] = newValue;
		}
	}
	set height(value) {
		this.#height = value;
		if (this.dom) {
			this.dom.canvas.height = value;
		}
	}
	set width(value) {
		this.#width = value;
		if (this.dom) {
			this.dom.canvas.width = value;
		}
	}
}
customElements.define("wc-shader-canvas", WcShaderCanvas);
Basic custom element boilerplate, nothing fancy here. We set up a canvas in a shadow DOM and get a reference to it.
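If you want to try it on a page, usage looks like any other element. This is a quick sketch assuming the class above is saved as a module named wc-shader-canvas.js (the file name is just an assumption):

import "./wc-shader-canvas.js"; //assumed file name for the module above

const shaderCanvas = document.createElement("wc-shader-canvas");
shaderCanvas.setAttribute("width", "640");
shaderCanvas.setAttribute("height", "360");
document.body.appendChild(shaderCanvas);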
Initialization
WebGL has a lot of ceremony to set things up. I'm going to encapsulate most of it in a method called bootGpu:
async connectedCallback(){
	this.createShadowDom();
	this.cacheDom();
	this.attachEvents();
	await this.bootGpu(); //new
	this.render();
}
async bootGpu(){
	this.context = this.dom.canvas.getContext("webgl");
}
This is the first step to setting up a WebGL context. Much like with the 2D canvas, we ask the canvas element for a context, only this time of type "webgl". This context will have a lot of methods and properties, and the API was designed to match that of OpenGL, so it's a little painful from a web perspective. Basically everything exists on the WebGL context object, so expect to pass that object around all over the place. You'll also find lots of APIs use constant int values defined on the context as enums.
Note: many people like to use the name gl for the WebGL context instance. I find that a bit confusing (in other languages gl is also often a namespace) so I call it context instead, but if you are cross-referencing other material keep note of that; it's the thing you get back from canvas.getContext("webgl").
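One small thing worth guarding against: getContext returns null if the browser can't provide a WebGL context, so a slightly more defensive version of bootGpu might start like this (a minimal sketch):

async bootGpu(){
	const context = this.dom.canvas.getContext("webgl");
	if(!context){
		throw new Error("WebGL is not supported by this browser");
	}
	this.context = context;
}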
An overview of a WebGL program
Rendering with WebGL is not straightforward. Essentially what we'll be doing is calling a bunch of commands to load stuff onto the GPU and then telling it to run. The most basic pieces of data we need to pass are arrays of numbers representing positions, colors, UVs, normals and all sorts of other stuff, plus the actual shader programs that use that data to draw an image.
A note on "Spaces"
I'll use some terminology to talk about spaces. "Screen Space" is 2D coordinates from -1 to 1 with 0,0 in the center; this is where things are positioned on the screen, and it can also be called "Clip Space". Vertices offscreen are "clipped", meaning they either aren't drawn at all or they have to be subdivided where the screen cuts them off. This is necessary for performance (don't draw things you can't see) and correctness (things behind the viewer don't make sense).
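To make that concrete, converting a pixel coordinate (say, a mouse position) into clip space is just a scale and a shift. A small helper might look like this; note the Y flip, since pixel coordinates grow downward while clip space grows upward:

function pixelToClipSpace(x, y, width, height){
	return [
		(x / width) * 2 - 1, //0..width maps to -1..1
		1 - (y / height) * 2 //0..height maps to 1..-1 (flipped)
	];
}

pixelToClipSpace(640, 360, 1280, 720); //center of a 1280x720 canvas -> [0, 0]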
Shaders
There are two types of shaders in WebGL: vertex and fragment.
Vertex Shaders
These shaders run first. They take your arrays of numbers and do some transform on them to get them into screen/clip space. Basically, you'll pass a series of 3D points and this piece of code says how they get crushed down into 2D points.
Fragment Shaders
These shaders run second. Once you have the transformed set of vertices, this will run per pixel. The hardware automatically interpolates the values before passing them into this shader. So you'll get some interpolated screen values and you'll use those to produce a color for the pixel.
Compiling Shaders
The process to compile a shader is the same for both vertex and fragment.
function compileShader(context, text, type){
	const shader = context.createShader(type);
	context.shaderSource(shader, text);
	context.compileShader(shader);
	if (!context.getShaderParameter(shader, context.COMPILE_STATUS)) {
		throw new Error(`Failed to compile shader: ${context.getShaderInfoLog(shader)}`);
	}
	return shader;
}
First we take our context and make a new shader. We have to give it a type though, and this is done through a magic constant int defined on the WebGL context; there's one for vertex shaders, context.VERTEX_SHADER, and one for fragment shaders, context.FRAGMENT_SHADER. The next line tells WebGL to add the text to the shader object (we'll be seeing this sort of verbose assignment statement a lot). Then the next line will try to compile it. Getting the status is a bit annoying. It doesn't just tell us the result; we have to query the context and ask for a specific property of the shader, the compile status. If it was true then it compiled successfully, and if not there was an error. It won't just tell you the error either; we have to use a different method to query the shader's compile log to get the reason.
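To see that error path in action, here's roughly what feeding compileShader a broken shader looks like (the bad source is purely for illustration):

try {
	compileShader(this.context, `
		void main(){
			gl_Position = ; //missing expression, won't compile
		}
	`, this.context.VERTEX_SHADER);
} catch(ex){
	console.error(ex.message); //"Failed to compile shader: ..." plus the GLSL error log
}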
Programs
If it passed, we have a valid shader object, but in order to use it we need to associate it with a "program", which is the combination of a vertex shader and a fragment shader. You might associate a shader with multiple programs, which is why this is a separate step.
Immediately after we get the context, let's create a program:
function compileProgram(context, vertexShader, fragmentShader){
	const program = context.createProgram();
	context.attachShader(program, vertexShader);
	context.attachShader(program, fragmentShader);
	context.linkProgram(program);
	if (!context.getProgramParameter(program, context.LINK_STATUS)) {
		throw new Error(`Failed to compile WebGL program: ${context.getProgramInfoLog(program)}`);
	}
	return program;
}
This looks a lot like compiling a shader and feels almost redundant. Associating a shader with a program is done with context.attachShader. linkProgram is another thing we need to do to make the program usable; you can think of it as compiling the program itself, wiring the vertex shader's outputs up to the fragment shader's inputs. It can also fail. I'm not sure of all the ways it can, but I imagine passing failed shaders to a program is probably the most common one. So as with the shader we have to do a bit of a dance to get the failure reason. If it succeeds we are good to go.
async bootGpu() {
	this.context = this.dom.canvas.getContext("webgl");
	const vertexShader = compileShader(this.context, `
		attribute vec2 aVertexPosition;
		void main(){
			gl_Position = vec4(aVertexPosition, 0.0, 1.0);
		}
	`, this.context.VERTEX_SHADER);
	const fragmentShader = compileShader(this.context, `
		void main() {
			gl_FragColor = vec4(1.0, 0, 0, 1.0);
		}
	`, this.context.FRAGMENT_SHADER);
	this.program = compileProgram(this.context, vertexShader, fragmentShader);
	this.context.useProgram(this.program);
}
Back in bootGpu we can start using these functions. We'll ignore exactly what is going on in the shaders for now, but the end result will be a red screen. The final line tells the WebGL context that we are setting this as the active program and the next vertices we draw will use it.
Vertices
So we have our program set up; now we need the actual data, the set of points that will eventually be drawn to the screen.
createPositions() {
	const positionBuffer = this.context.createBuffer();
	this.context.bindBuffer(this.context.ARRAY_BUFFER, positionBuffer);
	const positions = new Float32Array([
		-1.0, -1.0,
		1.0, -1.0,
		1.0, 1.0,
		-1.0, 1.0
	]);
	this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.STATIC_DRAW);
	const positionLocation = this.context.getAttribLocation(this.program, "aVertexPosition");
	this.context.enableVertexAttribArray(positionLocation);
	this.context.vertexAttribPointer(positionLocation, 2, this.context.FLOAT, false, 0, 0);
}
This method creates a series of vertices representing positions in space. There are 4 of them, one for each corner of the screen. In clip space, coordinates vary from -1 to 1 with 0,0 in the exact center. We will do absolutely no transforms on these vertices in the vertex shader, so they will be passed as-is, meaning we're going to draw a rectangle over the whole screen. If we were doing 3D geometry these would probably have a Z coordinate, and in the vertex shader we'd have to use a perspective transformation to squish them into 2D clip space depending on where the camera is. We'll call this method at the end of bootGpu, right after useProgram, so the data is in place before we render. Anyway, let's break this down.
First let's create a buffer, a buffer being just a linear array of bytes. Next we bind the buffer. This tells WebGL that this is the "active" buffer and how we're going to use it. For most things it's going to be an ARRAY_BUFFER, which just means it's used for general vertex attributes like color, position etc. The other type, ELEMENT_ARRAY_BUFFER, has a special usage that lets you reuse vertices in existing buffers.
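We won't need ELEMENT_ARRAY_BUFFER in this post, but for reference, reusing vertices with one might look roughly like this (a sketch; drawElements would then replace the drawArrays call we'll see later):

//6 indices describing 2 triangles built from 4 shared vertices
const indexBuffer = this.context.createBuffer();
this.context.bindBuffer(this.context.ELEMENT_ARRAY_BUFFER, indexBuffer);
this.context.bufferData(
	this.context.ELEMENT_ARRAY_BUFFER,
	new Uint16Array([0, 1, 2, 0, 2, 3]),
	this.context.STATIC_DRAW
);
//later, instead of drawArrays:
this.context.drawElements(this.context.TRIANGLES, 6, this.context.UNSIGNED_SHORT, 0);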
Next we create a typed array of points. GPUs use explicitly typed data and we'll need to know the size, which is why this is necessary, but it's easy to convert. Note that these are not grouped at all; they are just a 1D series of values. Then we actually put the data into the buffer. It knows which buffer because we called bindBuffer prior to this line to set positionBuffer as the active ARRAY_BUFFER. So we put our set of points into that buffer and we also tell WebGL how we expect to use it. If we're constantly changing the values it might optimize it differently. In this case we don't really care and are using STATIC_DRAW, which means we don't intend to change it much.
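If we did plan on rewriting the positions every frame, DYNAMIC_DRAW plus bufferSubData would be the more natural pairing; a rough sketch:

//hint that we'll be rewriting this data often
this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.DYNAMIC_DRAW);
//...later, overwrite the buffer contents in place starting at byte offset 0
positions[0] = -0.5;
this.context.bufferSubData(this.context.ARRAY_BUFFER, 0, positions);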
context.getAttribLocation is an interesting one. What this does is query our program and figure out which numeric index our aVertexPosition attribute occupies. I didn't touch on it earlier, but in GLSL attributes vary by vertex and are how we get data into the vertex shader. They are defined at the top of the vertex shader; for our example:
attribute vec2 aVertexPosition; //this
varying lowp vec4 vColor;
void main(){
	gl_Position = vec4(aVertexPosition, 0.0, 1.0);
	vColor = vec4(1.0, 0, 0, 1.0);
}
See aVertexPosition? That is basically a parameter, and main will be called for each element in the array, where each element is a vec2 (2-element vector). So in order to pass data in we need to know how to access that particular attribute, and one way we can do that is to use getAttribLocation with its name and WebGL will give us the index.
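One gotcha worth knowing: if the name doesn't match any attribute in the program (or the compiler optimized the attribute away because it's unused), getAttribLocation returns -1, so it can be worth checking:

const positionLocation = this.context.getAttribLocation(this.program, "aVertexPosition");
if(positionLocation === -1){
	throw new Error("Could not find attribute aVertexPosition in the program");
}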
Then we call enableVertexAttribArray on that attribute (using the index). Why? This just tells WebGL that we are actually using it; by default it's disabled, so again a bit of ceremony. The last bit is where everything falls into place. Since we just have a linear set of 32-bit floats, we need to actually divide those up into vertices. Let's zoom in:
this.context.vertexAttribPointer(positionLocation, 2, this.context.FLOAT, false, 0, 0);
The first parameter is the attribute in question. The second is the number of elements per vertex. Remember that aVertexPosition is a vec2, so this is 2 (for 3D it would probably be a vec3 etc). So every 2 values in the buffer equals one vertex. But we need to know how big the values are; in this case we used a Float32Array, so we tell it that we are using floats (32 bits each). Next is telling WebGL if the value is normalized between 0 and 1. This is for converting integers to floats between 0 and 1; we're using floats so this has no effect. Next is the stride. This says how big of a step we take between elements in case we packed other data into the buffer. We did not, so this is 0 (tightly packed): each set of two 32-bit values is one element and the next one immediately follows. Finally, the offset. Again nothing funny going on here, we start at element 0.
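To make stride and offset concrete, imagine we had instead packed a 2-float position and a 3-float color together for each vertex (we didn't; this is purely hypothetical, with colorLocation being a made-up second attribute). The pointer setup would then look like this:

//each vertex would be [x, y, r, g, b] = 5 floats = 20 bytes
const stride = 5 * Float32Array.BYTES_PER_ELEMENT;
this.context.vertexAttribPointer(positionLocation, 2, this.context.FLOAT, false, stride, 0);
//color starts 2 floats (8 bytes) into each vertex
this.context.vertexAttribPointer(colorLocation, 3, this.context.FLOAT, false, stride, 2 * Float32Array.BYTES_PER_ELEMENT);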
This is finally enough to draw something.
Drawing
We'll render after boot:
async connectedCallback() {
	this.createShadowDom();
	this.cacheDom();
	this.attachEvents();
	await this.bootGpu();
	this.render();
}
Let's look at render:
render() {
	this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT);
	this.context.drawArrays(this.context.TRIANGLES, 0, 3);
}
Thankfully this is not too complex after all that setup. The first line is somewhat understandable: we need to clear the framebuffer of old data. In WebGL we use bit flags to say which things we want to clear. In this case we want to clear the colors as well as the depth buffer (the depth buffer doesn't matter yet since we aren't using depth, but it's a good idea to clear it as well).
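By default the color buffer clears to transparent black; if you want a different background you can set the clear color before clearing:

this.context.clearColor(0.0, 0.0, 0.0, 1.0); //opaque black
this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT);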
Finally comes the call that actually draws things: drawArrays. This takes our ARRAY_BUFFER, feeds it into the attributes we set up in our vertex shader, runs the vertex shader, which passes its data down to the fragment shader, which returns a color for each pixel, drawing our scene. In this case the result is a single red triangle covering half the canvas.
Not too impressive, but it's something. Though why the heck is it a triangle? Shouldn't it have been a square that covered the whole screen? Well, drawArrays takes 3 parameters. The first is the primitive type, which can be a number of different things like separate triangles or triangles that share common vertices. You can see what those different types are here: http://www.dgp.toronto.edu/~ah/csc418/fall_2001/tut/ogl_draw.html (note that QUAD is not supported by WebGL).
We chose TRIANGLES, which means every 3 vertices form a separate triangle. We only have 4 points though, and we'd need 6 vertices to draw the 2 triangles that make up the quad. The last parameter is the number of vertices to draw, which should be a multiple of 3 for TRIANGLES. We can change the type to TRIANGLE_FAN and set the vertex count to 4 and it will do what we want.
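So the fixed draw call becomes:

this.context.drawArrays(this.context.TRIANGLE_FAN, 0, 4);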
We can also try TRIANGLE_STRIP, but that gives us something a bit off. You need to be careful about the order in which vertices are defined in order for it to draw correctly.
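For TRIANGLE_STRIP, each new vertex forms a triangle with the previous two, so to get the same quad we'd have to reorder the positions into a zigzag, something like:

//zigzag order for TRIANGLE_STRIP: bottom-left, bottom-right, top-left, top-right
const positions = new Float32Array([
	-1.0, -1.0,
	1.0, -1.0,
	-1.0, 1.0,
	1.0, 1.0
]);
this.context.drawArrays(this.context.TRIANGLE_STRIP, 0, 4);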
And there we have it. This is perhaps one of the simplest WebGL programs you can create but it's very deep with concepts that will take a while to really set in. The nice thing is with this boilerplate you can start adding small features to your engine.