Interactive Hexagon Grid Tutorial Part 7 - WebGL and Three.js

Published: Thursday, June 27, 2024

Greetings, friends! Welcome to the final part of the "Interactive Hexagon Grid" tutorial series! Congrats on making it this far! In parts 1 through 3, we learned how to make an interactive hexagon grid using the 2D HTML canvas API and JavaScript. In parts 4 through 6, we learned how to make an interactive hexagon grid multi-pass shader using Shadertoy and GLSL. In this tutorial, we'll learn how to use the WebGL canvas API and Three.js to build an interactive hexagon grid using the power of the GPU!

What is WebGL?

The Web Graphics Library (WebGL) is a JavaScript API for rendering 2D and 3D graphics by taking advantage of hardware acceleration, commonly through the GPU or other dedicated hardware. When you see the term "hardware acceleration," it usually means that there is specialized hardware that is better at a task than the CPU.

The CPU is a very general-purpose piece of hardware that is good at doing many things, but not always the best at handling intensive applications such as graphics programming. You could say the CPU is a jack of all trades, but a master of none.

Though, we're seeing a lot of surprises in GPU architectures these days. GPUs combined with large language models (LLMs) have been able to perform a variety of tasks. Will we eventually see consumer laptops that are more GPU than CPU? Will Nvidia take over the world, one GPU at a time? Only time will tell. Anyways, I digress 😅

Using WebGL, we can take advantage of the parallel processing powers the GPU provides. The code we run on GPUs is a bit different than typical programming languages such as JavaScript or Python. Though, GLSL does have syntax inspired by the C programming language.

I should note that there is another graphics API called WebGPU that is the successor to WebGL. As of the time of writing, not all major browsers support WebGPU, so WebGL is still useful to know. WebGPU's shader language has a Rust-like syntax versus the C-like syntax of GLSL.
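
If you're curious whether your browser supports WebGPU, you can check for it from JavaScript. Here's a quick sketch:

js
// The WebGPU API exposes itself through navigator.gpu
if ('gpu' in navigator) {
  console.log('This browser supports WebGPU! 🎉')
} else {
  console.log('No WebGPU here. WebGL to the rescue!')
}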

We've already seen in previous tutorials that we can build fragment shaders in GLSL and execute a program for every pixel on the screen. WebGL can be thought of as a close cousin to OpenGL. In fact, the WebGL API is very similar to the OpenGL ES specification.

When we used the HTML canvas in Part 1 and Part 2 of this tutorial series, we used a 2d context:

js
const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('2d')

If we wanted to take advantage of GPU hardware, we could use a webgl or webgl2 context instead:

js
const canvas = document.getElementById('canvas')
const ctx = canvas.getContext('webgl') // You can also use 'webgl2' to access more OpenGL features
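
Support for webgl2 is widespread in modern browsers, but if you ever need to support older ones, a common pattern (just a sketch; we won't need it in this tutorial) is to try webgl2 first and fall back to webgl:

js
const canvas = document.getElementById('canvas')

// Prefer WebGL 2, but fall back to WebGL 1 on older browsers
const gl = canvas.getContext('webgl2') || canvas.getContext('webgl')

if (!gl) {
  console.error('WebGL is not supported in this browser')
}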

In this tutorial, we won't be touching the WebGL API directly. We'll be using a very popular JavaScript library called Three.js.

What is Three.js?

Three.js is a powerful JavaScript library that uses WebGL internally to interact with the GPU. It is essentially a wrapper around the WebGL API, and it's a really good one.

I originally wanted to make an interactive hexagon grid in pure WebGL, but it was starting to become very complicated and verbose. I didn't want to make people suffer through that, so I opted for Three.js instead. You're welcome 😂

tip
If you would like to dig deep into WebGL, please visit the WebGL Fundamentals and WebGL2 Fundamentals websites. They're amazing resources.

In this tutorial, we'll use Three.js to automatically set up a <canvas> element with a WebGL context and quickly prepare the initial boilerplate we need to start using fragment shaders. Three.js will also handle all the messy bits of setting up buffers and memory. Trust me, it's super tedious. Three.js abstracts a lot of code and hides the pain from us.

Alright! Enough talk! Time to start coding!

Starting a New Project with Vite

We will use Vite, a modern build tool that helps us quickly get started with new JavaScript projects. Vite will run a small web server with hot-reloading, so changes to our code show up in the browser immediately.

For this tutorial, I will assume you have Node.js and npm installed. Usually, npm is automatically installed alongside Node.js.

I am using the following versions:

  • Node v20.14.0
  • npm v10.7.0

First, let's create a new project named my-app in the current directory using Vite:

shell
npm create vite@latest my-app -- --template vanilla

Then, navigate to the my-app directory and install Three.js using the following command:

shell
npm install three

As of the time of writing, version 0.165.0 is the latest stable version, so that's the version I'm using for this tutorial.

We can run our app at any time using the following command:

shell
npm run dev

By default, Vite will create a simple counter app. We will be using the following files the most throughout this tutorial:

  • index.html
  • style.css
  • main.js

Setting up a 2D Scene with Three.js

In this tutorial, we will use Three.js to set up a lot of boilerplate code, so we can render our fragment shader to the screen as quickly as possible. Pure WebGL code is very tedious to handle. Three.js's API is much, much simpler to use than regular WebGL code.

Let's first make changes to our index.html file. Replace its contents with the following code:

index.html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Interactive Hexagon Grid</title>
    <script id="fragmentShader" type="x-shader/x-fragment">
      void main()
      {
          vec3 col = vec3(0, 0, 1); // blue

          gl_FragColor = vec4(col, 1.0);
      }
    </script>
  </head>
  <body>
    <div id="container"></div>
    <script type="module" src="/main.js"></script>
  </body>
</html>

The two main additions to this file were the following:

  • <script> element containing fragment shader code
  • <div> element with the id attribute set to container

We'll access the textContent of the <script> element in our JavaScript code in order to fetch our fragment shader code. Notice how we use gl_FragColor in WebGL instead of fragColor in Shadertoy.

tip
If you're using VS Code and want to add syntax highlighting to the GLSL code inside your index.html file, I recommend the WebGL GLSL Editor VS Code extension.

Next, let's set up some initial styles. We'll remove everything from the style.css file and replace the contents with the following:

style.css
body {
  margin: 1rem;
}

#container {
  width: 100%;
  height: 500px;
}

canvas {
  border: 3px solid blue;
}

We don't have a canvas element yet, but that's okay. Three.js will add one for us once we set up a scene.

Next, let's go back to the main.js file. I will walk through all the steps we need to take to make a scene in Three.js that only displays a fragment shader. We won't be doing anything 3D-related in this tutorial. We're only going to draw a plane/quad/rectangle (or whatever you want to call it) that faces us. It'll be like drawing a quad in game engines like Unity. We'll use this quad as our 2D canvas and render our fragment shader inside it.

Remove all the contents from the main.js file and add the following lines of code:

main.js
import './style.css';
import * as THREE from 'three';

This will import our styles and everything we need from the Three.js library. It might seem weird or illegal to import styles in JavaScript, but since we're using Vite as our build tool, it'll use some magic to inject the styles from style.css into our code.

Next, we're going to query for the <div> element we made earlier.

main.js
// Get the container element's info
const container = document.getElementById('container');
let containerRect = container.getBoundingClientRect();

As you may recall from Part 1 of this tutorial series, we can use getBoundingClientRect to get information about a DOM element's width and height.

Next, we'll initialize a new scene using Three.js:

main.js
// Create the scene
const scene = new THREE.Scene();

Then, we'll create a camera object and position it one unit away in the "3D" world. Remember, Three.js is designed for working with 3D environments (hence the name of the library), so we still need to treat the environment like it's 3D.

main.js
// Create the camera
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;

The camera will be set to orthographic because we don't care about using a perspective camera, since we're only drawing in 2D and not 3D. The six arguments to OrthographicCamera define the left, right, top, and bottom edges of the camera's view, followed by the near and far clipping planes.

Next, we'll create a WebGL renderer. This renderer is essentially a WebGL context that is used to track the state of the canvas and draw to the canvas. We can set multiple properties on the renderer. In the code below, we're turning on antialiasing and setting the precision of floating point numbers to mediump.

main.js
// Create the renderer
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  precision: 'mediump',
});

// Set the renderer's canvas internal width and height attributes to the size of the container
renderer.setSize(containerRect.width, containerRect.height);

Once we have the renderer created, we can add the <canvas> element to the DOM by appending it to an element. We'll append it to the <div> element with the "container" ID we created earlier.

main.js
// Append the renderer's canvas element to the container
container.appendChild(renderer.domElement);

In the next step, we'll create the "geometry" for our scene. As mentioned previously, our scene will just hold a simple quad/plane/rectangle that is facing us, the viewer. It's like switching between 3D and 2D inside the Unity editor.

main.js
// Create the geometry
const geometry = new THREE.PlaneGeometry(2, 2);

The two parameters we pass into PlaneGeometry are the width and height of the plane, respectively. Using a value of 2 stretches our plane across the entire canvas because the orthographic camera's view spans from -1 to 1 (two units) along both axes.

The next step is to create a new shader material using the fragment shader code we defined in the <script> tag in the index.html file.

main.js
// Create shader material
const fragmentShader = document.getElementById('fragmentShader').textContent;

const material = new THREE.ShaderMaterial({
  fragmentShader: fragmentShader,
});

We'll use the geometry and the material containing our fragment shader to make a new mesh. In OpenGL and WebGL, the render pipeline requires a pass through a vertex shader and a fragment shader.

Three.js takes care of setting up a basic vertex shader, so we don't have to make one. In pure WebGL, we typically need to make shapes using triangles to form a "mesh". In Three.js, we can let the library take care of the hard work.
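
If you're curious, the vertex shader Three.js supplies by default is roughly equivalent to passing the following vertexShader ourselves. This is just for illustration; we won't add it to our code:

js
const material = new THREE.ShaderMaterial({
  // projectionMatrix, modelViewMatrix, and position are built-ins
  // that Three.js injects into every ShaderMaterial
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: fragmentShader,
});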

The following code will create a new plane/quad and add it to the scene.

main.js
// Create the mesh and add it to the scene
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);

Finally, we'll render the Three.js scene to the canvas.

main.js
renderer.render(scene, camera);

Our complete code should look like the following so far:

main.js
import './style.css';
import * as THREE from 'three';

// Get the container element's info
const container = document.getElementById('container');
let containerRect = container.getBoundingClientRect();

// Create the scene
const scene = new THREE.Scene();

// Create the camera
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;

// Create the renderer
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  precision: 'mediump',
});

// Set the renderer's canvas internal width and height attributes to the size of the container
renderer.setSize(containerRect.width, containerRect.height);

// Append the renderer's canvas element to the container
container.appendChild(renderer.domElement);

// Create the geometry
const geometry = new THREE.PlaneGeometry(2, 2);

// Create shader material
const fragmentShader = document.getElementById('fragmentShader').textContent;

const material = new THREE.ShaderMaterial({
  fragmentShader: fragmentShader,
});

// Create the mesh and add it to the scene
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);

renderer.render(scene, camera);

Make sure your Vite development server is running using the following command:

shell
npm run dev

You should see a canvas filled entirely with blue! 🟦

Nice! We basically made our own version of Shadertoy! Cool! 😎

Now that we have fragment shaders showing up in our canvas, it's time to port the multi-pass shader we made in Shadertoy to the Three.js scene.

Converting Shadertoy's Global Variables to Three.js

In the previous tutorial, we made a cool multi-pass shader using Shadertoy. How do we convert a shader like this to Three.js? First, we need to make our own versions of two of Shadertoy's "global" variables: iResolution and iMouse.

The iResolution variable simply holds the canvas's width and height. The iMouse variable contains the x-y coordinates of the mouse with respect to the canvas's width and height. In our code, we will call these global variables u_resolution and u_mouse, respectively.

For variables passed into shaders in WebGL, it is a common convention to prefix them with u_ for "uniform" variables. Uniforms are parameters that we pass into our vertex and fragment shaders. We will update them using our JavaScript code.

Keep in mind that uniforms stay constant/uniform throughout the entire shader program, which means we can't change them within the shader code. We can only change them using JavaScript and Three.js, and the new values will be made available in the next frame.
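
As a quick illustration of that flow, here's a minimal sketch using a hypothetical u_time uniform (not one we'll need in this tutorial). The uniform is declared once on the material and then mutated from JavaScript; the shader sees the new value on the next rendered frame:

js
// Declare the uniform once when creating the material
const mat = new THREE.ShaderMaterial({
  uniforms: {
    u_time: { value: 0 }, // hypothetical uniform for illustration
  },
  fragmentShader: fragmentShader,
});

// Update the uniform's value from the CPU side every frame
function animate(time) {
  mat.uniforms.u_time.value = time * 0.001; // milliseconds to seconds

  requestAnimationFrame(animate);
}

requestAnimationFrame(animate);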

Let's pass the u_resolution and u_mouse variables to our fragment shader. We will add these to the uniforms property inside the material variable we created using the THREE.ShaderMaterial function.

main.js
const material = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_mouse: {
      value: new THREE.Vector2(mouseX, mouseY),
    },
  },
  fragmentShader: fragmentShader,
});

Our u_mouse uniform is currently referencing two new variables: mouseX and mouseY. At the top of main.js, we'll initialize these values to NaN.

main.js
let mouseX = NaN
let mouseY = NaN

We use NaN instead of null or undefined because Three.js will interpret it as a non-number when the canvas initially loads. If we set them to null or undefined, Three.js will treat them as zero, and that will make a hexagon automatically glow at the bottom-left corner of our canvas.

Plus, did you know that NaN is actually still a number in JavaScript? 🙃

js
typeof(NaN) === 'number' // true

We can set the values of mouseX and mouseY to actual numbers using an event handler for mousemove events.

js
window.addEventListener('mousemove', (e) => {
  mouseX = e.clientX - containerRect.left;
  mouseY = containerRect.height - (e.clientY - containerRect.top); // make y relative to the container, then flip the y-axis

  material.uniforms.u_mouse.value.set(mouseX, mouseY);
});

In the code snippet above, we are making the mouseX and mouseY coordinates relative to the bottom-left corner of the canvas. Remember, canvas elements with a webgl context use a different coordinate system than the 2d context.

The coordinate system used by the DOM is the same as the one used by 2D contexts. That's why we need to flip the y-axis. This ensures the y-coordinate starts at zero in the bottom-left corner of the Three.js canvas and increases as we move our mouse upwards.

We can test to see if our uniform variables work by making changes to the shader code in our index.html file:

index.html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Interactive Hexagon Grid</title>
    <script id="fragmentShader" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform vec2 u_mouse;

      void main()
      {
          vec3 col = vec3(0);

          // Normalize UV coordinates (between 0 and 1)
          vec2 uv = gl_FragCoord.xy/u_resolution;
          vec2 m = u_mouse/u_resolution;

          // Add more blue as the mouse moves to the right
          col = vec3(uv, m.x);

          gl_FragColor = vec4(col, 1.0);
      }
    </script>
  </head>
  <body>
    <div id="container"></div>
    <script type="module" src="/main.js"></script>
  </body>
</html>

Nothing will happen yet because we need to re-render the Three.js scene every frame.

main.js
function render() {
  renderer.render(scene, camera);

  requestAnimationFrame(render)
}

render()

The code in main.js should look like the following so far:

main.js
import './style.css';
import * as THREE from 'three';

let mouseX = NaN
let mouseY = NaN

// Get the container element's info
const container = document.getElementById('container');
let containerRect = container.getBoundingClientRect();

// Create the scene
const scene = new THREE.Scene();

// Create the camera
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;

// Create the renderer
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  precision: 'mediump',
});

// Set the renderer's canvas internal width and height attributes to the size of the container
renderer.setSize(containerRect.width, containerRect.height);

// Append the renderer's canvas element to the container
container.appendChild(renderer.domElement);

// Create the geometry
const geometry = new THREE.PlaneGeometry(2, 2);

// Create shader material
const fragmentShader = document.getElementById('fragmentShader').textContent;

const material = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_mouse: {
      value: new THREE.Vector2(mouseX, mouseY),
    },
  },
  fragmentShader: fragmentShader,
});

// Create the mesh and add it to the scene
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);

function render() {
  renderer.render(scene, camera);

  requestAnimationFrame(render)
}

render()

window.addEventListener('mousemove', (e) => {
  mouseX = e.clientX - containerRect.left;
  mouseY = containerRect.height - (e.clientY - containerRect.top); // make y relative to the container, then flip the y-axis

  material.uniforms.u_mouse.value.set(mouseX, mouseY);
});

After running our app using npm run dev, we should be able to change the color of the canvas by moving our mouse across the x-axis. Moving the mouse to the right will add more blue to the canvas.

Colorful shader changing color as the mouse moves from left to right. As the mouse moves right, blue is added. As the mouse moves left, blue is removed.

Please excuse the graininess in the gif above. That's due to compression loss when I was converting a screen recording to a gif. When the pixel colors are so close together, the compression loss really shows. Your shader canvas should look much better 😅

Alright. We ported Shadertoy's global variables into our Three.js application. The next step is to port the rest of our multi-pass shader to Three.js.

Render Textures

In order to port a multi-pass shader to Three.js, we need to take advantage of something called "render textures" in graphics programming.

When Shadertoy passes data from the "Buffer A" shader to the "Image" shader, it renders the data stored in fragColor to a texture. This texture is then sampled using the "Image" shader to grab data from "Buffer A". Textures can be used as a way to pass data between shaders.

GPUs are very efficient at handling texture data, so why not render data the camera sees to a texture so that it can be used by other shaders? In fact, once we have a render texture, we can even use our JavaScript code (via the CPU) to look at data in that texture. That's how we'll end up checking if all hexagons in our hexagon grid have been touched by the mouse.

Let's experiment with render textures to understand the intuition behind them and use them to create a multi-pass shader in Three.js.

We'll create a second fragment shader in our index.html file that will accept a new uniform called u_texture. We'll read data from the output of the first shader and modify it in our second shader.

index.html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Interactive Hexagon Grid</title>
    <script id="fragmentShader" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform vec2 u_mouse;

      void main()
      {
          vec3 col = vec3(0);

          vec2 uv = gl_FragCoord.xy/u_resolution;
          vec2 m = u_mouse/u_resolution;

          // Add more blue as the mouse moves to the right
          col = vec3(uv, m.x);

          gl_FragColor = vec4(col, 1.0);
      }
    </script>
    <script id="fragmentShader2" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform sampler2D u_texture;

      void main()
      {
          vec3 col = vec3(0);

          // read the output of the first shader from the render texture
          vec3 data = texture2D(u_texture, gl_FragCoord.xy/u_resolution).xyz;

          // Remove the red component
          col = data * vec3(0, 1, 1);

          gl_FragColor = vec4(col, 1.0);
      }
    </script>
  </head>
  <body>
    <div id="container"></div>
    <script type="module" src="/main.js"></script>
  </body>
</html>

We now have two fragment shaders in our code, similar to "Buffer A" and the "Image" shader we used in Shadertoy in the previous tutorial.

Notice that the type of u_texture is sampler2D. When we use textures in shaders, we need to use "samplers". Samplers are essentially texture configurations we initialize before we can sample a texture.

There are different filtering methods that control how textures are interpolated when sampled, and different wrapping modes that control what happens when UV coordinates fall outside the normal zero-to-one range. Three.js takes care of setting up samplers for us and assigns a default configuration, but we can change these settings if we'd like. For this tutorial, we'll just use the default configuration.
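
If you ever want to override the defaults, Three.js exposes these settings directly on a texture object, such as the texture of the render target we're about to create. A small sketch (the specific choices below are just examples, not something this tutorial needs):

js
// Filtering: how texel values are interpolated when sampled
renderTarget.texture.minFilter = THREE.NearestFilter;
renderTarget.texture.magFilter = THREE.NearestFilter;

// Wrapping: what happens when UV coordinates fall outside the 0 to 1 range
renderTarget.texture.wrapS = THREE.ClampToEdgeWrapping;
renderTarget.texture.wrapT = THREE.ClampToEdgeWrapping;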

Next, we need to update our main.js file to create a render target and a new shader material. In Three.js, we can create render textures using the WebGLRenderTarget constructor.

main.js
// Create a render target
const renderTarget = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

const fragmentShader2 = document.getElementById('fragmentShader2').textContent;

const material2 = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_texture: { value: renderTarget.texture },
  },
  fragmentShader: fragmentShader2,
});

Notice how we're setting the u_texture uniform in material2 to renderTarget.texture. Make sure to include a value property for u_texture, so Three.js automatically updates u_texture every frame. If we just did u_texture: renderTarget.texture, then we'd have to update the value manually in our render function.

Our render function will get a massive overhaul. We're now rendering two shader passes every frame. First, we need to set the render target to the renderTarget variable we created earlier with Three.js. Then, we need to render the scene using the first shader.

After the first render happens, the render texture will be updated, and the u_texture uniform in the second material will reference the new data. We'll override the scene's material to force the scene to use the second shader. The second shader can then read data from the render texture and modify it before rendering the final output to the canvas.

Our render function will therefore look like the following:

main.js
function render() {
  // Set the render target as preparation for storing the output of 
  // the next render into a texture
  renderer.setRenderTarget(renderTarget);

  // Render the scene with the first shader
  renderer.render(scene, camera);

  // Update the scene to use the second shader
  scene.overrideMaterial = material2

  // Set the render target back to null, so the next render doesn't
  // write to the texture
  renderer.setRenderTarget(null);

  // Render the scene again with the second shader
  renderer.render(scene, camera);

  // Use the first shader in the scene again before looping
  scene.overrideMaterial = material

  requestAnimationFrame(render)
}

Using render textures, we can easily create multi-pass shaders in Three.js.

Porting Our Shadertoy Shader to Three.js

Now that we understand render textures more, it's time to port our Shadertoy code to Three.js. As you may recall from the previous tutorial, we were passing data like this:

text
Buffer A ---> Image
        \
         \---> Next Frame of Buffer A

In order to make a pipeline like this in Three.js, we'll use a concept called "double buffering" that uses two render targets instead of one.

Essentially, we'll pass data like the following:

text
Shader #1 ---> Shader #2
        \
         \---> Next Frame of Shader #1

In our code, we will set up two render targets: one for the previous frame's data, and one for the current frame's data.

main.js
// Create render targets
const renderTarget = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

const renderTarget2 = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

let currentRenderTarget = renderTarget;
let previousRenderTarget = renderTarget2;

Then, we'll update our materials to use u_texture in both shaders.

main.js
const material = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_mouse: {
      value: new THREE.Vector2(mouseX, mouseY),
    },
    u_texture: { value: null },
  },
  fragmentShader: fragmentShader,
});

const material2 = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_texture: { value: currentRenderTarget.texture },
  },
  fragmentShader: fragmentShader2,
});

We set the value of the first shader's u_texture uniform to null because we will manually update it in the render function. This prevents a feedback loop where the shader reads from the same texture it is currently rendering to, which is not allowed in WebGL.

Then, we'll update our render function to swap between render targets after rendering the first fragment shader. This will let us pass data from the first shader to the second shader. By using a second render target, we can also pass data from the first shader to the next frame when the first shader runs again.

main.js
function render() {
  // Set the first shader's texture to the output of the last frame
  material.uniforms.u_texture.value = previousRenderTarget.texture;

  // Set the render target as preparation for storing the output of
  // the next render into a texture
  renderer.setRenderTarget(currentRenderTarget);

  // Render the scene with the first shader
  renderer.render(scene, camera);

  // Swap render targets
  [currentRenderTarget, previousRenderTarget] = [
    previousRenderTarget,
    currentRenderTarget,
  ];

  // Update the scene to use the second shader
  scene.overrideMaterial = material2;

  // Set the render target back to null, so the next render doesn't
  // write to the texture
  renderer.setRenderTarget(null);

  // Render the scene again with the second shader
  renderer.render(scene, camera);

  // Use the first shader in the scene again before looping
  scene.overrideMaterial = material;

  requestAnimationFrame(render);
}

With these changes added, our main.js file should look like the following so far:

main.js
Copied! ⭐️
import './style.css';
import * as THREE from 'three';

let mouseX = NaN;
let mouseY = NaN;

// Get the container element's info
const container = document.getElementById('container');
let containerRect = container.getBoundingClientRect();

// Create the scene
const scene = new THREE.Scene();

// Create the camera
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;

// Create the renderer
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  precision: 'mediump',
});

// Set the renderer's canvas internal width and height attributes to the size of the container
renderer.setSize(containerRect.width, containerRect.height);

// Append the renderer's canvas element to the container
container.appendChild(renderer.domElement);

// Create render targets
const renderTarget = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

const renderTarget2 = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

let currentRenderTarget = renderTarget;
let previousRenderTarget = renderTarget2;

// Create the geometry
const geometry = new THREE.PlaneGeometry(2, 2);

// Create shader materials
const fragmentShader = document.getElementById('fragmentShader').textContent;
const fragmentShader2 = document.getElementById('fragmentShader2').textContent;

const material = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_mouse: {
      value: new THREE.Vector2(mouseX, mouseY),
    },
    u_texture: { value: null },
  },
  fragmentShader: fragmentShader,
});

const material2 = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_texture: { value: currentRenderTarget.texture },
  },
  fragmentShader: fragmentShader2,
});

// Create the mesh and add it to the scene
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);

function render() {
  // Set the first shader's texture to the output of the last frame
  material.uniforms.u_texture.value = previousRenderTarget.texture;

  // Set the render target as preparation for storing the output of
  // the next render into a texture
  renderer.setRenderTarget(currentRenderTarget);

  // Render the scene with the first shader
  renderer.render(scene, camera);

  // Swap render targets
  [currentRenderTarget, previousRenderTarget] = [
    previousRenderTarget,
    currentRenderTarget,
  ];

  // Update the scene to use the second shader
  scene.overrideMaterial = material2;

  // Set the render target back to null, so the next render doesn't
  // write to the texture
  renderer.setRenderTarget(null);

  // Render the scene again with the second shader
  renderer.render(scene, camera);

  // Use the first shader in the scene again before looping
  scene.overrideMaterial = material;

  requestAnimationFrame(render);
}

render();

window.addEventListener('mousemove', (e) => {
  mouseX = e.clientX - containerRect.left;
  mouseY = containerRect.height - (e.clientY - containerRect.top); // make y relative to the container, then flip the y-axis

  material.uniforms.u_mouse.value.set(mouseX, mouseY);
});

Finally, we'll take the two fragment shaders we made in the previous tutorial and insert them into the <script> elements in our index.html file. The "Buffer A" shader should be the first fragment shader. The "Image" shader should be the second shader.

index.html
Copied! ⭐️
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Interactive Hexagon Grid</title>
    <script id="fragmentShader" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform vec2 u_mouse;
      uniform sampler2D u_texture;
  
      const vec2 hexSide = vec2(1.7320508, 1); // proportion between two sides of 30-60-90 triangle
      const float scale = 5.; // hexagon grid scale factor
  
      // hexagonal distance
      float hexD(in vec2 p)
      {
          p = abs(p);
          
          return max(dot(p, hexSide * .5), p.y);
      }
  
      // hexagonal coordinates
      vec4 hexC(vec2 p)
      {
          // hexagon centers
          vec4 hc = floor(vec4(p, p - vec2(hexSide.x/2., .5)) / hexSide.xyxy) + .5;
          
          // rectangular grids
          vec4 rg = vec4(p - hc.xy * hexSide, p - (hc.zw + .5) * hexSide);
          
          // hexagon IDs
          return dot(rg.xy, rg.xy) < dot(rg.zw, rg.zw)
              ? vec4(rg.xy, hc.xy)
              : vec4(rg.zw, hc.zw + .5);
      }
  
      // draw hexagon grid and pass data to a render texture
      void main()
      {
          vec3 col = vec3(0);
          
          vec2 uv = (gl_FragCoord.xy - u_resolution * .5) / u_resolution.y;
          vec2 m = (u_mouse.xy - u_resolution * .5) / u_resolution.y;
          
          // read from the render texture
          vec3 data = texture2D(u_texture, gl_FragCoord.xy/u_resolution).xyz;
          
          vec4 h = hexC(uv * scale);
          vec4 hm = hexC(m * scale);
          
          float d = hexD(h.xy);
          
          float ac = 0.;
  
          float touchTimeMultiplier = data.z < 0.001 ? 0. : data.z * 0.95;
          
          if (h.zw == hm.zw) {
              ac = step(0., d);
              touchTimeMultiplier = 1.;
          }
  
          gl_FragColor = vec4(vec3(data.x + ac, d, touchTimeMultiplier), 1.0);
      }
    </script>
    <script id="fragmentShader2" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform sampler2D u_texture;
  
      const vec3 hoverColor = vec3(.43, .73, .50);
      const vec3 originalColor = vec3(1);
      const vec3 targetColor = vec3(.65, .87, .93);
  
      void main()
      {
          vec3 col = vec3(0);
  
          vec2 uv = gl_FragCoord.xy/u_resolution;
          
          // read from the render texture
          vec3 data = texture2D(u_texture, gl_FragCoord.xy/u_resolution).xyz;
          
          vec3 lerpedCol = mix(targetColor, hoverColor, data.z);
          
          // Change 10 to 5 to make the lines look sharper
          float d = 1. - step(0.5 - 5./u_resolution.y, data.y);
          
          if (data.x > 0.) {
              col = vec3(d) * lerpedCol;
          } else {
              col = vec3(d) * originalColor;
          }
          
          // Shows which tiles have been hovered over already
          // col = vec3(data.x);
  
          gl_FragColor = vec4(col, 1.0);
      }
    </script>
  </head>
  <body>
    <div id="container"></div>
    <script type="module" src="/main.js"></script>
  </body>
</html>

In order to convert the Shadertoy shader to Three.js, we made the following changes:

  • fragCoord -> gl_FragCoord.xy
  • fragColor -> gl_FragColor
  • mainImage -> main
  • iResolution.xy -> u_resolution
  • iMouse.xy -> u_mouse
  • Added the u_resolution, u_mouse, and u_texture uniforms at the top of the first shader
  • Added the u_resolution and u_texture uniforms at the top of the second shader

We also made a change to the following line:

glsl
float d = 1. - step(0.5 - 10./u_resolution.y, data.y);

In our index.html file, we are using 5. instead of 10.:

glsl
float d = 1. - step(0.5 - 5./u_resolution.y, data.y);

The reason why we had to change this is due to how Shadertoy renders the canvas. It renders twice as many pixels as the canvas's displayed size along each axis. If you used Chrome DevTools to inspect the canvas in Shadertoy, it may show that the canvas is 512 x 288, but the actual resolution inside the canvas is 1024 x 576.

By rendering more pixels than the canvas element's displayed size, we can make things look sharper on higher resolution displays. For my display, 5./u_resolution.y looks great, but feel free to play around with the value until it looks nice on your monitor.
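
If you'd like similar sharpness on high-DPI displays in our Three.js app, the renderer can be told about the device pixel ratio. This is just a sketch; note that the render targets and u_resolution uniforms would then need to be scaled by the same factor:

js
// Render at the display's native pixel density.
// gl_FragCoord and the canvas's internal width/height are now
// multiplied by this factor, so resolution-dependent uniforms
// and render targets must be scaled to match.
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(containerRect.width, containerRect.height);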

Make sure your index.html file and main.js file look good and then run the app using npm run dev. We should see that our interactive hexagon grid is back! You just migrated a multi-pass shader from Shadertoy to Three.js! 🎉🎉🎉

Shader canvas displaying a grid of white hexagons with black outlines. Hexagons being touched by the mouse turn green and then softly fade to light blue when the mouse leaves the hexagon.

Handling Canvas Resizing

Yay! We have an interactive hexagon grid again, but now we have to fix issues where the canvas looks funky when the browser window is resized. Multiple objects in our Three.js app depend on the size of the canvas, so we need to make sure to update all of them.

We'll create an event listener for the resize event at the very end of our main.js file. It should contain the following code:

main.js
window.addEventListener('resize', () => {
  containerRect = container.getBoundingClientRect();

  renderer.setSize(containerRect.width, containerRect.height);
  
  previousRenderTarget.setSize(containerRect.width, containerRect.height);
  
  currentRenderTarget.setSize(containerRect.width, containerRect.height);

  material.uniforms.u_resolution.value.set(
    containerRect.width,
    containerRect.height
  );

  material2.uniforms.u_resolution.value.set(
    containerRect.width,
    containerRect.height
  );

  material.uniforms.u_mouse.value.set(NaN, NaN);

  render();
});

As previously mentioned, we need to update every object in our Three.js app that depends on the width and height of our "container" element, the div tag with the id attribute set to container.

The renderer is responsible for creating and drawing to the canvas element, so it needs to be resized when the browser window changes. Each render target needs to be resized as well. The GPU needs to know the size of each texture before the graphics pipeline runs.

Then, we need to make sure the u_resolution is updated for both materials, so the canvas size changes are propagated to both fragment shaders.

We'll then set the u_mouse uniform back to NaN so that the mouse doesn't randomly highlight a hexagon as we're resizing the browser window.

Finally, we trigger a re-render by calling the render method so that our canvas is redrawn.
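
One caveat: resize events can fire dozens of times per second while the window is being dragged. If that ever becomes a performance concern, a common pattern (a sketch, not something we'll add in this tutorial) is to debounce the handler:

js
// Wait until resizing has paused for 100ms before doing
// the expensive resize work
let resizeTimeout;

window.addEventListener('resize', () => {
  clearTimeout(resizeTimeout);
  resizeTimeout = setTimeout(() => {
    // ...the resize logic from above goes here...
  }, 100);
});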

Detecting if All Hexagons Were Touched

If you're satisfied with your interactive hexagon grid, you can stop here, but if you want to do something special when all the hexagons are highlighted, then continue reading!

Detecting if all the hexagons have been touched requires CPU code. That is, we have to make the check using JavaScript instead of inside the shader. Shader programs run for every pixel individually in parallel, so a single invocation can't check whether every pixel in our canvas is white, for instance.

It technically is possible to do so in a shader, but it would require looping through every pixel in our canvas every frame! That would slow things to a crawl.

There is a special comment I added to the second shader in our index.html file for checking the status of all hexagons in our shader.

glsl
// col = vec3(data.x);

If we uncomment this line of code, then we should be able to see a black and white image that shows every hexagon that has been touched by the mouse.

Shader canvas displaying a grid of solid black hexagons which makes the whole screen appear black. As the mouse hovers over the canvas, the black hexagons turn white. Eventually, the whole screen turns white after hovering over all the hexagons.

Three.js provides a helpful method called readRenderTargetPixels that lets us read every pixel value in our render target buffer. We can make our own function called checkHexagonStatus with the following contents:

main.js
function checkHexagonStatus(renderTarget) {
  const width = renderTarget.width;
  const height = renderTarget.height;
  const pixelBuffer = new Uint8Array(width * height * 4);

  renderer.readRenderTargetPixels(
    renderTarget,
    0,
    0,
    width,
    height,
    pixelBuffer
  );

  // Only need to check if red value is equal to 255, since each pixel
  // is either white or black which means each color channel has the same value
  for (let i = 0; i < pixelBuffer.length; i += 4) {
    if (pixelBuffer[i] !== 255) {
      return false;
    }
  }
  return true;
}

This function reads the pixels from the render target and iterates through each pixel to detect if the first color channel, the red channel, contains a value that is not 255. That is, if our canvas still contains any black, then the function will return false. I chose to use pixelBuffer[i] !== 255 instead of pixelBuffer[i] === 0 just in case some pixel colors weren't exactly equal to zero.

Remember, black is defined by vec3(0, 0, 0) in a shader or an RGB value of (0, 0, 0) in JavaScript. That means we'll expect the red component to be zero if there is any black left in our render texture.

White is equivalent to vec3(1, 1, 1) in GLSL and an RGB value of (255, 255, 255) in JavaScript. If every pixel in our render texture is white, then we can confidently say that all the hexagons were touched by the mouse.

The for-loop increments by 4 because the pixel values are all laid out in a flat array, similar to how we access pixel data using a 2D context. The first four elements of the pixelBuffer array represent the RGBA color channels of the first pixel in the render texture.
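
As a concrete illustration of that layout, here's a small sketch of how you could read the color of the pixel at coordinates (x, y) out of the flat array. Our check doesn't need per-coordinate access, so this is just for understanding:

js
// Each row holds `width` pixels, and each pixel holds 4 bytes (RGBA)
function getPixel(pixelBuffer, width, x, y) {
  const i = (y * width + x) * 4;

  return {
    r: pixelBuffer[i],
    g: pixelBuffer[i + 1],
    b: pixelBuffer[i + 2],
    a: pixelBuffer[i + 3],
  };
}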

Let's use checkHexagonStatus at the end of our render function so that it runs every frame (thanks to requestAnimationFrame).

main.js
function render() {
  material.uniforms.u_texture.value = previousRenderTarget.texture;

  // Set the render target as preparation for storing the output of
  // the next render into a texture
  renderer.setRenderTarget(currentRenderTarget);

  // Render the scene with the first shader
  renderer.render(scene, camera);

  // Swap render targets
  [currentRenderTarget, previousRenderTarget] = [
    previousRenderTarget,
    currentRenderTarget,
  ];

  // Update the scene to use the second shader
  scene.overrideMaterial = material2;

  // Set the render target back to null, so the next render doesn't
  // write to the texture
  renderer.setRenderTarget(null);

  // Render the scene again with the second shader
  renderer.render(scene, camera);

  // Use the first shader in the scene again before looping
  scene.overrideMaterial = material;

  // Check if all hexagons have been touched by the mouse
  if (checkHexagonStatus(currentRenderTarget)) {
    console.log('All hexagons found! Yay! 🎉🎉🎉');
  }

  requestAnimationFrame(render);
}

If you hover over all the hexagons, you should see our celebratory message in the browser's console! Yay!
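
One thing to keep in mind: readRenderTargetPixels forces the GPU and CPU to synchronize, which isn't free, and our message will keep logging every frame once the grid is complete. A small improvement (a sketch; it's not part of the finished code below) is to remember when we're done and skip further checks:

js
// Remember whether the grid has already been completed so we
// only log once and stop paying for GPU readbacks afterwards
let allHexagonsFound = false;

function render() {
  // ...the rendering code from above...

  if (!allHexagonsFound && checkHexagonStatus(currentRenderTarget)) {
    allHexagonsFound = true;
    console.log('All hexagons found! Yay! 🎉🎉🎉');
  }

  requestAnimationFrame(render);
}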

Finished Code

Don't worry, friend. I got your back! If you want to dig into the finished code right away or make sure your code looks correct, please see the code below.

We started a new Vite project, downloaded the three npm package, and made changes to the following files:

  • index.html
  • style.css
  • main.js

Our completed code for index.html should look like the following:

index.html
<!doctype html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <link rel="icon" type="image/svg+xml" href="/vite.svg" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Interactive Hexagon Grid</title>
    <script id="fragmentShader" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform vec2 u_mouse;
      uniform sampler2D u_texture;
  
      const vec2 hexSide = vec2(1.7320508, 1); // proportion between two sides of 30-60-90 triangle
      const float scale = 5.; // hexagon grid scale factor
  
      // hexagonal distance
      float hexD(in vec2 p)
      {
          p = abs(p);
          
          return max(dot(p, hexSide * .5), p.y);
      }
  
      // hexagonal coordinates
      vec4 hexC(vec2 p)
      {
          // hexagon centers
          vec4 hc = floor(vec4(p, p - vec2(hexSide.x/2., .5)) / hexSide.xyxy) + .5;
          
          // rectangular grids
          vec4 rg = vec4(p - hc.xy * hexSide, p - (hc.zw + .5) * hexSide);
          
          // hexagon IDs
          return dot(rg.xy, rg.xy) < dot(rg.zw, rg.zw)
              ? vec4(rg.xy, hc.xy)
              : vec4(rg.zw, hc.zw + .5);
      }
  
      // draw hexagon grid and pass data to a render texture
      void main()
      {
          vec3 col = vec3(0);
          
          vec2 uv = (gl_FragCoord.xy - u_resolution * .5) / u_resolution.y;
          vec2 m = (u_mouse.xy - u_resolution * .5) / u_resolution.y;
          
          // read from the render texture
          vec3 data = texture2D(u_texture, gl_FragCoord.xy/u_resolution).xyz;
          
          vec4 h = hexC(uv * scale);
          vec4 hm = hexC(m * scale);
          
          float d = hexD(h.xy);
          
          float ac = 0.;
  
          float touchTimeMultiplier = data.z < 0.001 ? 0. : data.z * 0.95;
          
          if (h.zw == hm.zw) {
              ac = step(0., d);
              touchTimeMultiplier = 1.;
          }
  
          gl_FragColor = vec4(vec3(data.x + ac, d, touchTimeMultiplier), 1.0);
      }
    </script>
    <script id="fragmentShader2" type="x-shader/x-fragment">
      uniform vec2 u_resolution;
      uniform sampler2D u_texture;
  
      const vec3 hoverColor = vec3(.43, .73, .50);
      const vec3 originalColor = vec3(1);
      const vec3 targetColor = vec3(.65, .87, .93);
  
      void main()
      {
          vec3 col = vec3(0);
  
          vec2 uv = gl_FragCoord.xy/u_resolution;
          
          // read from the render texture
          vec3 data = texture2D(u_texture, gl_FragCoord.xy/u_resolution).xyz;
          
          vec3 lerpedCol = mix(targetColor, hoverColor, data.z);
          
          float d = 1. - step(0.5 - 5./u_resolution.y, data.y);
          
          if (data.x > 0.) {
              col = vec3(d) * lerpedCol;
          } else {
              col = vec3(d) * originalColor;
          }
          
          // Shows which tiles have been hovered over already
          // col = vec3(data.x);
  
          gl_FragColor = vec4(col, 1.0);
      }
    </script>
  </head>
  <body>
    <div id="container"></div>
    <script type="module" src="/main.js"></script>
  </body>
</html>

The completed code for our stylesheet, style.css, should contain the following:

style.css
body {
  margin: 1rem;
}

#container {
  width: 100%;
  height: 500px;
}

canvas {
  border: 3px solid blue;
}

The completed code for main.js, our Three.js app, should contain the following:

main.js
Copied! ⭐️
import './style.css';
import * as THREE from 'three';

let mouseX = NaN;
let mouseY = NaN;

// Get the container element's info
const container = document.getElementById('container');
let containerRect = container.getBoundingClientRect();

// Create the scene
const scene = new THREE.Scene();

// Create the camera
const camera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 10);
camera.position.z = 1;

// Create the renderer
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  precision: 'mediump',
});

// Set the renderer's canvas internal width and height attributes to the size of the container
renderer.setSize(containerRect.width, containerRect.height);

// Append the renderer's canvas element to the container
container.appendChild(renderer.domElement);

// Create render targets
const renderTarget = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

const renderTarget2 = new THREE.WebGLRenderTarget(
  containerRect.width,
  containerRect.height
);

let currentRenderTarget = renderTarget;
let previousRenderTarget = renderTarget2;

// Create the geometry
const geometry = new THREE.PlaneGeometry(2, 2);

// Create shader materials
const fragmentShader = document.getElementById('fragmentShader').textContent;
const fragmentShader2 = document.getElementById('fragmentShader2').textContent;

const material = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_mouse: {
      value: new THREE.Vector2(mouseX, mouseY),
    },
    u_texture: { value: null },
  },
  fragmentShader: fragmentShader,
});

const material2 = new THREE.ShaderMaterial({
  uniforms: {
    u_resolution: {
      value: new THREE.Vector2(containerRect.width, containerRect.height),
    },
    u_texture: { value: currentRenderTarget.texture },
  },
  fragmentShader: fragmentShader2,
});

// Create the mesh and add it to the scene
const plane = new THREE.Mesh(geometry, material);
scene.add(plane);

// Function for checking if all hexagons were touched or not by the mouse
function checkHexagonStatus(renderTarget) {
  const width = renderTarget.width;
  const height = renderTarget.height;
  const pixelBuffer = new Uint8Array(width * height * 4);

  renderer.readRenderTargetPixels(
    renderTarget,
    0,
    0,
    width,
    height,
    pixelBuffer
  );

  // Only need to check if red value is equal to 255, since each pixel
  // is either white or black which means each color channel has the same value
  for (let i = 0; i < pixelBuffer.length; i += 4) {
    if (pixelBuffer[i] !== 255) {
      return false;
    }
  }
  return true;
}

function render() {
  material.uniforms.u_texture.value = previousRenderTarget.texture;

  // Set the render target as preparation for storing the output of
  // the next render into a texture
  renderer.setRenderTarget(currentRenderTarget);

  // Render the scene with the first shader
  renderer.render(scene, camera);

  // Swap render targets
  [currentRenderTarget, previousRenderTarget] = [
    previousRenderTarget,
    currentRenderTarget,
  ];

  // Update the scene to use the second shader
  scene.overrideMaterial = material2;

  // Set the render target back to null, so the next render doesn't
  // write to the texture
  renderer.setRenderTarget(null);

  // Render the scene again with the second shader
  renderer.render(scene, camera);

  // Use the first shader in the scene again before looping
  scene.overrideMaterial = material;

  // Check if all hexagons have been touched by the mouse
  if (checkHexagonStatus(currentRenderTarget)) {
    console.log('All hexagons found! Yay! 🎉🎉🎉');
  }

  requestAnimationFrame(render);
}

render();

window.addEventListener('mousemove', (e) => {
  mouseX = e.clientX - containerRect.left;
  mouseY = containerRect.height - (e.clientY - containerRect.top); // make y relative to the container, then flip the y-axis

  material.uniforms.u_mouse.value.set(mouseX, mouseY);
});

window.addEventListener('resize', () => {
  containerRect = container.getBoundingClientRect();
  renderer.setSize(containerRect.width, containerRect.height);
  previousRenderTarget.setSize(containerRect.width, containerRect.height);
  currentRenderTarget.setSize(containerRect.width, containerRect.height);

  material.uniforms.u_resolution.value.set(
    containerRect.width,
    containerRect.height
  );

  material2.uniforms.u_resolution.value.set(
    containerRect.width,
    containerRect.height
  );

  material.uniforms.u_mouse.value.set(NaN, NaN);

  render();
});

Conclusion

Congrats, friend! You made it to the end of my "interactive hexagon grid" tutorial series. In this tutorial series, we learned how to make an interactive grid using the 2D HTML canvas API, Shadertoy, and Three.js. Three different ways to make an interactive grid!

Right away, we can see how powerful WebGL is compared to the 2D HTML canvas API implementation. We can be certain that the user can reach every hexagon without resorting to the weird math and hacks we needed in Part 3 of this tutorial series.

The canvas can be resized to any width or height, and we'll still be able to detect if all hexagons have been touched by the mouse. We also don't have to create a new object in memory for every single hexagon. This helps improve performance in our web browser, but it does put more work on the GPU. Software development is all about tradeoffs! Whether you use the 2D canvas API or WebGL is up to you and your needs.

By completing this tutorial series, our JavaScript, GLSL, WebGL, Three.js, linear algebra, and computer graphics skills just got a boost! Level up that résumé! 🌟

If you found this tutorial series helpful, please consider donating to my cause! 💚💚💚

This tutorial series took over a month of research, planning, and writing. I would be eternally grateful if you donated to help keep this website running and my heart filled with determination! Or, should I say "determi-nathan" 😂

Sorry, you're on a website called "inspir-nathan". Of course I'm going to make puns! Until next time, happy coding and see you in the next tutorial! 🙂

Resources