The author has written a companion article (https://segmentfault.com/a/11…) introducing the implementation of the canvas-based text editor "simple poem", in which the text is rendered with WebGL. This article mainly describes the process of obtaining font data via canvas, segmenting and analyzing strokes, and rendering the effect with WebGL.
Introduction
Drawing text with the canvas native API is easy, but the text-beautification features the native API provides are very limited. If you want to draw artistic text beyond strokes and gradients, and don't want to spend the time and effort to build a special font library, rendering with WebGL is a good choice.
This article focuses on how to use the canvas native API to obtain a text's pixel data, and how to perform stroke segmentation, edge search, normal calculation and other processing. Finally, the information is passed into shaders to achieve basic lighting of three-dimensional text.
The advantage of using the canvas native API to obtain text pixel information is that it can draw any font the browser supports without extra font files; the disadvantage is the high time complexity of the data processing for some advanced requirements (such as stroke segmentation). For personal projects, however, this is a quick way to build custom WordArt-style effects.
The final effect:
This article focuses on the text data processing, so only a relatively simple rendering effect is used; with this data, however, it is easy to design much cooler text art effects.
Source code of the "simple poem" editor: https://github.com/moyuer1992…
Preview address:https://moyuer1992.github.io/…
Core code of the text processing: https://github.com/moyuer1992…
WebGL rendering core code: https://github.com/moyuer1992…
Getting font pixels with canvas
Getting the pixel information of the text is the first step.
We use an off-screen canvas to draw the basic text. Set the font size to size (size = 200 in the project), and make the canvas side length equal to the font size. The larger size is, the more accurate the pixel information obtained; the cost, of course, is that it takes longer. If speed matters more, reduce size.
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.font = size + 'px ' + (options.font || 'official script');
ctx.fillStyle = 'black';
ctx.textAlign = 'center';
ctx.textBaseline = 'middle';
ctx.fillText(text, width / 2, height / 2);
Get pixel information:
var imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
var data = imageData.data;
The data variable is now the pixel data we need. Let's look at its structure.
As you can see, the result is an array of 200×200×4 values. The 200×200 canvas has 40000 pixels in total, and the color of each pixel is represented by four values (RGBA). Since we fill with black, the first three values are always 0. The fourth value is the alpha channel: it is 0 for uncolored pixels and greater than 0 for colored ones. Therefore, to judge whether the text has a value at row j, column i, we only need to check whether data[(j * ctx.canvas.width + i) * 4 + 3] is greater than zero.
We can therefore write a function that determines whether a given position is colored:
var hasPixel = function (j, i) {
// row j, column i
if (i < 0 || j < 0) {
return false;
}
return !!data[(j * ctx.canvas.width + i) * 4 + 3];
};
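As a quick sanity check on the index arithmetic (the helper name alphaIndex is ours, introduced only for illustration):

```javascript
// For a row-major RGBA buffer of width w, the alpha byte of the pixel at
// row j, column i sits at index (j * w + i) * 4 + 3.
function alphaIndex(j, i, width) {
  return (j * width + i) * 4 + 3;
}

// e.g. on a 200-pixel-wide canvas, row 1, column 2 maps to (1 * 200 + 2) * 4 + 3 = 811
```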
Stroke segmentation
Next we need to segment the strokes. This is essentially a search for connected domains: treat the text as an image, find all connected parts, and each part is a stroke.
For the idea behind finding connected domains, refer to this article.
The algorithm has several steps:

1. Scan the image line by line and record the connected segments of each line.

2. Label each connected segment: in the first line, number the segments starting from 1; in later lines, check whether a segment is connected to a segment in the previous line and, if so, give it that segment's label.

3. If a segment is connected to two segments in the previous line at the same time, record the two labels as an association pair.

4. Merge all the association pairs (a union-find process) to obtain a unique label for each connected domain.
The following is the core code; the key variables are defined as follows:

g: two-dimensional width × height array indicating which connected domain each pixel belongs to; a value of 0 means the pixel is not on the text and is transparent.

e: two-dimensional width × height array indicating whether each pixel is an image edge.

markMap: records the association pairs.

cnt: the total number of labels before the association pairs are merged.
Progressive scanning:
for (var j = 0; j < ctx.canvas.height; j += grid) {
g.push([]);
e.push([]);
for (var i = 0; i < ctx.canvas.width; i += grid) {
var value = 0;
var isEdge = false;
if (hasPixel(j, i)) {
value = markPoint(j, i);
}
e[j][i] = isEdge;
g[j][i] = value;
}
}
Marking:
var markPoint = function (j, i) {
var value = 0;
if (i > 0 && hasPixel(j, i - 1)) {
// connected to the left
value = g[j][i - 1];
} else {
value = ++cnt;
}
if (j > 0 && hasPixel(j - 1, i) && (i === 0 || !hasPixel(j - 1, i - 1))) {
// connected above and not connected to the upper left (i.e. first connection to the previous row)
if (g[j - 1][i] !== value) {
markMap.push([g[j - 1][i], value]);
}
}
if (!hasPixel(j, i - 1)) {
// start of a segment
if (hasPixel(j - 1, i - 1) && g[j - 1][i - 1] !== value) {
// connected to the upper left
markMap.push([g[j - 1][i - 1], value]);
}
}
if (!hasPixel(j, i + 1)) {
// end of a segment
if (hasPixel(j - 1, i + 1) && g[j - 1][i + 1] !== value) {
// connected to the upper right
markMap.push([g[j - 1][i + 1], value]);
}
}
return value;
};
So far the whole image has been traversed and steps 1–3 of the algorithm are complete. Next we merge the labels according to the association information in markMap, so that pixels whose labels fall in the same class end up in the same connected domain (i.e. the same stroke).
Merging the label association pairs is a union-find (disjoint-set) problem. The core code is as follows:
for (var i = 0; i <= cnt; i++) {
markArr[i] = i;
}
var findFather = function (n) {
if (markArr[n] === n) {
return n;
} else {
markArr[n] = findFather(markArr[n]);
return markArr[n];
}
}
for (i = 0; i < markMap.length; i++) {
var a = markMap[i][0];
var b = markMap[i][1];
var f1 = findFather(a);
var f2 = findFather(b);
if (f1 !== f2) {
markArr[f2] = f1;
}
}
Finally we obtain the markArr array, which records the final class label corresponding to each original label.
For example: let the image array labeled in the previous step be g; if markArr[3] = 1 and markArr[5] = 1, then all pixels in g with value 3 or 5 ultimately belong to the connected domain labeled 1.
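The merge step can be sketched in isolation; the pair values below are made up purely for illustration, while markArr, markMap and findFather mirror the article's variables:

```javascript
// Standalone sketch of the union-find merge: labels run 1..cnt,
// association pairs are as produced by the scan.
var cnt = 5;
var markMap = [[3, 1], [5, 3]]; // hypothetical association pairs
var markArr = [];
for (var i = 0; i <= cnt; i++) {
  markArr[i] = i;
}
var findFather = function (n) {
  if (markArr[n] === n) {
    return n;
  }
  markArr[n] = findFather(markArr[n]); // path compression
  return markArr[n];
};
for (i = 0; i < markMap.length; i++) {
  var f1 = findFather(markMap[i][0]);
  var f2 = findFather(markMap[i][1]);
  if (f1 !== f2) {
    markArr[f2] = f1;
  }
}
// Labels 1, 3 and 5 now resolve to the same root; label 2 stays separate.
```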
From the markArr array we can derive the final connected-domain segmentation data.
Text outline search
After getting the segmented image data, we could already render it with WebGL using gl.POINTS, giving each stroke a different color. But that's not what we need: we want to render the text as a three-dimensional model, which means converting the two-dimensional lattice into three-dimensional geometry.
Suppose the text has n strokes; the data we have can then be regarded as n connected lattices. First we need to turn these n lattices into n two-dimensional planar shapes. In WebGL all faces must be built from triangles, so we need to turn each lattice into a set of adjacent triangles.
Perhaps the first idea that comes to mind is to connect every three adjacent pixels into a triangle. That does work, but because there are so many pixels this method takes a long time and is not recommended.
Our solution is as follows:

1. Find the contour of each stroke (i.e. each connected domain) and store it in an array in clockwise order.

2. The contour of each connected domain can then be treated as a polygon, which a classical triangulation algorithm splits into triangles.
The contour-search algorithm can also be found in this article.
The general idea: scan for the first pixel of the domain whose upper neighbour is empty and take it as the starting point of the outer contour, record its entry direction as 6 (straight up), then repeatedly find the next connected pixel clockwise and record its entry direction, until the path returns to the starting point.
Next we must handle hollowed-out regions, which means finding inner contour points: find the first pixel whose lower neighbour is empty and which does not lie on any contour found so far, take it as the starting point of the inner contour, and record its entry direction as 2 (straight down). The remaining steps are the same as for the outer contour.
Note that the image may have more than one inner contour, so this must be done in a loop. If no such pixel exists, there is no inner contour.
Thanks to the earlier data processing, it is easy to judge whether a pixel is on the contour: just check whether any neighbouring pixel is empty. The key problem is that the triangulation algorithm requires the vertices of the "polygon" to be ordered, so the core logic is really how to sort the contour pixels clockwise.
The method for ordered contour search on a single connected domain is as follows.
Variable definitions:

v: label number of the current connected domain

g: two-dimensional width × height array indicating which connected domain each pixel belongs to; 0 means the pixel is not on the text (transparent), and v means it is in the current domain.

e: two-dimensional width × height array indicating whether each pixel is an image edge.

entryRecord: array of entry-direction marks

rs: the final contour result

holes: the start indices of the inner contours, if any (inner contour points sit at the end of the rs array; if there are several inner contours, only each one's start position needs recording. This matches the parameter format of the earcut triangulation library, discussed later.)
Code:
function orderEdge (g, e, v, gap) {
v++;
var rs = [];
var entryRecord = [];
var start = findOuterContourEntry(g, v);
var next = start;
var end = false;
rs.push(start);
entryRecord.push(6);
var holes = [];
var mark;
var holeMark = 2;
e[start[1]][start[0]] = holeMark;
var process = function (i, j) {
if (i < 0 || i >= g[0].length || j < 0 || j >= g.length) {
return false;
}
if (g[j][i] !== v || tmp) {
return false;
}
e[j][i] = holeMark;
tmp = [i, j];
rs.push(tmp);
mark = true;
return true;
}
var map = [
(i,j) => {return {'i': i + 1, 'j': j}},
(i,j) => {return {'i': i + 1, 'j': j + 1}},
(i,j) => {return {'i': i, 'j': j + 1}},
(i,j) => {return {'i': i - 1, 'j': j + 1}},
(i,j) => {return {'i': i - 1, 'j': j}},
(i,j) => {return {'i': i - 1, 'j': j - 1}},
(i,j) => {return {'i': i, 'j': j - 1}},
(i,j) => {return {'i': i + 1, 'j': j - 1}},
];
var convertEntry = function (index) {
var arr = [4, 5, 6, 7, 0, 1, 2, 3];
return arr[index];
}
while (!end) {
var i = next[0];
var j = next[1];
var tmp = null;
var entryIndex = entryRecord[entryRecord.length - 1];
for (var c = 0; c < 8; c++) {
var index = ((entryIndex + 1) + c) % 8;
var hasNext = process(map[index](i, j).i, map[index](i, j).j);
if (hasNext) {
entryIndex = convertEntry(index);
break;
}
}
if (tmp) {
next = tmp;
if ((next[0] === start[0]) && (next[1] === start[1])) {
var innerEntry = findInnerContourEntry(g, v, e);
if (innerEntry) {
next = start = innerEntry;
e[start[1]][start[0]] = holeMark;
rs.push(next);
entryRecord.push(entryIndex);
entryIndex = 2;
holes.push(rs.length - 1);
holeMark++;
} else {
end = true;
}
}
} else {
rs.splice(rs.length - 1, 1);
entryIndex = convertEntry(entryRecord.splice(entryRecord.length - 1, 1)[0]);
next = rs[rs.length - 1];
}
entryRecord.push(entryIndex);
}
return [rs, holes];
}
function findOuterContourEntry (g, v) {
var start = [-1, -1];
for (var j = 0; j < g.length; j++) {
for (var i = 0; i < g[0].length; i++) {
if (g[j][i] === v) {
start = [i, j];
return start;
}
}
}
return start;
}
function findInnerContourEntry (g, v, e) {
var start = false;
for (var j = 0; j < g.length; j++) {
for (var i = 0; i < g[0].length; i++) {
if (g[j][i] === v && (g[j + 1] && g[j + 1][i] === 0)) {
var isInContours = false;
if (typeof(e[j][i]) === 'number') {
isInContours = true;
}
if (!isInContours) {
start = [i, j];
return start;
}
}
}
}
return start;
}
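A small observation about the convertEntry helper above: with the eight neighbours numbered 0–7 clockwise starting from "right" (the order of the map array), its lookup table is simply "the opposite direction":

```javascript
// convertEntry's table [4, 5, 6, 7, 0, 1, 2, 3] is equivalent to (d + 4) % 8:
// reversing a step means entering from the opposite of the eight directions.
function oppositeDirection(d) {
  return (d + 4) % 8;
}
```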
To check the inner-contour search, let's test with a character that contains a ring-shaped connected domain.
Everything looks OK, so this step is done.
Constructing faces by triangulation
For the triangulation process we use the open-source library earcut.
earcut computes the triangle array:
var triangles = earcut(flatten(points), holes);
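The flatten helper is not shown in the excerpt above; earcut expects a flat coordinate array, so something along these lines is assumed (the holes argument is the list of inner-contour start indices built during orderEdge):

```javascript
// Turn [[x0, y0], [x1, y1], ...] into [x0, y0, x1, y1, ...],
// the flat two-dimensional vertex format earcut takes.
function flatten(points) {
  var coords = [];
  for (var i = 0; i < points.length; i++) {
    coords.push(points[i][0], points[i][1]);
  }
  return coords;
}
```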
For each triangle, the coordinates of its three vertices must be set when passed into the shader, and the normal vector of the triangle's plane must be computed. For a triangle with vertices a, b and c, the normal is:
var normal = cross(subtract(b, a), subtract(c, a));
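subtract and cross are the usual vector helpers; this sketch assumes vectors are plain 3-element arrays:

```javascript
function subtract(a, b) {
  return [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
}

function cross(u, v) {
  return [
    u[1] * v[2] - u[2] * v[1],
    u[2] * v[0] - u[0] * v[2],
    u[0] * v[1] - u[1] * v[0]
  ];
}

// For a = (0,0,0), b = (1,0,0), c = (0,1,0) the normal is (0,0,1):
var normal = cross(subtract([1, 0, 0], [0, 0, 0]), subtract([0, 1, 0], [0, 0, 0]));
```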
Building the three-dimensional text model
So far we only have one face of the text. Since we want three-dimensional text, we need to build the front, back and sides at the same time.
The front and back faces are easy to obtain:
for (var n = 0; n < triangles.length; n += 3) {
var a = points[triangles[n]];
var b = points[triangles[n + 1]];
var c = points[triangles[n + 2]];
//=====Font front data=====
triangle(vec3(a[0], a[1], z), vec3(b[0], b[1], z), vec3(c[0], c[1], z), index);
//=====Font back data=====
triangle(vec3(a[0], a[1], z2), vec3(b[0], b[1], z2), vec3(c[0], c[1], z2), index);
}
The emphasis is on constructing the sides, where both inner and outer contours must be considered. For each pair of adjacent contour points, the corresponding front and back points form a rectangle, which is split into two triangles to build the side surface. The code is as follows:
var holesMap = [];
var last = 0;
if (holes.length) {
for (var holeIndex = 0; holeIndex < holes.length; holeIndex++) {
holesMap.push([last, holes[holeIndex] - 1]);
last = holes[holeIndex];
}
}
holesMap.push([last, points.length - 1]);
for (var i = 0; i < holesMap.length; i++) {
var startAt = holesMap[i][0];
var endAt = holesMap[i][1];
for (var j = startAt; j < endAt; j++) {
triangle(vec3(points[j][0], points[j][1], z), vec3(points[j][0], points[j][1], z2), vec3(points[j+1][0], points[j+1][1], z), index);
triangle(vec3(points[j][0], points[j][1], z2), vec3(points[j+1][0], points[j+1][1], z2), vec3(points[j+1][0], points[j+1][1], z), index);
}
triangle(vec3(points[startAt][0], points[startAt][1], z), vec3(points[startAt][0], points[startAt][1], z2), vec3(points[endAt][0], points[endAt][1], z), index);
triangle(vec3(points[startAt][0], points[startAt][1], z2), vec3(points[endAt][0], points[endAt][1], z2), vec3(points[endAt][0], points[endAt][1], z), index);
}
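The contour-range bookkeeping at the top of that loop can be checked on a toy case (buildHolesMap simply wraps the same logic in a function for illustration):

```javascript
// Split the contour point list into [start, end] index ranges, one range
// per contour: with 8 points and one inner contour starting at index 5,
// the outer contour spans [0, 4] and the inner one [5, 7].
function buildHolesMap(holes, pointCount) {
  var holesMap = [];
  var last = 0;
  for (var k = 0; k < holes.length; k++) {
    holesMap.push([last, holes[k] - 1]);
    last = holes[k];
  }
  holesMap.push([last, pointCount - 1]);
  return holesMap;
}
```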
WebGL rendering
So far we have processed all the necessary data. Next, we pass the useful parameters to the vertex shader.
The parameters passed into the vertex shader are defined as follows:
attribute vec3 vPosition;
attribute vec4 vNormal;
uniform vec4 ambientProduct, diffuseProduct, specularProduct;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform vec4 lightPosition;
uniform float shininess;
uniform mat3 normalMatrix;
The output variables from vertex shaders to fragment shaders are defined as follows:
varying vec4 fColor;
Vertex shader key code:
vec4 aPosition = vec4(vPosition, 1.0);
……
gl_Position = projectionMatrix * modelViewMatrix * aPosition;
fColor = ambient + diffuse + specular;
Key code of the fragment shader:
gl_FragColor = fColor;
Follow-up
The rendering of a three-dimensional Chinese character is now complete. You may feel this effect isn't cool enough and want to add some animation to it. Don't worry: the next article will introduce text effects and animation design.