Detailed usage of Chinese word segmentation based on the jieba package in Python

Time:2021-4-15

Detailed usage of Chinese word segmentation based on the jieba package in Python (1)

01. Preface

Previous articles used jieba segmentation here and there, but only scratched the surface. Here we go through the official documentation in an existing Python environment and give a more systematic introduction. Most of the content of this article comes from the official documentation.

02. Introduction of Jieba

02.1 What

“Jieba” (Chinese for “to stutter”) Chinese text segmentation: built to be the best Python Chinese word segmentation module.
“Jieba” Chinese word segmentation: the best Python Chinese word segmentation component

02.2 features

  • Three word segmentation modes are supported
    Accurate mode, which tries to cut the sentence into the most accurate segmentation and is suitable for text analysis;
    Full mode, which scans out all the words in the sentence that can form a word; it is very fast, but cannot resolve ambiguity;
    Search engine mode, which, on top of accurate mode, further splits long words to improve recall and is suitable for search engine segmentation.
  • Traditional Chinese segmentation is supported
  • Custom dictionaries are supported
  • MIT license

02.3 installation and use

Given that the organizations providing the various packages are gradually dropping maintenance of Python 2, it is strongly recommended to use Python 3. Installing jieba is also very simple.
Automatic installation: pip install jieba (Windows environment), pip3 install jieba (Linux environment);
Usage: import jieba
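A quick way to confirm that the installation works is to segment a short sentence (a minimal check, not taken from the official docs):

import jieba

# If the install succeeded, this prints the segmented words of the sample sentence,
# e.g. ['我', '来到', '北京', '清华大学'] ("I", "came to", "Beijing", "Tsinghua University").
print(jieba.lcut("我来到北京清华大学"))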

02.4 Algorithms involved

  • Based on a prefix dictionary, an efficient word-graph scan is performed to generate a directed acyclic graph (DAG) of all possible word combinations of the Chinese characters in the sentence
  • Dynamic programming is used to find the maximum-probability path, i.e. the maximum segmentation combination based on word frequency (see the toy sketch below)
  • For unknown words, an HMM (hidden Markov model) based on the word-forming ability of Chinese characters is used, together with the Viterbi algorithm
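The following is a toy sketch of the first two ideas only, not jieba's actual code: enumerate every dictionary word in the sentence to build a DAG, then use dynamic programming over log word frequencies to pick the most probable path. The mini-dictionary and its frequencies are made up for illustration.

import math

# Toy prefix dictionary: word -> frequency (made-up numbers, for illustration only)
FREQ = {"我": 1000, "来": 800, "来到": 600, "北京": 900, "清华": 300,
        "清华大学": 500, "华大": 50, "大学": 700, "到": 400}
TOTAL = sum(FREQ.values())

def build_dag(sentence):
    """For each start index, list every end index that forms a dictionary word."""
    dag = {}
    n = len(sentence)
    for i in range(n):
        ends = [j for j in range(i + 1, n + 1) if sentence[i:j] in FREQ]
        dag[i] = ends or [i + 1]          # unknown single character: fall back to itself
    return dag

def best_cut(sentence):
    """Dynamic programming: maximize the sum of log probabilities over the DAG."""
    dag, n = build_dag(sentence), len(sentence)
    route = {n: (0.0, n)}
    for i in range(n - 1, -1, -1):
        route[i] = max(
            (math.log(FREQ.get(sentence[i:j], 1)) - math.log(TOTAL) + route[j][0], j)
            for j in dag[i]
        )
    i, words = 0, []
    while i < n:
        j = route[i][1]
        words.append(sentence[i:j])
        i = j
    return words

print(best_cut("我来到北京清华大学"))   # -> ['我', '来到', '北京', '清华大学']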

03. Main functions

03.01 Word segmentation

  • The jieba.cut method takes three input parameters: the string to be segmented; the cut_all parameter, which controls whether to use full mode; and the HMM parameter, which controls whether to use the HMM model
  • The jieba.cut_for_search method takes two parameters: the string to be segmented, and whether to use the HMM model. This method is suitable for the segmentation used to build a search engine's inverted index, and its granularity is relatively fine
  • The string to be segmented can be a Unicode/UTF-8 string or a GBK string. Note: passing a GBK string directly is not recommended, because it may be wrongly decoded as UTF-8
  • jieba.cut and jieba.cut_for_search return an iterable generator. You can use a for loop to get each word (Unicode) produced by the segmentation, or use
  • jieba.lcut and jieba.lcut_for_search, which return a list directly
  • jieba.Tokenizer(dictionary=DEFAULT_DICT) creates a new custom tokenizer, which makes it possible to use different dictionaries at the same time. jieba.dt is the default tokenizer, and all global segmentation functions are mappings of this tokenizer.
    Code example
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2018-05-05 22:15:13
# @Author  : JackPI ([email protected])
# @Link    : https://blog.csdn.net/meiqi0538
# @Version : $Id$
import jieba

seg_list = jieba.cut("我来到北京清华大学", cut_all=True)   # "I came to Tsinghua University in Beijing"
print("Full mode: " + "/ ".join(seg_list))                  # full mode

seg_list = jieba.cut("我来到北京清华大学", cut_all=False)
print("Precise mode: " + "/ ".join(seg_list))               # precise mode

seg_list = jieba.cut("他来到了网易杭研大厦")                 # "He came to the NetEase Hangyan Building"; the default is precise mode
print(", ".join(seg_list))

seg_list = jieba.cut_for_search("小明硕士毕业于中国科学院计算所，后在日本京都大学深造")  # "Xiaoming graduated from the Institute of Computing, Chinese Academy of Sciences, and then studied at Kyoto University in Japan"; search engine mode
print(", ".join(seg_list))

 

Output results

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.026 seconds.
Prefix dict has been built succesfully.
Full mode: I / come to / Beijing / Tsinghua / Tsinghua University / Huada / University
Precise mode: I / come to / Beijing / Tsinghua University
He, came, came, Netease, hang Yan, mansion
Xiao Ming, master, graduated from, China, science, college, Academy of Sciences, Chinese Academy of Sciences, Institute of computing, after, in, Japan, Kyoto, University, Kyoto University, Japan
[Finished in 1.7s]

 

03.02 Adding a custom dictionary

  • Developers can specify their own custom dictionary to include words that are not in the built-in jieba dictionary. Although jieba can recognize new words on its own, adding them yourself guarantees higher accuracy
  • Usage: jieba.load_userdict(file_name) # file_name is a file-like object or the path of the custom dictionary
  • The dictionary format is the same as that of dict.txt: one word per line; each line has three parts: the word, its frequency (may be omitted) and its part of speech (may be omitted), separated by spaces, and the order must not be changed. If file_name is a path or a file opened in binary mode, the file must be UTF-8 encoded.
  • When the word frequency is omitted, an automatically calculated frequency that ensures the word can be segmented out is used
    Example of a custom dictionary
创新办 3 i
云计算 5
凱特琳 nz
台中
(i.e. "Innovation Office", "cloud computing", "Caitlin" and "Taichung")

 

  • You can change the tmp_dir and cache_file attributes of the tokenizer (the default one is jieba.dt) to specify, respectively, the folder in which the cache file is stored and its file name, for restricted file systems. A small sketch of this follows the example output below.
    Use case
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2018-05-05 22:15:13
# @Author  : JackPI ([email protected])
# @Link    : https://blog.csdn.net/meiqi0538
# @Version : $Id$
# Import the jieba package
import jieba
# Manage the system path
import sys
sys.path.append("../")
# Load the custom dictionary
jieba.load_userdict("userdict.txt")
# Import the part-of-speech tagging package
import jieba.posseg as pseg

# Add words
jieba.add_word('石墨烯')    # "graphene"
jieba.add_word('凱特琳')    # "Caitlin"
# Delete a word
jieba.del_word('自定义词')  # "custom word"
# Test data (a tuple of strings)
test_sent = (
"李小福是创新办主任也是云计算方面的专家; 什么是八一双鹿\n"   # "Li Xiaofu is the director of the Innovation Office and an expert on cloud computing; what is Bayi Shuanglu"
"例如我输入一个带“韩玉赏鉴”的标题，在自定义词库中也增加了此词为N类\n"  # "For example, I entered a title containing 'Hanyu appreciation' and added this word to the custom dictionary as class N"
"「台中」正確應該不會被切開。mac上可分出「石墨烯」；此時又可以分出來凱特琳了。"  # "'Taichung' should not be cut apart. On a mac, 'graphene' can be segmented out; now 'Caitlin' can be segmented out as well."
)
# Default segmentation
words = jieba.cut(test_sent)
print('/'.join(words))

print("="*40)
# Part-of-speech tagging
result = pseg.cut(test_sent)
# Print each word and its part of speech, separated by "/" and ", "
for w in result:
    print(w.word, "/", w.flag, ", ", end=' ')

print("\n" + "="*40)

# Segmenting English
terms = jieba.cut('easy_install is great')
print('/'.join(terms))
# Segmenting mixed English and Chinese
terms = jieba.cut('python 的正则表达式是好用的')   # "python's regular expressions are easy to use"
print('/'.join(terms))

print("="*40)
# Test frequency tuning
testlist = [
('今天天气不错', ('今天', '天气')),        # "The weather is nice today", ("today", "weather")
('如果放到post中将出错。', ('中', '将')),  # "If it is put into a post, an error will occur.", ("in", "will")
('我们中出了一个叛徒', ('中', '出')),      # "There is a traitor among us", ("among", "appear")
]

for sent, seg in testlist:
    print('/'.join(jieba.cut(sent, HMM=False)))
    word = ''.join(seg)
    print('%s Before: %s, After: %s' % (word, jieba.get_FREQ(word), jieba.suggest_freq(seg, True)))
    print('/'.join(jieba.cut(sent, HMM=False)))
    print("-"*40)

 

 

result

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.063 seconds.
Prefix dict has been built succesfully.
Li Xiaofu / yes / Innovation Office / Director / yes / Cloud Computing / aspect / Expert /; / what / yes / Bayi Shuanglu/
/For example / I / input / A / with / "/ Hanyu appreciation /" / Title /, / in / custom / Thesaurus / in / also / added / this / word is / N / class/
/"/ Taichung /" / correct / should / won't / be / cut /. /MAC / up / separable / "/ graphene /" /; / at this time / again / separable / come / Caitlin /.
========================================
Li Xiaofu / NR, is / V, innovation office / I, Director / B, ye / D, is / V, cloud computing / x, aspect / N, of / UJ, expert / N,; / x, / x, what / R, is / V, Bayi Shuanglu / NZ,  
 /X, e.g. / V, I / R, input / V, one / m, with / V, "/ x, Hanyu appreciation / NZ," / x, of / UJ, title / N,, / x, in / P, custom / L, thesaurus / N, middle / F, also / D, add / V, Le / UL, this / R, word / N, for / P, N / Eng, class / Q,  
 /X, "/ x, Taichung / s," / x, correct / AD, should / V, no / D, will / V, be / P, cut / AD. /X, MAC / Eng, up / F, separable / V, "/ x, graphene / x," / x,; / x, at this time / C, and / D, separable / C, separable / V, Lai / ZG, Caitlin / NZ, Le / UL. / x ,  
========================================
easy_install/ /is/ /great
Python // of / regular expressions / yes / easy to use / of
========================================
It's a nice day
Today's weather is before: 3, after: 0
Today / weather / nice
----------------------------------------
If / put / post / will / error /.
Before: 763, after: 494
If / put / post / in / will / go wrong /.
----------------------------------------
We are / in / out / out / one / traitor
Before: 3, after: 3
We are / in / out / out / one / traitor
----------------------------------------
[Finished in 2.6s]
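Relating to the tmp_dir / cache_file note above, here is a minimal sketch of redirecting the default tokenizer's cache; the directory and file names below are just assumptions for illustration:

import os
import jieba

# Assumption: ./jieba_cache is a directory we are allowed to write to.
os.makedirs("jieba_cache", exist_ok=True)
jieba.dt.tmp_dir = "jieba_cache"        # folder the cache file is written to
jieba.dt.cache_file = "my_jieba.cache"  # file name of the cache
jieba.initialize()                      # builds the prefix dict and writes the cache there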

 

03.03 Adjusting the dictionary

  • add_word(word, freq=None, tag=None) and del_word(word) can be used to modify the dictionary dynamically in a program.
  • suggest_freq(segment, tune=True) can be used to adjust the frequency of a single word so that it can (or cannot) be segmented out.
  • Note: the automatically calculated word frequencies may not take effect when the HMM new-word discovery feature is used.
>>> print('/'.join(jieba.cut('如果放到post中将出错。', HMM=False)))   # "If it is put into a post, an error will occur."
如果/放到/post/中将/出错/。
>>> jieba.suggest_freq(('中', '将'), True)
494
>>> print('/'.join(jieba.cut('如果放到post中将出错。', HMM=False)))
如果/放到/post/中/将/出错/。
>>> print('/'.join(jieba.cut('「台中」正确应该不会被切开', HMM=False)))   # "'Taichung' should not be cut apart"
「/台/中/」/正确/应该/不会/被/切开
>>> jieba.suggest_freq('台中', True)
69
>>> print('/'.join(jieba.cut('「台中」正确应该不会被切开', HMM=False)))
「/台中/」/正确/应该/不会/被/切开

Detailed usage of Chinese word segmentation based on the jieba package in Python (2)

02. Keyword extraction

02.01 keyword extraction based on TF-IDF algorithm

import jieba.analyse

 

  • jieba.analyse.extract_tags(sentence, topK=20, withWeight=False,
    allowPOS=())
    A few things to note:
    1. sentence is the text from which keywords are extracted
    2. topK is the number of keywords with the largest TF-IDF weights to return; the default is 20
    3. withWeight controls whether the keyword weight values are returned as well; the default is False
    4. allowPOS restricts the result to words with the specified parts of speech; the default is empty, i.e. no filtering
  • jieba.analyse.TFIDF(idf_path=None) creates a new TFIDF instance; idf_path is the path of the IDF frequency file

Code examples

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# @Date    : 2018-05-05 22:15:13
# @Author  : JackPI ([email protected])
# @Link    : https://blog.csdn.net/meiqi0538
# @Version : $Id$
import jieba
import jieba.analyse
# Read the file and return a string; read with UTF-8 encoding. The file (the novel "In the Name of the People") is in the same directory as this script.
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
tags = jieba.analyse.extract_tags(content,topK=10) 
print(",".join(tags))

 

Running results

Building prefix dict from the default dictionary ...
Dumping model to file cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.280 seconds.
Prefix dict has been built succesfully.
Hou Liang, Li Dakang, Gao Yuliang, Qi Tongwei, Gao Xiaoqin, Ruijin, Chen Hai, teacher, Ding Yizhen, Chenggong
[Finished in 5.9s]

 

The IDF text corpus used in keyword extraction can be switched to a custom corpus path

  • Usage: jieba.analyse.set_idf_path(file_name) # file_name is the path of the custom corpus
  • Example of a custom corpus (each line is a word followed by its IDF value; the pairs are the simplified and traditional forms of the same word):
    劳动防护 13.900677652
    勞動防護 13.900677652
    生化学 13.900677652
    生化學 13.900677652
    奥萨贝尔 13.900677652
    奧薩貝爾 13.900677652
    考察队员 13.900677652
    考察隊員 13.900677652
    岗上 11.5027823792
    崗上 11.5027823792
    倒车挡 12.2912397395
    倒車擋 12.2912397395
    编译 9.21854642485
    編譯 9.21854642485
    蝶泳 11.1926274509
    外委 11.8212361103
  • Examples of usage
import jieba
import jieba.analyse
# Read the file and return a string; read with UTF-8 encoding. (Note: this example simply runs extract_tags on the contents of idf.txt.big itself, which is why the result below is a list of numbers.)
content  = open('idf.txt.big','r',encoding='utf-8').read()
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))

 

result

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.186 seconds.
Prefix dict has been built succesfully.
13.2075304714,13.900677652,12.8020653633,12.5143832909,12.2912397395,12.1089181827,11.9547675029,11.8212361103,11.7034530746,11.598092559
[Finished in 20.9s]
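To actually switch the IDF corpus, point jieba.analyse at the custom file before extracting, roughly like this (the file names are the ones used earlier in this article):

import jieba.analyse

# Compute TF-IDF against the custom IDF corpus instead of the built-in one
jieba.analyse.set_idf_path("idf.txt.big")

content = open("人民的名义.txt", "r", encoding="utf-8").read()
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))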

 

The stop words text corpus used in keyword extraction can be switched to the path of custom corpus

  • Usage: jieba.analyse.set_stop_words(file_name) # file_name is the path of the custom corpus
  • Custom corpus example:
!
"
#
$
%
&
'
(
)
*
+
,
-
--
.
..
...
......
...................
./
1
reporter
number
year
month
day
hour
minute
second
/
//
0
1
2
3
4

 

  • Examples of usage
import jieba
import jieba.analyse
# Read the file and return a string; read with UTF-8 encoding. The file is in the same directory as this script.
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
jieba.analyse.set_stop_words("stopwords.txt")
tags = jieba.analyse.extract_tags(content, topK=10)
print(",".join(tags))

 

result

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.316 seconds.
Prefix dict has been built succesfully.
Hou Liang, Li Dakang, Gao Yuliang, Qi Tongwei, Gao Xiaoqin, Ruijin, Chen Hai, teacher, Ding Yizhen, Chenggong
[Finished in 5.2s]

 

Example of returning the keyword weight values along with the keywords

import jieba
import jieba.analyse
# Read the file and return a string; read with UTF-8 encoding. The file is in the same directory as this script.
content = open('人民的名义.txt', 'r', encoding='utf-8').read()
jieba.analyse.set_stop_words("stopwords.txt")
tags = jieba.analyse.extract_tags(content, topK=10,withWeight=True)
for tag in tags:
	print("tag:%s\t\t weight:%f"%(tag[0],tag[1]))

 

result

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.115 seconds.
Prefix dict has been built succesfully.
Tag: Hou Liang weight:0.257260
Tag: Li Dakang weight:0.143901
Tag: Gao Yuliang weight:0.108856
Tag: Qi Tongwei weight:0.098479
Tag: Gao Xiaoqin weight:0.062259
Tag: Ruijin weight:0.060405
Tag: Chen Hai weight:0.054036
Tag: teacher weight:0.051980
Tag: Ding Yizhen weight:0.049729
Tag: Chenggong weight:0.046647
[Finished in 5.3s]

 

02.02 part of speech tagging

  • jieba.posseg.POSTokenizer(tokenizer=None) creates a new custom tokenizer; the tokenizer parameter can specify the jieba.Tokenizer used internally. jieba.posseg.dt is the default part-of-speech tagging tokenizer.
  • The part of speech of each word after sentence segmentation is tagged, using a tag set compatible with ICTCLAS.
  • Examples of usage
>>> import jieba.posseg as pseg
>>> words = pseg.cut("我爱北京天安门")   # "I love Tiananmen in Beijing"
>>> for word, flag in words:
...    print('%s %s' % (word, flag))
...
我 r
爱 v
北京 ns
天安门 ns

 

Part of speech comparison table

Code    Part of speech    Notes
ag    adjective morpheme    Adjective morpheme. The adjective code is a; the morpheme code g is prefixed with a.
a     adjective    Takes the first letter of the English word "adjective".
ad    adverbial adjective    An adjective used directly as an adverbial. Combines the adjective code a with the adverb code d.
an    nominal adjective    An adjective with a noun function. Combines the adjective code a with the noun code n.
b     distinguishing word    Takes the initial of the Chinese character 别 (bie).
c     conjunction    Takes the first letter of the English word "conjunction".
dg    adverbial morpheme    The adverb code is d; the morpheme code g is prefixed with d.
d     adverb    Takes the second letter of "adverb", because the first letter is already used for adjectives.
e     interjection    Takes the first letter of the English word "exclamation".
f     locative word    Takes the initial of the Chinese character 方 (fang).
g     morpheme    Most morphemes can serve as the "root" of compound words; takes the initial of 根 (gen, "root").
h     preceding component    Takes the first letter of the English word "head".
i     idiom    Takes the first letter of the English word "idiom".
j     abbreviation    Takes the initial of the Chinese character 简 (jian).
k     following component
l     idiomatic phrase    Not yet a full idiom, somewhat "temporary"; takes the initial of 临 (lin).
m     numeral    Takes the third letter of "numeral"; n and u have other uses.
ng    noun morpheme    The noun code is n; the morpheme code g is prefixed with n.
n     noun    Takes the first letter of the English word "noun".
nr    person name    Combines the noun code n with the initial of 人 (ren).
ns    place name    Combines the noun code n with the locative code s.
nt    organization name    The initial of 团 (tuan) is t; combines the noun codes n and t.
nz    other proper noun    The first letter of the initial of 专 (zhuan) is z; combines the noun codes n and z.
o     onomatopoeia    Takes the first letter of the English word "onomatopoeia".
p     preposition    Takes the first letter of the English word "prepositional".
q     classifier    Takes the first letter of the English word "quantity".
r     pronoun    Takes the second letter of "pronoun", because p is already used for prepositions.
s     place word    Takes the first letter of the English word "space".
tg    time morpheme    The time-word code is t; the morpheme code g is prefixed with t.
t     time word    Takes the first letter of the English word "time".
u     auxiliary word    From the English word "auxiliary".
vg    verb morpheme    The verb code is v; the morpheme code g is prefixed with v.
v     verb    Takes the first letter of the English word "verb".
vd    adverbial verb    A verb used directly as an adverbial. Combines the verb and adverb codes.
vn    nominal verb    A verb with a noun function. Combines the verb and noun codes.
w     punctuation
x     non-morpheme character    A non-morpheme character is just a symbol; the letter x usually represents unknowns and symbols.
y     modal particle    Takes the initial of the Chinese character 语 (yu).
z     status word    Takes the first letter of the initial of the Chinese character 状 (zhuang).
un    unknown word    Unrecognized words and user-defined phrases. Takes the first two letters of "unknown". (Not a Peking University standard; defined in the CSW segmenter.)

02.03 parallel participle

  • Principle: after dividing the target text by lines, assign each line of text to multiple Python processes for parallel word segmentation, and then merge the results, so as to obtain a considerable increase in the speed of word segmentation
  • Based on the multiprocessing module of python, windows is not supported at present
  • usage
    jieba.enable_parallel(4) # enable parallel segmentation mode; the argument is the number of parallel processes
    jieba.disable_parallel() # disable parallel segmentation mode
    Official use cases
import sys
import time
sys.path.append("../../")
import jieba

jieba.enable_parallel()

url = sys.argv[1]
content = open(url,"rb").read()
t1 = time.time()
words = "/ ".join(jieba.cut(content))

t2 = time.time()
tm_cost = t2-t1

log_f = open("1.log","wb")
log_f.write(words.encode('utf-8'))

print('speed %s bytes/second' % (len(content)/tm_cost))

 

  • Note: parallel segmentation only supports the default tokenizers jieba.dt and jieba.posseg.dt.

02.04 tokenize: returns the starting and ending positions of words in the original text

Note that the input parameter only accepts Unicode
Default mode

import jieba
import jieba.analyse
result = jieba.tokenize(u'永和服装饰品有限公司')   # "Yonghe clothing accessories Co., Ltd."
for tk in result:
    print("word %s\t\t start: %d \t\t end:%d" % (tk[0],tk[1],tk[2]))

 

result

Building prefix dict from the default dictionary ...
Loading model from cache C:\Users\JACKPI~1\AppData\Local\Temp\jieba.cache
Loading model cost 1.054 seconds.
Prefix dict has been built succesfully.
word 永和   start: 0   end:2
word 服装   start: 2   end:4
word 饰品   start: 4   end:6
word 有限公司   start: 6   end:10
[Finished in 3.3s]

 

  • search mode
result = jieba.tokenize(u'永和服装饰品有限公司', mode='search')   # search mode
for tk in result:
    print("word %s\t\t start: %d \t\t end:%d" % (tk[0],tk[1],tk[2]))

 

result

word 永和   start: 0   end:2
word 服装   start: 2   end:4
word 饰品   start: 4   end:6
word 有限   start: 6   end:8
word 公司   start: 8   end:10
word 有限公司   start: 6   end:10

 

02.05 ChineseAnalyzer for the Whoosh search engine

  • Import: from jieba.analyse import ChineseAnalyzer
  • Official cases
# -*- coding: UTF-8 -*-
from __future__ import unicode_literals
import sys,os
sys.path.append("../")
from whoosh.index import create_in,open_dir
from whoosh.fields import *
from whoosh.qparser import QueryParser

from jieba.analyse import ChineseAnalyzer

analyzer = ChineseAnalyzer()

schema = Schema(title=TEXT(stored=True), path=ID(stored=True), content=TEXT(stored=True, analyzer=analyzer))
if not os.path.exists("tmp"):
    os.mkdir("tmp")

ix = create_in("tmp", schema) # for create new index
#ix = open_dir("tmp") # for read only
writer = ix.writer()

writer.add_document(
    title="document1",
    path="/a",
    content="This is the first document we’ve added!"
)

writer.add_document(
    title="document2",
    path="/b",
    Content: "the second one your Chinese test is even more interesting
)

writer.add_document(
    title="document3",
    path="/c",
    Content = "buy fruit and then go to the Expo Garden. "
)

writer.add_document(
    title="document4",
    path="/c",
    Content: "after going through the subordinate departments every month, the director of industry and information technology department should personally explain the installation work of 24 port switch and other technical devices."
)

writer.add_document(
    title="document4",
    path="/c",
    content="咱俩交换一下吧。"  # "Let's exchange."
)

writer.commit()
searcher = ix.searcher()
parser = QueryParser("content", schema=ix.schema)

for keyword in ("水果世博园", "你", "first", "中文", "交换机", "交换"):
    print("result of ",keyword)
    q = parser.parse(keyword)
    results = searcher.search(q)
    for hit in results:
        print(hit.highlights("content"))
    print("="*10)

for t in analyzer("我的好朋友是李明;我爱北京天安门;IBM和Microsoft; I have a dream. this is interesting and interested me a lot"):  # "My good friend is Li Ming; I love Tiananmen in Beijing; IBM and Microsoft; ..."
    print(t.text)

 

03. Delayed loading

jieba uses lazy loading: import jieba and jieba.Tokenizer() do not immediately trigger the loading of the dictionary; the dictionary is loaded and the prefix dictionary built only when it is needed. If you want to initialize jieba manually, you can also do so:

import jieba
jieba.initialize()  # manual initialization (optional)

 

Official use cases

#encoding=utf-8
from __future__ import print_function
import sys
sys.path.append("../")
import jieba

def cuttest(test_sent):
    result = jieba.cut(test_sent)
    print("  ".join(result))

def testcase():
    cuttest("这是一个伸手不见五指的黑夜。我叫孙悟空，我爱北京，我爱Python和C++。")  # "It's a pitch-black night. My name is Sun Wukong (the Monkey King), I love Beijing, I love Python and C++."
    cuttest("我不喜欢日本和服。")        # "I don't like Japanese kimonos."
    cuttest("雷猴回归人间。")            # "The thunder monkey returns to the human world."
    cuttest("工信处女干事每月经过下属科室都要亲口交代24口交换机等技术性器件的安装工作")  # the 24-port switch sentence
    cuttest("我需要廉租房")              # "I need low-rent housing"
    cuttest("永和服装饰品有限公司")      # "Yonghe clothing accessories Co., Ltd."
    cuttest("我爱北京天安门")            # "I love Tiananmen in Beijing"
    cuttest("abc")
    cuttest("隐马尔可夫")                # "hidden Markov"
    cuttest("雷猴是个好网站")            # "Leihou is a good website"
    
if __name__ == "__main__":
    testcase()
    jieba.set_dictionary("foobar.txt")
    print("================================")
    testcase()

 

 

04. Other dictionaries

1. A dictionary file that occupies less memory: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.small
2. A dictionary file with better support for traditional Chinese segmentation: https://github.com/fxsjy/jieba/raw/master/extra_dict/dict.txt.big
Download the dictionary you need, then either overwrite jieba/dict.txt or use jieba.set_dictionary('data/dict.txt.big').
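A minimal sketch of switching to the big dictionary after downloading it (the path below is simply wherever you saved the file):

import jieba

# Assumption: dict.txt.big has been downloaded into ./data/
jieba.set_dictionary('data/dict.txt.big')
jieba.initialize()  # optional: load the new dictionary right away

print("/".join(jieba.cut("我来到北京清华大学")))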

 
