One-hot encoding of words and text

Time: 2021-01-20

Word > letter > vector

Neural networks are built on mathematics, so they operate on numbers. Whatever the feature data is, whether images, text, audio, or video, it must be fed into the network in vector form.

One-hot encoding is a common encoding scheme. In multi-class classification, the labels fed into the neural network are one-hot codes. For example, handwritten digit recognition has 10 categories; if an image's label is 6, its one-hot code is [0, 0, 0, 0, 0, 0, 1, 0, 0, 0].
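
As a quick illustration, here is a minimal sketch of that label encoding with NumPy (the class count of 10 and the label 6 come from the digit example above; np.eye builds an identity matrix whose rows are exactly the one-hot vectors):

import numpy as np

num_classes = 10
label = 6

# Row 6 of the 10x10 identity matrix is the one-hot code for label 6
one_hot = np.eye(num_classes)[label]
print(one_hot)   # [0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]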


The following shows how to one-hot encode a word:

import numpy as np

# Mapping dictionary: each letter of the alphabet gets an index
word_id = {'a': 0, 'b': 1, 'c': 2, 'd': 3, 'e': 4, 'f': 5, 'g': 6,
           'h': 7, 'i': 8, 'j': 9, 'k': 10, 'l': 11, 'm': 12, 'n': 13,
           'o': 14, 'p': 15, 'q': 16, 'r': 17, 's': 18, 't': 19,
           'u': 20, 'v': 21, 'w': 22, 'x': 23, 'y': 24, 'z': 25}

# Word to encode
word = 'china'

# One-hot encoding: one row per letter, one column per alphabet entry
arr = np.zeros((len(word), len(word_id)))
for k, w in enumerate(word):
    arr[k][word_id[w]] = 1

print(arr)

Output:

[[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
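
As a sanity check, each row can be decoded back to its letter with argmax, which returns the position of the 1 in that row (the id_word reverse mapping here is a hypothetical helper, not part of the original code):

# Reverse mapping: index -> letter, built from word_id above
id_word = {v: k for k, v in word_id.items()}

# argmax over each row finds the index of the 1
decoded = ''.join(id_word[i] for i in arr.argmax(axis=1))
print(decoded)   # china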


Text > word > vector

Encoding text differs from encoding a single word. Word encoding uses the 26 letters as the mapping dictionary, while text encoding builds its dictionary at the word level, because words are the units that carry semantics. In practice, what usually matters is the meaning a text expresses, not the letters it happens to be composed of.

# Text to encode
text = 'I am Chinese, I love China'

# Total word count across the whole text (commas replaced with spaces
# so they do not stick to words); used as the padded sentence length
total_num = len(text.replace(',', ' ').split())

# Mapping dictionary: each distinct word gets an index
word_id = {}
sentences = text.split(',')
for line in sentences:
    for word in line.split():
        if word not in word_id:
            word_id[word] = len(word_id)

print(word_id)

# One-hot encoding: one matrix per sentence, one row per word slot;
# sentences shorter than total_num leave trailing all-zero rows
arr = np.zeros((len(sentences), total_num, len(word_id)))
for k, v in enumerate(sentences):
    for kk, vv in enumerate(v.split()):
        arr[k][kk][word_id[vv]] = 1

print(arr)

Output:

{'I': 0, 'am': 1, 'Chinese': 2, 'love': 3, 'China': 4}
[[[1. 0. 0. 0. 0.]
  [0. 1. 0. 0. 0.]
  [0. 0. 1. 0. 0.]
  [0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0.]]

 [[1. 0. 0. 0. 0.]
  [0. 0. 0. 1. 0.]
  [0. 0. 0. 0. 1.]
  [0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0.]
  [0. 0. 0. 0. 0.]]]
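
The same argmax trick recovers the words, with one caveat: an all-zero padding row would also report index 0, so empty rows must be skipped first (a sketch reusing arr and word_id from above; the id_word helper is again hypothetical):

# Reverse mapping: index -> word
id_word = {v: k for k, v in word_id.items()}

for sentence in arr:
    # row.any() is False for the all-zero padding rows, so they are skipped
    words = [id_word[row.argmax()] for row in sentence if row.any()]
    print(' '.join(words))

# I am Chinese
# I love China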