Tag: Tokenizer

  • Build an open source project 13 – install the IK tokenizer and ZooKeeper

    Time: 2020-11-27

    1. Installing the IK tokenizer. Download the IK tokenizer plug-in: wget https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.2/elasticsearch-analysis-ik- Downloading from the Linux box is very slow, so I fetched the release from GitHub in advance. Now to install it: mkdir /opt/elasticsearch/elasticsearch-6.4.2/plugins/elasticsearch-analysis-ik-6.4.2 && cd /opt/elasticsearch/elasticsearch-6.4.2/plugins/elasticsearch-analysis-ik-6.4.2 && unzip elasticsearch-analysis-ik-6.4.2.zip Once it is unzipped, the IK […]
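    For reference, a minimal sketch of that install as a script, assuming Elasticsearch 6.4.2 lives under /opt/elasticsearch/elasticsearch-6.4.2 as in the excerpt and that the release asset carries the standard elasticsearch-analysis-ik-6.4.2.zip name:

      # Assumed install prefix, taken from the excerpt above
      ES_HOME=/opt/elasticsearch/elasticsearch-6.4.2

      # Option A: let Elasticsearch's own plugin tool download and install it
      "$ES_HOME/bin/elasticsearch-plugin" install \
        https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v6.4.2/elasticsearch-analysis-ik-6.4.2.zip

      # Option B: unzip a pre-downloaded release into the plugins directory
      mkdir -p "$ES_HOME/plugins/elasticsearch-analysis-ik-6.4.2"
      unzip elasticsearch-analysis-ik-6.4.2.zip -d "$ES_HOME/plugins/elasticsearch-analysis-ik-6.4.2"

      # Restart Elasticsearch afterwards so the plugin is picked up

    Either route works, but the plugin version must match the Elasticsearch version exactly.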

  • Elastic installation

    Time: 2020-10-26

    1. Introduction to Elasticsearch 2. Elasticsearch installation: 1. Install it with Docker: docker pull elasticsearch:7.2.0 2. Start ES: docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" -d elasticsearch:7.2.0 3. Modify the configuration to solve the cross-domain (CORS) access problem -> docker exec […]
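    The excerpt stops at the docker exec step; here is a sketch of the whole sequence, assuming the cross-domain fix is the usual http.cors pair appended to elasticsearch.yml inside the official image:

      # Start a single-node Elasticsearch 7.2.0 container
      docker run --name elasticsearch -d \
        -p 9200:9200 -p 9300:9300 \
        -e "discovery.type=single-node" \
        elasticsearch:7.2.0

      # Enable CORS (e.g. for elasticsearch-head) inside the container
      docker exec elasticsearch bash -c \
        'printf "http.cors.enabled: true\nhttp.cors.allow-origin: \"*\"\n" >> /usr/share/elasticsearch/config/elasticsearch.yml'

      # Restart so the new settings take effect
      docker restart elasticsearch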

  • Docker installs Elasticsearch and Kibana

    Time: 2020-10-1

    1. Install Elasticsearch. Pull the Elasticsearch image: docker pull elasticsearch Create a bridge network: docker network create elasticsearch_net Install: docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 --network elasticsearch_net -v elasticsearch_volume:/root --privileged=true -e "discovery.type=single-node" elasticsearch # -d run in the background # --name elasticsearch set the container name # -p 9200:9200 -p 9300:9300 map the ports # […]
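    The Kibana half falls outside the excerpt; a sketch of the usual companion commands on the same network follows, with the 7.2.0 tags as an assumption (recent versions of the official image publish no latest tag, so pinning one is needed anyway):

      # Dedicated bridge network so the two containers resolve each other by name
      docker network create elasticsearch_net

      docker run -d --name elasticsearch \
        -p 9200:9200 -p 9300:9300 \
        --network elasticsearch_net \
        -v elasticsearch_volume:/root \
        -e "discovery.type=single-node" \
        elasticsearch:7.2.0

      # Kibana on the same network, pointed at the container by its name
      docker run -d --name kibana \
        -p 5601:5601 \
        --network elasticsearch_net \
        -e "ELASTICSEARCH_HOSTS=http://elasticsearch:9200" \
        kibana:7.2.0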

  • Django Whoosh search engine with Jieba word segmentation

    Time: 2020-9-23

    Django version: 3.0.4 Python package preparation: pip install django-haystack pip install jieba Using Jieba word segmentation 1. cd into the haystack package in site-packages, then create and edit the ChineseAnalyzer.py file # (Note: pip installs django-haystack, but the actual package folder is named haystack) cd /usr/local/lib/python3.8/site-packages/haystack/backends/ # Create and edit the ChineseAnalyzer.py file vim ChineseAnalyzer.py 2. […]
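    The file's contents sit past the cut-off; what usually goes into that ChineseAnalyzer.py is a thin Whoosh tokenizer wrapped around jieba.cut, sketched here as a heredoc (treat the class as illustrative, not the post's exact file):

      cd /usr/local/lib/python3.8/site-packages/haystack/backends/
      cat > ChineseAnalyzer.py <<'EOF'
      # Whoosh analyzer that tokenizes with jieba, for use with django-haystack
      import jieba
      from whoosh.analysis import Tokenizer, Token

      class ChineseTokenizer(Tokenizer):
          def __call__(self, value, positions=False, chars=False,
                       keeporiginal=False, removestops=True,
                       start_pos=0, start_char=0, mode='', **kwargs):
              t = Token(positions, chars, removestops=removestops, mode=mode, **kwargs)
              for word in jieba.cut(value, cut_all=True):
                  t.original = t.text = word
                  t.boost = 1.0
                  if positions:
                      t.pos = start_pos + value.find(word)
                  if chars:
                      t.startchar = start_char + value.find(word)
                      t.endchar = start_char + value.find(word) + len(word)
                  yield t

      def ChineseAnalyzer():
          return ChineseTokenizer()
      EOF

    The usual step 2 then copies whoosh_backend.py to a whoosh_cn_backend.py that swaps StemmingAnalyzer() for ChineseAnalyzer(), and settings.py points HAYSTACK_CONNECTIONS at that backend.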

  • Laravel + Elasticsearch for Chinese search

    Time: 2020-6-12

    Elasticsearch Elasticsearch is an open-source search engine based on Apache Lucene(TM). Whether in open-source or proprietary software, Lucene is regarded as the most advanced, best-performing, and most feature-complete search engine library to date. However, Lucene is just a library. To make it work, you need to use Java […]

  • Chinese word segmentation – iOS's built-in tokenizer CFStringTokenizer

    Time: 2020-4-13

    Preface 1. For simplified-traditional Chinese conversion, the simplest approach is to convert character by character, but where one simplified character maps to several traditional characters, or one traditional to several simplified, the conversion has to draw on semantics and phrases […]

  • Building a multi-node Elasticsearch cluster on CentOS 7

    Time: 2020-3-11

    This article's mind map is as follows. The article runs about 747 words and takes roughly 2 minutes to read! Summary: I have recently been learning Elasticsearch, and since I'm learning it, how can I play without a real cluster? So I had to build one myself! Note: this article first appeared on my personal blog, codesheep, […]
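    The build steps themselves are past the excerpt; for orientation, here is a sketch of the per-node elasticsearch.yml settings a small three-node cluster of that era typically needs (the cluster name, addresses, install path, and 6.x-style discovery keys are all assumptions, not the post's values):

      # Run on each CentOS 7 node, adjusting node.name and network.host
      cat >> /opt/elasticsearch/config/elasticsearch.yml <<'EOF'
      cluster.name: my-es-cluster        # must be identical on every node (assumed name)
      node.name: node-1                  # unique per node
      network.host: 192.168.1.101        # this node's LAN address (assumed)
      discovery.zen.ping.unicast.hosts: ["192.168.1.101", "192.168.1.102", "192.168.1.103"]
      discovery.zen.minimum_master_nodes: 2   # quorum of master-eligible nodes for 3 nodes
      EOF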