Elasticsearch Learning Record (I)

Time: 2021-09-17

Note: the versions of all these components (Elasticsearch, Kibana, the IK analyzer, the Java client) should match.

Elasticsearch installation

Requirements: JDK 1.8 or later, an Elasticsearch client, and interface tools.

be careful

The JDK's bitness (32/64-bit) must match that of the CPU/OS, otherwise a JNA error will be reported.
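
A quick way to check which JVM you are actually running is to print its system properties; this is a minimal sketch, and note that the `sun.arch.data.model` property is HotSpot-specific (an assumption, not a guaranteed API on every JVM):

```java
// Prints the bitness and architecture of the running JVM.
public class JvmBitness {
    public static void main(String[] args) {
        // "sun.arch.data.model" reports "32" or "64" on HotSpot JVMs (may be null elsewhere)
        String bits = System.getProperty("sun.arch.data.model");
        // "os.arch" is a standard property, e.g. "amd64" or "aarch64"
        String arch = System.getProperty("os.arch");
        System.out.println("JVM data model: " + bits + "-bit, os.arch: " + arch);
    }
}
```

If the reported bitness differs from your OS architecture, install the matching JDK before starting Elasticsearch.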

Download addresses

https://www.elastic.co/cn/ — Elasticsearch and Kibana

https://github.com/medcl/elas… — IK analyzer

https://github.com/mobz/elast… — Head plugin

All of these just need to be unzipped.

Elasticsearch startup

Run elasticsearch.bat under bin\.

If your machine is short on memory, edit config\jvm.options before starting:

    -Xms256m
    -Xmx256m

Visit http://127.0.0.1:9200/
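
Hitting the root endpoint returns a small JSON document describing the node; a sketch of its shape (the values below are placeholders — your node name, cluster name, and version numbers will differ):

```json
{
  "name": "node-1",
  "cluster_name": "elasticsearch",
  "version": {
    "number": "7.x.y",
    "lucene_version": "8.x.y"
  },
  "tagline": "You Know, for Search"
}
```

If you see this response, the service started successfully.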

Head plug-in

Node.js is required, because the Head plugin is a front-end project.

  1. cnpm install — install dependencies
  2. cnpm run start

If port 9100 is occupied, find the owning process:

    netstat -ano | findstr "9100"

Then end that process by PID in the Details tab of Task Manager.

After a successful start, visit http://localhost:9100/

Ports 9200 (Elasticsearch) and 9100 (Head) are different origins, so cross-origin access must be allowed.

Configure CORS in elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"

Then restart the ES service

Head plays roughly the same role for Elasticsearch that Navicat does for MySQL.

kibana

This is also a standard front-end project

bin\kibana.bat

http://localhost:5601

Open the Dev Tools page.

Chinese localization

Change the configuration in kibana.yml:

    i18n.locale: "zh-CN"

IK analyzer (Chinese) — Elasticsearch does not segment Chinese by default.

Install the plugin under the plugins directory of Elasticsearch.

There are two algorithms:

  1. ik_smart: the default, coarsest-grained segmentation — splits the text into the fewest, non-overlapping words
  2. ik_max_word: the finest-grained segmentation — exhausts every possible word in the thesaurus

    GET _analyze
    {
      "analyzer": "ik_smart",
      "text": "how handsome I like it"
    }

    GET _analyze
    {
      "analyzer": "ik_max_word",
      "text": "how handsome I like it"
    }

You can add your own words to the dictionary: write your own kb.dic, then register it in IKAnalyzer.cfg.xml by adding:

<entry key="ext_dict">kb.dic</entry>
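
For context, the whole IKAnalyzer.cfg.xml typically looks like the sketch below (the kb.dic filename comes from the text above; the stopword entry is left empty here as an assumption):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- custom extension dictionary -->
    <entry key="ext_dict">kb.dic</entry>
    <!-- custom extension stopword dictionary (empty = none) -->
    <entry key="ext_stopwords"></entry>
</properties>
```

Each line of kb.dic holds one word.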

Restart ES.

REST style

1. Create an index and insert data with a request body:

    PUT /index_name/type_name (optional)/document_id
    {
        JSON request body
    }

Add

Create an index that specifies field types without inserting any data:

PUT /test2
{
  "mappings": {
    "properties": {
      "name":{
        "type": "text"
      },
      "age": {
        "type": "long"
      },
      "birthday":{
        "type": "date"
      }
    }
  }
}

Get information

GET test2

If no type is specified, the default _doc is used; keyword-typed fields are not analyzed.

PUT /test3/_doc/1
{
  "name": "how handsome",
  "age": 18,
  "birthday": "1999-10-20"
}

GET _cat/health      # cluster health
GET _cat/indices?v   # index information

modify

1. Brute-force overwrite with PUT

    PUT /test3/_doc/1
    {
      "name": "how handsome 123",
      "age": 18,
      "birthday": "1999-10-20"
    }

This replaces the whole document, and the document's _version and result status change.

2. Partial update with POST

    POST /test3/_doc/1/_update
    {
      "doc": {
        "name": "shuaikb"
      }
    }

3. Delete an index

    DELETE /test1

Whether an index or a single document is deleted depends on the request path.

Condition query

    GET /test3/_search?q=name:how handsome

The response contains "_score": 0.5753642 — _score is the relevance of the match; the better the match, the higher the score.

Complex search

    GET /test3/_search
    {
      "query": {
        "match": {
          "name": "how handsome"
        }
      }
    }

"hits" contains the index information and the query results. To return only some fields:

    GET /test3/_search
    {
      "query": {
        "match": {
          "name": "how handsome"
        }
      },
      "_source": ["name", "age"]
    }

Only name and age appear in the query results. Later, all the Java methods for operating ES revolve around these keys.

Sorting:

    "sort": [
      { "age": { "order": "desc" } }
    ]

Choose which field to sort by; desc is descending, asc is ascending.

Paging:

    "from": 0,
    "size": 1

from is the offset of the first result, size is how many results to return.

Boolean query

    GET /test3/_search
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "name": "how handsome" } },
            { "match": { "age": 18 } }
          ]
        }
      }
    }

Exact multi-condition query: must is equivalent to AND, should to OR, must_not to NOT.

    GET /test3/_search
    {
      "query": {
        "bool": {
          "must": [
            { "match": { "name": "how handsome" } }
          ],
          "filter": [
            { "range": { "age": { "gte": 10, "lte": 17 } } }
          ]
        }
      }
    }

filter narrows the result set: gt >, gte >=, lt <, lte <=.

    GET /test3/_search
    {
      "query": {
        "match": { "tag": "male technology" }
      }
    }

To match multiple values, separate them with spaces. The match query goes through the inverted index and is parsed by the analyzer (documents are analyzed first, then queried through the analyzed terms!).

Two field types

  1. text — parsed by the analyzer
  2. keyword — not parsed by the analyzer

Example:

    GET _analyze
    {
      "analyzer": "standard",
      "text": "how handsome shuaikb name1"
    }

As long as the field is not keyword, it will be split by the analyzer.

    GET testdb/_search
    {
      "query": {
        "term": {
          "desc": {
            "value": "shuaikb desc"
          }
        }
      }
    }

term matches exactly; keyword-typed fields are not analyzed, so you will find that "shuaikb desc2" is not returned.

Exact query matching multiple values

    GET testdb/_search
    {
      "query": {
        "bool": {
          "should": [
            { "term": { "t1": { "value": "22" } } },
            { "term": { "t1": { "value": "33" } } }
          ]
        }
      }
    }

Highlight query

    GET testdb/_search
    {
      "query": {
        "match": { "name": "handsome" }
      },
      "highlight": {
        "fields": { "name": {} }
      }
    }

The result contains e.g. "many <em>handsome</em> oh shuaikb name2" — the <em> tags are the highlight HTML. To customize the highlight markup:

    GET testdb/_search
    {
      "query": {
        "match": { "name": "handsome" }
      },
      "highlight": {
        "pre_tags": "<p class='key' style='color:red'>",
        "post_tags": "</p>",
        "fields": { "name": {} }
      }
    }

Spring Boot integration

The native dependency repository:

    <repositories>
      <repository>
        <id>es-snapshots</id>
        <name>elasticsearch snapshot repo</name>
        <url>https://snapshots.elastic.co/maven/</url>
      </repository>
    </repositories>

In practice, we import the starter:

    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    </dependency>

Note that the dependency's ES version must match your local ES version, so you may need to pin the version yourself. Initialize the client (and remember to call client.close() when you are done):

    @Bean
    public RestHighLevelClient restHighLevelClient() {
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(
                        new HttpHost("localhost", 9200, "http"),
                        new HttpHost("localhost", 9201, "http")));
        return client;
    }

API

Index operation

    @Test
    public void testCreateIndex() throws IOException {
        // create the index
        CreateIndexRequest request = new CreateIndexRequest("shuaikb_index");
        // execute the request and get the response
        CreateIndexResponse createIndexResponse =
                client.indices().create(request, RequestOptions.DEFAULT);
        System.out.println(createIndexResponse);
    }

    @Test
    void testExistIndex() throws IOException {
        // check whether the index exists
        GetIndexRequest request = new GetIndexRequest("shuaikb_index");
        boolean exists = client.indices().exists(request, RequestOptions.DEFAULT);
        System.out.println(exists);
    }

    @Test
    void testDeleteIndex() throws IOException {
        // delete the index
        DeleteIndexRequest request = new DeleteIndexRequest("shuaikb_index");
        AcknowledgedResponse delete = client.indices().delete(request, RequestOptions.DEFAULT);
        System.out.println(delete.isAcknowledged());
    }

Document operation

    @Test
    void testAddDocument() throws IOException {
        // create a document
        Dog dog = new Dog();
        dog.setName("how handsome");
        dog.setAge(12);
        IndexRequest request = new IndexRequest("shuaikb_index");
        // rule: PUT /shuaikb_index/_doc/1
        request.id("1");
        request.timeout(TimeValue.timeValueSeconds(1));
        request.timeout("1s");
        // put our data into the request as JSON
        request.source(JSON.toJSONString(dog), XContentType.JSON);
        // the client sends the request
        IndexResponse indexResponse = client.index(request, RequestOptions.DEFAULT);
        System.out.println(indexResponse.toString());
        System.out.println(indexResponse.status());
    }

    @Test
    void testExistDocument() throws IOException {
        // check whether the document exists
        GetRequest getRequest = new GetRequest("shuaikb_index", "1");
        // do not fetch the _source context
        getRequest.fetchSourceContext(new FetchSourceContext(false));
        getRequest.storedFields("_none_");
        boolean exists = client.exists(getRequest, RequestOptions.DEFAULT);
        System.out.println(exists);
    }

    @Test
    void testGetDocument() throws IOException {
        // get document information
        GetRequest getRequest = new GetRequest("shuaikb_index", "1");
        GetResponse documentFields = client.get(getRequest, RequestOptions.DEFAULT);
        System.out.println(documentFields.getSourceAsString());
        System.out.println(documentFields);
    }

    @Test
    void testUpdateDocument() throws IOException {
        // update document information
        UpdateRequest updateRequest = new UpdateRequest("shuaikb_index", "1");
        updateRequest.timeout("1s");
        Dog dog = new Dog("what do you say", 20);
        updateRequest.doc(JSON.toJSONString(dog), XContentType.JSON);
        client.update(updateRequest, RequestOptions.DEFAULT);
    }

    @Test
    void testDeleteDocument() throws IOException {
        // delete document information
        DeleteRequest deleteRequest = new DeleteRequest("shuaikb_index", "1");
        deleteRequest.timeout("1s");
        DeleteResponse delete = client.delete(deleteRequest, RequestOptions.DEFAULT);
        System.out.println(delete);
    }

Massive data operation

    // bulk operations
    @Test
    void testBulkRequest() throws IOException {
        BulkRequest bulkRequest = new BulkRequest();
        bulkRequest.timeout("10s");
        ArrayList<Dog> dogArrayList = new ArrayList<>();
        dogArrayList.add(new Dog("test1", 1));
        dogArrayList.add(new Dog("test2", 1));
        dogArrayList.add(new Dog("test2", 1));
        dogArrayList.add(new Dog("test2", 1));
        dogArrayList.add(new Dog("test3", 1));
        dogArrayList.add(new Dog("test4", 1));
        dogArrayList.add(new Dog("shuaikb1", 1));
        dogArrayList.add(new Dog("shuaikb2", 1));
        dogArrayList.add(new Dog("shuaikb3", 1));
        dogArrayList.add(new Dog("shuaikb4", 1));
        for (int i = 0; i < dogArrayList.size(); i++) {
            bulkRequest.add(new IndexRequest("shuaikb_index")
                    .id("" + (i + 1)) // without an explicit id, a random id is generated
                    .source(JSON.toJSONString(dogArrayList.get(i)), XContentType.JSON));
        }
        BulkResponse bulk = client.bulk(bulkRequest, RequestOptions.DEFAULT);
        System.out.println(bulk.hasFailures()); // false means success
    }

    // query
    // SearchRequest        — the search request
    // SearchSourceBuilder  — builds the search conditions
    // HighlightBuilder     — builds highlighting
    // QueryBuilders.matchAllQuery() — match all documents
    // QueryBuilders.termQuery()     — exact match
    @Test
    void testSearch() throws IOException {
        SearchRequest searchRequest = new SearchRequest("shuaikb_index");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        // exact match
        TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("name", "test1");
        // match all
        // MatchAllQueryBuilder matchAllQueryBuilder = QueryBuilders.matchAllQuery();
        searchSourceBuilder.query(termQueryBuilder);
        searchSourceBuilder.from(0);
        searchSourceBuilder.size(10);
        searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
        searchRequest.source(searchSourceBuilder);
        SearchResponse search = client.search(searchRequest, RequestOptions.DEFAULT);
        System.out.println(JSON.toJSONString(search));
        for (SearchHit hit : search.getHits().getHits()) {
            System.out.println(hit.getSourceAsMap());
        }
    }
