[[analysis-tokenizers]]
== Tokenizers

Tokenizers are used to break a string down into a stream of terms
or tokens. A simple tokenizer might split the string up into terms
wherever it encounters whitespace or punctuation.

Elasticsearch has a number of built-in tokenizers which can be
used to build <<analysis-custom-analyzer,custom analyzers>>.
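As a quick sketch (assuming a version of Elasticsearch that accepts the
request-body form of the `_analyze` API), the following request shows how a
tokenizer splits a piece of text into terms:

[source,js]
--------------------------------------------------
POST _analyze
{
  "tokenizer": "whitespace",
  "text": "The quick brown fox."
}
--------------------------------------------------

With the `whitespace` tokenizer this would typically produce the terms
`The`, `quick`, `brown`, and `fox.` (the trailing full stop is kept, because
the string is only split on whitespace, not punctuation).
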
include::tokenizers/standard-tokenizer.asciidoc[]
include::tokenizers/edgengram-tokenizer.asciidoc[]
include::tokenizers/keyword-tokenizer.asciidoc[]
include::tokenizers/letter-tokenizer.asciidoc[]
include::tokenizers/lowercase-tokenizer.asciidoc[]
include::tokenizers/ngram-tokenizer.asciidoc[]
include::tokenizers/whitespace-tokenizer.asciidoc[]
include::tokenizers/pattern-tokenizer.asciidoc[]
include::tokenizers/uaxurlemail-tokenizer.asciidoc[]
include::tokenizers/pathhierarchy-tokenizer.asciidoc[]