[[search-request-search-type]]
=== Search Type

There are different execution paths that can be followed when executing a
distributed search. The distributed search operation needs to be
scattered to all the relevant shards, and then all the results are
gathered back. There are several ways to perform this scatter/gather
type of execution, especially in the context of search engines.

One of the questions when executing a distributed search is how many
results to retrieve from each shard. For example, if we have 10 shards,
the 1st shard might hold the most relevant results, ranked from 0 to 10,
with the results from the other shards ranking below them. For this
reason, when executing a request, we need to get results from 0 to 10
from all shards, sort them, and only then return them if we want to
ensure correct results.

Another question, which relates to the search engine itself, is the fact
that each shard stands on its own. When a query is executed on a specific
shard, it does not take into account term frequencies and other search
engine statistics from the other shards. If we want to support accurate
ranking, we would need to first execute the query against all shards and
gather the relevant term frequencies, and then, based on those, execute
the query again.

Also, because of the need to sort the results, getting back a large
document set, or even scrolling through it, while maintaining the correct
sorting behavior can be a very expensive operation. For scrolling through
a large result set without sorting, the `scan` search type (explained
below) is also available.

Elasticsearch is very flexible and allows you to control the type of
search to execute on a *per search request* basis. The type can be
configured by setting the *search_type* parameter in the query string.
The types are:

[[query-and-fetch]]
==== Query And Fetch

Parameter value: *query_and_fetch*.

The most naive (and possibly fastest) implementation is to simply
execute the query on all relevant shards and return the results. Since
each shard returns `size` hits, this type actually returns `size`
multiplied by the number of shards results back to the caller.
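
As a minimal illustration, assuming a hypothetical index named `twitter`
with 5 primary shards, the following request would return up to
`10 * 5 = 50` hits:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/twitter/_search?search_type=query_and_fetch&size=10' -d '
{
    "query" : {
        "match_all" : {}
    }
}
'
--------------------------------------------------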

[[query-then-fetch]]
==== Query Then Fetch

Parameter value: *query_then_fetch*.

The query is executed against all shards, but only enough information is
returned (*not the document content*) to identify and rank the hits. The
results are then sorted and ranked, and based on that, *only the relevant
shards* are asked for the actual document content. The number of returned
hits is exactly as specified in `size`, since they are the only ones that
are fetched. This is very handy when the index has a lot of shards
(counting shard id groups, not replicas).

NOTE: This is the default setting if you do not specify a `search_type`
      in your request.
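
Since this is the default, it rarely needs to be set explicitly, but doing
so is harmless. A minimal example (the `twitter` index name is illustrative
only):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/twitter/_search?search_type=query_then_fetch&size=10' -d '
{
    "query" : {
        "match_all" : {}
    }
}
'
--------------------------------------------------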

[[dfs-query-and-fetch]]
==== Dfs, Query And Fetch

Parameter value: *dfs_query_and_fetch*.

Same as "Query And Fetch", except for an initial scatter phase which
goes and computes the distributed term frequencies for more accurate
scoring.

[[dfs-query-then-fetch]]
==== Dfs, Query Then Fetch

Parameter value: *dfs_query_then_fetch*.

Same as "Query Then Fetch", except for an initial scatter phase which
goes and computes the distributed term frequencies for more accurate
scoring.
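
For example, to get more consistent scoring across shards for a term
query, only the query string parameter changes (the index and field names
below are illustrative only):

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/twitter/_search?search_type=dfs_query_then_fetch' -d '
{
    "query" : {
        "term" : { "user" : "kimchy" }
    }
}
'
--------------------------------------------------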

[[count]]
==== Count

Parameter value: *count*.

A special search type that returns the count of hits matching the search
request without any docs (the count is represented in `total_hits`),
possibly including facets as well. In general, this is preferable to the
`count` API as it provides more options.
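
For example, the following request (index and field names are illustrative
only) returns only the number of matching documents plus a terms facet,
without fetching any hits:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/twitter/_search?search_type=count' -d '
{
    "query" : {
        "match_all" : {}
    },
    "facets" : {
        "tags" : {
            "terms" : { "field" : "tag" }
        }
    }
}
'
--------------------------------------------------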

[[scan]]
==== Scan

Parameter value: *scan*.

The `scan` search type allows you to efficiently scroll through a large
result set. It is used by first executing a search request with scrolling
and a query:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_search?search_type=scan&scroll=10m&size=50' -d '
{
    "query" : {
        "match_all" : {}
    }
}
'
--------------------------------------------------

The `scroll` parameter controls the keep-alive time of the scrolling
request and initiates the scrolling process. The timeout applies per
round trip (i.e. between the previous scan scroll request and the next).

The response will include no hits, but two important results: the
`total_hits` will include the total number of hits that match the query,
and the `scroll_id` allows the scroll process to be started. From this
stage on, the `_search/scroll` endpoint should be used to scroll through
the hits, feeding each scroll request the `scroll_id` from the previous
search result. For example:

[source,js]
--------------------------------------------------
curl -XGET 'localhost:9200/_search/scroll?scroll=10m' -d 'c2NhbjsxOjBLMzdpWEtqU2IyZHlmVURPeFJOZnc7MzowSzM3aVhLalNiMmR5ZlVET3hSTmZ3OzU6MEszN2lYS2pTYjJkeWZVRE94Uk5mdzsyOjBLMzdpWEtqU2IyZHlmVURPeFJOZnc7NDowSzM3aVhLalNiMmR5ZlVET3hSTmZ3Ow=='
--------------------------------------------------

Each scroll request will return a number of hits equal to `size`
multiplied by the number of primary shards.

The "breaking" condition out of a scroll is when no hits has been
returned. The total_hits will be maintained between scroll requests.
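
Putting it together, a minimal sketch of a complete scroll loop might look
like the following. It assumes the `jq` tool is available for JSON parsing
and that the scroll id is returned in the `_scroll_id` field of each
response:

[source,sh]
--------------------------------------------------
# Start the scan and capture the initial scroll id
SCROLL_ID=$(curl -s -XGET 'localhost:9200/_search?search_type=scan&scroll=10m&size=50' \
    -d '{"query" : {"match_all" : {}}}' | jq -r '._scroll_id')

while true; do
    # Fetch the next batch of hits, passing the previous scroll id as the body
    RESPONSE=$(curl -s -XGET 'localhost:9200/_search/scroll?scroll=10m' -d "$SCROLL_ID")
    HITS=$(echo "$RESPONSE" | jq '.hits.hits | length')
    # The "breaking" condition: a round trip that returns no hits
    if [ "$HITS" -eq 0 ]; then
        break
    fi
    # ... process the hits in $RESPONSE here ...
    SCROLL_ID=$(echo "$RESPONSE" | jq -r '._scroll_id')
done
--------------------------------------------------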

Note, the `scan` search type does not support sorting (either on score or
on a field) or faceting.

[[clear-scroll]]
===== Clear scroll API

Besides consuming the scroll search until no hits have been returned, a
scroll search can also be aborted by deleting the `scroll_id`. This can be
done via the clear scroll API. When the `scroll_id` has been deleted, all
the resources required to keep the view open will be released. Example
usage:

[source,js]
--------------------------------------------------
curl -XDELETE 'localhost:9200/_search/scroll/c2NhbjsxOjBLMzdpWEtqU2IyZHlmVURPeFJOZnc7MzowSzM3aVhLalNiMmR5ZlVET3hSTmZ3OzU6MEszN2lYS2pTYjJkeWZVRE94Uk5mdzsyOjBLMzdpWEtqU2IyZHlmVURPeFJOZnc7NDowSzM3aVhLalNiMmR5ZlVET3hSTmZ3Ow=='
--------------------------------------------------

Multiple scroll ids can be specified in a comma separated manner.
If all scroll ids need to be cleared, the reserved `_all` value can be used
instead of an actual `scroll_id`:

[source,js]
--------------------------------------------------
curl -XDELETE 'localhost:9200/_search/scroll/_all'
--------------------------------------------------