Diffstat (limited to 'docs/java-api')

 docs/java-api/bulk.asciidoc              |  38 +
 docs/java-api/client.asciidoc            | 187 +
 docs/java-api/count.asciidoc             |  38 +
 docs/java-api/delete-by-query.asciidoc   |  21 +
 docs/java-api/delete.asciidoc            |  40 +
 docs/java-api/facets.asciidoc            | 494 +
 docs/java-api/get.asciidoc               |  38 +
 docs/java-api/index.asciidoc             |  61 +
 docs/java-api/index_.asciidoc            | 205 +
 docs/java-api/percolate.asciidoc         |  50 +
 docs/java-api/query-dsl-filters.asciidoc | 462 +
 docs/java-api/query-dsl-queries.asciidoc | 450 +
 docs/java-api/search.asciidoc            | 140 +
 13 files changed, 2224 insertions(+), 0 deletions(-)
diff --git a/docs/java-api/bulk.asciidoc b/docs/java-api/bulk.asciidoc
new file mode 100644
index 0000000..9b53d3a
--- /dev/null
+++ b/docs/java-api/bulk.asciidoc
@@ -0,0 +1,38 @@
+[[bulk]]
+== Bulk API
+
+The bulk API allows one to index and delete several documents in a
+single request. Here is a sample usage:
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.common.xcontent.XContentFactory.*;
+
+BulkRequestBuilder bulkRequest = client.prepareBulk();
+
+// either use client#prepare, or use Requests# to directly build index/delete requests
+bulkRequest.add(client.prepareIndex("twitter", "tweet", "1")
+ .setSource(jsonBuilder()
+ .startObject()
+ .field("user", "kimchy")
+ .field("postDate", new Date())
+ .field("message", "trying out Elasticsearch")
+ .endObject()
+ )
+ );
+
+bulkRequest.add(client.prepareIndex("twitter", "tweet", "2")
+ .setSource(jsonBuilder()
+ .startObject()
+ .field("user", "kimchy")
+ .field("postDate", new Date())
+ .field("message", "another post")
+ .endObject()
+ )
+ );
+
+BulkResponse bulkResponse = bulkRequest.execute().actionGet();
+if (bulkResponse.hasFailures()) {
+ // process failures by iterating through each bulk response item
+}
+--------------------------------------------------
diff --git a/docs/java-api/client.asciidoc b/docs/java-api/client.asciidoc
new file mode 100644
index 0000000..57f55ec
--- /dev/null
+++ b/docs/java-api/client.asciidoc
@@ -0,0 +1,187 @@
+[[client]]
+== Client
+
+You can use the *Java client* in multiple ways:
+
+* Perform standard <<index_,index>>, <<get,get>>,
+ <<delete,delete>> and <<search,search>> operations on an
+ existing cluster
+* Perform administrative tasks on a running cluster
+* Start full nodes when you want to run Elasticsearch embedded in your
+ own application or when you want to launch unit or integration tests
+
+Obtaining an elasticsearch `Client` is simple. The most common way to
+get a client is by:
+
+1. creating an embedded link:#node-client[`Node`] that acts as a node
+within a cluster
+2. requesting a `Client` from your embedded `Node`.
+
+Another way is to create a link:#transport-client[`TransportClient`]
+that connects to a cluster.
+
+*Important:*
+
+______________________________________________________________________________________________________________________________________________________________
+Please note that you are encouraged to use the same version on the
+client and cluster sides. You may hit incompatibility issues when
+mixing major versions.
+______________________________________________________________________________________________________________________________________________________________
+
+
+[[node-client]]
+=== Node Client
+
+Instantiating a node-based client is the simplest way to get a `Client`
+that can execute operations against elasticsearch.
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.node.NodeBuilder.*;
+
+// on startup
+
+Node node = nodeBuilder().node();
+Client client = node.client();
+
+// on shutdown
+
+node.close();
+--------------------------------------------------
+
+When you start a `Node`, it joins an elasticsearch cluster. You can have
+different clusters by simply setting the `cluster.name` setting, or
+explicitly using the `clusterName` method on the builder.
+
+You can define `cluster.name` in the `/src/main/resources/elasticsearch.yml`
+file in your project. As long as `elasticsearch.yml` is present on the
+classpath, it will be used when you start your node.
+
+[source,yaml]
+--------------------------------------------------
+cluster.name: yourclustername
+--------------------------------------------------
+
+Or in Java:
+
+[source,java]
+--------------------------------------------------
+Node node = nodeBuilder().clusterName("yourclustername").node();
+Client client = node.client();
+--------------------------------------------------
+
+The benefit of using the `Client` is that operations are automatically
+routed to the node(s) they need to be executed on, without performing a
+"double hop". For example, the index operation will automatically be
+executed on the shard that the document will end up stored on.
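
That routing boils down to a deterministic function of the document id. Here is a conceptual sketch of the idea — the class, method, and hash are invented for illustration; Elasticsearch uses its own hash function and routing rules internally, not `String.hashCode`:

```java
public class ShardRouting {
    // A document id always maps to the same shard; conceptual sketch only.
    static int shardFor(String id, int numberOfShards) {
        return Math.floorMod(id.hashCode(), numberOfShards);
    }

    public static void main(String[] args) {
        int shards = 5;
        // Because the mapping is deterministic, a node-aware client can send
        // the request straight to the right shard.
        System.out.println(shardFor("1", shards) == shardFor("1", shards)); // true
    }
}
```

Since the same id always lands on the same shard, a client that knows the cluster state can skip the extra network hop a "dumb" client would need.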
+
+When you start a `Node`, the most important decision is whether it
+should hold data or not. In other words, should indices and shards be
+allocated to it? Often we want clients to just be clients, without
+shards being allocated to them. This is simple to configure by setting
+either the `node.data` setting to `false` or `node.client` to `true`
+(or via the respective helper methods on `NodeBuilder`):
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.node.NodeBuilder.*;
+
+// on startup
+
+Node node = nodeBuilder().client(true).node();
+Client client = node.client();
+
+// on shutdown
+
+node.close();
+--------------------------------------------------
+
+Another common usage is to start the `Node` and use the `Client` in
+unit/integration tests. In such a case, we would like to start a "local"
+`Node` (with a "local" discovery and transport). Again, this is just a
+matter of a simple setting when starting the `Node`. Note, "local" here
+means local at the JVM (well, actually class loader) level, meaning that
+two *local* servers started within the same JVM will discover each other
+and form a cluster.
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.node.NodeBuilder.*;
+
+// on startup
+
+Node node = nodeBuilder().local(true).node();
+Client client = node.client();
+
+// on shutdown
+
+node.close();
+--------------------------------------------------
+
+
+[[transport-client]]
+=== Transport Client
+
+The `TransportClient` connects remotely to an elasticsearch cluster
+using the transport module. It does not join the cluster, but simply
+gets one or more initial transport addresses and communicates with them
+in round robin fashion on each action (though most actions will probably
+be "two hop" operations).
+
+[source,java]
+--------------------------------------------------
+// on startup
+
+Client client = new TransportClient()
+ .addTransportAddress(new InetSocketTransportAddress("host1", 9300))
+ .addTransportAddress(new InetSocketTransportAddress("host2", 9300));
+
+// on shutdown
+
+client.close();
+--------------------------------------------------
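
The round-robin behaviour described above can be sketched in plain Java. This is a conceptual illustration only — the class and method names are made up, and the real `TransportClient` additionally tracks connection state for each node:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class RoundRobinAddresses {
    private final List<String> addresses;
    private final AtomicInteger cursor = new AtomicInteger();

    RoundRobinAddresses(List<String> addresses) {
        this.addresses = addresses;
    }

    // Each action picks the next address, wrapping around at the end.
    // floorMod keeps the index valid even if the counter overflows.
    String next() {
        return addresses.get(Math.floorMod(cursor.getAndIncrement(), addresses.size()));
    }

    public static void main(String[] args) {
        RoundRobinAddresses rr =
                new RoundRobinAddresses(Arrays.asList("host1:9300", "host2:9300"));
        System.out.println(rr.next()); // host1:9300
        System.out.println(rr.next()); // host2:9300
        System.out.println(rr.next()); // host1:9300 again
    }
}
```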
+
+Note that you have to set the cluster name if you use one different from
+"elasticsearch":
+
+[source,java]
+--------------------------------------------------
+Settings settings = ImmutableSettings.settingsBuilder()
+ .put("cluster.name", "myClusterName").build();
+Client client = new TransportClient(settings);
+//Add transport addresses and do something with the client...
+--------------------------------------------------
+
+Or use the `elasticsearch.yml` file as shown in the link:#node-client[Node
+Client] section.
+
+The client can sniff the rest of the cluster and add the discovered
+nodes to its list of machines to use. In this case, note that the IP
+addresses used will be the ones that the other nodes were started with
+(the "publish" address). To enable sniffing, set
+`client.transport.sniff` to `true`:
+
+[source,java]
+--------------------------------------------------
+Settings settings = ImmutableSettings.settingsBuilder()
+ .put("client.transport.sniff", true).build();
+TransportClient client = new TransportClient(settings);
+--------------------------------------------------
+
+Other transport client level settings include:
+
+[cols="<,<",options="header",]
+|=======================================================================
+|Parameter |Description
+|`client.transport.ignore_cluster_name` |Set to `true` to ignore cluster
+name validation of connected nodes. (since 0.19.4)
+
+|`client.transport.ping_timeout` |The time to wait for a ping response
+from a node. Defaults to `5s`.
+
+|`client.transport.nodes_sampler_interval` |How often to sample / ping
+the nodes listed and connected. Defaults to `5s`.
+|=======================================================================
+
diff --git a/docs/java-api/count.asciidoc b/docs/java-api/count.asciidoc
new file mode 100644
index 0000000..a18ad75
--- /dev/null
+++ b/docs/java-api/count.asciidoc
@@ -0,0 +1,38 @@
+[[count]]
+== Count API
+
+The count API allows one to execute a query and get the number of
+matches for that query. It can be executed across one or more indices
+and across one or more types. The query can be provided using the
+{ref}/query-dsl.html[Query DSL].
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.index.query.xcontent.FilterBuilders.*;
+import static org.elasticsearch.index.query.xcontent.QueryBuilders.*;
+
+CountResponse response = client.prepareCount("test")
+ .setQuery(termQuery("_type", "type1"))
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+For more information on the count operation, check out the REST
+{ref}/search-count.html[count] docs.
+
+
+=== Operation Threading
+
+The count API allows one to set the threading model the operation will
+be performed in when the actual execution of the API happens on the
+same node (i.e. the API is executed on a shard that is allocated on the
+same server).
+
+There are three threading modes. The `NO_THREADS` mode means that the
+count operation will be executed on the calling thread. The
+`SINGLE_THREAD` mode means that the count operation will be executed on
+a single separate thread for all local shards. The `THREAD_PER_SHARD`
+mode means that the count operation will be executed on a different
+thread for each local shard.
+
+The default mode is `SINGLE_THREAD`.
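
Conceptually, the three modes differ only in how the work for the local shards is scheduled. The following plain-Java sketch (the shard representation and class name are invented for illustration; this is not Elasticsearch code) mirrors the three strategies with standard executors:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadingModes {
    // NO_THREADS: count every local shard on the calling thread.
    static long countNoThreads(List<long[]> shards) {
        long total = 0;
        for (long[] shard : shards) total += shard.length;
        return total;
    }

    // SINGLE_THREAD: one separate thread handles all local shards.
    static long countSingleThread(List<long[]> shards) throws Exception {
        ExecutorService one = Executors.newSingleThreadExecutor();
        try {
            return one.submit(() -> countNoThreads(shards)).get();
        } finally {
            one.shutdown();
        }
    }

    // THREAD_PER_SHARD: each local shard is counted on its own thread.
    static long countThreadPerShard(List<long[]> shards) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(shards.size());
        try {
            List<Future<Long>> parts = new ArrayList<>();
            for (long[] shard : shards) {
                parts.add(pool.submit(() -> (long) shard.length));
            }
            long total = 0;
            for (Future<Long> part : parts) total += part.get();
            return total;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<long[]> shards = List.of(new long[]{1, 2}, new long[]{3, 4, 5});
        System.out.println(countThreadPerShard(shards)); // 5
    }
}
```

All three strategies return the same count; the trade-off is purely between thread-creation overhead and per-shard parallelism.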
diff --git a/docs/java-api/delete-by-query.asciidoc b/docs/java-api/delete-by-query.asciidoc
new file mode 100644
index 0000000..3eab109
--- /dev/null
+++ b/docs/java-api/delete-by-query.asciidoc
@@ -0,0 +1,21 @@
+[[delete-by-query]]
+== Delete By Query API
+
+The delete by query API allows one to delete documents from one or more
+indices and one or more types based on a <<query-dsl-queries,query>>. Here
+is an example:
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.index.query.FilterBuilders.*;
+import static org.elasticsearch.index.query.QueryBuilders.*;
+
+DeleteByQueryResponse response = client.prepareDeleteByQuery("test")
+ .setQuery(termQuery("_type", "type1"))
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+For more information on the delete by query operation, check out the
+{ref}/docs-delete-by-query.html[delete_by_query API]
+docs.
diff --git a/docs/java-api/delete.asciidoc b/docs/java-api/delete.asciidoc
new file mode 100644
index 0000000..409b5be
--- /dev/null
+++ b/docs/java-api/delete.asciidoc
@@ -0,0 +1,40 @@
+[[delete]]
+== Delete API
+
+The delete API allows one to delete a typed JSON document from a
+specific index based on its id. The following example deletes the JSON
+document from an index called `twitter`, under a type called `tweet`,
+with the id `1`:
+
+[source,java]
+--------------------------------------------------
+DeleteResponse response = client.prepareDelete("twitter", "tweet", "1")
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+For more information on the delete operation, check out the
+{ref}/docs-delete.html[delete API] docs.
+
+
+[[operation-threading]]
+=== Operation Threading
+
+The delete API allows one to set the threading model the operation will
+be performed in when the actual execution of the API happens on the
+same node (i.e. the API is executed on a shard that is allocated on the
+same server).
+
+The options are to execute the operation on a different thread, or to
+execute it on the calling thread (note that the API is still async). By
+default, `operationThreaded` is set to `true` which means the operation
+is executed on a different thread. Here is an example that sets it to
+`false`:
+
+[source,java]
+--------------------------------------------------
+DeleteResponse response = client.prepareDelete("twitter", "tweet", "1")
+ .setOperationThreaded(false)
+ .execute()
+ .actionGet();
+--------------------------------------------------
diff --git a/docs/java-api/facets.asciidoc b/docs/java-api/facets.asciidoc
new file mode 100644
index 0000000..34353c6
--- /dev/null
+++ b/docs/java-api/facets.asciidoc
@@ -0,0 +1,494 @@
+[[java-facets]]
+== Facets
+
+Elasticsearch provides a full Java API to work with facets. See the
+{ref}/search-facets.html[Facets guide].
+
+Use the `FacetBuilders` factory to create each facet you want to
+compute and add it to your search request:
+
+[source,java]
+--------------------------------------------------
+SearchResponse sr = node.client().prepareSearch()
+ .setQuery( /* your query */ )
+ .addFacet( /* add a facet */ )
+ .execute().actionGet();
+--------------------------------------------------
+
+Note that you can add more than one facet. See
+{ref}/search-search.html[Search Java API] for details.
+
+To build facet requests, use `FacetBuilders` helpers. Just import them
+in your class:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.FacetBuilders;
+--------------------------------------------------
+
+
+=== Facets
+
+
+[[java-facet-terms]]
+==== Terms Facet
+
+Here is how you can use
+{ref}/search-facets-terms-facet.html[Terms Facet]
+with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.termsFacet("f")
+ .field("brand")
+ .size(10);
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.terms.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+TermsFacet f = (TermsFacet) sr.getFacets().facetsAsMap().get("f");
+
+f.getTotalCount(); // Total terms doc count
+f.getOtherCount(); // Not shown terms doc count
+f.getMissingCount(); // Without term doc count
+
+// For each entry
+for (TermsFacet.Entry entry : f) {
+ entry.getTerm(); // Term
+ entry.getCount(); // Doc count
+}
+--------------------------------------------------
+
+
+[[java-facet-range]]
+==== Range Facet
+
+Here is how you can use
+{ref}/search-facets-range-facet.html[Range Facet]
+with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.rangeFacet("f")
+ .field("price") // Field to compute on
+ .addUnboundedFrom(3) // from -infinity to 3 (excluded)
+ .addRange(3, 6) // from 3 to 6 (excluded)
+ .addUnboundedTo(6); // from 6 to +infinity
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.range.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+RangeFacet f = (RangeFacet) sr.getFacets().facetsAsMap().get("f");
+
+// For each entry
+for (RangeFacet.Entry entry : f) {
+ entry.getFrom(); // Range from requested
+ entry.getTo(); // Range to requested
+ entry.getCount(); // Doc count
+ entry.getMin(); // Min value
+ entry.getMax(); // Max value
+ entry.getMean(); // Mean
+ entry.getTotal(); // Sum of values
+}
+--------------------------------------------------
+
+
+[[histogram]]
+==== Histogram Facet
+
+Here is how you can use
+{ref}/search-facets-histogram-facet.html[Histogram
+Facet] with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+HistogramFacetBuilder facet = FacetBuilders.histogramFacet("f")
+ .field("price")
+ .interval(1);
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.histogram.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+HistogramFacet f = (HistogramFacet) sr.getFacets().facetsAsMap().get("f");
+
+// For each entry
+for (HistogramFacet.Entry entry : f) {
+ entry.getKey(); // Key (X-Axis)
+ entry.getCount(); // Doc count (Y-Axis)
+}
+--------------------------------------------------
+
+
+[[date-histogram]]
+==== Date Histogram Facet
+
+Here is how you can use
+{ref}/search-facets-date-histogram-facet.html[Date
+Histogram Facet] with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.dateHistogramFacet("f")
+ .field("date") // Your date field
+ .interval("year"); // You can also use "quarter", "month", "week", "day",
+ // "hour" and "minute" or notation like "1.5h" or "2w"
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.datehistogram.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+DateHistogramFacet f = (DateHistogramFacet) sr.getFacets().facetsAsMap().get("f");
+
+// For each entry
+for (DateHistogramFacet.Entry entry : f) {
+ entry.getTime(); // Date in ms since epoch (X-Axis)
+ entry.getCount(); // Doc count (Y-Axis)
+}
+--------------------------------------------------
+
+
+[[filter]]
+==== Filter Facet (not facet filter)
+
+Here is how you can use
+{ref}/search-facets-filter-facet.html[Filter Facet]
+with Java API.
+
+If you are looking for how to apply a filter to a facet, have a look at
+link:#facet-filter[facet filters] using the Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.filterFacet("f",
+ FilterBuilders.termFilter("brand", "heineken")); // Your Filter here
+--------------------------------------------------
+
+See <<query-dsl-filters,Filters>> to
+learn how to build filters using Java.
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.filter.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+FilterFacet f = (FilterFacet) sr.getFacets().facetsAsMap().get("f");
+
+f.getCount(); // Number of docs that matched
+--------------------------------------------------
+
+
+[[query]]
+==== Query Facet
+
+Here is how you can use
+{ref}/search-facets-query-facet.html[Query Facet]
+with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.queryFacet("f",
+ QueryBuilders.matchQuery("brand", "heineken"));
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.query.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+QueryFacet f = (QueryFacet) sr.getFacets().facetsAsMap().get("f");
+
+f.getCount(); // Number of docs that matched
+--------------------------------------------------
+
+See <<query-dsl-queries,Queries>> to
+learn how to build queries using Java.
+
+
+[[statistical]]
+==== Statistical Facet
+
+Here is how you can use
+{ref}/search-facets-statistical-facet.html[Statistical
+Facet] with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.statisticalFacet("f")
+ .field("price");
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.statistical.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+StatisticalFacet f = (StatisticalFacet) sr.getFacets().facetsAsMap().get("f");
+
+f.getCount(); // Doc count
+f.getMin(); // Min value
+f.getMax(); // Max value
+f.getMean(); // Mean
+f.getTotal(); // Sum of values
+f.getStdDeviation(); // Standard Deviation
+f.getSumOfSquares(); // Sum of Squares
+f.getVariance(); // Variance
+--------------------------------------------------
+
+
+[[terms-stats]]
+==== Terms Stats Facet
+
+Here is how you can use
+{ref}/search-facets-terms-stats-facet.html[Terms
+Stats Facet] with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.termsStatsFacet("f")
+ .keyField("brand")
+ .valueField("price");
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.termsstats.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+TermsStatsFacet f = (TermsStatsFacet) sr.getFacets().facetsAsMap().get("f");
+f.getTotalCount(); // Total terms doc count
+f.getOtherCount(); // Not shown terms doc count
+f.getMissingCount(); // Without term doc count
+
+// For each entry
+for (TermsStatsFacet.Entry entry : f) {
+ entry.getTerm(); // Term
+ entry.getCount(); // Doc count
+ entry.getMin(); // Min value
+ entry.getMax(); // Max value
+ entry.getMean(); // Mean
+ entry.getTotal(); // Sum of values
+}
+--------------------------------------------------
+
+
+[[geo-distance]]
+==== Geo Distance Facet
+
+Here is how you can use
+{ref}/search-facets-geo-distance-facet.html[Geo
+Distance Facet] with Java API.
+
+
+===== Prepare facet request
+
+Here is an example on how to create the facet request:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders.geoDistanceFacet("f")
+ .field("pin.location") // Field containing coordinates we want to compare with
+ .point(40, -70) // Point from where we start (0)
+ .addUnboundedFrom(10) // 0 to 10 km (excluded)
+ .addRange(10, 20) // 10 to 20 km (excluded)
+ .addRange(20, 100) // 20 to 100 km (excluded)
+ .addUnboundedTo(100) // from 100 km to infinity (and beyond ;-) )
+ .unit(DistanceUnit.KILOMETERS); // All distances are in kilometers. Can be MILES
+--------------------------------------------------
+
+
+===== Use facet response
+
+Import Facet definition classes:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.search.facet.geodistance.*;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// sr is here your SearchResponse object
+GeoDistanceFacet f = (GeoDistanceFacet) sr.getFacets().facetsAsMap().get("f");
+
+// For each entry
+for (GeoDistanceFacet.Entry entry : f) {
+ entry.getFrom(); // Distance from requested
+ entry.getTo(); // Distance to requested
+ entry.getCount(); // Doc count
+ entry.getMin(); // Min value
+ entry.getMax(); // Max value
+ entry.getTotal(); // Sum of values
+ entry.getMean(); // Mean
+}
+--------------------------------------------------
+
+
+[[facet-filter]]
+=== Facet filters (not Filter Facet)
+
+By default, facets are applied to the query result set, regardless of
+any filters.
+
+If you need to compute facets with the same filters, or even with other
+filters, you can add the filter to any facet using the
+`AbstractFacetBuilder#facetFilter(FilterBuilder)` method:
+
+[source,java]
+--------------------------------------------------
+FacetBuilders
+ .termsFacet("f").field("brand") // Your facet
+ .facetFilter( // Your filter here
+ FilterBuilders.termFilter("colour", "pale")
+ );
+--------------------------------------------------
+
+For example, you can reuse the same filter you created for your query:
+
+[source,java]
+--------------------------------------------------
+// A common filter
+FilterBuilder filter = FilterBuilders.termFilter("colour", "pale");
+
+TermsFacetBuilder facet = FacetBuilders.termsFacet("f")
+ .field("brand")
+ .facetFilter(filter); // We apply it to the facet
+
+SearchResponse sr = node.client().prepareSearch()
+ .setQuery(QueryBuilders.matchAllQuery())
+ .setFilter(filter) // We apply it to the query
+ .addFacet(facet)
+ .execute().actionGet();
+--------------------------------------------------
+
+See documentation on how to build
+<<query-dsl-filters,Filters>>.
+
+
+[[scope]]
+=== Scope
+
+By default, facets are computed within the query result set. But you
+can compute facets from all documents in the index, whatever the query
+is, using the `global` parameter:
+
+[source,java]
+--------------------------------------------------
+TermsFacetBuilder facet = FacetBuilders.termsFacet("f")
+ .field("brand")
+ .global(true);
+--------------------------------------------------
diff --git a/docs/java-api/get.asciidoc b/docs/java-api/get.asciidoc
new file mode 100644
index 0000000..c87dbef
--- /dev/null
+++ b/docs/java-api/get.asciidoc
@@ -0,0 +1,38 @@
+[[get]]
+== Get API
+
+The get API allows one to get a typed JSON document from the index
+based on its id. The following example gets a JSON document from an
+index called `twitter`, under a type called `tweet`, with the id `1`:
+
+[source,java]
+--------------------------------------------------
+GetResponse response = client.prepareGet("twitter", "tweet", "1")
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+For more information on the get operation, check out the REST
+{ref}/docs-get.html[get] docs.
+
+
+=== Operation Threading
+
+The get API allows one to set the threading model the operation will
+be performed in when the actual execution of the API happens on the
+same node (i.e. the API is executed on a shard that is allocated on the
+same server).
+
+The options are to execute the operation on a different thread, or to
+execute it on the calling thread (note that the API is still async). By
+default, `operationThreaded` is set to `true` which means the operation
+is executed on a different thread. Here is an example that sets it to
+`false`:
+
+[source,java]
+--------------------------------------------------
+GetResponse response = client.prepareGet("twitter", "tweet", "1")
+ .setOperationThreaded(false)
+ .execute()
+ .actionGet();
+--------------------------------------------------
diff --git a/docs/java-api/index.asciidoc b/docs/java-api/index.asciidoc
new file mode 100644
index 0000000..a9e3015
--- /dev/null
+++ b/docs/java-api/index.asciidoc
@@ -0,0 +1,61 @@
+[[java-api]]
+= Java API
+:ref: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current
+
+[preface]
+== Preface
+This section describes the Java API that elasticsearch provides. All
+elasticsearch operations are executed using a
+<<client,Client>> object. All
+operations are completely asynchronous in nature (they either accept a
+listener or return a future).
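
The listener-versus-future distinction is the standard Java asynchronous pattern. Here is a plain-Java sketch of the two styles — the names are invented for illustration and are not the actual client classes:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public class AsyncStyles {
    private static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    // Future style: the caller gets a handle and blocks only when it
    // actually needs the result.
    static Future<String> search(String query) {
        return POOL.submit(() -> "hits for " + query);
    }

    // Listener style: a callback is invoked once the result is ready.
    static void search(String query, Consumer<String> listener) {
        POOL.submit(() -> listener.accept("hits for " + query));
    }

    public static void main(String[] args) throws Exception {
        Future<String> future = search("user:kimchy");
        System.out.println(future.get()); // blocks until the response arrives

        CountDownLatch done = new CountDownLatch(1);
        search("user:kimchy", hits -> {
            System.out.println(hits);
            done.countDown();
        });
        done.await();
        POOL.shutdown();
    }
}
```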
+
+Additionally, operations on a client may be accumulated and executed in
+<<bulk,Bulk>>.
+
+Note that all the APIs are exposed through the
+Java API (indeed, the Java API is used internally to execute them).
+
+
+== Maven Repository
+
+Elasticsearch is hosted on
+http://search.maven.org/#search%7Cga%7C1%7Ca%3A%22elasticsearch%22[Maven
+Central].
+
+For example, you can define the latest version in your `pom.xml` file:
+
+[source,xml]
+--------------------------------------------------
+<dependency>
+ <groupId>org.elasticsearch</groupId>
+ <artifactId>elasticsearch</artifactId>
+ <version>${es.version}</version>
+</dependency>
+--------------------------------------------------
+
+
+include::client.asciidoc[]
+
+include::index_.asciidoc[]
+
+include::get.asciidoc[]
+
+include::delete.asciidoc[]
+
+include::bulk.asciidoc[]
+
+include::search.asciidoc[]
+
+include::count.asciidoc[]
+
+include::delete-by-query.asciidoc[]
+
+include::facets.asciidoc[]
+
+include::percolate.asciidoc[]
+
+include::query-dsl-queries.asciidoc[]
+
+include::query-dsl-filters.asciidoc[]
+
diff --git a/docs/java-api/index_.asciidoc b/docs/java-api/index_.asciidoc
new file mode 100644
index 0000000..5ad9ef5
--- /dev/null
+++ b/docs/java-api/index_.asciidoc
@@ -0,0 +1,205 @@
+[[index_]]
+== Index API
+
+The index API allows one to index a typed JSON document into a specific
+index and make it searchable.
+
+
+[[generate]]
+=== Generate JSON document
+
+There are different ways of generating a JSON document:
+
+* Manually (aka do it yourself) using native `byte[]` or as a `String`
+
+* Using `Map` that will be automatically converted to its JSON
+equivalent
+
+* Using a third party library to serialize your beans such as
+http://wiki.fasterxml.com/JacksonHome[Jackson]
+
+* Using built-in helpers such as `XContentFactory.jsonBuilder()`
+
+Internally, each type is converted to `byte[]` (so a String is converted
+to a `byte[]`). Therefore, if the object is in this form already, then
+use it. The `jsonBuilder` is a highly optimized JSON generator that
+directly constructs a `byte[]`.
+
+
+==== Do It Yourself
+
+Nothing really difficult here, but note that you will have to encode
+dates according to the
+{ref}/mapping-date-format.html[Date Format].
+
+[source,java]
+--------------------------------------------------
+String json = "{" +
+ "\"user\":\"kimchy\"," +
+ "\"postDate\":\"2013-01-30\"," +
+ "\"message\":\"trying out Elasticsearch\"" +
+ "}";
+--------------------------------------------------
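
When encoding dates by hand, plain JDK classes are enough. A small sketch, assuming the default ISO 8601 style format in UTC (the helper name is ours, not an Elasticsearch API):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateJson {
    // Render a Date as an ISO 8601 timestamp in UTC, which Elasticsearch's
    // default date parsing accepts.
    static String toIso8601(Date date) {
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return fmt.format(date);
    }

    public static void main(String[] args) {
        String json = "{\"user\":\"kimchy\",\"postDate\":\""
                + toIso8601(new Date(0L)) + "\"}";
        System.out.println(json); // {"user":"kimchy","postDate":"1970-01-01T00:00:00Z"}
    }
}
```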
+
+
+[[using-map]]
+==== Using Map
+
+A `Map` is a collection of key/value pairs. It maps naturally onto a
+JSON structure:
+
+[source,java]
+--------------------------------------------------
+Map<String, Object> json = new HashMap<String, Object>();
+json.put("user","kimchy");
+json.put("postDate",new Date());
+json.put("message","trying out Elasticsearch");
+--------------------------------------------------
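
To see how directly a `Map` mirrors JSON, here is a toy serializer for strings, numbers and nested maps — purely illustrative; Elasticsearch performs this conversion for you when you pass the `Map` as a source:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ToyJson {
    // Toy serializer: handles String, Number and nested Map values only.
    static String toJson(Object value) {
        if (value instanceof String) {
            return "\"" + value + "\"";
        }
        if (value instanceof Number) {
            return value.toString();
        }
        if (value instanceof Map) {
            StringBuilder sb = new StringBuilder("{");
            boolean first = true;
            for (Map.Entry<?, ?> e : ((Map<?, ?>) value).entrySet()) {
                if (!first) sb.append(",");
                sb.append("\"").append(e.getKey()).append("\":")
                  .append(toJson(e.getValue()));
                first = false;
            }
            return sb.append("}").toString();
        }
        throw new IllegalArgumentException("unsupported value: " + value);
    }

    public static void main(String[] args) {
        Map<String, Object> json = new LinkedHashMap<>();
        json.put("user", "kimchy");
        json.put("message", "trying out Elasticsearch");
        System.out.println(toJson(json));
        // {"user":"kimchy","message":"trying out Elasticsearch"}
    }
}
```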
+
+
+[[beans]]
+==== Serialize your beans
+
+Elasticsearch already uses Jackson but shades it under the
+`org.elasticsearch.common.jackson` package, so you can add your own
+Jackson version to your `pom.xml` file or to your classpath. See the
+http://wiki.fasterxml.com/JacksonDownload[Jackson
+Download Page].
+
+For example:
+
+[source,xml]
+--------------------------------------------------
+<dependency>
+ <groupId>com.fasterxml.jackson.core</groupId>
+ <artifactId>jackson-databind</artifactId>
+ <version>2.1.3</version>
+</dependency>
+--------------------------------------------------
+
+Then, you can start serializing your beans to JSON:
+
+[source,java]
+--------------------------------------------------
+import com.fasterxml.jackson.databind.*;
+
+// instantiate a JSON mapper
+ObjectMapper mapper = new ObjectMapper(); // create once, reuse
+
+// generate json
+String json = mapper.writeValueAsString(yourbeaninstance);
+--------------------------------------------------
+
+
+[[helpers]]
+==== Use Elasticsearch helpers
+
+Elasticsearch provides built-in helpers to generate JSON content.
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.common.xcontent.XContentFactory.*;
+
+XContentBuilder builder = jsonBuilder()
+ .startObject()
+ .field("user", "kimchy")
+ .field("postDate", new Date())
+ .field("message", "trying out Elasticsearch")
+    .endObject();
+--------------------------------------------------
+
+Note that you can also add arrays with the `startArray(String)` and
+`endArray()` methods. The `field` method accepts many object types: you
+can directly pass numbers, dates and even other `XContentBuilder`
+objects.
+
+If you need to see the generated JSON content, you can use the
+`string()` method.
+
+[source,java]
+--------------------------------------------------
+String json = builder.string();
+--------------------------------------------------
+
+
+[[index-doc]]
+=== Index document
+
+The following example indexes a JSON document into an index called
+`twitter`, under a type called `tweet`, with id `1`:
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.common.xcontent.XContentFactory.*;
+
+IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
+ .setSource(jsonBuilder()
+ .startObject()
+ .field("user", "kimchy")
+ .field("postDate", new Date())
+ .field("message", "trying out Elasticsearch")
+ .endObject()
+ )
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+Note that you can also index your documents as a JSON string and that
+you don't have to give an ID:
+
+[source,java]
+--------------------------------------------------
+String json = "{" +
+ "\"user\":\"kimchy\"," +
+ "\"postDate\":\"2013-01-30\"," +
+ "\"message\":\"trying out Elasticsearch\"" +
+ "}";
+
+IndexResponse response = client.prepareIndex("twitter", "tweet")
+ .setSource(json)
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+The `IndexResponse` object will give you a report:
+
+[source,java]
+--------------------------------------------------
+// Index name
+String _index = response.index();
+// Type name
+String _type = response.type();
+// Document ID (generated or not)
+String _id = response.id();
+// Version (if it's the first time you index this document, you will get: 1)
+long _version = response.version();
+--------------------------------------------------
+
+If you use percolation while indexing, the `IndexResponse` object will
+give you the percolator queries that matched:
+
+[source,java]
+--------------------------------------------------
+IndexResponse response = client.prepareIndex("twitter", "tweet", "1")
+ .setSource(json)
+ .execute()
+ .actionGet();
+
+List<String> matches = response.matches();
+--------------------------------------------------
+
+For more information on the index operation, check out the REST
+{ref}/docs-index_.html[index] docs.
+
+
+=== Operation Threading
+
+The index API allows you to set the threading model the operation will
+be performed with when the actual execution of the API happens on the
+same node (i.e. the API is executed on a shard that is allocated on the
+same server).
+
+The options are to execute the operation on a different thread, or to
+execute it on the calling thread (note that the API is still async). By
+default, `operationThreaded` is set to `true` which means the operation
+is executed on a different thread.
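+
+As a sketch, the flag can be set on the request builder before executing
+(assuming the same client API used throughout this guide; this snippet
+needs a running cluster and is not standalone):
+
+[source,java]
+--------------------------------------------------
+// Run the index operation on the calling thread when the shard is local
+client.prepareIndex("twitter", "tweet", "1")
+        .setSource(json)
+        .setOperationThreaded(false)
+        .execute()
+        .actionGet();
+--------------------------------------------------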
diff --git a/docs/java-api/percolate.asciidoc b/docs/java-api/percolate.asciidoc
new file mode 100644
index 0000000..f0187fd
--- /dev/null
+++ b/docs/java-api/percolate.asciidoc
@@ -0,0 +1,50 @@
+[[percolate]]
+== Percolate API
+
+The percolator allows you to register queries against an index, and then
+send `percolate` requests which include a doc, getting back the
+queries that match that doc out of the set of registered queries.
+
+Read the main {ref}/search-percolate.html[percolate]
+documentation before reading this guide.
+
+[source,java]
+--------------------------------------------------
+//This is the query we're registering in the percolator
+QueryBuilder qb = termQuery("content", "amazing");
+
+//Index the query = register it in the percolator
+client.prepareIndex("myIndexName", ".percolator", "myDesignatedQueryName")
+ .setSource(jsonBuilder()
+ .startObject()
+ .field("query", qb) // Register the query
+ .endObject())
+ .setRefresh(true) // Needed when the query shall be available immediately
+ .execute().actionGet();
+--------------------------------------------------
+
+This indexes the above term query under the name
+*myDesignatedQueryName*.
+
+In order to check a document against the registered queries, use this
+code:
+
+[source,java]
+--------------------------------------------------
+//Build a document to check against the percolator
+XContentBuilder docBuilder = XContentFactory.jsonBuilder().startObject();
+docBuilder.field("doc").startObject(); //This is needed to designate the document
+docBuilder.field("content", "This is amazing!");
+docBuilder.endObject(); //End of the doc field
+docBuilder.endObject(); //End of the JSON root object
+//Percolate
+PercolateResponse response = client.preparePercolate()
+ .setIndices("myIndexName")
+ .setDocumentType("myDocumentType")
+ .setSource(docBuilder).execute().actionGet();
+//Iterate over the results
+for(PercolateResponse.Match match : response) {
+    //Handle the match; match.getId() returns the name under which
+    //the matching query was registered in the percolator
+}
+--------------------------------------------------
diff --git a/docs/java-api/query-dsl-filters.asciidoc b/docs/java-api/query-dsl-filters.asciidoc
new file mode 100644
index 0000000..2164ee8
--- /dev/null
+++ b/docs/java-api/query-dsl-filters.asciidoc
@@ -0,0 +1,462 @@
+[[query-dsl-filters]]
+== Query DSL - Filters
+
+Elasticsearch provides a full Java query DSL in a similar manner to the
+REST {ref}/query-dsl.html[Query DSL]. The factory for filter
+builders is `FilterBuilders`.
+
+Once your query is ready, you can use the <<search,Search API>>.
+
+See also how to build <<query-dsl-queries,Queries>>.
+
+To use `FilterBuilders` just import them into your class:
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.index.query.FilterBuilders.*;
+--------------------------------------------------
+
+Note that you can easily print (i.e. debug) the generated JSON of a
+query using the `toString()` method on the `FilterBuilder` object.
+
+
+[[and-filter]]
+=== And Filter
+
+See {ref}/query-dsl-and-filter.html[And Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.andFilter(
+ FilterBuilders.rangeFilter("postDate").from("2010-03-01").to("2010-04-01"),
+ FilterBuilders.prefixFilter("name.second", "ba")
+ );
+--------------------------------------------------
+
+Note that you can cache the result using the
+`AndFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[bool-filter]]
+=== Bool Filter
+
+See {ref}/query-dsl-bool-filter.html[Bool Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.boolFilter()
+ .must(FilterBuilders.termFilter("tag", "wow"))
+ .mustNot(FilterBuilders.rangeFilter("age").from("10").to("20"))
+ .should(FilterBuilders.termFilter("tag", "sometag"))
+ .should(FilterBuilders.termFilter("tag", "sometagtag"));
+--------------------------------------------------
+
+Note that you can cache the result using the
+`BoolFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[exists-filter]]
+=== Exists Filter
+
+See {ref}/query-dsl-exists-filter.html[Exists Filter].
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.existsFilter("user");
+--------------------------------------------------
+
+
+[[ids-filter]]
+=== Ids Filter
+
+See {ref}/query-dsl-ids-filter.html[IDs Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.idsFilter("my_type", "type2").addIds("1", "4", "100");
+
+// Type is optional
+FilterBuilders.idsFilter().addIds("1", "4", "100");
+--------------------------------------------------
+
+
+[[limit-filter]]
+=== Limit Filter
+
+See {ref}/query-dsl-limit-filter.html[Limit Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.limitFilter(100);
+--------------------------------------------------
+
+
+[[type-filter]]
+=== Type Filter
+
+See {ref}/query-dsl-type-filter.html[Type Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.typeFilter("my_type");
+--------------------------------------------------
+
+
+[[geo-bbox-filter]]
+=== Geo Bounding Box Filter
+
+See {ref}/query-dsl-geo-bounding-box-filter.html[Geo
+Bounding Box Filter]
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.geoBoundingBoxFilter("pin.location")
+ .topLeft(40.73, -74.1)
+ .bottomRight(40.717, -73.99);
+--------------------------------------------------
+
+Note that you can cache the result using the
+`GeoBoundingBoxFilterBuilder#cache(boolean)` method. See
+<<query-dsl-filters-caching>>.
+
+
+[[geo-distance-filter]]
+=== GeoDistance Filter
+
+See {ref}/query-dsl-geo-distance-filter.html[Geo
+Distance Filter]
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.geoDistanceFilter("pin.location")
+ .point(40, -70)
+ .distance(200, DistanceUnit.KILOMETERS)
+ .optimizeBbox("memory") // Can be also "indexed" or "none"
+ .geoDistance(GeoDistance.ARC); // Or GeoDistance.PLANE
+--------------------------------------------------
+
+Note that you can cache the result using the
+`GeoDistanceFilterBuilder#cache(boolean)` method. See
+<<query-dsl-filters-caching>>.
+
+
+[[geo-distance-range-filter]]
+=== Geo Distance Range Filter
+
+See {ref}/query-dsl-geo-distance-range-filter.html[Geo
+Distance Range Filter]
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.geoDistanceRangeFilter("pin.location")
+ .point(40, -70)
+ .from("200km")
+ .to("400km")
+ .includeLower(true)
+ .includeUpper(false)
+ .optimizeBbox("memory") // Can be also "indexed" or "none"
+ .geoDistance(GeoDistance.ARC); // Or GeoDistance.PLANE
+--------------------------------------------------
+
+Note that you can cache the result using the
+`GeoDistanceRangeFilterBuilder#cache(boolean)` method. See
+<<query-dsl-filters-caching>>.
+
+
+[[geo-poly-filter]]
+=== Geo Polygon Filter
+
+See {ref}/query-dsl-geo-polygon-filter.html[Geo Polygon
+Filter]
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.geoPolygonFilter("pin.location")
+ .addPoint(40, -70)
+ .addPoint(30, -80)
+ .addPoint(20, -90);
+--------------------------------------------------
+
+Note that you can cache the result using the
+`GeoPolygonFilterBuilder#cache(boolean)` method. See
+<<query-dsl-filters-caching>>.
+
+
+[[geo-shape-filter]]
+=== Geo Shape Filter
+
+See {ref}/query-dsl-geo-shape-filter.html[Geo Shape
+Filter]
+
+Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are
+optional dependencies. Consequently you must add `Spatial4J` and `JTS`
+to your classpath in order to use this type:
+
+[source,xml]
+-----------------------------------------------
+<dependency>
+ <groupId>com.spatial4j</groupId>
+ <artifactId>spatial4j</artifactId>
+ <version>0.3</version>
+</dependency>
+
+<dependency>
+ <groupId>com.vividsolutions</groupId>
+ <artifactId>jts</artifactId>
+ <version>1.12</version>
+ <exclusions>
+ <exclusion>
+ <groupId>xerces</groupId>
+ <artifactId>xercesImpl</artifactId>
+ </exclusion>
+ </exclusions>
+</dependency>
+-----------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// Import Spatial4J shapes
+import com.spatial4j.core.context.SpatialContext;
+import com.spatial4j.core.shape.Shape;
+import com.spatial4j.core.shape.impl.RectangleImpl;
+
+// Also import ShapeRelation
+import org.elasticsearch.common.geo.ShapeRelation;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// Shape within another
+filter = FilterBuilders.geoShapeFilter("location",
+ new RectangleImpl(0,10,0,10,SpatialContext.GEO))
+ .relation(ShapeRelation.WITHIN);
+
+// Intersect shapes
+filter = FilterBuilders.geoShapeFilter("location",
+ new PointImpl(0, 0, SpatialContext.GEO))
+ .relation(ShapeRelation.INTERSECTS);
+
+// Using pre-indexed shapes
+filter = FilterBuilders.geoShapeFilter("location", "New Zealand", "countries")
+ .relation(ShapeRelation.DISJOINT);
+--------------------------------------------------
+
+
+[[has-child-parent-filter]]
+=== Has Child / Has Parent Filters
+
+See:
+ * {ref}/query-dsl-has-child-filter.html[Has Child Filter]
+ * {ref}/query-dsl-has-parent-filter.html[Has Parent Filter]
+
+[source,java]
+--------------------------------------------------
+// Has Child
+FilterBuilders.hasChildFilter("blog_tag",
+ QueryBuilders.termQuery("tag", "something"));
+
+// Has Parent
+FilterBuilders.hasParentFilter("blog",
+ QueryBuilders.termQuery("tag", "something"));
+--------------------------------------------------
+
+
+[[match-all-filter]]
+=== Match All Filter
+
+See {ref}/query-dsl-match-all-filter.html[Match All Filter]
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.matchAllFilter();
+--------------------------------------------------
+
+
+[[missing-filter]]
+=== Missing Filter
+
+See {ref}/query-dsl-missing-filter.html[Missing Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.missingFilter("user")
+ .existence(true)
+ .nullValue(true);
+--------------------------------------------------
+
+
+[[not-filter]]
+=== Not Filter
+
+See {ref}/query-dsl-not-filter.html[Not Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.notFilter(
+ FilterBuilders.rangeFilter("price").from("1").to("2"));
+--------------------------------------------------
+
+
+[[or-filter]]
+=== Or Filter
+
+See {ref}/query-dsl-or-filter.html[Or Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.orFilter(
+ FilterBuilders.termFilter("name.second", "banon"),
+ FilterBuilders.termFilter("name.nick", "kimchy")
+ );
+--------------------------------------------------
+
+Note that you can cache the result using the
+`OrFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[prefix-filter]]
+=== Prefix Filter
+
+See {ref}/query-dsl-prefix-filter.html[Prefix Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.prefixFilter("user", "ki");
+--------------------------------------------------
+
+Note that you can cache the result using the
+`PrefixFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[query-filter]]
+=== Query Filter
+
+See {ref}/query-dsl-query-filter.html[Query Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.queryFilter(
+ QueryBuilders.queryString("this AND that OR thus")
+ );
+--------------------------------------------------
+
+Note that you can cache the result using the
+`QueryFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[range-filter]]
+=== Range Filter
+
+See {ref}/query-dsl-range-filter.html[Range Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.rangeFilter("age")
+ .from("10")
+ .to("20")
+ .includeLower(true)
+ .includeUpper(false);
+
+// A simplified form using gte, gt, lt or lte
+FilterBuilders.rangeFilter("age")
+ .gte("10")
+ .lt("20");
+--------------------------------------------------
+
+Note that you can ask not to cache the result using the
+`RangeFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[script-filter]]
+=== Script Filter
+
+See {ref}/query-dsl-script-filter.html[Script Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilder filter = FilterBuilders.scriptFilter(
+ "doc['age'].value > param1"
+ ).addParam("param1", 10);
+--------------------------------------------------
+
+Note that you can cache the result using the
+`ScriptFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[term-filter]]
+=== Term Filter
+
+See {ref}/query-dsl-term-filter.html[Term Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.termFilter("user", "kimchy");
+--------------------------------------------------
+
+Note that you can ask not to cache the result using the
+`TermFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[terms-filter]]
+=== Terms Filter
+
+See {ref}/query-dsl-terms-filter.html[Terms Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.termsFilter("user", "kimchy", "elasticsearch")
+ .execution("plain"); // Optional, can be also "bool", "and" or "or"
+ // or "bool_nocache", "and_nocache" or "or_nocache"
+--------------------------------------------------
+
+Note that you can ask not to cache the result using the
+`TermsFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+
+[[nested-filter]]
+=== Nested Filter
+
+See {ref}/query-dsl-nested-filter.html[Nested Filter]
+
+
+[source,java]
+--------------------------------------------------
+FilterBuilders.nestedFilter("obj1",
+ QueryBuilders.boolQuery()
+ .must(QueryBuilders.matchQuery("obj1.name", "blue"))
+ .must(QueryBuilders.rangeQuery("obj1.count").gt(5))
+ );
+--------------------------------------------------
+
+Note that you can ask not to cache the result using the
+`NestedFilterBuilder#cache(boolean)` method. See <<query-dsl-filters-caching>>.
+
+[[query-dsl-filters-caching]]
+=== Caching
+
+By default, some filters are cached and others are not. You can take
+fine-grained control of caching using the `cache(boolean)` method where
+it exists. For example:
+
+[source,java]
+--------------------------------------------------
+FilterBuilder filter = FilterBuilders.andFilter(
+ FilterBuilders.rangeFilter("postDate").from("2010-03-01").to("2010-04-01"),
+ FilterBuilders.prefixFilter("name.second", "ba")
+ )
+ .cache(true);
+--------------------------------------------------
diff --git a/docs/java-api/query-dsl-queries.asciidoc b/docs/java-api/query-dsl-queries.asciidoc
new file mode 100644
index 0000000..5760753
--- /dev/null
+++ b/docs/java-api/query-dsl-queries.asciidoc
@@ -0,0 +1,450 @@
+[[query-dsl-queries]]
+== Query DSL - Queries
+
+Elasticsearch provides a full Java query DSL in a similar manner to the
+REST {ref}/query-dsl.html[Query DSL]. The factory for query
+builders is `QueryBuilders`. Once your query is ready, you can use the
+<<search,Search API>>.
+
+See also how to build <<query-dsl-filters,Filters>>.
+
+To use `QueryBuilders` just import them into your class:
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.index.query.QueryBuilders.*;
+--------------------------------------------------
+
+Note that you can easily print (i.e. debug) the generated JSON of a
+query using the `toString()` method on the `QueryBuilder` object.
+
+The `QueryBuilder` can then be used with any API that accepts a query,
+such as `count` and `search`.
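+
+For example, a query built once can be handed to the count API (a sketch
+assuming a connected `client`; not standalone):
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.termQuery("user", "kimchy");
+
+long count = client.prepareCount("twitter")
+        .setQuery(qb)
+        .execute()
+        .actionGet()
+        .getCount();
+--------------------------------------------------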
+
+
+[[match]]
+=== Match Query
+
+See {ref}/query-dsl-match-query.html[Match Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.matchQuery("name", "kimchy elasticsearch");
+--------------------------------------------------
+
+
+[[multimatch]]
+=== MultiMatch Query
+
+See {ref}/query-dsl-multi-match-query.html[MultiMatch
+Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.multiMatchQuery(
+ "kimchy elasticsearch", // Text you are looking for
+ "user", "message" // Fields you query on
+ );
+--------------------------------------------------
+
+
+[[bool]]
+=== Boolean Query
+
+See {ref}/query-dsl-bool-query.html[Boolean Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders
+ .boolQuery()
+ .must(termQuery("content", "test1"))
+ .must(termQuery("content", "test4"))
+ .mustNot(termQuery("content", "test2"))
+ .should(termQuery("content", "test3"));
+--------------------------------------------------
+
+
+[[boosting]]
+=== Boosting Query
+
+See {ref}/query-dsl-boosting-query.html[Boosting Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.boostingQuery()
+ .positive(QueryBuilders.termQuery("name","kimchy"))
+ .negative(QueryBuilders.termQuery("name","dadoonet"))
+ .negativeBoost(0.2f);
+--------------------------------------------------
+
+
+[[ids]]
+=== IDs Query
+
+See {ref}/query-dsl-ids-query.html[IDs Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.idsQuery().ids("1", "2");
+--------------------------------------------------
+
+[[constant-score]]
+=== Constant Score Query
+
+See {ref}/query-dsl-constant-score-query.html[Constant
+Score Query]
+
+[source,java]
+--------------------------------------------------
+// Using with Filters
+QueryBuilders.constantScoreQuery(FilterBuilders.termFilter("name","kimchy"))
+ .boost(2.0f);
+
+// With Queries
+QueryBuilders.constantScoreQuery(QueryBuilders.termQuery("name","kimchy"))
+ .boost(2.0f);
+--------------------------------------------------
+
+
+[[dismax]]
+=== Disjunction Max Query
+
+See {ref}/query-dsl-dis-max-query.html[Disjunction Max
+Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.disMaxQuery()
+ .add(QueryBuilders.termQuery("name","kimchy")) // Your queries
+ .add(QueryBuilders.termQuery("name","elasticsearch")) // Your queries
+ .boost(1.2f)
+ .tieBreaker(0.7f);
+--------------------------------------------------
+
+
+[[flt]]
+=== Fuzzy Like This (Field) Query (flt and flt_field)
+
+See:
+ * {ref}/query-dsl-flt-query.html[Fuzzy Like This Query]
+ * {ref}/query-dsl-flt-field-query.html[Fuzzy Like This Field Query]
+
+[source,java]
+--------------------------------------------------
+// flt Query
+QueryBuilders.fuzzyLikeThisQuery("name.first", "name.last") // Fields
+ .likeText("text like this one") // Text
+ .maxQueryTerms(12); // Max num of Terms
+ // in generated queries
+
+// flt_field Query
+QueryBuilders.fuzzyLikeThisFieldQuery("name.first") // Only on single field
+ .likeText("text like this one")
+ .maxQueryTerms(12);
+--------------------------------------------------
+
+
+[[fuzzy]]
+=== FuzzyQuery
+
+See {ref}/query-dsl-fuzzy-query.html[Fuzzy Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.fuzzyQuery("name", "kimzhy");
+--------------------------------------------------
+
+
+[[has-child-parent]]
+=== Has Child / Has Parent
+
+See:
+ * {ref}/query-dsl-has-child-query.html[Has Child Query]
+ * {ref}/query-dsl-has-parent-query.html[Has Parent Query]
+
+[source,java]
+--------------------------------------------------
+// Has Child
+QueryBuilders.hasChildQuery("blog_tag",
+    QueryBuilders.termQuery("tag","something"));
+
+// Has Parent
+QueryBuilders.hasParentQuery("blog",
+ QueryBuilders.termQuery("tag","something"));
+--------------------------------------------------
+
+
+[[match-all]]
+=== MatchAll Query
+
+See {ref}/query-dsl-match-all-query.html[Match All
+Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.matchAllQuery();
+--------------------------------------------------
+
+
+[[mlt]]
+=== More Like This (Field) Query (mlt and mlt_field)
+
+See:
+ * {ref}/query-dsl-mlt-query.html[More Like This Query]
+ * {ref}/query-dsl-mlt-field-query.html[More Like This Field Query]
+
+[source,java]
+--------------------------------------------------
+// mlt Query
+QueryBuilders.moreLikeThisQuery("name.first", "name.last") // Fields
+ .likeText("text like this one") // Text
+ .minTermFreq(1) // Ignore Threshold
+ .maxQueryTerms(12); // Max num of Terms
+ // in generated queries
+
+// mlt_field Query
+QueryBuilders.moreLikeThisFieldQuery("name.first") // Only on single field
+ .likeText("text like this one")
+ .minTermFreq(1)
+ .maxQueryTerms(12);
+--------------------------------------------------
+
+
+[[prefix]]
+=== Prefix Query
+
+See {ref}/query-dsl-prefix-query.html[Prefix Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.prefixQuery("brand", "heine");
+--------------------------------------------------
+
+
+[[query-string]]
+=== QueryString Query
+
+See {ref}/query-dsl-query-string-query.html[QueryString Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.queryString("+kimchy -elasticsearch");
+--------------------------------------------------
+
+
+[[java-range]]
+=== Range Query
+
+See {ref}/query-dsl-range-query.html[Range Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders
+ .rangeQuery("price")
+ .from(5)
+ .to(10)
+ .includeLower(true)
+ .includeUpper(false);
+--------------------------------------------------
+
+
+=== Span Queries (first, near, not, or, term)
+
+See:
+ * {ref}/query-dsl-span-first-query.html[Span First Query]
+ * {ref}/query-dsl-span-near-query.html[Span Near Query]
+ * {ref}/query-dsl-span-not-query.html[Span Not Query]
+ * {ref}/query-dsl-span-or-query.html[Span Or Query]
+ * {ref}/query-dsl-span-term-query.html[Span Term Query]
+
+[source,java]
+--------------------------------------------------
+// Span First
+QueryBuilders.spanFirstQuery(
+ QueryBuilders.spanTermQuery("user", "kimchy"), // Query
+ 3 // Max End position
+ );
+
+// Span Near
+QueryBuilders.spanNearQuery()
+ .clause(QueryBuilders.spanTermQuery("field","value1")) // Span Term Queries
+ .clause(QueryBuilders.spanTermQuery("field","value2"))
+ .clause(QueryBuilders.spanTermQuery("field","value3"))
+ .slop(12) // Slop factor
+ .inOrder(false)
+ .collectPayloads(false);
+
+// Span Not
+QueryBuilders.spanNotQuery()
+ .include(QueryBuilders.spanTermQuery("field","value1"))
+ .exclude(QueryBuilders.spanTermQuery("field","value2"));
+
+// Span Or
+QueryBuilders.spanOrQuery()
+ .clause(QueryBuilders.spanTermQuery("field","value1"))
+ .clause(QueryBuilders.spanTermQuery("field","value2"))
+ .clause(QueryBuilders.spanTermQuery("field","value3"));
+
+// Span Term
+QueryBuilders.spanTermQuery("user","kimchy");
+--------------------------------------------------
+
+
+[[term]]
+=== Term Query
+
+See {ref}/query-dsl-term-query.html[Term Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilder qb = QueryBuilders.termQuery("name", "kimchy");
+--------------------------------------------------
+
+
+[[java-terms]]
+=== Terms Query
+
+See {ref}/query-dsl-terms-query.html[Terms Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.termsQuery("tags", // field
+ "blue", "pill") // values
+ .minimumMatch(1); // How many terms must match
+--------------------------------------------------
+
+
+[[top-children]]
+=== Top Children Query
+
+See {ref}/query-dsl-top-children-query.html[Top Children Query]
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.topChildrenQuery(
+ "blog_tag", // field
+ QueryBuilders.termQuery("tag", "something") // Query
+ )
+ .score("max") // max, sum or avg
+ .factor(5)
+ .incrementalFactor(2);
+--------------------------------------------------
+
+
+[[wildcard]]
+=== Wildcard Query
+
+See {ref}/query-dsl-wildcard-query.html[Wildcard Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.wildcardQuery("user", "k?mc*");
+--------------------------------------------------
+
+
+[[nested]]
+=== Nested Query
+
+See {ref}/query-dsl-nested-query.html[Nested Query]
+
+
+[source,java]
+--------------------------------------------------
+QueryBuilders.nestedQuery("obj1", // Path
+ QueryBuilders.boolQuery() // Your query
+ .must(QueryBuilders.matchQuery("obj1.name", "blue"))
+ .must(QueryBuilders.rangeQuery("obj1.count").gt(5))
+ )
+ .scoreMode("avg"); // max, total, avg or none
+--------------------------------------------------
+
+
+
+[[indices]]
+=== Indices Query
+
+See {ref}/query-dsl-indices-query.html[Indices Query]
+
+
+[source,java]
+--------------------------------------------------
+// Using another query when no match for the main one
+QueryBuilders.indicesQuery(
+ QueryBuilders.termQuery("tag", "wow"),
+ "index1", "index2"
+ )
+ .noMatchQuery(QueryBuilders.termQuery("tag", "kow"));
+
+// Using all (match all) or none (match no documents)
+QueryBuilders.indicesQuery(
+ QueryBuilders.termQuery("tag", "wow"),
+ "index1", "index2"
+ )
+ .noMatchQuery("all"); // all or none
+--------------------------------------------------
+
+
+[[geo-shape]]
+=== GeoShape Query
+
+See {ref}/query-dsl-geo-shape-query.html[GeoShape Query]
+
+
+Note: the `geo_shape` type uses `Spatial4J` and `JTS`, both of which are
+optional dependencies. Consequently you must add `Spatial4J` and `JTS`
+to your classpath in order to use this type:
+
+[source,xml]
+--------------------------------------------------
+<dependency>
+ <groupId>com.spatial4j</groupId>
+ <artifactId>spatial4j</artifactId>
+ <version>0.3</version>
+</dependency>
+
+<dependency>
+ <groupId>com.vividsolutions</groupId>
+ <artifactId>jts</artifactId>
+ <version>1.12</version>
+ <exclusions>
+ <exclusion>
+ <groupId>xerces</groupId>
+ <artifactId>xercesImpl</artifactId>
+ </exclusion>
+ </exclusions>
+</dependency>
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// Import Spatial4J shapes
+import com.spatial4j.core.context.SpatialContext;
+import com.spatial4j.core.shape.Shape;
+import com.spatial4j.core.shape.impl.RectangleImpl;
+
+// Also import ShapeRelation
+import org.elasticsearch.common.geo.ShapeRelation;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+// Shape within another
+QueryBuilders.geoShapeQuery("location",
+ new RectangleImpl(0,10,0,10,SpatialContext.GEO))
+ .relation(ShapeRelation.WITHIN);
+
+// Intersect shapes
+QueryBuilders.geoShapeQuery("location",
+ new PointImpl(0, 0, SpatialContext.GEO))
+ .relation(ShapeRelation.INTERSECTS);
+
+// Using pre-indexed shapes
+QueryBuilders.geoShapeQuery("location", "New Zealand", "countries")
+ .relation(ShapeRelation.DISJOINT);
+--------------------------------------------------
diff --git a/docs/java-api/search.asciidoc b/docs/java-api/search.asciidoc
new file mode 100644
index 0000000..5cb27b4
--- /dev/null
+++ b/docs/java-api/search.asciidoc
@@ -0,0 +1,140 @@
+[[search]]
+== Search API
+
+The search API allows you to execute a search query and get back search hits
+that match the query. It can be executed across one or more indices and
+across one or more types. The query can either be provided using the
+<<query-dsl-queries,query Java API>> or
+the <<query-dsl-filters,filter Java API>>.
+The body of the search request is built using the
+`SearchSourceBuilder`. Here is an example:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.action.search.SearchResponse;
+import org.elasticsearch.action.search.SearchType;
+import org.elasticsearch.index.query.FilterBuilders;
+import org.elasticsearch.index.query.QueryBuilders;
+--------------------------------------------------
+
+[source,java]
+--------------------------------------------------
+SearchResponse response = client.prepareSearch("index1", "index2")
+ .setTypes("type1", "type2")
+ .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
+ .setQuery(QueryBuilders.termQuery("multi", "test")) // Query
+ .setPostFilter(FilterBuilders.rangeFilter("age").from(12).to(18)) // Filter
+ .setFrom(0).setSize(60).setExplain(true)
+ .execute()
+ .actionGet();
+--------------------------------------------------
+
+Note that all parameters are optional. Here is the smallest search call
+you can write:
+
+[source,java]
+--------------------------------------------------
+// MatchAll on the whole cluster with all default options
+SearchResponse response = client.prepareSearch().execute().actionGet();
+--------------------------------------------------
+
+For more information on the search operation, check out the REST
+{ref}/search.html[search] docs.
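+
+The hits in the `SearchResponse` can then be iterated. Here is a minimal
+sketch (the field name `message` is illustrative):
+
+[source,java]
+--------------------------------------------------
+for (SearchHit hit : response.getHits()) {
+    String id = hit.getId();                          // document id
+    String json = hit.getSourceAsString();            // the raw _source as a JSON string
+    Object message = hit.getSource().get("message");  // a single field from the source map
+}
+--------------------------------------------------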
+
+
+[[scrolling]]
+=== Using scrolls in Java
+
+Read the {ref}/search-request-scroll.html[scroll documentation]
+first!
+
+[source,java]
+--------------------------------------------------
+import static org.elasticsearch.index.query.FilterBuilders.*;
+import static org.elasticsearch.index.query.QueryBuilders.*;
+
+QueryBuilder qb = termQuery("multi", "test");
+
+SearchResponse scrollResp = client.prepareSearch("test")
+ .setSearchType(SearchType.SCAN)
+ .setScroll(new TimeValue(60000))
+ .setQuery(qb)
+ .setSize(100).execute().actionGet(); //100 hits per shard will be returned for each scroll
+//Scroll until no hits are returned
+while (true) {
+ scrollResp = client.prepareSearchScroll(scrollResp.getScrollId()).setScroll(new TimeValue(600000)).execute().actionGet();
+ for (SearchHit hit : scrollResp.getHits()) {
+ //Handle the hit...
+ }
+ //Break condition: No hits are returned
+ if (scrollResp.getHits().getHits().length == 0) {
+ break;
+ }
+}
+--------------------------------------------------
+
+
+=== Operation Threading
+
+The search API allows one to set the threading model that the operation
+will use when the actual execution of the API happens on the local node
+(i.e. the API is executed against a shard that is allocated on that same
+server).
+
+There are three threading modes. The `NO_THREADS` mode means that the
+search operation will be executed on the calling thread. The
+`SINGLE_THREAD` mode means that the search operation will be executed on
+a single different thread for all local shards. The `THREAD_PER_SHARD`
+mode means that the search operation will be executed on a different
+thread for each local shard.
+
+The default mode is `THREAD_PER_SHARD`.
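+
+The threading mode can be set per request. The following sketch assumes
+the `SearchOperationThreading` enum shipped with this version of the
+client:
+
+[source,java]
+--------------------------------------------------
+import org.elasticsearch.action.search.SearchOperationThreading;
+
+// Run any local shard searches on the calling thread
+SearchResponse response = client.prepareSearch()
+    .setOperationThreading(SearchOperationThreading.NO_THREADS)
+    .execute().actionGet();
+--------------------------------------------------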
+
+
+[[msearch]]
+=== MultiSearch API
+
+See the {ref}/search-multi-search.html[MultiSearch API]
+documentation.
+
+[source,java]
+--------------------------------------------------
+SearchRequestBuilder srb1 = node.client()
+ .prepareSearch().setQuery(QueryBuilders.queryString("elasticsearch")).setSize(1);
+SearchRequestBuilder srb2 = node.client()
+ .prepareSearch().setQuery(QueryBuilders.matchQuery("name", "kimchy")).setSize(1);
+
+MultiSearchResponse sr = node.client().prepareMultiSearch()
+ .add(srb1)
+ .add(srb2)
+ .execute().actionGet();
+
+// You will get all individual responses from MultiSearchResponse#getResponses()
+long nbHits = 0;
+for (MultiSearchResponse.Item item : sr.getResponses()) {
+ SearchResponse response = item.getResponse();
+ nbHits += response.getHits().getTotalHits();
+}
+--------------------------------------------------
+
+
+[[java-search-facets]]
+=== Using Facets
+
+The following code shows how to add two facets within your search:
+
+[source,java]
+--------------------------------------------------
+SearchResponse sr = node.client().prepareSearch()
+ .setQuery(QueryBuilders.matchAllQuery())
+ .addFacet(FacetBuilders.termsFacet("f1").field("field"))
+ .addFacet(FacetBuilders.dateHistogramFacet("f2").field("birth").interval("year"))
+ .execute().actionGet();
+
+// Get your facet results
+TermsFacet f1 = (TermsFacet) sr.getFacets().facetsAsMap().get("f1");
+DateHistogramFacet f2 = (DateHistogramFacet) sr.getFacets().facetsAsMap().get("f2");
+--------------------------------------------------
+
+See the <<java-facets,Facets Java API>>
+documentation for details.