|
This plumbs in the first step of getting the list of environments
returned. At the moment it is hard coded to only ever return the
production environment.
|
|
This commit does several things, all in order to make authorizing API V2
requests a bit easier:
* Renames /v2 to /v2.0, which was chosen because it doesn't conflict
with a legal environment name in the V1 API.
* Adds route chaining so that a handler can deal with a request
prefixed with /v2.0 and then continue on to another route
* Changes how calls to authorization are handled so that full paths
are checked rather than indirection/key pairs.
* Introduces an authorization step in the /v2.0 request chain. This is
currently limited to only handle GET requests (seen as find in
auth.conf).
|
|
They aren't really part of the handler, and many more areas of HTTP
processing are referring to them.
|
|
Rather than Routes having to match on path and method, they match only
on path and can then decide whether the method is allowed or raise
a method not allowed error.
Most of the handler routing spec tests are really route spec tests, so
those have been moved over.
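A minimal sketch of the routing behavior described above (class and error names here are illustrative, not Puppet's actual ones): a route matches on path alone, and only then decides whether the method is allowed.

```ruby
# Hypothetical model: a route matches on path prefix only; the method
# check happens afterwards and raises rather than failing to match.
class MethodNotAllowedError < StandardError; end

Route = Struct.new(:path_prefix, :allowed_methods, :handler) do
  def matches?(request)
    request[:path].start_with?(path_prefix)
  end

  def process(request)
    unless allowed_methods.include?(request[:method])
      raise MethodNotAllowedError,
            "#{request[:method]} not allowed on #{request[:path]}"
    end
    handler.call(request)
  end
end

route = Route.new("/v2.0/environments", ["GET"],
                  ->(req) { "environments list" })
```

With this shape, a POST to a GET-only path is a routing hit followed by a 405-style error, not a silent non-match.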
|
|
Add a routing framework for the HTTP module that routes requests to
either the V1 or V2 API as appropriate (first it sees if the V2 API
would like to handle a request, and if not it's passed off to the V1
API).
To support the routing framework, several new classes in the HTTP module
are introduced: Request, Route, Response, and MemoryResponse (for
testing).
The V2 API is implemented in lib/puppet/network/http/api/v2.rb.
Change the method to handle the request from process to call, which
allows using lambdas and procs as route handlers.
Change the require path in the HTTP module to break a cycle, which
necessitates changing requires in a few other files to just require the
entire Puppet::Network::HTTP module (rather than just the V1 API class).
Delete Puppet::Network::HTTP::RackHttpHandler because it was useless.
Change the initialize method of Puppet::Network::HTTP::WEBrickRest to
omit the unused handler argument.
|
|
It doesn't seem to be used anywhere except in some tests that expect it
to be called.
Also rescue any exception when processing a request in the http handler,
and respond with a 500(!).
|
|
The end goal is to have v1 and v2 API endpoints in their own processors,
with the routes registered on a shared handler that performs client
authentication and dispatch.
This commit rips out the v1 endpoints to their own processor, but does
not define anything about the handler including registration.
v1-related behavioral tests have been moved from the handler spec to the
v1 spec.
|
|
hlindberg/pup-716_short-lived-objects-in-filesystem
(PUP-716) short lived objects in filesystem
|
|
|
|
|
|
Sometimes a request needs to perform authentication. We can support this
with basic auth readily enough.
This commit moves the basic auth support from the report processor and
makes it a capability of the network connection code. As part of this it
needed to move away from using the #get, #post, etc calls on the
connection object and instead construct the Net::HTTP request objects.
This might present a slight change in behavior, but not one that users
will normally notice. The most noticeable change is that the #request
method can no longer be used for calling arbitrary methods on the
underlying connection; instead it can only be used to call the
appropriate request method (get, head, delete, post, put).
In order to make this work more clearly, this also adds actual
parameters to the connection class's #get, #post, etc methods. The
defaults are taken from the Net::HTTP defaults, which is what would have
been used before.
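The construction described above can be sketched directly with the standard library: building the Net::HTTP request object yourself lets you attach basic-auth credentials before the connection sends it (the path and credentials below are made up for illustration).

```ruby
require 'net/http'

# Build the request object directly instead of calling #put on the
# connection, so credentials can be attached to the request first.
request = Net::HTTP::Put.new("/reports/upload")
request.basic_auth("report_user", "secret")
request.body = "report payload"
# The Authorization header is now set on the request and will be sent
# when the connection performs it.
```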
|
|
This extends the trusted information that is exposed to the system and
manifests to include the certificate extensions that are part of the
certificate provided by the agent. Only the custom extensions that are
part of the puppet extension arc are included.
|
|
This pulls apart the indirector request and the trusted information. The
trusted information is stored in the Puppet::Context and overridden by
the remote request handler to allow the injection of various elements of
trusted information. The reason to separate it from the indirector
request is because the indirector requests are hard to control, as they
are constructed from hashes of information passed around and
manipulated. This should provide a more straightforward mechanism for
managing this kind of "contextual" information.
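A toy version of the "contextual" mechanism described (this is not Puppet::Context's implementation, just the shape of the idea): values live on a stack, and a handler can override them for the duration of a block.

```ruby
# Minimal context sketch: lookups walk a stack of frames, and #override
# pushes a frame for the duration of a block, then restores the stack.
class Context
  def initialize
    @stack = [{}]
  end

  def lookup(key)
    @stack.reverse_each { |frame| return frame[key] if frame.key?(key) }
    nil
  end

  def override(values)
    @stack.push(values)
    yield
  ensure
    @stack.pop
  end
end

context = Context.new
context.override(trusted: { certname: "agent.example.com" }) do
  context.lookup(:trusted)  # visible only inside the block
end
```

The remote request handler's job then reduces to wrapping its work in one `override` call rather than threading trusted data through indirector request hashes.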
|
|
Using the mocks is not absolutely necessary in this case. It is easy to
construct the certificate and using it makes it a little clearer what
kind of object we are dealing with.
|
|
* Sharpie/21869-fix-ssl-recursion:
(#21869) Fix recursion in cert expiration check
|
|
Two factory methods had been created that ended up following CamelCase
naming. This is ruby, man! We name with underscores here!
|
|
This refactors the API with an abstract superclass that contains
the API documentation and two factory methods: one to obtain the
no-validation implementation, and one for the default implementation.
Yardoc reworked.
|
|
During every authenticated request, the expiration date of the involved
certificates is checked. However, if the localhost cert is loaded when the CA
cert is not present an authenticated request will be initiated to download the
CA cert. This triggers another expiration check for authenticated requests,
which loads the localhost cert, which initiates another authenticated request
to download the CA cert... and so on until stack space is exhausted.
This patch skips the expiration check for the localhost cert if the CA cert is
missing.
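The fix boils down to a guard clause; a sketch with hypothetical names (Puppet's actual check lives in its certificate handling code):

```ruby
# Skip the expiration check for the localhost certificate when the CA
# certificate is not yet present, which breaks the recursive
# download -> expiration check -> download loop described above.
def check_expiration?(cert_name, ca_cert_present)
  return false if cert_name == "localhost" && !ca_cert_present
  true
end
```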
|
|
This provides the ability to create a verifier for SSL connections
opened with puppet's Puppet::Network::HTTP::Connection. The verifier is
provided the Net::HTTP connection and it just needs to configure it for
the correct verification mode.
The new functionality can be used when puppet's standard SSL
verification rules are not suited to the needs of the caller. The
impetus for this was a requirement that a connection be able to be made,
the certificates checked, but the subject and alt name checks skipped.
Rather than put that directly into puppet, this allows a new validator
to be written that performs those checks.
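A sketch of what such a verifier might look like (the class name and the `setup_connection` hook are assumptions for illustration): it receives the Net::HTTP connection and only has to configure the verification mode.

```ruby
require 'net/http'
require 'openssl'

# Hypothetical verifier: configures the Net::HTTP connection for peer
# verification while leaving subject/alt-name checks to be handled (or
# skipped) by a custom verify callback, per the use case above.
class NoSubjectCheckVerifier
  def setup_connection(http)
    http.use_ssl = true
    http.verify_mode = OpenSSL::SSL::VERIFY_PEER
    # A verify_callback that accepts any subject/alt name would go here.
  end
end

http = Net::HTTP.new("puppet.example.com", 8140)
NoSubjectCheckVerifier.new.setup_connection(http)
```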
|
|
- All previous File and FileTest calls to exist? or exists? go through
the new FileSystem::File abstraction so that the implementation can
later be swapped for a Windows specific one to support symlinks
|
|
- All calls to File class stat / lstat go through the new
FileSystem::File abstraction so that the implementation can later
be swapped for a Windows specific one to support symlinks
|
|
Ticket/22893 control remote requests
|
|
The http handler will check whether an indirection allows remote requests
before performing any operations.
|
|
Pass the full indirection object into do_* methods instead of just the name.
This is a better access point for getting what these methods need and reduces
duplication.
|
|
Previously the code had assumed that the order of the supported formats
didn't mean anything. However, the preferred_serialization_format, and
other factors, meant that the order really does matter. This makes the
selected format, when the Accept header is */*, be the most preferred,
supported format.
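The ordering rule can be sketched in a few lines (the method name is illustrative): when the client accepts anything, take the first, i.e. most preferred, supported format; otherwise take the first acceptable one.

```ruby
# Supported formats are assumed to be listed in preference order, so
# "*/*" resolves to the head of that list.
def select_format(accepted, supported)
  return supported.first if accepted.include?("*/*")
  (accepted & supported).first
end
```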
|
|
|
|
Use a feature to check for existence of msgpack library and add the
serialization format if it is supported.
|
|
This will raise a FormatError if asked to intern invalid YAML instead of an
unintelligible ruby error.
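The wrapping described above amounts to rescuing the parser's own error and re-raising something meaningful; a sketch (the `FormatError` name mirrors the commit's wording, the method is hypothetical):

```ruby
require 'yaml'

class FormatError < StandardError; end

# Intern YAML text, converting Psych's low-level syntax error into a
# FormatError that identifies the failing format.
def intern_yaml(text)
  YAML.safe_load(text)
rescue Psych::SyntaxError => e
  raise FormatError, "Could not intern from yaml: #{e.message}"
end
```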
|
|
|
|
This test was failing due to a bug in the test, which didn't show up because
it was checking for too broad of an error.
|
|
|
|
The first pass at the fix for #22652 fixed ignore handling by fixing
support for multi-valued parameters in a GET query_string (i.e. by returning
them as an array); however, it inadvertently broke parsing of parameters
in a POST body, e.g. as found in the catalog endpoint POST.
This patch addresses parameters in either case. As a footnote, however,
note that multi-valued parameters in a POST body (which we don't use)
would not be returned as an array.
This patch also adds two spec tests: one to catch the issue in the
first-pass fix (POST parameters not parsed), and one to flag the state
of POST multi-valued param parsing (does not return an array).
|
|
Rack does not actually support parsing multiple valued query strings
into an array of values (see https://github.com/rack/rack/pull/519).
This is not going to be supported before rack 2.0, but we need this
behavior in order to not have to deal with YAML query parameters.
This changes the rack based query parameter parsing to use CGI.parse
instead of the built in rack semantics (which also has a large number of
other semantics, which we weren't using). As a note, it does not appear
that the behavior of rack's query parameter parsing is documented
anywhere.
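The CGI.parse behavior relied on here is easy to demonstrate: every parameter value comes back as an array, which is exactly what preserves repeated parameters.

```ruby
require 'cgi'

# CGI.parse always returns arrays of values, so repeated parameters
# survive, unlike rack's built-in query parsing.
params = CGI.parse("ignore=.git&ignore=CVS&checksum=md5")
# params["ignore"]   => [".git", "CVS"]
# params["checksum"] => ["md5"]
```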
|
|
The initial fix for #22535 used a new field in the status endpoint hash
to return the puppet master version. The status endpoint has been
enabled in authconfig.rb since 2.6, so this seemed safe. However,
acceptance tests are based on conf/auth.conf (as is the default PE
auth.conf), which was never updated to enable the status endpoint.
So the status endpoint is not a good choice for returning the puppet
master's version.
This patch adds a response header containing the puppet version and
uses that to control yaml backward compatibility mode.
This also required updating spec tests, including several that have
a simple mock up of the http response object.
|
|
When the specific exception classes for HTTP control flow were
introduced, they ended up changing the logging from info to notice.
Notice is a much more severe level, which doesn't warrant having simple
file not found errors showing up at it. This changes the level back to
info.
|
|
The changes to how we process Accept headers left out the case of being
asked for anything (*/*), which the puppet dashboard will do. This
expands */* to mean the client accepts any format that the requested
endpoint can support.
In addition, this also adds logic to ignore quality specifiers.
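The quality-specifier handling can be sketched as a small parse step (the method name is illustrative): split the header on commas and drop anything after a semicolon.

```ruby
# Accept headers like "pson; q=0.9, yaml;q=0.5, */*" reduce to the bare
# format names, ignoring the quality specifiers as described above.
def accepted_formats(accept_header)
  accept_header.split(",").map { |entry| entry.split(";").first.strip }
end
```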
|
|
|
|
|
|
|
|
|
|
This patch adds tests for the newly introduced support
for ssl connections without peer verification.
|
|
This patch extends the functionality of the
Puppet::Network::HTTP::Connection
class to support ssl connections without peer verification.
The signature of the initialize method of the class is changed
to take the "options" hash argument instead of a list of individual
arguments for easier use as well as easier addition of possible
additional parameters in the future.
The patch also changes the
Puppet::Network::HttpPool.http_instance
method, but in a way which ensures the method signature remains
backwards compatible by only adding one parameter with a default
value.
|
|
Previously, although we stripped the leading `HTTP_` from
HTTP headers which Rack turned into environment variables,
we did not restore hyphens from underscores.
The user-visible consequence of this was that the match
to detect whether profiling was requested by an agent would
not work when running under Passenger, but did work under
Webrick.
This commit updates the translation routine to turn underscores
back into hyphens. It also fixes an erroneous test fixture; the
sample header 'HTTP_X-Custom-Header' would never be seen in
the wild because environment variables are not allowed to have
hyphens.
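The translation routine is a one-liner in spirit (the method name is made up for illustration): strip the `HTTP_` prefix Rack adds and turn underscores back into hyphens.

```ruby
# Recover the original header name from a Rack environment key.
def header_name(env_key)
  env_key.sub(/^HTTP_/, "").tr("_", "-")
end
```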
|
|
(#18255) - Follow http redirections
|
|
This patch adds support for HTTP redirection in the Puppet HTTP
client.
The http client includes a redirection limit, and recreates the
redirected connection with the same certificates and store as the
original (as long as the redirect's new location is ssl protected).
This patch allows redirection following for all HTTP methods, which
in the Puppet ecosystem shouldn't be an issue.
Signed-off-by: Brice Figureau <brice-puppet@daysofwonder.com>
Use a loop - temp commit
|
|
When puppet sends facts to the master it doubly encodes the data
structure. This happens because the facts_handler encodes the
serialized facts and the http request functionality then encodes the value
again for the query string. Although this happens, the compiler, when
deserializing the facts does not decode the data. This is because
decoding was handled by decode_params, although it never should have
been (the underlying library already has decoded the data), which
disguised this problem.
This changes the compiler to unescape the data that it gets to be
symmetric with the fact sending code.
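The asymmetry can be shown in miniature with the standard library: escaping twice on the way out requires unescaping twice on the way in.

```ruby
require 'cgi'

once  = CGI.escape("a&b")    # "a%26b"   -- the facts serialization escape
twice = CGI.escape(once)     # "a%2526b" -- the query-string escape on top

decoded_by_framework = CGI.unescape(twice)  # "a%26b" -- still escaped once
facts = CGI.unescape(decoded_by_framework)  # "a&b"   -- the compiler's
                                            # added unescape restores it
```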
|
|
We had previously converted to trying to parse the query string
directly, because it looked like WEBrick didn't handle multivalued
parameters. This had the effect of not being able to deal with POST
parameters correctly. After investigating WEBrick some more, it turns
out that it actually does support multivalued parameters, but that is
fairly hidden if you just look at the value returned from #query. The
WEBrick::HTTPRequest#query method actually returns a Hash{String,
FormData}. FormData is a subclass of String and adds a linked list of
Strings, which is used to represent the multiple values. Because it is a
subclass of String, it isn't clear what you are dealing with unless you
specifically look at the class that you have. It also looks like a
string because its #to_s returns the head of the linked list.
This changes the #params method for webrick to understand this structure
and correctly return multivalued parameters as individual strings or an
array of strings.
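A miniature model of the FormData surprise (this is not WEBrick's actual code, just the shape of it): a String subclass that chains extra values in a linked list looks like a plain string until you walk the list.

```ruby
# String subclass with a linked list of additional values; #to_s shows
# only the head, which is why the multiple values are easy to miss.
class FormDataLike < String
  attr_accessor :next_data

  def append(value)
    node = self
    node = node.next_data while node.next_data
    node.next_data = FormDataLike.new(value)
    self
  end

  def list
    values = []
    node = self
    while node
      values << String.new(node)
      node = node.next_data
    end
    values
  end
end

data = FormDataLike.new(".git").append("CVS")
# data.to_s  => ".git" -- looks like one value
# data.list  => [".git", "CVS"] -- the full set
```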
|
|
Previously, when the REST requests were constructed and one of the
parameters was an Array, the Array was encoded as YAML and then
identified on the master and parsed as YAML. This was used specifically
to send the list of "ignores" when making file metadata requests.
This changes the system to now use multi-valued query parameters instead
of YAML. Both the agent and master have been updated to support this,
however existing masters will not be able to understand the new requests
correctly since they will either:
* in webrick only choose the first value as the value of the parameter
* in rack fail because the decode_params method does not understand
arrays as values
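The new wire format round-trips with nothing but the standard library; a sketch of both sides (the parameter name matches the "ignores" use case above):

```ruby
require 'uri'
require 'cgi'

# Agent side: an array value becomes repeated query parameters.
query = URI.encode_www_form("ignore" => [".git", "CVS"])
# query => "ignore=.git&ignore=CVS"

# Master side: parsing recovers the values as an array.
parsed = CGI.parse(query)
# parsed["ignore"] => [".git", "CVS"]
```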
|
|
When the save and destroy actions started respecting the accept header
we duplicated how accept headers were handled for selecting the response
format. This removes that duplication and in putting specific tests in
place, cleans up the tests to work mainly through the public interfaces
as well as putting in place an HTTP error handling mechanism that allows
the request handlers to more specifically control the HTTP responses.
Invalid Accept headers will now cause a 406 response instead of the
previous 400.
Since all of this is tested with the memory terminus, it needed to be
expanded to also handle search and head actions.
|
|
The REST destroy action will now respect the formats that are requested in
the Accept header. In order to retain backwards compatibility for
clients that have never been required to specify an Accept header
before, it will also default to :yaml when no Accept is provided. If an Accept
is provided but none of the wanted formats are suitable an error is
raised, just like find and search.
|