|
Previously, puppet's webrick server did not specify which ciphersuites
it would accept. Depending on the Ruby and OpenSSL versions, the default
set of ciphersuites is:
$ ruby -ropenssl -e 'puts OpenSSL::SSL::SSLContext::DEFAULT_PARAMS[:ciphers]'
ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW
Note that "ALL:!ADH" does not exclude AECDH, and the default param
string includes "LOW", e.g. DES-CBC-SHA.
This commit updates the webrick ciphersuites to match the value used
in passenger in commit 204b2974b. The resulting ciphersuites are:
[["DHE-RSA-AES256-GCM-SHA384", "TLSv1/SSLv3", 256, 256],
["DHE-RSA-AES256-SHA256", "TLSv1/SSLv3", 256, 256],
["ECDHE-RSA-AES256-GCM-SHA384", "TLSv1/SSLv3", 256, 256],
["ECDHE-RSA-AES256-SHA384", "TLSv1/SSLv3", 256, 256],
["DHE-RSA-AES128-GCM-SHA256", "TLSv1/SSLv3", 128, 128],
["DHE-RSA-AES128-SHA256", "TLSv1/SSLv3", 128, 128],
["ECDHE-RSA-AES128-GCM-SHA256", "TLSv1/SSLv3", 128, 128],
["ECDHE-RSA-AES128-SHA256", "TLSv1/SSLv3", 128, 128],
["DHE-RSA-CAMELLIA256-SHA", "TLSv1/SSLv3", 256, 256],
["DHE-RSA-AES256-SHA", "TLSv1/SSLv3", 256, 256],
["ECDHE-RSA-AES256-SHA", "TLSv1/SSLv3", 256, 256],
["DHE-RSA-CAMELLIA128-SHA", "TLSv1/SSLv3", 128, 128],
["DHE-RSA-AES128-SHA", "TLSv1/SSLv3", 128, 128],
["ECDHE-RSA-AES128-SHA", "TLSv1/SSLv3", 128, 128],
["CAMELLIA256-SHA", "TLSv1/SSLv3", 256, 256],
["AES256-SHA", "TLSv1/SSLv3", 256, 256],
["CAMELLIA128-SHA", "TLSv1/SSLv3", 128, 128],
["AES128-SHA", "TLSv1/SSLv3", 128, 128]]
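The effect of restricting the cipher string can be sketched with a bare
`OpenSSL::SSL::SSLContext` (the cipher string below is illustrative, not the
exact one adopted from passenger; webrick itself takes such a string via its
`:SSLCiphers` config key):

```ruby
require 'openssl'

ctx = OpenSSL::SSL::SSLContext.new
# Hypothetical restriction in the spirit of the commit: keep HIGH-strength
# suites, drop anonymous (ADH/AECDH) and LOW (e.g. DES-CBC-SHA) ones.
ctx.ciphers = "HIGH:!aNULL:!eNULL:!LOW:!ADH:!AECDH"

# SSLContext#ciphers returns [name, version, bits, alg_bits] tuples.
names = ctx.ciphers.map { |c| c[0] }
```

Inspecting `names` shows that no anonymous or LOW suites survive the filter,
unlike with the `ALL:!ADH:...` default quoted above.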
|
|
Webrick will now reject SSLv3 connections. If an SSL client tries to
connect using SSLv3, webrick+openssl will issue an sslv3 alert
handshake failure.
|
|
|
|
Override the Ruby 2.x default of setting Accept-Encoding to gzip when puppet's http_compression setting is false.
|
|
The http handler code contains a check to see if the expiration
date of the client certificate is within a certain window, so
that we can log a warning message if it will expire soon.
However, the mechanisms for handling this kind of check can
really vary depending on what web server you're running in, so
it doesn't make sense for this check to occur in a code path
that is common to all of the different web servers.
This commit simply moves the logic up into the code for
the individual web servers so that they will have the
ability to adjust the behavior according to their own
needs.
|
|
The persistent http connection work introduced a regression,
preventing the agent from displaying useful error messages when SSL
verification fails, e.g. the server's SSL certificate doesn't match
the hostname the agent tried to connect to. The connection_spec test
didn't catch the issue, because those tests execute with the
non-caching pool, which always uses non-persistent connections.
The root cause is that the Connection class assumed http
connections are started by ruby in the `Net::HTTP#request`
method, so the OpenSSL rescue block wrapped only that call.
However, in order to use persistent http connection, the caller needs
to explicitly start the connection prior to calling Net::HTTP#request,
which happens in the outer `Connection#with_connection` method.
This commit expands the scope of the rescue block. This way we receive
meaningful error messages if the connection is started explicitly for
persistent connections, or on-demand for non-persistent connections. It
also executes the ssl verification tests using persistent connections.
Also note that `with_connection` is private, so the fact that
`Pool#with_connection` or `Net::HTTP#request` can start the connection
is not visible to users of the Connection class.
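The expanded rescue scope can be sketched as follows (a hypothetical helper,
not puppet's actual Connection class): one rescue now covers both the
explicit start used for persistent connections and the on-demand start that
happens inside `Net::HTTP#request`.

```ruby
require 'net/http'
require 'openssl'

# Hypothetical sketch: whether the connection is started explicitly
# (persistent case) or on demand inside the yielded request, an SSL
# verification failure surfaces as a meaningful error message.
def with_started_connection(http)
  http.start unless http.started?  # explicit start for persistent connections
  yield http                       # the request may also start on demand
rescue OpenSSL::SSL::SSLError => e
  raise "SSL verification failed for #{http.address}: #{e.message}"
end
```

A fake connection object is enough to see the rescue fire on an explicit
start, which is exactly the path the regression missed.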
|
|
This commit refactors the spec tests that ensure puppet displays
meaningful error messages when SSL verification fails, e.g. when the
server certificate doesn't match the hostname we connected to.
|
|
(PUP-744) Support persistent HTTP connections
|
|
Re-word the description of a test in `webrick_spec.rb` as it is testing the
behavior of `masterhttplog` and not `masterlog`.
|
|
Previously, it was possible for puppet to create an HTTPS connection
with VERIFY_NONE, and cache the connection. As a result, the pool code
could hand out an insecure connection, when the caller expected a secure
one (VERIFY_PEER).
In practice this isn't an issue, because puppet only uses insecure HTTPS
connections when bootstrapping its SSL key and certs, and that happens
before `Configurer#run` overrides the default non-caching pool
implementation. However, it could be an issue if a provider were to
request an insecure connection *during* catalog application, as that
connection would be cached, and possibly reused by puppet, e.g. when
sending the report.
This commit modifies the connection pool to only cache HTTP or secure
HTTPS connections. All other connections are closed after one HTTP
request/response.
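The caching rule can be sketched as a predicate over a `Net::HTTP` object
(the predicate name is hypothetical, not puppet's actual API): cache plain
HTTP, cache HTTPS that verifies the peer, and never cache VERIFY_NONE.

```ruby
require 'net/http'
require 'openssl'

# Hypothetical predicate mirroring the commit's rule: only plain HTTP or
# peer-verified HTTPS connections are safe to hand back out of the pool.
def cachable?(http)
  !http.use_ssl? || http.verify_mode == OpenSSL::SSL::VERIFY_PEER
end

secure = Net::HTTP.new('puppet.example.com', 8140)
secure.use_ssl = true
secure.verify_mode = OpenSSL::SSL::VERIFY_PEER

insecure = Net::HTTP.new('puppet.example.com', 8140)
insecure.use_ssl = true
insecure.verify_mode = OpenSSL::SSL::VERIFY_NONE
```

With this split, the bootstrap-time VERIFY_NONE connection is closed after
one request/response instead of being reused for, say, report submission.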
|
|
Add yard doc for http pooling. Also adds a test to verify multiple
persistent connections can be borrowed from the pool.
|
|
Remove expectations that were secondary to the actual test, also
clarify behavior that was being tested.
|
|
Previously, I was concerned that a caller could borrow a connection
from the pool, and perform a long operation before returning it to the
pool. That could make it trivial for the caller to add expired
connections back to the pool, and cause issues for the next HTTP
request.
However, that is not likely, because connections are only borrowed for
the duration of the `Connection#get`, `#post`, etc calls, and those
methods don't take a block. The connection methods that do take a
block, e.g. `#request_get` are deprecated, and are not used within
puppet.
|
|
Previously, the `redirect_limit` specified the maximum number of HTTP
requests that the connection would make, not the maximum number of
redirects that it would follow. As a result, a limit of 0 would prevent
any HTTP requests from being made. This was introduced in 60f22eb1 when
the redirect logic was added.
This commit changes the limit so that 0 means don't follow redirects,
1 means follow one redirect, etc.
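The corrected semantics can be sketched with a small loop (a hypothetical
helper, not puppet's actual implementation): the limit counts redirects
followed, so a limit of 0 still permits the initial request.

```ruby
# Hypothetical sketch of the fixed counting: the first request is always
# allowed; only *additional* requests triggered by redirects count against
# redirect_limit.
def request_with_redirects(redirect_limit)
  requests = 0
  loop do
    response = yield             # perform one (simulated) request
    requests += 1
    return [response, requests] unless response == :redirect
    raise "too many redirects" if requests > redirect_limit
  end
end
```

With a limit of 1, one redirect is followed and the second request succeeds;
with a limit of 0, a redirect response raises immediately, but a direct
success still goes through.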
|
|
Previously, the connection would mutate its host, port, etc when
following redirects. This wasn't really an issue, because connections
were always single use.
Now that connections can be cached, we do not want to mutate the
original site information even when following permanent redirects.
|
|
Previously, the `Connection#connection` method was used in
`connection_spec.rb` to validate the state of the Net::HTTP object
created by the puppet connection.
Since the puppet connection no longer owns an Net::HTTP object (it may
use newly created or cached instances), the `#connection` method
doesn't make sense, and since it was private, can be removed.
The tests that relied on the private method have been updated, and in
some cases moved to the factory_spec, e.g. verifying proxy settings.
|
|
When starting a persistent HTTP connection, we do not explicitly specify
`Connection: Keep-Alive`, because that is the default in HTTP/1.1[1].
This commit adds a test to ensure we are using HTTP 1.1 or later.
Amazingly, ruby has defaulted to 1.1 as far back as ruby 1.8.7[2].
[1] http://tools.ietf.org/html/rfc7230#appendix-A.1.2
[2] https://github.com/ruby/ruby/blob/v1_8_7/lib/net/http.rb#L282
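The check is essentially a one-liner against the constant Net::HTTP uses to
advertise its protocol version:

```ruby
require 'net/http'

# Net::HTTP speaks HTTP/1.1, where persistent connections are the default,
# so no explicit "Connection: Keep-Alive" header is required.
version = Net::HTTP::HTTPVersion
```

Asserting `version >= "1.1"` is the whole test.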
|
|
The puppet connection class doesn't have an ssl_host instance variable.
|
|
Previously, the puppet connection class owned the Net::HTTP factory and
passed it to the pool when requesting a connection. However, the caching
pool only needs the factory when creating a new connection.
This commit makes the factory stateless and pushes it into the
respective pools. The SSL verifier is stateful (contains peer certs)
and remains in puppet's connection class.
When a pool needs to create a connection, it calls back into the puppet
connection object to initialize SSL on the Net::HTTP object. In the case
of the caching pool, it needs to do this before starting the
connection, which will perform the TCP connect and SSL handshake
operations.
|
|
DummyPool was a bad name; rename it to reflect what it actually does.
|
|
Previously, the dummy and caching pools allowed the caller to take a
connection from the pool and retain ownership of the connection. This is
undesirable because if the caller encounters an exception while using
the connection, then they have to remember to close it, otherwise the
underlying socket will leak.
There is also the issue that a caller may forget to add it back to the
pool before the pool is closed, in which case the socket will again
leak.
This commit changes the api to a block form to ensure connections are
always returned to the pool or closed.
Note the `reuse` flag is used to ensure the connection is returned to
the pool even if the caller does a non-local return:
with_connection(..) { |conn| return }
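The `reuse` pattern can be sketched with a toy pool (hypothetical class, not
puppet's actual Pool): the `ensure` clause returns the connection even on a
non-local return, while an exception flips `reuse` off so a possibly broken
connection is not put back.

```ruby
# Hypothetical pool sketch: 'reuse' defaults to true so the ensure clause
# returns the connection to the pool even when the caller's block does a
# non-local return; on an exception, reuse is cleared and the connection
# is dropped instead of being recycled.
class Pool
  def initialize(conn); @idle = [conn]; end
  def idle_count; @idle.length; end

  def with_connection
    conn = @idle.pop || :new_connection
    reuse = true
    begin
      yield conn
    rescue
      reuse = false  # don't recycle a connection that saw an error
      raise
    ensure
      @idle.push(conn) if reuse
    end
  end
end
```

A caller returning from inside the block still leaves the pool balanced,
which is exactly the leak this commit closes.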
|
|
This commit adds a pool for caching puppet network connections. The pool
maintains a hash indexed by site. Each site refers to an array of
sessions, where each session refers to a puppet HTTP connection and its
timeout.
It is expected the consumers of the pool will follow the pattern:
conn = pool.take_connection(site)
# do something with conn
pool.add_connection(site, conn)
Whenever a connection is (re-)added to the pool, the pool will assign an
expiration timeout. When a caller next requests a connection, the pool
will lazily close any expired connection(s) for that site.
The timeout exists, because if there is a long delay between the time
that an open connection is added to the pool and when it is taken, the
remote peer may close the connection, e.g. apache. As a result, we
only want to cache connections for a short time. Currently the timeout
defaults to 15 seconds.
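The structure described above can be sketched as follows (names are
illustrative, not puppet's actual API; the clock is injectable so expiry is
testable without sleeping):

```ruby
# Hypothetical sketch of the caching pool: a hash of site => array of
# [connection, expiration] sessions. Expired sessions for a site are closed
# lazily the next time a connection is taken for that site.
class CachingPool
  TIMEOUT = 15 # seconds, the commit's default

  def initialize(clock = -> { Time.now })
    @sessions = Hash.new { |h, k| h[k] = [] }
    @clock = clock
  end

  def add_connection(site, conn)
    @sessions[site] << [conn, @clock.call + TIMEOUT]
  end

  def take_connection(site)
    now = @clock.call
    @sessions[site].reject! { |_, expires| expires <= now } # drop expired
    session = @sessions[site].pop
    session && session.first
  end
end
```

Advancing the injected clock past 15 seconds makes a cached connection
unavailable on the next take, modeling a remote peer (e.g. apache) that has
since closed it.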
|
|
This commit adds a DummyPool that doesn't perform any caching of network
connections. It also adds the pool to the context system, but does so
using a Proc to defer evaluation of the pool. This way we are not
loading networking code while bootstrapping puppet.
|
|
Previously, puppet connections were directly responsible for creating
Net::HTTP objects. This was okay because there was only one code path
where new Net::HTTP objects needed to be created.
However, in order to support different connection pool behaviors,
caching and non-caching of Net::HTTP objects, we want to have two pool
implementations, and each of them will need the ability to create
Net::HTTP objects.
This commit creates a factory object for creating Net::HTTP objects,
and has the puppet connection delegate to it.
|
|
An HTTP session binds a puppet HTTP connection and an expiration time.
|
|
This commit adds a site helper method for use_ssl?
|
|
This commit adds a Site#move_to method that creates a new site pointing
to a new location. Note that we don't preserve the path from the new
site, as sites are only concerned with the HTTP connection layer, not
the HTTP request path and query parameters.
|
|
This commit creates a site value object containing a scheme, host, and
port. A site identifies an HTTP(S) connection. It does not currently
take into account proxy host, port, user, password, as these are global
puppet settings.
The Site class defines the `eql?` and `hash` methods making it suitable
for use as the key in a hash table.
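A minimal sketch of such a value object, covering the `use_ssl?` helper,
`move_to`, and the `eql?`/`hash` pair described above (field names follow
the commit text; the implementation details are illustrative):

```ruby
require 'uri'

# Sketch of a Site value object: scheme/host/port identify an HTTP(S)
# connection, and eql?/hash make it usable as a hash-table key.
class Site
  attr_reader :scheme, :host, :port

  def initialize(scheme, host, port)
    @scheme, @host, @port = scheme, host, port
  end

  def use_ssl?
    scheme == 'https'
  end

  # Follow a redirect: keep only connection-level information, dropping the
  # new location's path and query parameters.
  def move_to(uri)
    Site.new(uri.scheme, uri.host, uri.port)
  end

  def eql?(other)
    other.is_a?(Site) &&
      scheme == other.scheme && host == other.host && port == other.port
  end
  alias == eql?

  def hash
    [scheme, host, port].hash
  end
end
```

Two sites built from the same scheme/host/port compare equal and collide in
a hash, which is what lets the pool index sessions by site.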
|
|
This commit modifies the signature of the
`Puppet::Util::Profiler.profile` method to accept a new argument,
which is basically a "metric id". The argument takes the form
of an array of strings/symbols, allowing us to group specific sets of
profiling data into hierarchies and report aggregate profiling
statistics at any level of the hierarchy.
It also adds a new profiler, `Aggregate`, which extends the
existing `WallClock` profiler. The new profiler tracks the
aggregate timing data based on the metric id hierarchy,
and logs a message containing the aggregate data
at the end of the run.
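The aggregation idea can be sketched like this (a hypothetical stand-in for
the `Aggregate` profiler, not its actual implementation): each timing is
credited to every prefix of its metric id, so totals can be read at any
level of the hierarchy.

```ruby
# Hypothetical sketch: profile a block under a metric id (an array of
# symbols) and accumulate wall-clock time at every level of the hierarchy,
# e.g. both [:functions] and [:functions, :template].
class AggregateProfiler
  def initialize; @totals = Hash.new(0.0); end
  attr_reader :totals

  def profile(_message, metric_id)
    start = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    result = yield
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - start
    (1..metric_id.length).each { |i| @totals[metric_id.take(i)] += elapsed }
    result
  end
end
```

After two profiled calls under `[:functions, ...]`, the `[:functions]`
bucket holds the sum of both, giving the aggregate statistics the commit
describes.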
|
|
* pr/2758:
(maint) Stop using ambiguous Manager name
(PUP-2747) Add support for multiple profilers
|
|
This commit makes a slight modification to the profiling system,
such that we can register multiple profilers at the same time.
This allows callers to register their own profiler without
worrying about collisions with profilers that might have
been registered by other parts of the code.
|
|
This adds two new properties, environment_timeout and config_version,
to the returned settings properties.
The timeout is either an integer measured in seconds (minimum 0), or
the string 'unlimited'.
|
|
When we forgot to handle DELETE the error showed up as a 500 because the
code tried to call a method on nil. The correct response is a 405, since
the server does not handle that method. Now that GET, POST, PUT, HEAD,
and DELETE are all covered, it should be safe to respond with 405 in all
other cases.
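The dispatch pattern can be sketched as a lookup table with a 405 fallback
(handlers here are stubs, not puppet's actual routing code):

```ruby
# Hypothetical sketch: every implemented verb has a handler; anything else
# gets 405 Method Not Allowed instead of a nil-method crash (500).
HANDLERS = {
  "GET"    => ->(req) { [200, "ok"] },
  "POST"   => ->(req) { [200, "ok"] },
  "PUT"    => ->(req) { [200, "ok"] },
  "HEAD"   => ->(req) { [200, ""] },
  "DELETE" => ->(req) { [200, "ok"] },
}.freeze

def dispatch(method, req = nil)
  handler = HANDLERS[method]
  return [405, "Method Not Allowed"] unless handler
  handler.call(req)
end
```

An unimplemented verb like OPTIONS now yields a deliberate 405 rather than a
NoMethodError on nil.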
|
|
|
|
Previously resources would end up "floating away" from the catalog that
contained them because they wouldn't use the catalog as their
source for an environment. This also extended to catalogs, which lost
track of their environment at times. So in order to have resources use
the environment of their catalog, the catalog needed to have a
guaranteed environment.
This adds an environment parameter to the catalog constructor and uses
it to drive the environment to use for everything in that catalog. The
resource now uses its catalog's environment, if it has a catalog, or it
uses the new NONE environment, if it does not have a catalog.
When catalogs are deserialized, they also need to have a reference to
*some* environment. However, since they only have a name of the
environment, there isn't any guarantee that an environment of that name
is available locally. To deal with this there is a new "remote"
environment constructor to be able to create references to the remote
environment of the catalog. This is used as the environment of a
deserialized catalog.
We (Joshua Partlow and Andrew Parker) tried to have the environment a
required parameter of catalog construction, but it appears that creating
catalogs is public API and so cannot be changed (we verified this by
checking stdlib tests).
|
|
A few specs relying on mocking Puppet::Node::Environment.new to return
a specific test environment were colliding with the new environments
caching layer. Switched to using a puppet override to supply a static
environment.
|
|
|
|
This commit modifies Puppet::Network::HttpPool by adding a
`set_http_client_class` method, which can be used to override
the implementation class that we use as our HTTP client.
|
|
Now that there is a possibility of environments not existing (when
directory environments are enabled) the API needs to report a 404 Not
Found error when the client requests an environment that doesn't exist.
|
|
The tests were often creating environments that didn't need to have
manifests. Since we didn't have a way of specifying that when they
were written, we used '' to fill in the blank. This actually caused a
large number of tests to try to find code to load in the PWD, which
caused tests to break if a developer had parse errors in manifests being
used for testing in the root directory of their puppet project. This
changes the manifest to be optional and removes '' from those tests. The
tests no longer fail if a bad manifest is in the PWD :D
|
|
Modulepath was already being expanded; this ensures manifest is expanded
as well and fixes some specs so that they will work on Windows too.
|
|
In talking with consumers of the environment information it turns out
that listing the modules isn't the desired feature. Instead they really
want the settings of the environment (modulepath, manifest). Other
settings may be added later.
This has the added benefit that listing the environments is now a much
cheaper operation since it doesn't need to load the module data.
|
|
This is because these methods are also used for deserialization from
formats other than PSON. The method for serializing an object is called
to_data_hash, so it makes sense to call this one from_data_hash.
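The naming symmetry can be sketched with a round trip (the TagSet-like
class is illustrative): `to_data_hash` produces a format-neutral hash, and
`from_data_hash` rebuilds the object from that hash regardless of which
wire format (PSON/JSON, msgpack, ...) carried it.

```ruby
require 'json'

# Illustrative value class: to_data_hash and from_data_hash are symmetric,
# and neither cares which serialization format moves the hash around.
class TagSet
  attr_reader :tags

  def initialize(tags); @tags = tags; end

  def to_data_hash
    { 'tags' => tags.sort }
  end

  def self.from_data_hash(data)
    new(data['tags'])
  end
end
```

A JSON round trip through the data hash reconstructs an equivalent object;
swapping JSON for msgpack changes nothing about the class itself.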
|
|
The webrick interface was using the body of the response for the reason
phrase for non-2xx responses. In the v1 api, this meant that we got
bizarre response headers. In the v2 api, it meant that we got json
formatted body data in the first header.
Not only was that state of affairs odd, it was also inconsistent with
the rack implementation, which didn't ever set the reason phrase. I am
also a little concerned that some clients might have choked on large
reason phrases (header too long issues).
This removes the setting of the reason phrase on webrick, which gives us
back the standard reason phrases.
|
|
Instead of creating environments directly, we need to go through the
configured environment loaders, otherwise environments can't come from
other places.
|
|
The v2.0 api implementation had a typo that was not caught by any tests.
This fixes the typo and adds a test that would have shown the
problem.
|
|
The TagSet class didn't include FormatSupport and didn't have a
to_data_hash method, so it couldn't be serialized to msgpack.
|
|
|
|
This commit introduces error responses, a JSON object as described
in the api documentation (with associated schema).
It introduces the Puppet::Network::HTTP::Issues module, which collects
the known issue_kind values, in order to ensure consistency and provide
ability to yardoc them.
It adds two HTTPError subclasses for future use: an HTTPBadRequestError
(400 response), and an HTTPServerError class. The HTTPServerError
allows the API to provide more information about an unexpected error
than simply letting the error bubble up to be caught by the generic
HTTP handler.
|
|
This adds a new matcher to the JSONMatchers to validate json against a
given json-schema. All of the json schema validation now goes through
this one matcher, which also handles skipping the checks on windows.
|