path: root/methods
2016-08-11  block direct connections to .onion domains (RFC7687)  (David Kalnischkies, 1 file, -1/+19)
Doing a direct connect to an .onion address (unless you happen to use it as a local domain, which you shouldn't) is bound to fail, and it leaks to a DNS server both the fact that you use Tor and which hidden service you wanted to connect to. Worse, a poisoned DNS could actually resolve the name, tricking a user into believing the setup works correctly… This also blocks the usage of wrappers like torsocks with apt, but with native support available and advertised in the error message this shouldn't really be an issue.

Inspired-by: https://bugzilla.mozilla.org/show_bug.cgi?id=1228457
2016-08-10  implement socks5h proxy support for http method  (David Kalnischkies, 2 files, -23/+170)
Socks support is a requested feature insofar as the internet actually believes Acquire::socks::Proxy exists. It doesn't, and this commit isn't adding it as that isn't how our configuration works, but it allows Acquire::http::Proxy="socks5h://…". The HTTPS method was already changed to support socks proxies (all versions) via curl. This commit implements only SOCKS5 (RFC1928) with no auth or user/pass auth (RFC1929), but not GSSAPI, which the RFC requires. The 'h' in the protocol name further indicates that DNS resolution is delegated to the socks proxy rather than performed locally. The implementation works and was tested with Tor as the socks proxy, for which implementing only socks5h can actually be considered a feature.

Closes: 744934
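For illustration only, a minimal sketch (not the code from this commit) of how the RFC 1928 client greeting and a CONNECT-by-hostname request are laid out on the wire; the 'h' variant simply sends the hostname with address type 0x03 so the proxy resolves it. Function names here are made up for the example:

    #include <cstdint>
    #include <stdexcept>
    #include <string>
    #include <vector>

    // RFC 1928 client greeting: offer "no auth" (0x00) and
    // "username/password" auth (0x02, RFC 1929).
    static std::vector<std::uint8_t> Socks5Greeting() {
       return {0x05 /* version */, 0x02 /* number of methods */, 0x00, 0x02};
    }

    // CONNECT request carrying the hostname itself (ATYP 0x03), which is what
    // the 'h' in socks5h stands for: the proxy performs the DNS lookup.
    static std::vector<std::uint8_t> Socks5ConnectByName(std::string const &host, std::uint16_t port) {
       if (host.size() > 255)
          throw std::runtime_error("hostname too long for SOCKS5");
       std::vector<std::uint8_t> req = {0x05 /* version */, 0x01 /* CONNECT */,
                                        0x00 /* reserved */, 0x03 /* ATYP: domain name */,
                                        static_cast<std::uint8_t>(host.size())};
       req.insert(req.end(), host.begin(), host.end());
       req.push_back(static_cast<std::uint8_t>(port >> 8));    // port in network byte order
       req.push_back(static_cast<std::uint8_t>(port & 0xFF));
       return req;
    }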
2016-08-10  implement generic config fallback for methods  (David Kalnischkies, 15 files, -202/+287)
The https method has for a long while implemented a hardcoded fallback to the same options in http, which, while it works, is rather inflexible if we want to allow a method to run under another name to change its behaviour slightly, like apt-transport-tor does with https – most of that diff being s#https#tor#g, which then fails to do the full-circle fallthrough tor -> https -> http for https sources. With this config infrastructure that could now be implemented.
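Conceptually the fallback is just walking a chain of access names until one of them defines the option. A hedged sketch of the idea, not apt's real configuration API – the map and the function below are illustrative only:

    #include <map>
    #include <string>
    #include <vector>

    // Illustrative: the configuration is modelled as a flat key/value map and we
    // walk an access-name chain like tor -> https -> http until a key such as
    // "Acquire::<access>::Proxy" is found.
    static std::string FindWithFallback(std::map<std::string, std::string> const &config,
                                        std::vector<std::string> const &chain,
                                        std::string const &option,
                                        std::string const &defaultValue)
    {
       for (auto const &access : chain)
       {
          auto const it = config.find("Acquire::" + access + "::" + option);
          if (it != config.end())
             return it->second;
       }
       return defaultValue;
    }

    // Usage: FindWithFallback(config, {"tor", "https", "http"}, "Proxy", "");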
2016-08-10  use the same redirection handling for http and https  (David Kalnischkies, 5 files, -99/+95)
cURL, which backs our https implementation, can handle redirects on its own, but by dealing with them ourselves we gain finer control over which redirections will be performed (we don't like https → http) and by whom, so that redirections to other hosts correctly spawn a new https method to deal with them instead of letting the current one handle it.
2016-08-10  detect redirection loops in acquire instead of workers  (David Kalnischkies, 6 files, -45/+42)
Having the detection handled in specific (http) workers means that a redirection loop over different hostnames isn't detected. It's also not a good idea to have this implemented in each method independently, even if it would work.
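A minimal sketch of the general idea – detecting a loop centrally by remembering every URI a request has already been redirected through; the struct and method names are invented for the example and are not the acquire system's actual types:

    #include <set>
    #include <string>

    // Illustrative: seeing the same URI twice in one redirect chain (even on a
    // different hostname than the original request) means we are looping and
    // should abort instead of following the redirect again.
    struct RedirectChain
    {
       std::set<std::string> Seen;

       // returns false if following this redirect would close a loop
       bool Follow(std::string const &uri)
       {
          return Seen.insert(uri).second;
       }
    };

    // Usage:
    //   RedirectChain chain;
    //   chain.Follow("http://a.example/f");   // true
    //   chain.Follow("http://b.example/f");   // true
    //   chain.Follow("http://a.example/f");   // false -> loop detected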
2016-08-10  fail on unsupported http/https proxy settings  (David Kalnischkies, 3 files, -6/+12)
Closes: #623443
2016-08-10  support all socks-proxy known to curl in https method  (David Kalnischkies, 1 file, -1/+12)
2016-08-10  Get rid of the old buildsystem  (Julian Andres Klode, 1 file, -110/+0)
Bye, bye, old friend.
2016-08-06  CMake: Add basic CMake build system  (Julian Andres Klode, 1 file, -0/+35)
Introduce an initial CMake buildsystem. This build system can build a fully working apt system without translation or documentation. The FindBerkelyDB module is from kdelibs, with some small adjustments to also look in db5 directories. Initial work on this CMake build system started in 2009 and was resumed in August 2016.
2016-07-30  prevent C++ locale number formatting in text APIs (try 2)  (David Kalnischkies, 1 file, -1/+1)
Follow-up to b58e2c7c56b1416a343e81f9f80cb1f02c128e25. Still a regression of sorts from 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf.

Closes: 832044
2016-07-27  rred: truncate result file before writing to it  (David Kalnischkies, 1 file, -2/+2)
If another file in the transaction fails and hence dooms the transaction, we can end up in a situation in which a -patched file (= rred writes the result of the patching to it) remains in the partial/ directory. The next apt call will perform the rred patching again and write its result to the -patched file again, but instead of starting with an empty file as intended it overwrites the content previously in the file. That has the same result if the new content happens to be longer than the old content, but if it isn't, parts of the old content remain in the file; the file will nonetheless pass verification, as the new content written to it matches the hashes, and if the entire transaction passes it will be moved to the lists/ directory, where it might or might not trigger errors depending on whether the old content which remained forms a valid file together with the new content.

This has no real security implications as no untrusted data is involved: the old content consists of a base file which passed verification and a bunch of patches which all passed multiple verifications as well, so the old content isn't controllable by an attacker, and neither is the new one (as the new content alone passes verification). So the best an attacker can do is let the user run into the same issue as in the report.

Closes: #831762
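A minimal sketch of the underlying fix idea, assuming POSIX open(2): truncating the result file on open throws away whatever an earlier, doomed transaction left behind, so a shorter new result can never end up with stale trailing bytes. The helper name is made up for the example:

    #include <fcntl.h>
    #include <sys/types.h>

    // Illustrative: O_TRUNC discards any leftover content from a previous run
    // before the new patch result is written (ftruncate(fd, 0) on an already
    // open descriptor would achieve the same).
    static int OpenResultFile(const char *path)
    {
       return open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    }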
2016-07-27  http: skip requesting if pipeline is full  (David Kalnischkies, 1 file, -0/+2)
The rewrite in 742f67eaede80d2f9b3631d8697ebd63b8f95427 is based on the assumption that the pipeline will always be at least one item short each time it is called, but the logs in #832113 suggest that this isn't always the case. I fail to see how at the moment, but the old implementation had this behavior, so restoring it can't really hurt, can it?
2016-07-27  use proper warning for automatic pipeline disable  (David Kalnischkies, 1 file, -4/+1)
Also fixes the message itself to mention the correct option name, as noticed in #832113.
2016-07-26  verify hash of input file in rred  (David Kalnischkies, 1 file, -16/+41)
We read the entire input file we want to patch anyhow, so we can also calculate the hash for that file and compare it with what we had expected it to be. Note that this isn't really a security improvement as a) the file we patch is trusted & b) if the input is incorrect the result will hardly match, so this is just for failing slightly earlier with a more relevant error message (although, in terms of rred, it is ignored and a complete download is attempted instead).
2016-07-06  keep trying with next if connection to a SRV host failed  (David Kalnischkies, 1 file, -7/+23)
Instead of only trying the first host we get via SRV, we try them all as we are supposed to, and if that isn't working we try to connect to the host itself as if we hadn't seen any SRV records. This was already the intent of the old code, but it failed to hide earlier problems from the next call, which would then unconditionally fail, resulting in an all-around failure to connect. With proper stacking we can also keep the error messages of each call around (and in the order tried), so if the entire connection fails we can report everything we have tried, while discarding the entire stack if something works out in the end.
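A hedged sketch of that control flow – try each SRV target in order, then the origin host, stacking every failure message and only reporting the stack if nothing worked. The function and parameter names are invented for the example:

    #include <string>
    #include <vector>

    // Illustrative: 'Connect' is any callable taking (host, error-out) and
    // returning true on success. The error stack is cleared on success and
    // handed back to the caller in full on total failure.
    template <typename ConnectFunc>
    static bool ConnectWithFallback(std::vector<std::string> const &srvTargets,
                                    std::string const &origin,
                                    ConnectFunc Connect,
                                    std::vector<std::string> &errors)
    {
       std::vector<std::string> candidates = srvTargets;
       candidates.push_back(origin); // as if we had never seen any SRV record

       for (auto const &host : candidates)
       {
          std::string error;
          if (Connect(host, error))
          {
             errors.clear();        // something worked: forget earlier problems
             return true;
          }
          errors.push_back(host + ": " + error);
       }
       return false;                // caller reports everything in 'errors'
    }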
2016-07-06  report all instead of first error up the acquire chain  (David Kalnischkies, 1 file, -1/+7)
If we don't give a specific error to report up, it is likely that all errors currently in the error stack are equally important, so reporting just one could turn out to be confusing, e.g. if name resolution failed in an SRV record list.
2016-07-06  don't change owner/perms/times through file:// symlinks  (David Kalnischkies, 3 files, -21/+35)
If we have files in partial/ from a previous invocation or similar, those could be symlinks created by file:// sources. The code is expecting only real files though and happily changes owner, modification times and permissions on the file the symlink points to, which tends to be a file we have no business touching in this way. Permissions of symlinks shouldn't be changed, and changing the owner is usually pointless too, but just to be sure we pick the easy way out: use lchown and check for symlinks before chmod/utimes.

Reported-By: Mattia Rizzolo on IRC
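A minimal sketch of the symlink-safe pattern using plain POSIX calls (lstat, lchown, chmod); the helper name and exact behaviour are illustrative, not the code from this commit:

    #include <sys/stat.h>
    #include <unistd.h>

    // Illustrative: lstat() tells us whether the path is a symlink without
    // following it; lchown() changes the owner of the link itself rather than
    // the target; chmod()/utimes() would follow the link, so they are skipped
    // for symlinks entirely.
    static bool SanitizePartialFile(const char *path, uid_t uid, gid_t gid, mode_t mode)
    {
       struct stat st;
       if (lstat(path, &st) != 0)
          return false;

       if (lchown(path, uid, gid) != 0)
          return false;

       if (S_ISLNK(st.st_mode))
          return true;              // don't touch the file the link points to

       return chmod(path, mode) == 0;
    }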
2016-07-05  avoid 416 response teardown binding to null pointer  (David Kalnischkies, 4 files, -10/+12)
methods/http.cc:640:13: runtime error: reference binding to null pointer of type 'struct FileFd'

This reference is never used in the cases where it is a nullptr, so the practical difference is non-existent, but it's a bug still.

Reported-By: gcc -fsanitize=undefined
2016-07-02  use +0000 instead of UTC by default as timezone in output  (David Kalnischkies, 2 files, -3/+3)
All apt versions support numeric as well as 3-character timezones just fine, and it's actually hard to write code which doesn't "accidentally" accept both. So why change? Documenting the Date/Valid-Until fields in the Release file is easy to do in terms of referencing the datetime format used e.g. in Debian changelogs (policy §4.4). This format specifies only the numeric timezones though, not the nowadays obsolete 3-character ones, so in the interest of least surprise we should use the same format, even though it carries a small risk of regression in other clients (which encounter repositories created with apt-ftparchive). In case it really does regress in practice, the hidden option -o APT::FTPArchive::Release::NumericTimezone=0 can be used to go back to good old UTC as timezone.

The EDSP and EIPP protocols use this 'new' format; the text interface used to communicate with the acquire methods does not, for compatibility reasons, even if none of our methods would be affected and I doubt any other would (in these instances the timezone is 'GMT', as that is what HTTP/1.1 requires). Note that this is only true for apt talking to methods; (libapt-based) methods talking to apt will respond with the 'new' format. It is therefore strongly advised to support both in method input as well.
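For illustration, a minimal sketch of emitting such a date with a numeric timezone, assuming standard strftime/gmtime_r; since the time is converted with gmtime the offset is always +0000, where the old behaviour would have printed "UTC" instead. The function name is made up for the example:

    #include <cstdio>
    #include <ctime>

    // Illustrative: RFC 822 style date with a numeric timezone, matching the
    // Debian changelog date format (policy §4.4).
    static void PrintReleaseDate(time_t when)
    {
       char buffer[64];
       struct tm tmbuf;
       gmtime_r(&when, &tmbuf);
       strftime(buffer, sizeof(buffer), "%a, %d %b %Y %H:%M:%S +0000", &tmbuf);
       printf("Date: %s\n", buffer);
    }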
2016-06-27  close server if parsing of header field failed  (David Kalnischkies, 1 file, -0/+1)
As seen in #828011: if we fail to parse a header field like Last-Modified, we end up interpreting the data as response headers for upcoming requests in case we don't rotate to a new server in DNS rotation.
2016-06-27  methods/ftp: Cope with weird PASV responses  (Julian Andres Klode, 1 file, -2/+15)
wu-ftpd sends the response without parens, whereas we expect them. I did not test the patch, but it should work. I added another 'return true' if Pos is still npos after the second find, to make sure we don't add npos to the string.

Thanks: Lukasz Stelmach for the initial patch
Closes: #420940
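For illustration, a hedged sketch (not the actual patch) of parsing the RFC 959 227 reply tuple "h1,h2,h3,h4,p1,p2" whether or not the server wrapped it in parentheses; the function name is invented for the example:

    #include <cstdio>
    #include <string>

    // Illustrative: skip the "227" status code, then scan for the first digit
    // of the address tuple. This works for "(192,0,2,1,19,136)" as well as the
    // bare wu-ftpd form "192,0,2,1,19,136".
    static bool ParsePasv(std::string const &reply, std::string &ip, unsigned int &port)
    {
       std::string::size_type pos = reply.find_first_of("0123456789");
       if (pos == std::string::npos)
          return false;
       pos = reply.find_first_of("0123456789", pos + 3);   // move past "227"
       if (pos == std::string::npos)
          return false;

       unsigned int h1, h2, h3, h4, p1, p2;
       if (sscanf(reply.c_str() + pos, "%u,%u,%u,%u,%u,%u", &h1, &h2, &h3, &h4, &p1, &p2) != 6)
          return false;

       ip = std::to_string(h1) + "." + std::to_string(h2) + "." +
            std::to_string(h3) + "." + std::to_string(h4);
       port = p1 * 256 + p2;
       return true;
    }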
2016-06-15  http: don't hang on redirect with length + connection close  (David Kalnischkies, 1 file, -4/+4)
Most servers which close the connection do not send a Content-Length, as it is usually redundant information, but some might. While testing with our own server with 'aptwebserver::response-header::Connection' set to 'close' I noticed that http hangs after a redirect in such cases, so if we have the information, just use it instead of discarding it.
2016-06-02  ignore std::locale exception on non-existent "" locale  (David Kalnischkies, 1 file, -1/+5)
In 8b79c94af7f7cf2e5e5342294bc6e5a908cacabf the change to the C++ way of setting the locale causes us to be terminated if an ungenerated locale is used as LC_ALL (or similar) – but we don't want to fail here, we just want to carry on as before with setlocale, which we call in that case just for good measure.
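A minimal sketch of the pattern described here – catch the std::runtime_error that std::locale("") throws for an ungenerated locale and fall back to plain setlocale; the wrapper function name is made up for the example:

    #include <clocale>
    #include <locale>
    #include <stdexcept>

    // Illustrative: std::locale("") throws if the locale configured in the
    // environment was never generated; fall back to the C way, which silently
    // carries on in that case.
    static void SetupLocale()
    {
       try
       {
          std::locale::global(std::locale(""));
       }
       catch (std::runtime_error const &)
       {
          setlocale(LC_ALL, "");
       }
    }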
2016-05-28  use std::locale::global instead of setlocale  (David Kalnischkies, 20 files, -77/+24)
We use a wild mixture of C and C++ ways of generating output, so having a consistent world-view in both styles sounds like a good idea and should help in preventing regressions.
2016-05-27  prevent C++ locale number formatting in text APIs  (David Kalnischkies, 2 files, -3/+3)
Setting the C++ locale via std::locale::global(std::locale("")) – which would otherwise default to the classic C locale (aka: unaffected by setlocale) – affects the formatting of numeric types in IO streams. For output meant for humans that is perfectly sensible, but it breaks our many text interfaces used and parsed by us and others who do not expect the numbers to be formatted.

Closes: #825396
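A small self-contained sketch of the effect and the opt-out: with a grouping locale as the global C++ locale, new streams may render 10000 as e.g. "10,000"; a stream feeding a machine-parsed text interface can imbue the classic locale to stay plain. Illustrative only:

    #include <iostream>
    #include <locale>
    #include <sstream>

    int main()
    {
       // Whatever locale the user runs with; keep "C" if it isn't generated.
       try { std::locale::global(std::locale("")); }
       catch (...) { }

       std::ostringstream machine;
       machine.imbue(std::locale::classic());   // always plain "10000", no grouping
       machine << 10000;
       std::cout << machine.str() << std::endl;
       return 0;
    }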
2016-05-08  gpgv: always show webportal error on NODATA  (David Kalnischkies, 1 file, -12/+21)
gpg doesn't give us a UID on NODATA, which we were "expecting" (but not using for anything), just an error number. Instead of collecting these as badsigners, which would trigger an "invalid signature" error with remarks like "NODATA 1", we adopt a message similar to the NODATA error of a clearsigned file (which is actually not reached anymore as we split them up, which fails with a NOSPLIT error, which uses the same general error message). In other words: not a security-relevant change, just a user-experience improvement, as we now point users to the most likely cause of the problem instead of saying "invalid signature", which would point them in the direction of the archive being broken (for everyone) instead.

Closes: 823746
2016-05-01  support multiple fingerprints in signed-by  (David Kalnischkies, 1 file, -13/+17)
A keyring file can include multiple keys, so it's only fair for transitions and such to support multiple fingerprints as well.
2016-05-01  gpgv: cleanup statusfd parsing a bit  (David Kalnischkies, 1 file, -57/+45)
We parse the messages we receive into two big categories: most of the messages have a keyid as well as a userid, and as they are errors we want to show the userids as well. The other category is also errors, but has no userid (like NO_PUBKEY). Explicitly expressing this in code should make it a bit easier to look at, and it also helps in dropping additional fields, or just the newline at the end, consistently.

Git-Dch: Ignore
2016-05-01  don't show NO_PUBKEY warning if repo is signed by another key  (David Kalnischkies, 1 file, -4/+4)
Daniel Kahn Gillmor highlights in the bugreport that security isn't improved by having the user import additional keys – especially as importing keys securely is hard. The bugreport was initially about dropping the warning to a notice, but given the previously mentioned observation and the fact that we weren't printing a warning (or a notice) for expired or revoked keys providing a signature, we drop it completely, as the code to display a message if this was the only key is in another path – and is considered critical.

Closes: 618445
2016-05-01  gpgv: handle expired sig as worthless  (David Kalnischkies, 1 file, -0/+7)
Signatures on data can have an expiration date, too, which we hadn't previously handled explicitly (not a problem – gpg still has a non-zero exit code so apt notices the invalid signature), so the error message wasn't as helpful as it could be (aka mentioning the key which signed it).
2016-05-01  gpgv: use EXPKEYSIG instead of KEYEXPIRED  (David Kalnischkies, 1 file, -3/+3)
The upstream documentation says about KEYEXPIRED: "This status line is not very useful". Indeed, it doesn't mention which key is expired, and the documentation suggests using the other message, which does.
2016-04-27  refactored no_proxy code to work regardless of where https proxy is set  (Patrick Cable, 1 file, -6/+6)
When using the https transport mechanism, $no_proxy is ignored if apt is getting its proxy information from $https_proxy (as opposed to Acquire::https::Proxy somewhere in the apt config). If the source of proxy information is Acquire::https::Proxy set in apt.conf (or apt.conf.d), then $no_proxy is honored.
2016-04-25  don't ask server if we have entire file in partial/  (David Kalnischkies, 1 file, -24/+54)
We have this situation in cases where parts of the transaction are refused (e.g. on a hashsum mismatch) and the update is rerun (e.g. in the hope that we get a mirror which is synced this time). Previously we would ask the server with an If-Range and in the best case receive a 416 in response (a less featureful server might end up giving us the entire file again, or we get the wrong file this time, giving us a hashsum mismatch…), which is a waste of time if we already know by checking the hashsums that we got the complete and correct file.
2016-04-14  allow uncompressed files to be empty in store again  (David Kalnischkies, 1 file, -1/+1)
With the previous fix for the file method applied we can again hit repositories which contain uncompressed empty files, which since the introduction of the central store: method wasn't accounted for anymore, as we forbid empty compressed files.
2016-04-14  fix Alt-Filename handling of file method  (David Kalnischkies, 1 file, -1/+1)
A silly off-by-one error in the stripping of the extension to check for the uncompressed filename was introduced in an attempt to support all compressions in commit a09f6eb8fc67cd2d836019f448f18580396185e5. Fixing this also highlights mistakes in the handling of the Alt-Filename in libapt which would cause apt to remove the file from the repository (if root has the needed rights – aka the disk isn't read-only or similar).
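For illustration, a minimal sketch of stripping a compression extension correctly – forgetting to account for the dot is the classic off-by-one here. The helper is invented for the example, not the code from this commit:

    #include <string>

    // Illustrative: strip ".<ext>" from the end of a filename. The suffix
    // includes the dot; dropping it from the length calculation leaves a stray
    // "." behind and makes the uncompressed-name lookup fail.
    static std::string StripExtension(std::string const &file, std::string const &ext)
    {
       std::string const suffix = "." + ext;
       if (file.size() > suffix.size() &&
           file.compare(file.size() - suffix.size(), suffix.size(), suffix) == 0)
          return file.substr(0, file.size() - suffix.size());
       return file;
    }

    // StripExtension("Packages.xz", "xz") == "Packages"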
2016-03-28  Allow lowering trust level of a hash via config  (Julian Andres Klode, 1 file, -12/+12)
Introduces APT::Hashes::<NAME> with entries Untrusted and Weak which can be set to true to cause the hash to be treated as untrusted and/or weak.
2016-03-22  handle gpgv's weak-digests ERRSIG  (David Kalnischkies, 1 file, -7/+50)
Our own gpgv method can declare a digest algorithm as untrusted and handles these as worthless signatures. If gpgv itself considers a digest untrusted (which is called weak in official terminology), as it e.g. does for MD5 in recent versions, we should handle it in the same way. To check this we use the most uncommon still fully trusted hash as a configurable one, via a hidden config option, to toggle through all three states a hash can be in.
2016-03-21  properly check for "all good sigs are weak"  (David Kalnischkies, 1 file, -9/+14)
Using erase(pos) is invalid in our case here as pos must be a valid and dereferenceable iterator, which isn't the case for an end-iterator (like if we had no good signature). The problem runs deeper still though, as VALIDSIG is a keyid while GOODSIG is just a longid, so comparing them will always fail.

Closes: 818910
2016-03-16  Make the weak signature message less ambiguous  (Julian Andres Klode, 1 file, -1/+1)
There was a complaint that, in the previous message, the key fingerprint could be mistaken for a SHA1 digest due to the (SHA1) after it.

Gbp-Dch: ignore
2016-03-16  methods/gpgv: Rewrite error handling and message  (Julian Andres Klode, 1 file, -19/+50)
This should be easy to extend in the future and allow us to simplify the error handling cases somewhat.

Thanks: Ron Lee for wording suggestions
2016-03-15  methods/gpgv: Warn about SHA1 (and RIPEMD-160)  (Julian Andres Klode, 1 file, -3/+29)
We will drop support for those in the future. Also adjust the std::array to be a std::vector, as that's easier to maintain.
2016-03-15  apt-pkg/acquire-worker.cc: Introduce 104 Warning message  (Julian Andres Klode, 1 file, -0/+8)
This can be used by workers to send warnings to the main program. The messages will be passed to _error->Warning() by APT with the URI prepended. We are not going to make that really public now, as the interface might change a bit.
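The acquire methods talk to the main process over a line-based text protocol of numbered messages. A hedged sketch of what emitting such a 104 message from a worker could look like; the "URI"/"Message" field names and the helper are assumptions for illustration, not the documented interface:

    #include <iostream>
    #include <string>

    // Illustrative: a worker writes a status message on stdout; the main
    // process would turn it into _error->Warning() with the URI prepended.
    static void SendWarning(std::string const &uri, std::string const &text)
    {
       std::cout << "104 Warning\n"
                 << "URI: " << uri << "\n"
                 << "Message: " << text << "\n"
                 << std::endl;       // blank line terminates the message
    }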
2016-03-15  methods/gpgv: Correctly handle weak signatures with multiple keys  (Julian Andres Klode, 1 file, -1/+6)
We added weak signatures to BadSigners, meaning that a Release file signed by both a weak signature and a strong signature would be rejected, preventing people from migrating from DSA to RSA keys in a sane way. Instead of using BadSigners, treat weak signatures like expired keys: they are not good signatures, and they are worthless.

Gbp-Dch: ignore
2016-03-14  methods/gpgv: Reject weak digest algorithms  (Julian Andres Klode, 1 file, -0/+16)
This keeps a list of weak digest algorithms. For now, only MD5 is disabled, as disabling SHA1 would break too many repos.
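A minimal sketch of keeping that list in one place and checking the algorithm name reported by gpgv against it; the names below are invented for the example:

    #include <algorithm>
    #include <string>
    #include <vector>

    // Illustrative: digests considered too weak for signatures. Only MD5 for
    // now, since rejecting SHA1 as well would still break too many repositories.
    static const std::vector<std::string> WeakDigests = {"MD5"};

    static bool IsWeakDigest(std::string const &digest)
    {
       return std::find(WeakDigests.begin(), WeakDigests.end(), digest) != WeakDigests.end();
    }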
2016-03-14  Revert "Handle ERRSIG in the gpgv method like BADSIG"  (Julian Andres Klode, 1 file, -7/+0)
This reverts commit 76a71a1237d22c1990efbc19ce0e02aacf572576. That commit broke the test suite. Gbp-Dch: ignore
2016-03-14  Handle ERRSIG in the gpgv method like BADSIG  (Julian Andres Klode, 1 file, -0/+7)
ERRSIG is created whenever a key uses an unknown/weak digest algorithm, for example. This allows us to report a more useful error than just "unknown apt-key error.":

  The following signatures were invalid: ERRSIG 13B00F1FD2C19886 1 2 01 1457609403 5

While still not being the best reportable error message, it's better than an unknown apt-key error and hopefully redirects users to complain to their repository owners.
2016-02-04  rred: If there were I/O errors, fail  (Julian Andres Klode, 1 file, -0/+5)
We basically ignored errors from writing and flushing; let's not do that.
2016-01-26  act on various suggestions from cppcheck  (David Kalnischkies, 2 files, -1/+6)
Reported-By: cppcheck
Git-Dch: Ignore
2016-01-12  Only enable pipelining if server is HTTP/1.1  (Julian Andres Klode, 2 files, -1/+10)
Just enabling it for everyone sometimes breaks with HTTP/1.0 servers and proxies.

Closes: #810796
2016-01-08  allow pdiff bootstrap from all supported compressors  (David Kalnischkies, 1 file, -2/+2)
There is no reason to enforce that the file we start the bootstrap with is compressed with a compressor which is available online. This allows us to change the on-disk format, and it also deals with repositories adding or removing support for a specific compressor.