path: root/databases
Age | Commit message | Author | Files | Lines
2015-08-25  Pullup ticket #4799 - requested by manu  (tron)  [3 files, -27/+3]
databases/openldap-smbk5pwd: build fix

Revisions pulled up:
- databases/openldap-smbk5pwd/Makefile 1.18
- databases/openldap/distinfo 1.100 patch
- databases/openldap/patches/patch-de deleted
---
Module Name: pkgsrc
Committed By: manu
Date: Mon Aug 10 12:47:51 UTC 2015

Modified Files:
	pkgsrc/databases/openldap: distinfo
	pkgsrc/databases/openldap-smbk5pwd: Makefile
Removed Files:
	pkgsrc/databases/openldap/patches: patch-de

Log Message:
Use OpenSSL libcrypto instead of libdes on NetBSD

All recent NetBSD releases now have an OpenSSL recent enough so that the DES symbols required by slapo-smbk5pwd can be found in OpenSSL's libcrypto. We therefore do not need to link with -ldes anymore, especially since it now causes a build failure.
2015-07-19  Pullup ticket #4776 - requested by manu  (tron)  [5 files, -4/+158]
databases/mysql56-client: bug fix patch
databases/mysql56-server: bug fix patch

Revisions pulled up:
- databases/mysql56-client/Makefile 1.17
- databases/mysql56-client/distinfo 1.25
- databases/mysql56-client/patches/patch-include_violite.h 1.1
- databases/mysql56-client/patches/patch-vio_viosslfactories.c 1.1
- databases/mysql56-server/Makefile 1.25
---
Module Name: pkgsrc
Committed By: manu
Date: Tue Jul 14 12:09:24 UTC 2015

Modified Files:
	pkgsrc/databases/mysql56-client: Makefile distinfo
Added Files:
	pkgsrc/databases/mysql56-client/patches: patch-include_violite.h patch-vio_viosslfactories.c

Log Message:
Restore SSL functionality with OpenSSL 1.0.1p

With the OpenSSL 1.0.1p upgrade, DH parameters below 1024 bits are now refused. MySQL hardcodes 512-bit DH parameters and will therefore fail to run SSL connections with OpenSSL 1.0.1p.

Apply fix from upstream:
https://github.com/mysql/mysql-server/commit/866b988a76e8e7e217017a7883a52a12ec5024b9
---
Module Name: pkgsrc
Committed By: manu
Date: Tue Jul 14 16:38:56 UTC 2015

Modified Files:
	pkgsrc/databases/mysql56-server: Makefile

Log Message:
Restore SSL functionality with OpenSSL 1.0.1p (revision bump)

This change just bumps PKGREVISION after patches were added in mysql56-client/patches which impact mysql56-server. For the record, the commit log of those patches:

> With the OpenSSL 1.0.1p upgrade, DH parameters below 1024 bits are now
> refused. MySQL hardcodes 512-bit DH parameters and will therefore
> fail to run SSL connections with OpenSSL 1.0.1p.
>
> Apply fix from upstream:
> https://github.com/mysql/mysql-server/commit/866b988a76e8e7e217017a7883a52a12ec5024b9
2015-06-27  Fix build with Perl 5.22.  (joerg)  [2 files, -1/+35]
2015-06-26  Version 0.7 - 2015-05-19  (rodent)  [3 files, -7/+13]
* Fix WINDOW and HAVING params order in Select
* Add window functions
* Add filter and within group to aggregate
* Add limitstyle with 'offset' and 'limit'
* Add Lateral
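The new window-function support generates SQL of the kind sketched below. As an illustration only (this is not the python-sql API itself), here is the same OVER/PARTITION BY construct run against SQLite 3.25+ through Python's sqlite3; the table and column names are made up:

```python
import sqlite3

# Each row gets its partition's running total alongside the raw value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 10), ("east", 30), ("west", 20)])
rows = conn.execute(
    "SELECT region, amount, "
    "SUM(amount) OVER (PARTITION BY region) AS region_total "
    "FROM sales ORDER BY region, amount"
).fetchall()
print(rows)  # [('east', 10, 40), ('east', 30, 40), ('west', 20, 20)]
```

Window functions require SQLite 3.25 or newer; older builds reject the OVER clause at parse time.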
2015-06-23  Update to 3.0.4  (ryoon)  [3 files, -11/+11]
Changelog: MongoDB 3.0.4 is released June 6, 2015 MongoDB 3.0.4 is out and is ready for production deployment. This release contains only fixes since 3.0.3, and is a recommended upgrade for all 3.0 users. Fixed in this release: SERVER-17923 Creating/dropping multiple background indexes on the same collection can cause fatal error on secondaries SERVER-18079 Large performance drop with documents > 16k on Windows SERVER-18190 Secondary reads block replication SERVER-18213 Lots of WriteConflict during multi-upsert with WiredTiger storage engine SERVER-18316 Database with WT engine fails to recover after system crash SERVER-18475 authSchemaUpgrade fails when the system.users contains non MONGODB-CR users SERVER-18629 WiredTiger journal system syncs wrong directory SERVER-18822 Sharded clusters with WiredTiger primaries may lose writes during chunk migration Announcing MongoDB 3.0 and Bug Hunt Winners March 3, 2015 Today MongoDB 3.0 is generally available; you can download now. Our community was critical to ensuring the quality of the release. Thank you to everyone who participated in our 3.0 Bug Hunt. From the submissions, we've selected winners based on the user impact and severity of the bugs found. First Prize Mark Callaghan, Member of Technical Staff, Facebook During the 3.0 release cycle, Mark submitted 10 bug reports and collaborated closely with the MongoDB engineering team to debug the issues he uncovered. As a first place winner, Mark will receive a free pass to MongoDB World in New York City on June 1-2, including a front row seat to the keynote sessions. Mark was also eligible to receive a $1,000 Amazon gift card but opted to donate the award to a charity. We are donating $1,000 to CodeStarters.org in his name. Honorable Mentions Nick Judson, Conevity Nick submitted SERVER-17299, uncovering excessive memory allocation on Windows when using "snappy" compression in WiredTiger. 
Koshelyaev Konstantin, RTEC Koshelyaev submitted SERVER-16664, which uncovered a memory overflow in WiredTiger when using "zlib" compression. Tim Callaghan, Crunchtime! In submitting SERVER-16867, Tim found an uncaught WriteConflict exception affecting replicated writes during insert-heavy workloads. Nathan Arthur, PreEmptive Solutions Nathan submitted SERVER-16724, which found an issue with how collection metadata is persisted. Thijs Cadier, AppSignal Thijs submitted SERVER-16197, which revealed a bug in the build system interaction with the new MongoDB tools. Nick, Koshelyaev, Tim, Nathan, and Thijs will also receive tickets to MongoDB World in New York City on June 1-2 (with reserved front-row seat for keynote sessions), $250 Amazon Gift Cards, and MongoDB t-shirts. Congratulations to the winners and thanks to everyone who downloaded, tested and gave feedback on the release candidates.
2015-06-22  Substitute hardcoded paths to compiler wrappers. Fixes CHECK_WRKREF builds.  (jperkin)  [3 files, -4/+24]
2015-06-22  Update ruby-activerecord32 to 3.2.22.  (taca)  [2 files, -6/+5]
## Rails 3.2.22 (Jun 16, 2015) ##

* No changes.
2015-06-18  Changes:  (adam)  [25 files, -55/+60]
This release primarily fixes issues not successfully fixed in prior releases. It should be applied as soon as possible by all users of major versions 9.3 and 9.4. Other users should apply it at the next available downtime.

Crash Recovery Fixes: Earlier update releases attempted to fix an issue in PostgreSQL 9.3 and 9.4 with "multixact wraparound", but failed to account for issues doing multixact cleanup during crash recovery. This could cause servers to be unable to restart after a crash. As such, all users of 9.3 and 9.4 should apply this update as soon as possible.
2015-06-18  Refresh the lists of man pages. Closes PR 38998.  (dholland)  [8 files, -18/+109]
(Because of the partitioning into client and server packages, the man pages have to be partitioned to match; this interferes with the configure script's handling of them so the list of pages ends up hardcoded in these patches. And it seems the lists haven't been updated since the first mysql 5.x package.)
2015-06-14  Update to 1.48:  (wiz)  [2 files, -7/+6]
1.48 2015-06-12 - Switched to a production version. (ISHIGAKI) 1.47_05 2015-05-08 - Updated to SQLite 3.8.10 1.47_04 2015-05-02 - Used MY_CXT instead of a global variable 1.47_03 2015-04-16 - Added :all to EXPORT_TAGS in ::Constants 1.47_02 2015-04-16 - Updated to SQLite 3.8.9 - Added DBD::SQLite::Constants, from which you can import any "useful" constants into your applications. - Removed previous Cygwin hack as SQLite 3.8.9 compiles well again - Now create_function/aggregate accepts an extra bit (SQLITE_DETERMINISTIC) for better performance. 1.47_01 2015-02-17 *** (EXPERIMENTAL) CHANGES THAT MAY POSSIBLY BREAK YOUR OLD APPLICATIONS *** - Commented OPTIMIZE out of WriteMakefile (RT #94207). If your perl is not compiled with -O2, your DBD::SQLite may possibly behave differently under some circumstances. (This release is to find notable examples from CPAN Testers). - Set THREADSAFE to 0 under Cygwin to cope with an upstream regression since 3.8.7 (GH #7). - Updated to SQLite 3.8.8.2 - Resolved #35449: Fast DBH->do (ptushnik, ISHIGAKI)
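The SQLITE_DETERMINISTIC bit that create_function/aggregate now accepts tells SQLite the function always returns the same output for the same inputs, so the engine may factor repeated calls out of a query. As a sketch of the same SQLite flag, here it is through Python's sqlite3 (Python 3.8+), not the Perl DBD::SQLite API:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# deterministic=True maps to SQLITE_DETERMINISTIC: SQLite may cache the
# result and use the function in more query contexts.
conn.create_function("twice", 1, lambda x: x * 2, deterministic=True)
result = conn.execute("SELECT twice(21)").fetchone()[0]
print(result)  # 42
```

Declaring a non-deterministic function (e.g. one reading the clock) as deterministic would produce wrong query results, which is why the flag is opt-in.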
2015-06-13  Add php-mongo  (fhajny)  [1 file, -1/+2]
2015-06-13  Set maintainership to bartoszkuzma, didn't notice his wip/php-mongo before.  (fhajny)  [1 file, -2/+2]
2015-06-13  Import the PECL mongo 1.6.9 module as databases/php-mongo.  (fhajny)  [4 files, -0/+35]
Provides an interface for communicating with the Mongo database in PHP.
2015-06-12  Recursive PKGREVISION bump for all packages mentioning 'perl',  (wiz)  [154 files, -243/+307]
having a PKGNAME of p5-*, or depending such a package, for perl-5.22.0.
2015-06-10  Update databases/py-peewee to 2.6.1.  (fhajny)  [3 files, -8/+13]
2.6.1
- #606, support self-referential joins with prefetch and aggregate_rows() methods.
- #588, accommodate changes in SQLite's PRAGMA index_list() return value.
- #607, fixed bug where pwiz was not passing table names to introspector.
- #591, fixed bug with handling of named cursors in older psycopg2 versions.
- Removed some cruft from the APSWDatabase implementation.
- Added CompressedField and AESEncryptedField
- #609, #610, added Django-style foreign key ID lookup.
- Added support for Hybrid Attributes (cool idea courtesy of SQLAlchemy).
- Added upsert keyword argument to the Model.save() function (SQLite only).
- #587, added support for ON CONFLICT SQLite clause for INSERT and UPDATE queries.
- #601, added hook for programmatically defining table names.
- #581, #611, support connection pools with playhouse.db_url.connect().
- Added Contributing section to docs.
2.6.0
- get_or_create() now returns a 2-tuple consisting of the model instance and a boolean indicating whether the instance was created. The function now behaves just like the Django equivalent.
- #574, better support for setting the character encoding on Postgresql database connections. Thanks @klen!
- Improved implementation of get_or_create().
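The get_or_create() change makes it return an (instance, created) 2-tuple. A minimal sketch of that contract using only the standard library (sqlite3 stands in for a peewee model; the table and function names are illustrative, not peewee's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT PRIMARY KEY)")

def get_or_create(name):
    """Return (name, created): fetch an existing row or insert a new one."""
    row = conn.execute("SELECT name FROM users WHERE name = ?",
                       (name,)).fetchone()
    if row:
        return row[0], False          # already existed
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    return name, True                 # freshly created

print(get_or_create("alice"))  # ('alice', True) on the first call
print(get_or_create("alice"))  # ('alice', False) thereafter
```

The boolean lets callers distinguish "found" from "created" without a second query, which is exactly what the Django-style API offers.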
2015-06-10  Update databases/mongo-c-driver to 1.1.7.  (fhajny)  [3 files, -9/+13]
mongo-c-driver 1.1.7 - Thread-safe use of Cyrus SASL library. - Experimental support for building with CMake and SASL. - Faster reconnection to replica set with some hosts down. - Crash iterating a cursor after reconnecting to a replica set. - Unchecked errors decoding invalid UTF-8 in MongoDB URIs. - Fix error reporting from mongoc_client_get_database_names. mongo-c-driver 1.1.6 - mongoc_bulk_operation_execute now coalesces consecutive update operations into a single message to a MongoDB 2.6+ server, yielding huge performance gains. Same for remove operations. (Inserts were always coalesced.) - Large numbers of insert operations are now properly batched according to number of documents and total data size. - GSSAPI / Kerberos auth now works. - The driver no longer tries three times in vain to reconnect to a primary, so socketTimeoutMS and connectTimeoutMS now behave closer to what you expect for replica sets with down members. A full fix awaits 1.2.0. - mongoc_matcher_t now supports basic subdocument and array matching mongo-c-driver 1.1.5 - The fsync and j write concern flags now imply acknowledged writes - Prevent using fsync or j with conflicting w=0 write concern - Obey socket timeout consistently in TLS/SSL mode - Return an error promptly after a network hangup in TLS mode - Prevent crash using SSL in FIPS mode - Always return NULL from mongoc_database_get_collection_names on error - Fix version check for GCC 5 and future versions of Clang - Fix warnings and errors building on various platforms - Add configure flag to enable/disable shared memory performance counters - Minor docs improvements and fix links from C Driver docs to Libbson docs
2015-06-10  fix buildlink  (wiedi)  [1 file, -3/+3]
2015-06-09  Remove stale patch file.  (fhajny)  [1 file, -23/+0]
2015-06-09  Update databases/py-barman to 1.4.1.  (fhajny)  [3 files, -7/+21]
Version 1.4.1 - 05 May 2015
* Fix for WAL archival stopping working if the first backup is EMPTY (Closes: #64)
* Fix exception during error handling in Barman recovery (Closes: #65)
* After a backup, limit cron activity to WAL archiving only (Closes: #62)
* Improved robustness and error reporting of the backup delete command (Closes: #63)
* Fix computation of WAL production ratio as reported in the show-backup command
* Improved management of the xlogdb file, which is now correctly fsynced when updated. Also, the rebuild-xlogdb command now operates on a temporary new file, which overwrites the main one when finished.
* Add unit tests for dateutil module compatibility
* Modified Barman version following PEP 440 rules and added support for tests in Python 3.4
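The rebuild-xlogdb behaviour described above is the classic write-to-temp, fsync, atomic-rename pattern: readers see either the complete old file or the complete new one, never a partial write. A generic Python sketch of that pattern (not Barman's actual code; the names are illustrative):

```python
import os
import tempfile

def atomic_rewrite(path, data):
    """Replace path with data so readers never see a half-written file."""
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force bytes to disk before the rename
        os.replace(tmp, path)     # atomic replacement on POSIX
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on any failure
        raise

target = os.path.join(tempfile.mkdtemp(), "xlogdb")
atomic_rewrite(target, "old\n")
atomic_rewrite(target, "new\n")
print(open(target).read())  # new
```

The fsync before the rename matters: without it, a crash can leave the rename durable while the file contents are not.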
2015-06-09  Update databases/redis to 3.0.2.  (fhajny)  [4 files, -23/+8]
--[ Redis 3.0.2 ] Release date: 4 Jun 2015
Upgrade urgency: HIGH for Redis because of a security issue. LOW for Sentinel.
* [FIX] Critical security issue fix by Ben Murphy: http://t.co/LpGTyZmfS7
* [FIX] SMOVE reply fixed when src and dst keys are the same. (Glenn Nethercutt)
* [FIX] Lua cmsgpack lib updated to support str8 type. (Sebastian Waisbrot)
* [NEW] ZADD support for options: NX, XX, CH. See new doc at redis.io. (Salvatore Sanfilippo)
* [NEW] Sentinel: CKQUORUM and FLUSHCONFIG commands back ported. (Salvatore Sanfilippo and Bill Anderson)
--[ Redis 3.0.1 ] Release date: 5 May 2015
Upgrade urgency: LOW for Redis and Cluster, MODERATE for Sentinel.
* [FIX] Sentinel memory leak due to hiredis fixed. (Salvatore Sanfilippo)
* [FIX] Sentinel memory leak on duplicated instance. (Charsyam)
* [FIX] Redis crash on Lua reaching output buffer limits. (Yossi Gottlieb)
* [FIX] Sentinel flushes config on +slave events. (Bill Anderson)
2015-06-09  Update databases/py-cassandra-driver to 2.5.1.  (fhajny)  [2 files, -7/+8]
- Fix thread safety in DC-aware load balancing policy (PYTHON-297)
- Fix race condition in node/token rebuild (PYTHON-298)
- Set and send serial consistency parameter (PYTHON-299)
2015-06-08  Extend SunOS epoll quirk to fix build on recent Illumos platforms.  (fhajny)  [1 file, -2/+2]
2015-06-08  Changes:  (adam)  [12 files, -28/+39]
* File Permissions Fix
* Have pg_get_functiondef() show the LEAKPROOF property
* Make pushJsonbValue() function push jbvBinary type
* Allow building with threaded Python on OpenBSD
2015-06-07  Revert unintentional change.  (joerg)  [1 file, -2/+1]
2015-06-07  Update PostgreSQL 9.3 to 9.3.8:  (joerg)  [4 files, -8/+13]
- Avoid failures while fsync'ing data directory during crash restart
- Fix pg_get_functiondef() to show functions' LEAKPROOF property, if set
- Remove configure's check prohibiting linking to a threaded libpython on OpenBSD
- Allow libpq to use TLS protocol versions beyond v1
2015-06-07  Update to 0.47.  (gdt)  [3 files, -7/+8]
Upstream changes are mainly housekeeping and minor build system changes not visible to pkgsrc users, plus the usual bugfixes. Some procedures previously advertised for deprecation have been dropped, and some new ones were added to the deprecation list, notably dbcoltypes.
2015-06-05  Update hiredis to 0.13.1  (wiedi)  [5 files, -67/+41]
### 0.13.1 - May 03, 2015
This is a bug fix release. The new `reconnect` method introduced new struct members, which clashed with pre-defined names in pre-C99 code. Another commit forced C99 compilation just to make it work, but of course this is not desirable for outside projects. Other non-C99 code can now use hiredis as usual again. Sorry for the inconvenience.
* Fix memory leak in async reply handling (Salvatore Sanfilippo)
* Rename struct member to avoid name clash with pre-C99 code (Alex Balashov, ncopa)
### 0.13.0 - April 16, 2015
This release adds a minimal Windows compatibility layer. The parser, standalone since v0.12.0, can now be compiled on Windows (and thus used in other client libraries as well).
* Windows compatibility layer for parser code (tzickel)
* Properly escape data printed to PKGCONF file (Dan Skorupski)
* Fix tests when assert() undefined (Keith Bennett, Matt Stancliff)
* Implement a reconnect method for the client context, this changes the structure of `redisContext` (Aaron Bedra)
### 0.12.1 - January 26, 2015
* Fix `make install`: DESTDIR support, install all required files, install PKGCONF in proper location
* Fix `make test` as 32 bit build on 64 bit platform
### 0.12.0 - January 22, 2015
* Add optional KeepAlive support
* Try again on EINTR errors
* Add libuv adapter
* Add IPv6 support
* Remove possibility of multiple close on same fd
* Add ability to bind source address on connect
* Add redisConnectFd() and redisFreeKeepFd()
* Fix getaddrinfo() memory leak
* Free string if it is unused (fixes memory leak)
* Improve redisAppendCommandArgv performance 2.5x
* Add support for SO_REUSEADDR
* Fix redisvFormatCommand format parsing
* Add GLib 2.0 adapter
* Refactor reading code into read.c
* Fix errno error buffers to not clobber errors
* Generate pkgconf during build
* Silence _BSD_SOURCE warnings
* Improve digit counting for multibulk creation
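Several items above (redisAppendCommandArgv performance, redisvFormatCommand parsing, "digit counting for multibulk creation") concern building the RESP "multi bulk" wire format that hiredis sends to the server. A sketch of that encoding in Python for illustration; hiredis does the same work in C:

```python
def format_command(*args):
    """Encode a Redis command as a RESP multi-bulk array of bulk strings."""
    out = [b"*%d\r\n" % len(args)]          # array header: argument count
    for a in args:
        b = a.encode() if isinstance(a, str) else a
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))  # bulk string: length, payload
    return b"".join(out)

wire = format_command("SET", "key", "value")
print(wire)  # b'*3\r\n$3\r\nSET\r\n$3\r\nkey\r\n$5\r\nvalue\r\n'
```

The "digit counting" item refers to sizing those `*N`/`$N` length prefixes: computing the decimal digit count up front lets the C code allocate the output buffer exactly once.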
2015-06-05  Update to 2.1.7.  (gdt)  [3 files, -8/+19]
Upstream changes (plus many bug fixes):
PostGIS 2.1.7 2015/03/30
PostGIS 2.1.6 2015/03/20
- #3000, Ensure edge splitting and healing algorithms use indexes
- #3048, Speed up geometry simplification (J.Santana @ CartoDB)
- #3050, Speed up geometry type reading (J.Santana @ CartoDB)
PostGIS 2.1.5 2014/12/18
- #2933, Speedup construction of large multi-geometry objects
2015-06-03  Fix build problem on Ruby 2.2 and later.  (taca)  [2 files, -1/+26]
2015-06-03  This package works on Ruby 2.2.  (taca)  [1 file, -3/+1]
2015-06-03  Update ruby-sequel to 4.23.0.  (taca)  [3 files, -13/+20]
=== 4.23.0 (2015-06-01) * Make dataset.call_sproc(:insert) work in the jdbc adapter (flash-gordon) (#1013) * Add update_refresh plugin, for refreshing a model instance when updating (jeremyevans) * Add delay_add_association plugin, for delaying add_* method calls on new objects until after saving the object (jeremyevans) * Add validate_associated plugin, for validating associated objects when validating the current object (jeremyevans) * Make Postgres::JSONBOp#[] and #get_text return JSONBOp instances (jeremyevans) (#1005) * Remove the fdbsql, jdbc/fdbsql, and openbase adapters (jeremyevans) * Database#transaction now returns block return value if :rollback=>:always is used (jeremyevans) * Allow postgresql:// connection strings as aliases to postgres://, for compatibility with libpq (jeremyevans) (#1004) * Make Model#move_to in the list plugin handle out-of-range targets without raising an exception (jeremyevans) (#1003) * Make Database#add_named_conversion_proc on PostgreSQL handle conversion procs for enum types (celsworth) (#1002) === 4.22.0 (2015-05-01) * Deprecate the db2, dbi, fdbsql, firebird, jdbc/fdbsql, informix, and openbase adapters (jeremyevans) * Avoid hash allocations and rehashes (jeremyevans) * Don't silently ignore :jdbc_properties Database option in jdbc adapter (jeremyevans) * Make tree plugin set reciprocal association for children association correctly (lpil, jeremyevans) (#995) * Add Sequel::MassAssignmentRestriction exception, raised for mass assignment errors in strict mode (jeremyevans) (#994) * Handle ODBC::SQL_BIT type as boolean in the odbc adapter, fixing boolean handling on odbc/mssql (jrgns) (#993) * Make :auto_validations plugin check :default entry instead of :ruby_default entry for checking existence of default value (jeremyevans) (#990) * Adapters should now set :default schema option to nil when adapter can determine that the value is nil (jeremyevans) * Do not add a schema :max_length entry for a varchar(max) column on MSSQL 
(jeremyevans) * Allow :default value for PostgreSQL array columns to be a ruby array when using the pg_array extension (jeremyevans) (#989) * Add csv_serializer plugin for serializing model objects to and from csv (bjmllr, jeremyevans) (#988) * Make Dataset#to_hash and #to_hash_groups handle single array argument for model datasets (jeremyevans) * Handle Model#cancel_action in association before hooks (jeremyevans) * Use a condition variable instead of busy waiting in the threaded connection pools on ruby 1.9+ (jeremyevans) * Use Symbol#to_proc instead of explicit blocks (jeremyevans) === 4.21.0 (2015-04-01) * Support :tsquery and :tsvector options in Dataset#full_text_search on PostgreSQL, for using existing tsquery/tsvector expressions (jeremyevans) * Fix TinyTds::Error being raised when trying to cancel a query on a closed connection in the tinytds adapter (jeremyevans) * Add GenericExpression#!~ for inverting =~ on ruby 1.9 (similar to inverting a hash) (jeremyevans) (#979) * Add GenericExpression#=~ for equality, inclusion, and pattern matching (similar to using a hash) (jeremyevans) (#979) * Add Database#add_named_conversion_proc on PostgreSQL to make it easier to add conversion procs for types by name (jeremyevans) * Make Sequel.pg_jsonb return JSONBOp instances instead of JSONOp instances when passed other than Array or Hash (jeremyevans) (#977) * Demodulize default root name in json_serializer plugin (janko-m) (#968) * Make Database#transaction work in after_commit/after_rollback blocks (jeremyevans)
2015-06-03  Update ruby-pg to 0.18.2.  (taca)  [2 files, -6/+6]
== v0.18.2 [2015-05-14] Michael Granger <ged@FaerieMUD.org>
Enhancements:
- Allow URI connection string (thanks to Chris Bandy)
Bugfixes:
- Speedups and fixes for PG::TextDecoder::Identifier and quoting behavior
- Revert addition of PG::Connection#hostaddr [#202].
- Fix decoding of fractional timezones and timestamps [#203]
- Fixes for non-C99 compilers
- Avoid possible symbol name clash when linking against static libpq.
2015-06-03  Update ruby-moneta to 0.8.0.  (taca)  [3 files, -8/+16]
0.8.0
* Rename Moneta::Adapters::Mongo to Moneta::Adapters::MongoOfficial
* Add Moneta::Adapters::MongoMoped
* Drop Ruby 1.8 support
2015-06-03  Use "editline" package from pkgsrc to fix the build under NetBSD.  (tron)  [1 file, -1/+7]
2015-06-03  Fix typo in comment of patch.  (ryoon)  [2 files, -4/+4]
2015-06-01  Update ruby-do_sqlite3 to 0.10.16.  (taca)  [2 files, -8/+7]
No change except version.
2015-06-01  Update ruby-do_postgres to 0.10.16.  (taca)  [2 files, -7/+7]
## 0.10.16 2015-05-17 * Fix compile issue with do_postgres on stock OS X Ruby
2015-06-01  Update ruby-do_mysql to 0.10.16.  (taca)  [2 files, -7/+7]
No change except version.
2015-06-01  Update ruby-data_objects to 0.10.16.  (taca)  [2 files, -6/+6]
No change except version.
2015-06-01  Changes 5.6.25:  (adam)  [4 files, -24/+122]
Functionality Added or Changed * MySQL Enterprise Firewall operates on parser states and does not work well together with the query cache, which circumvents the parser. MySQL Enterprise Firewall now checks whether the query cache is enabled. If so, it displays a message that the query cache must be disabled and does not load. * my_print_defaults now masks passwords. To display passwords in cleartext, use the new --show option. * MySQL distributions now include an innodb_stress suite of test cases. Thanks to Mark Callaghan for the contribution. Bugs Fixed * InnoDB; Partitioning: The CREATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows the correct table creation time for partitioned InnoDB tables. The CREATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows the correct partition creation time for a partition of partitioned InnoDB tables. The UPDATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows when a partitioned InnoDB table was last updated by an INSERT, DELETE, or UPDATE. The UPDATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows when a partition of a partitioned InnoDB table was last updated. * InnoDB: An assertion was raised on shutdown due to XA PREPARE transactions holding explicit locks. * InnoDB: The strict_* forms of innodb_checksum_algorithm settings (strict_none, strict_innodb, and strict_crc32) caused the server to halt when a non-matching checksum was encountered, even though the non-matching checksum was valid. For example, with innodb_checksum_algorithm=strict_crc32, encountering a valid innodb checksum caused the server to halt. Instead of halting the server, a message is now printed to the error log and the page is accepted as valid if it matches an innodb, crc32 or none checksum. * InnoDB: The memcached set command permitted a negative expire time value. Expire time is stored internally as an unsigned integer. A negative value would be converted to a large number and accepted. 
The maximum expire time value is now restricted to INT_MAX32 to prevent negative expire time values. * InnoDB: Removal of a foreign key object from the data dictionary cache during error handling caused the server to exit. * InnoDB: SHOW ENGINE INNODB STATUS output showed negative reservation and signal count values due to a counter overflow error. * InnoDB: Failure to check the status of a cursor transaction read-only option before reusing the cursor transaction for a write operation resulted in a server exit during a memcached workload. * InnoDB: MDL locks taken by memcached clients caused a MySQL Enterprise Backup FLUSH TABLES WITH READ LOCK operation to hang. * InnoDB: Estimates that were too low for the size of merge chunks in the result sorting algorithm caused a server exit. * InnoDB: For full-text searches, the optimizer could choose an index that does not produce correct relevancy rankings. * Partitioning: When creating a partitioned table, partition-level DATA DIRECTORY or INDEX DIRECTORY option values that contained an excessive number of characters were handled incorrectly. * Partitioning: Executing an ALTER TABLE on a partitioned table on which a write lock was in effect could cause subsequent SQL statements on this table to fail. * Replication: When binary logging was enabled, using stored functions and triggers resulting in a long running procedure that inserted many records caused the memory use to increase rapidly. This was due to memory being allocated per variable. The fix ensures that in such a situation, memory is allocated once and the same memory is reused. * Replication: If an error was encountered while adding a GTID to the received GTID set, the log lock was not being correctly released. This could cause a deadlock. more...
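The memcached expire-time fix above is a classic signed-to-unsigned conversion hazard: a negative value stored into an unsigned 32-bit field wraps to a huge number instead of being rejected. A Python sketch of the wraparound and of a clamp to INT_MAX32; treating negatives as 0 is an assumption for illustration, not the actual MySQL source:

```python
INT_MAX32 = 2**31 - 1

def as_uint32(n):
    # What storing into an unsigned 32-bit field effectively did:
    # -1 wraps around to 4294967295 and is silently accepted.
    return n & 0xFFFFFFFF

def clamp_expire(n):
    # Illustrative fix: cap at INT_MAX32, treat negatives as 0.
    return min(max(n, 0), INT_MAX32)

print(as_uint32(-1))        # 4294967295 -- the huge accepted value
print(clamp_expire(-1))     # 0
print(clamp_expire(2**40))  # 2147483647
```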
2015-06-01  Changes 5.5.44:  (adam)  [4 files, -23/+9]
Bugs fixed: * InnoDB; Partitioning: The CREATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows the correct table creation time for partitioned InnoDB tables. The CREATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows the correct partition creation time for a partition of partitioned InnoDB tables. The UPDATE_TIME column of the INFORMATION_SCHEMA.TABLES table now shows when a partitioned InnoDB table was last updated by an INSERT, DELETE, or UPDATE. The UPDATE_TIME column of the INFORMATION_SCHEMA.PARTITIONS table now shows when a partition of a partitioned InnoDB table was last updated. * InnoDB: An assertion was raised on shutdown due to XA PREPARE transactions holding explicit locks. * InnoDB: Removal of a foreign key object from the data dictionary cache during error handling caused the server to exit. * InnoDB: SHOW ENGINE INNODB STATUS output showed negative reservation and signal count values due to a counter overflow error. * InnoDB: Estimates that were too low for the size of merge chunks in the result sorting algorithm caused a server exit. * SHOW VARIABLES mutexes were being locked twice, resulting in a server exit. * A Provides rule in RPM .spec files misspelled “mysql-embedded” as “mysql-emdedded”. * Under certain conditions, the libedit command-line library could write outside an array boundary and cause a client program crash. * Host value matching for the grant tables could fail to use the most specific of values that contained wildcard characters. * A user with a name of event_scheduler could view the Event Scheduler process list without the PROCESS privilege. * SHOW GRANTS after connecting using a proxy user could display the password hash of the proxied user. * For a prepared statement with an ORDER BY that refers by column number to a GROUP_CONCAT() expression that has an outer reference, repeated statement execution could cause a server exit. 
* Loading corrupt spatial data into a MyISAM table could cause the server to exit during index building. * Certain queries for the INFORMATION_SCHEMA TABLES and COLUMNS tables could lead to excessive memory use when there were large numbers of empty InnoDB tables. * MySQL failed to compile using OpenSSL 0.9.8e.
2015-05-31  Make this package build on Ruby 2.2.  (taca)  [3 files, -6/+25]
2015-05-27  The PostgreSQL Global Development Group has released an update with multiple  (adam)  [20 files, -41/+67]
functionality and security fixes to all supported versions of the PostgreSQL database system, which includes minor versions 9.4.2, 9.3.7, 9.2.11, 9.1.16, and 9.0.20. The update contains a critical fix for a potential data corruption issue in PostgreSQL 9.3 and 9.4; users of those versions should update their servers at the next possible opportunity.
2015-05-25  Update to MySQL Cluster 7.4.6:  (jnemeth)  [4 files, -25/+158]
----
Changes in MySQL Cluster NDB 7.4.6 (5.6.24-ndb-7.4.6)

Bugs Fixed

During backup, loading data from one SQL node followed by repeated DELETE statements on the tables just loaded from a different SQL node could lead to data node failures. (Bug #18949230)

When an instance of NdbEventBuffer was destroyed, any references to GCI operations that remained in the event buffer data list were not freed. Now these are freed, and items from the event buffer data list are returned to the free list when purging GCI containers. (Bug #76165, Bug #20651661)

When a bulk delete operation was committed early to avoid an additional round trip, while also returning the number of affected rows, but failed with a timeout error, an SQL node performed no verification that the transaction was in the Committed state. (Bug #74494, Bug #20092754) References: See also Bug #19873609.

Changes in MySQL Cluster NDB 7.4.5 (5.6.23-ndb-7.4.5)

Bugs Fixed

In the event of a node failure during an initial node restart followed by another node start, the restart of the affected node could hang with a START_INFOREQ that occurred while invalidation of local checkpoints was still ongoing. (Bug #20546157, Bug #75916) References: See also Bug #34702.

It was found during testing that problems could arise when the node registered as the arbitrator disconnected or failed during the arbitration process. In this situation, the node requesting arbitration could never receive a positive acknowledgement from the registered arbitrator; this node also lacked a stable set of members and could not initiate selection of a new arbitrator. Now in such cases, when the arbitrator fails or loses contact during arbitration, the requesting node immediately fails rather than waiting to time out. (Bug #20538179)

DROP DATABASE failed to remove the database when the database directory contained a .ndb file which had no corresponding table in NDB. Now, when executing DROP DATABASE, NDB performs a check specifically for leftover .ndb files, and deletes any that it finds. (Bug #20480035) References: See also Bug #44529.

The maximum failure time calculation used to ensure that normal node failure handling mechanisms are given time to handle survivable cluster failures (before global checkpoint watchdog mechanisms start to kill nodes due to GCP delays) was excessively conservative, and neglected to consider that there can be at most number_of_data_nodes / NoOfReplicas node failures before the cluster can no longer survive. Now the value of NoOfReplicas is properly taken into account when performing this calculation. (Bug #20069617, Bug #20069624) References: See also Bug #19858151, Bug #20128256, Bug #20135976.

When performing a restart, it was sometimes possible to find a log end marker which had been written by a previous restart, and that should have been invalidated. Now when searching for the last page to invalidate, the same search algorithm is used as when searching for the last page of the log to read. (Bug #76207, Bug #20665205)

During a node restart, if there was no global checkpoint completed between the START_LCP_REQ for a local checkpoint and its LCP_COMPLETE_REP, it was possible for a comparison of the LCP ID sent in the LCP_COMPLETE_REP signal with the internal value SYSFILE->latestLCP_ID to fail. (Bug #76113, Bug #20631645)

When sending LCP_FRAG_ORD signals as part of master takeover, it is possible that the master is not synchronized with complete accuracy in real time, so that some signals must be dropped. During this time, the master can send a LCP_FRAG_ORD signal with its lastFragmentFlag set even after the local checkpoint has been completed. This enhancement causes this flag to persist until the start of the next local checkpoint, which causes these signals to be dropped as well. This change affects ndbd only; the issue described did not occur with ndbmtd.
(Bug #75964, Bug #20567730)

When reading and copying transporter short signal data, it was possible for the data to be copied back to the same signal with overlapping memory. (Bug #75930, Bug #20553247)

NDB node takeover code made the assumption that there would be only one takeover record when starting a takeover, based on the further assumption that the master node could never perform copying of fragments. However, this is not the case in a system restart, where a master node can have stale data and so need to perform such copying to bring itself up to date. (Bug #75919, Bug #20546899)

Cluster API: A scan operation, whether it is a single table scan or a query scan used by a pushed join, stores the result set in a buffer. The maximum size of this buffer is calculated and preallocated before the scan operation is started. This buffer may consume a considerable amount of memory; in some cases we observed a 2 GB buffer footprint in tests that executed 100 parallel scans with 2 single-threaded (ndbd) data nodes. This memory consumption was found to scale linearly with additional fragments. A number of root causes, listed here, were discovered that led to this problem:

Result rows were unpacked to full NdbRecord format before they were stored in the buffer. If only some but not all columns of a table were selected, the buffer contained empty space (essentially wasted).

Due to the buffer format being unpacked, VARCHAR and VARBINARY columns always had to be allocated for the maximum size defined for such columns.

BatchByteSize and MaxScanBatchSize values were not taken into consideration as a limiting factor when calculating the maximum buffer size.

These issues became more evident in NDB 7.2 and later MySQL Cluster release series. This was due to the fact that buffer size is scaled by BatchSize, and that the default value for this parameter was increased fourfold (from 64 to 256) beginning with MySQL Cluster NDB 7.2.1.
This fix causes result rows to be buffered using the packed format instead of the unpacked format; a buffered scan result row is now not unpacked until it becomes the current row. In addition, BatchByteSize and MaxScanBatchSize are now used as limiting factors when calculating the required buffer size.

Also as part of this fix, refactoring has been done to separate the handling of buffered (packed) result sets from that of unbuffered result sets, and to remove code that had been unused since NDB 7.0 or earlier. The NdbRecord class declaration has also been cleaned up by removing a number of unused or redundant member variables. (Bug #73781, Bug #75599, Bug #19631350, Bug #20408733)

-----
Changes in MySQL Cluster NDB 7.4.4 (5.6.23-ndb-7.4.4)

Bugs Fixed

When upgrading a MySQL Cluster from NDB 7.3 to NDB 7.4, the first data node started with the NDB 7.4 data node binary caused the master node (still running NDB 7.3) to fail with Error 2301, then itself failed during Start Phase 5. (Bug #20608889)

A memory leak in NDB event buffer allocation caused an event to be leaked for each epoch. (Due to the fact that an SQL node uses 3 event buffers, each SQL node leaked 3 events per epoch.) This meant that a MySQL Cluster mysqld leaked an amount of memory that was inversely proportional to the size of TimeBetweenEpochs; that is, the smaller the value for this parameter, the greater the amount of memory leaked per unit of time. (Bug #20539452)

The values of the Ndb_last_commit_epoch_server and Ndb_last_commit_epoch_session status variables were incorrectly reported on some platforms. To correct this problem, these values are now stored internally as long long, rather than long. (Bug #20372169)

When restoring a MySQL Cluster from backup, nodes that failed and were restarted during restoration of another node became unresponsive, which subsequently caused ndb_restore to fail and exit.
(Bug #20069066)

When a data node fails or is being restarted, the remaining nodes in the same nodegroup resend to subscribers any data which they determine has not already been sent by the failed node. Normally, when a data node (actually, the SUMA kernel block) has sent all data belonging to an epoch for which it is responsible, it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK. When SUMA receives this acknowledgment from all subscribers, it reports this to the other nodes in the same nodegroup so that they know that there is no need to resend this data in case of a subsequent node failure. If a node failed after all subscribers sent this acknowledgment but before all the other nodes in the same nodegroup received it from the failing node, data for some epochs could be sent (and reported as complete) twice, which could lead to an unplanned shutdown.

The fix for this issue adds to the count reported by SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can use to keep track of which buckets are completed and to ignore any duplicate report for an already completed bucket. (Bug #17579998)

The output format of SHOW CREATE TABLE for an NDB table containing foreign key constraints did not match that for the equivalent InnoDB table, which could lead to issues with some third-party applications. (Bug #75515, Bug #20364309)

An ALTER TABLE statement containing comments and a partitioning option against an NDB table caused the SQL node on which it was executed to fail. (Bug #74022, Bug #19667566)

Cluster API: When a transaction is started from a cluster connection, Table and Index schema objects may be passed to this transaction for use. If these schema objects have been acquired from a different connection (Ndb_cluster_connection object), they can be deleted at any point by the deletion or disconnection of the owning connection.
This can leave a connection with invalid schema objects, which causes an NDB API application to fail when these are dereferenced.

To avoid this problem, if your application uses multiple connections, you can now set a check to detect sharing of schema objects between connections when passing a schema object to a transaction, using the NdbTransaction::setSchemaObjectOwnerChecks() method added in this release. When this check is enabled, schema objects having the same names are acquired from the connection and compared to the schema objects passed to the transaction. Failure to match causes the application to fail with an error. (Bug #19785977)

Cluster API: The increase in the default number of hashmap buckets (DefaultHashMapSize API node configuration parameter) from 240 to 3840 in MySQL Cluster NDB 7.2.11 increased the size of the internal DictHashMapInfo::HashMap type considerably. This type was allocated on the stack in some getTable() calls, which could lead to stack overflow issues for NDB API users. To avoid this problem, the hashmap is now dynamically allocated from the heap. (Bug #19306793)

-----
Changes in MySQL Cluster NDB 7.4.3 (5.6.22-ndb-7.4.3)

Functionality Added or Changed

Important Change; Cluster API: This release introduces an epoch-driven Event API for the NDB API that supersedes the earlier GCI-based model. The new version of this API also simplifies error detection and handling, and monitoring of event buffer memory usage has been improved.

New event handling methods for Ndb and NdbEventOperation added by this change include NdbEventOperation::getEventType2(), pollEvents2(), nextEvent2(), getHighestQueuedEpoch(), getNextEventOpInEpoch2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch(). The pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(), isConsistentGCI(), getEventType(), getGCI(), getLatestGCI(), isOverrun(), hasError(), and clearError() methods are deprecated beginning with the same release.
Some (but not all) of the new methods act as replacements for deprecated methods; not all of the deprecated methods map to new ones. The Event Class provides information as to which old methods correspond to new ones.

Error handling using the new API is no longer done using the dedicated hasError() and clearError() methods, which are now deprecated as previously noted. To support this change, TableEvent now supports the values TE_EMPTY (empty epoch), TE_INCONSISTENT (inconsistent epoch), and TE_OUT_OF_MEMORY (insufficient event buffer memory).

Event buffer memory management has also been improved with the introduction of the get_eventbuffer_free_percent(), set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage() methods, as well as a new NDB API error Free percent out of range (error code 4123). Memory buffer usage can now be represented in applications using the EventBufferMemoryUsage data structure, and checked from MySQL client applications by reading the ndb_eventbuffer_free_percent system variable.

For more information, see the detailed descriptions for the Ndb and NdbEventOperation methods listed. See also The Event::TableEvent Type, as well as The EventBufferMemoryUsage Structure.

Additional logging is now performed of internal states occurring during system restarts, such as waiting for node ID allocation and master takeover of global and local checkpoints. (Bug #74316, Bug #19795029)

Added the MaxParallelCopyInstances data node configuration parameter. In cases where the parallelism used during the restart copy phase (normally the number of LDMs, up to a maximum of 16) is excessive and leads to system overload, this parameter can be used to override the default behavior by reducing the degree of parallelism employed.

Added the operations_per_fragment table to the ndbinfo information database. Using this table, you can now obtain counts of operations performed on a given fragment (or fragment replica).
Such operations include reads, writes, updates, and deletes, scan and index operations performed while executing them, and operations refused, as well as information relating to rows scanned on and returned from a given fragment replica. This table also provides information about interpreted programs used as attribute values, and values returned by them.

Cluster API: Two new example programs, demonstrating reads and writes of CHAR, VARCHAR, and VARBINARY column values, have been added to storage/ndb/ndbapi-examples in the MySQL Cluster source tree. For more information about these programs, including source code listings, see NDB API Simple Array Example, and NDB API Simple Array Example Using Adapter.

Bugs Fixed

The global checkpoint commit and save protocols can be delayed by various causes, including slow disk I/O. The DIH master node monitors the progress of both of these protocols, and can enforce a maximum lag time during which the protocols are stalled by killing the node responsible for the lag when it reaches this maximum. This DIH master GCP monitor mechanism did not perform its task more than once per master node; that is, it failed to continue monitoring after detecting and handling a GCP stop. (Bug #20128256) References: See also Bug #19858151, Bug #20069617, Bug #20062754.

When running mysql_upgrade on a MySQL Cluster SQL node, the expected drop of the performance_schema database on this node was instead performed on all SQL nodes connected to the cluster. (Bug #20032861)

The warning shown when an ALTER TABLE ALGORITHM=INPLACE ... ADD COLUMN statement automatically changes a column's COLUMN_FORMAT from FIXED to DYNAMIC now includes the name of the column whose format was changed. (Bug #20009152, Bug #74795)

The local checkpoint scan fragment watchdog and the global checkpoint monitor can each exclude a node when it is too slow in participating in their respective protocols.
This exclusion was implemented by simply asking the failing node to shut down, which in cases where this was delayed (for whatever reason) could prolong the duration of the GCP or LCP stall for other, unaffected nodes.

To minimize this time, an isolation mechanism has been added to both protocols, whereby any other live nodes forcibly disconnect the failing node after a predetermined amount of time. This allows the failing node the opportunity to shut down gracefully (after logging debugging and other information) if possible, but limits the time that other nodes must wait for this to occur. Now, once the remaining live nodes have processed the disconnection of any failing nodes, they can commence failure handling and restart the related protocol or protocols, even if the failed node takes an excessively long time to shut down. (Bug #19858151) References: See also Bug #20128256, Bug #20069617, Bug #20062754.

The matrix of values used for thread configuration when applying the setting of the MaxNoOfExecutionThreads configuration parameter has been improved to align with support for greater numbers of LDM threads. See Multi-Threading Configuration Parameters (ndbmtd), for more information about the changes. (Bug #75220, Bug #20215689)

When a new node failed after connecting to the president but not to any other live node, then reconnected and started again, a live node that did not see the original connection retained old state information. This caused the live node to send redundant signals to the president, causing it to fail. (Bug #75218, Bug #20215395)

In the NDB kernel, it was possible for a TransporterFacade object to reset a buffer while the data contained by the buffer was being sent, which could lead to a race condition. (Bug #75041, Bug #20112981)

mysql_upgrade failed to drop and recreate the ndbinfo database and its tables as expected.
(Bug #74863, Bug #20031425)

Due to a lack of memory barriers, MySQL Cluster programs such as ndbmtd did not compile on POWER platforms. (Bug #74782, Bug #20007248)

In spite of the presence of a number of protection mechanisms against overloading signal buffers, it was still possible in some cases to do so. This fix adds block-level support in the NDB kernel (in SimulatedBlock) to make signal buffer overload protection more reliable than when implementing such protection on a case-by-case basis. (Bug #74639, Bug #19928269)

Copying of metadata during local checkpoints caused node restart times to be highly variable, which could make it difficult to diagnose problems with restarts. The fix for this issue introduces signals (including PAUSE_LCP_IDLE, PAUSE_LCP_REQUESTED, and PAUSE_NOT_IN_LCP_COPY_META_DATA) to pause LCP execution and flush LCP reports, making it possible to block LCP reporting at times when LCPs during restarts become stalled in this fashion. (Bug #74594, Bug #19898269)

When a data node was restarted from its angel process (that is, following a node failure), it could be allocated a new node ID before failure handling was actually completed for the failed node. (Bug #74564, Bug #19891507)

In NDB version 7.4, node failure handling can require completing checkpoints on up to 64 fragments. (This checkpointing is performed by the DBLQH kernel block.) The requirement for master takeover to wait for completion of all such checkpoints led in such cases to excessive lengths of time for completion. To address these issues, the DBLQH kernel block can now report that it is ready for master takeover before it has completed any ongoing fragment checkpoints, and can continue processing these while the system completes the master takeover. (Bug #74320, Bug #19795217)

Local checkpoints were sometimes started earlier than necessary during node restarts, while the node was still waiting for copying of the data distribution and data dictionary to complete.
(Bug #74319, Bug #19795152)

The check to determine when a node was restarting, and so know when to accelerate local checkpoints, sometimes reported a false positive. (Bug #74318, Bug #19795108)

Values in different columns of the ndbinfo tables disk_write_speed_aggregate and disk_write_speed_aggregate_node were reported using differing multiples of bytes. Now all of these columns display values in bytes. In addition, this fix corrects an error made when calculating the standard deviations used in the std_dev_backup_lcp_speed_last_10sec, std_dev_redo_speed_last_10sec, std_dev_backup_lcp_speed_last_60sec, and std_dev_redo_speed_last_60sec columns of the ndbinfo.disk_write_speed_aggregate table. (Bug #74317, Bug #19795072)

Recursion in the internal method Dblqh::finishScanrec() led to an attempt to create two list iterators with the same head. This regression was introduced during work done to optimize scans for version 7.4 of the NDB storage engine. (Bug #73667, Bug #19480197)

Transporter send buffers were not updated properly following a failed send. (Bug #45043, Bug #20113145)

Disk Data: An update on many rows of a large Disk Data table could in some rare cases lead to node failure. In the event that such problems are observed with very large transactions on Disk Data tables, you can now increase the number of page entries allocated for disk page buffer memory by raising the value of the DiskPageBufferEntries data node configuration parameter added in this release. (Bug #19958804)

Disk Data: In some cases, during DICT master takeover, the new master could crash while attempting to roll forward an ongoing schema transaction. (Bug #19875663, Bug #74510)

Cluster API: It was possible to delete an Ndb_cluster_connection object while there remained instances of Ndb using references to it. Now the Ndb_cluster_connection destructor waits for all related Ndb objects to be released before completing. (Bug #19999242) References: See also Bug #19846392.
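The std_dev_* columns named above report a standard deviation over sampled per-LDM disk write speeds. As a hedged illustration of the statistic involved (this is not NDB source code, and the choice of the population form of the formula here is an assumption), a minimal Python sketch:

```python
import math

def std_dev(samples):
    """Population standard deviation of a series of disk write
    speed samples, e.g. bytes written per second by one LDM thread."""
    n = len(samples)
    mean = sum(samples) / n
    variance = sum((s - mean) ** 2 for s in samples) / n
    return math.sqrt(variance)

# A perfectly steady write speed has zero deviation; a bursty one does not.
steady = [1_000_000] * 10                 # 1 MB/s every second
bursty = [500_000, 1_500_000] * 5         # alternating 0.5 MB/s and 1.5 MB/s
```

For `steady` the result is 0.0, while for `bursty` the mean is 1 MB/s with every sample 0.5 MB/s away from it, giving a deviation of 500000.0.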
-----
Changes in MySQL Cluster NDB 7.4.2 (5.6.21-ndb-7.4.2)

Functionality Added or Changed

Added the restart_info table to the ndbinfo information database to provide current status and timing information relating to node and system restarts. By querying this table, you can observe the progress of restarts in real time. (Bug #19795152)

After adding new data nodes to the configuration file of a MySQL Cluster having many API nodes, but prior to starting any of the data node processes, API nodes tried to connect to these missing data nodes several times per second, placing extra load on management nodes and the network. To reduce unnecessary traffic caused in this way, it is now possible to control the amount of time that an API node waits between attempts to connect to data nodes which fail to respond; this is implemented in two new API node configuration parameters, StartConnectBackoffMaxTime and ConnectBackoffMaxTime.

Time elapsed during node connection attempts is not taken into account when applying these parameters, both of which are given in milliseconds with approximately 100 ms resolution. As long as the API node is not connected to any data nodes as described previously, the value of the StartConnectBackoffMaxTime parameter is applied; otherwise, ConnectBackoffMaxTime is used.

In a MySQL Cluster with many unstarted data nodes, the values of these parameters can be raised to circumvent connection attempts to data nodes which have not yet begun to function in the cluster, as well as to moderate high traffic to management nodes. For more information about the behavior of these parameters, see Defining SQL and Other API Nodes in a MySQL Cluster.
(Bug #17257842)

Bugs Fixed

When performing a batched update, where one or more successful write operations from the start of the batch were followed by write operations which failed without being aborted (due to the AbortOption being set to AO_IgnoreError), the failure handling for these by the transaction coordinator leaked CommitAckMarker resources. (Bug #19875710) References: This bug was introduced by Bug #19451060, Bug #73339.

Online downgrades to MySQL Cluster NDB 7.3 failed when a MySQL Cluster NDB 7.4 master attempted to request a local checkpoint with 32 fragments from a data node already running NDB 7.3, which supports only 2 fragments for LCPs. Now in such cases, the NDB 7.4 master determines how many fragments the data node can handle before making the request. (Bug #19600834)

The fix for a previous issue with the handling of multiple node failures required determining the number of TC instances the failed node was running, then taking them over. The mechanism to determine this number sometimes provided an invalid result, which caused the number of TC instances in the failed node to be set to an excessively high value. This in turn caused redundant takeover attempts, which wasted time and had a negative impact on the processing of other node failures and of global checkpoints. (Bug #19193927) References: This bug was introduced by Bug #18069334.

The server side of an NDB transporter disconnected an incoming client connection very quickly during the handshake phase if the node at the server end was not yet ready to receive connections from the other node. This led to problems when the client immediately attempted once again to connect to the server socket, only to be disconnected again, and so on in a repeating loop, until it succeeded. Since each client connection attempt left behind a socket in TIME_WAIT, the number of sockets in TIME_WAIT increased rapidly, leading in turn to problems with the node on the server side of the transporter.
Further analysis of the problem and code showed that the root of the problem lay in the handshake portion of the transporter connection protocol. To keep the issue described previously from occurring, the node at the server end now sends back a WAIT message instead of disconnecting the socket when the node is not yet ready to accept a handshake. This means that the client end should no longer need to create a new socket for the next retry, but can instead begin immediately with a new handshake hello message. (Bug #17257842)

Corrupted messages to data nodes sometimes went undetected, causing a bad signal to be delivered to a block, which aborted the data node. This failure in combination with disconnecting nodes could in turn cause the entire cluster to shut down. To keep this from happening, additional checks are now made when unpacking signals received over TCP, including checks for byte order, compression flag (which must not be used), and the length of the next message in the receive buffer (if there is one).

Whenever two consecutive unpacked messages fail the checks just described, the current message is assumed to be corrupted. In this case, the transporter is marked as having bad data and no more unpacking of messages occurs until the transporter is reconnected. In addition, an entry is written to the cluster log containing the error as well as a hex dump of the corrupted message. (Bug #73843, Bug #19582925)

During restore operations, an attribute's maximum length was used when reading variable-length attributes from the receive buffer instead of the attribute's actual length. (Bug #73312, Bug #19236945)

-----
Changes in MySQL Cluster NDB 7.4.1 (5.6.20-ndb-7.4.1)

Node Restart Performance and Reporting Enhancements

Performance: A number of performance and other improvements have been made with regard to node starts and restarts.
The following list contains a brief description of each of these changes:

Before memory allocated on startup can be used, it must be touched, causing the operating system to allocate the actual physical memory needed. The process of touching each page of memory that was allocated has now been multithreaded, with touch times on the order of 3 times shorter when performed by 16 threads than with a single thread.

When performing a node or system restart, it is necessary to restore local checkpoints for the fragments. This process previously used delayed signals at a point which was found to be critical to performance; these have now been replaced with normal (undelayed) signals, which should significantly shorten the time required to back up a MySQL Cluster or to restore it from backup.

Previously, there could be at most 2 LDM instances active with local checkpoints at any given time. Now, up to 16 LDMs can be used for performing this task, which increases utilization of available CPU power, and can speed up LCPs by a factor of 10, which in turn can greatly improve restart times.

Better reporting of disk writes, and increased control over these, also make up a large part of this work. New ndbinfo tables disk_write_speed_base, disk_write_speed_aggregate, and disk_write_speed_aggregate_node provide information about the speed of disk writes for each LDM thread that is in use. The DiskCheckpointSpeed and DiskCheckpointSpeedInRestart configuration parameters have been deprecated, and are subject to removal in a future MySQL Cluster release. This release adds the data node configuration parameters MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart, and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs and backups when the present node, another node, or no node is currently restarting.

For more information, see the descriptions of the ndbinfo tables and MySQL Cluster configuration parameters named previously.
Reporting of MySQL Cluster start phases has been improved, with more frequent printouts. New and better information about the start phases and their implementation has also been provided in the sources and documentation. See Summary of MySQL Cluster Start Phases.

Improved Scan and SQL Processing

Performance: Several internal methods relating to the NDB receive thread have been optimized to make mysqld more efficient in processing SQL applications with the NDB storage engine. In particular, this work improves the performance of the NdbReceiver::execTRANSID_AI() method, which is commonly used to receive a record from the data nodes as part of a scan operation. (Since the receiver thread sometimes has to process millions of received records per second, it is critical that this method not perform unnecessary work, or tie up resources that are not strictly needed.) The associated internal functions receive_ndb_packed_record() and handleReceivedSignal() have also been improved and made more efficient.

Per-Fragment Memory Reporting

Information about memory usage by individual fragments can now be obtained from the memory_per_fragment view added in this release to the ndbinfo information database. This information includes pages having fixed and variable element size, rows, fixed element free slots, variable element free bytes, and hash index memory usage. For information, see The ndbinfo memory_per_fragment Table.

Bugs Fixed

In some cases, transporter receive buffers were reset by one thread while being read by another. This happened when a race condition occurred between a thread receiving data and another thread initiating disconnect of the transporter (disconnection clears this buffer). Concurrency logic has now been implemented to keep this race from taking place. (Bug #19552283, Bug #73790)

When a new data node started, API nodes were allowed to attempt to register themselves with the data node for executing transactions before the data node was ready.
This forced the API node to wait an extra heartbeat interval before trying again.

To address this issue, a number of HA_ERR_NO_CONNECTION errors (Error 4009) that could be issued during this time have been changed to Cluster temporarily unavailable errors (Error 4035), which should allow API nodes to use new data nodes more quickly than before. As part of this fix, some errors which were incorrectly categorized have been moved into the correct categories, and some errors which are no longer used have been removed. (Bug #19524096, Bug #73758)

Executing ALTER TABLE ... REORGANIZE PARTITION after increasing the number of data nodes in the cluster from 4 to 16 led to a crash of the data nodes. This issue was shown to be a regression caused by a previous fix, which added a new dump handler using a dump code that was already in use (7019); this caused the command to execute two different handlers with different semantics. The new handler was assigned a new DUMP code (7024). (Bug #18550318) References: This bug is a regression of Bug #14220269.

When certain queries generated signals having more than 18 data words prior to a node failure, such signals were not written correctly in the trace file. (Bug #18419554)

Failure of multiple nodes while using ndbmtd with multiple TC threads was not handled gracefully under a moderate amount of traffic, which could in some cases lead to an unplanned shutdown of the cluster. (Bug #18069334)

For multithreaded data nodes, some threads do not communicate often, with the result that very old signals can remain at the top of the signal buffers. When performing a thread trace, the signal dumper calculated the latest signal ID from what it found in the signal buffers, which meant that these old signals could be erroneously counted as the newest ones. Now the signal ID counter is kept as part of the thread state, and it is this value that is used when dumping signals for trace files.
(Bug #73842, Bug #19582807)

Cluster API: When an NDB API client application received a signal with an invalid block or signal number, NDB provided only a very brief error message that did not accurately convey the nature of the problem. Now in such cases, appropriate printouts are provided when a bad signal or message is detected. In addition, the message length is now checked to make certain that it matches the size of the embedded signal. (Bug #18426180)

-----
The following improvements to MySQL Cluster have been made in MySQL Cluster NDB 7.4:

Conflict detection and resolution enhancements. A reserved column name namespace NDB$ is now employed for exceptions table metacolumns, allowing an arbitrary subset of main table columns to be recorded, even if they are not part of the original table's primary key. Recording the complete original primary key is no longer required, due to the fact that matching against exceptions table columns is now done by name and type only. It is now also possible for you to record values of columns which are not part of the main table's primary key in the exceptions table.

Read conflict detection is now possible. All rows read by the conflicting transaction are flagged, and logged in the exceptions table. Rows inserted in the same transaction are not included among the rows read or logged. This read tracking depends on the slave having an exclusive read lock, which requires setting ndb_log_exclusive_reads in advance. See Read conflict detection and resolution, for more information and examples.

Existing exceptions tables remain supported. For more information, see Section 18.6.11, "MySQL Cluster Replication Conflict Resolution".

Circular ("active-active") replication improvements.
When using a circular or "active-active" MySQL Cluster Replication topology, you can assign one of the roles of primary or secondary to a given MySQL Cluster using the ndb_slave_conflict_role server system variable, which can be employed when failing over from a MySQL Cluster acting as primary, or when using conflict detection and resolution with NDB$EPOCH2() and NDB$EPOCH2_TRANS() (MySQL Cluster NDB 7.4.2 and later), which support delete-delete conflict handling.

See the description of the ndb_slave_conflict_role variable, as well as NDB$EPOCH2(), for more information. See also Section 18.6.11, "MySQL Cluster Replication Conflict Resolution".

Per-fragment memory usage reporting. You can now obtain data about memory usage by individual MySQL Cluster fragments from the memory_per_fragment view, added in MySQL Cluster NDB 7.4.1 to the ndbinfo information database. For more information, see Section 18.5.10.17, "The ndbinfo memory_per_fragment Table".

Node restart improvements. MySQL Cluster NDB 7.4 includes a number of improvements which decrease the time needed for data nodes to be restarted. These are described in the following list:

Memory that is allocated on node startup cannot be used until it has been touched, which causes the operating system to set aside the actual physical memory required. In previous versions of MySQL Cluster, the process of touching each page of memory that was allocated was single-threaded, which made it relatively time-consuming. This process has now been reimplemented with multithreading. In tests with 16 threads, touch times on the order of 3 times shorter than with a single thread were observed.

Increased parallelization of local checkpoints; in MySQL Cluster NDB 7.4, LCPs now support 32 fragments, rather than 2 as before. This greatly increases utilization of CPU power that would otherwise go unused, and can make LCPs faster by up to a factor of 10; this speedup in turn can greatly improve node restart times.
The degree of parallelization used for the node copy phase during node and system restarts can be controlled in MySQL Cluster NDB 7.4.3 and later by setting the MaxParallelCopyInstances data node configuration parameter to a nonzero value. Reporting on disk writes is provided by new ndbinfo tables disk_write_speed_base, disk_write_speed_aggregate, and disk_write_speed_aggregate_node, which provide information about the speed of disk writes for each LDM thread that is in use. This release also adds the data node configuration parameters MinDiskWriteSpeed, MaxDiskWriteSpeed, MaxDiskWriteSpeedOtherNodeRestart, and MaxDiskWriteSpeedOwnRestart to control write speeds for LCPs and backups when the present node, another node, or no node is currently restarting. These changes are intended to supersede configuration of disk writes using the DiskCheckpointSpeed and DiskCheckpointSpeedInRestart configuration parameters. These two parameters have now been deprecated, and are subject to removal in a future MySQL Cluster release. Faster times for restoring a MySQL Cluster from backup have been obtained by replacing delayed signals at a point found to be critical to performance with normal (undelayed) signals. The elimination or replacement of these unnecessary delayed signals should noticeably reduce the amount of time required to back up a MySQL Cluster, or to restore a MySQL Cluster from backup. Several internal methods relating to the NDB receive thread have been optimized, to increase the efficiency of SQL processing by NDB. The receive thread may at times have to process several million received records per second, so it is critical that it not perform unnecessary work or waste resources when retrieving records from MySQL Cluster data nodes. Improved reporting of MySQL Cluster restarts and start phases. 
The restart_info table (included in the ndbinfo information database beginning with MySQL Cluster NDB 7.4.2) provides current status and timing information about node and system restarts. Reporting and logging of MySQL Cluster start phases also provides more frequent and specific printouts during startup than previously. See Section 18.5.1, Summary of MySQL Cluster Start Phases, for more information. NDB API: new Event API. MySQL Cluster NDB 7.4.3 introduces an epoch-driven Event API that supersedes the earlier GCI-based model. The new version of the API also simplifies error detection and handling. These changes are realized in the NDB API by implementing a number of new methods for Ndb and NdbEventOperation, deprecating several other methods of both classes, and adding new type values to Event::TableEvent. The event handling methods added to Ndb in MySQL Cluster NDB 7.4.3 are pollEvents2(), nextEvent2(), getHighestQueuedEpoch(), and getNextEventOpInEpoch2(). The Ndb methods pollEvents(), nextEvent(), getLatestGCI(), getGCIEventOperations(), isConsistent(), and isConsistentGCI() are deprecated beginning with the same release. MySQL Cluster NDB 7.4.3 adds the NdbEventOperation event handling methods getEventType2(), getEpoch(), isEmptyEpoch(), and isErrorEpoch(); it obsoletes getEventType(), getGCI(), getLatestGCI(), isOverrun(), hasError(), and clearError(). While some (but not all) of the new methods are direct replacements for deprecated methods, not all of the deprecated methods map to new ones. The Event Class provides information as to which old methods correspond to new ones. Error handling in the new API is no longer done through dedicated hasError() and clearError() methods, which are now deprecated (and thus subject to removal in a future release of MySQL Cluster). To support this change, the list of TableEvent types now includes the values TE_EMPTY (empty epoch), TE_INCONSISTENT (inconsistent epoch), and TE_OUT_OF_MEMORY (event buffer out of memory). 
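The role of the new TableEvent values can be illustrated with a small sketch. The real API is C++ (pollEvents2()/nextEvent2() on Ndb); the Python below is a hypothetical mock-up, not the NDB API itself, showing how a consumer might classify epochs in-band instead of calling the deprecated hasError()/clearError():

```python
# Hypothetical stand-ins for the C++ Event::TableEvent values named above.
TE_EMPTY = "TE_EMPTY"                  # empty epoch: nothing to apply
TE_INCONSISTENT = "TE_INCONSISTENT"    # inconsistent epoch: data unusable
TE_OUT_OF_MEMORY = "TE_OUT_OF_MEMORY"  # event buffer ran out of memory
TE_UPDATE = "TE_UPDATE"                # an ordinary row event

def drain_epoch_events(events):
    """Classify a queue of (epoch, event_type) pairs the way a consumer of
    the epoch-driven API would: error epochs are reported as ordinary
    events in the stream rather than via separate error-query methods."""
    applied, skipped = [], []
    for epoch, ev in events:
        if ev in (TE_INCONSISTENT, TE_OUT_OF_MEMORY):
            skipped.append(epoch)   # record the bad epoch and move on
        elif ev == TE_EMPTY:
            continue                # empty epoch: nothing happened
        else:
            applied.append((epoch, ev))
    return applied, skipped
```

The design point this illustrates is that error conditions arrive in epoch order within the normal event stream, so the consumer never has to reconcile an out-of-band error flag with its position in the stream.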
Improvements in event buffer management have also been made by implementing new get_eventbuffer_free_percent(), set_eventbuffer_free_percent(), and get_eventbuffer_memory_usage() methods. Memory buffer usage can now be represented in application code using EventBufferMemoryUsage. The ndb_eventbuffer_free_percent system variable, also implemented in MySQL Cluster NDB 7.4, makes it possible for event buffer memory usage to be checked from MySQL client applications. For more information, see the detailed descriptions for the Ndb and NdbEventOperation methods listed. See also The Event::TableEvent Type, as well as The EventBufferMemoryUsage Structure. Per-fragment operations information. In MySQL Cluster NDB 7.4.3 and later, counts of various types of operations on a given fragment or fragment replica can be obtained easily using the operations_per_fragment table in the ndbinfo information database. This includes read, write, update, and delete operations, as well as scan and index operations performed by these. Information about operations refused, and about rows scanned and returned from a given fragment replica, is also shown in operations_per_fragment. This table also provides information about interpreted programs used as attribute values, and values returned by them. MySQL Cluster NDB 7.4 is also supported by MySQL Cluster Manager, which provides an advanced command-line interface that can simplify many complex MySQL Cluster management tasks. See MySQL Cluster Manager 1.3.5 User Manual, for more information. ----- Changes in MySQL Cluster NDB 7.3.9 (5.6.24-ndb-7.3.9) Bugs Fixed It was found during testing that problems could arise when the node registered as the arbitrator disconnected or failed during the arbitration process. In this situation, the node requesting arbitration could never receive a positive acknowledgement from the registered arbitrator; this node also lacked a stable set of members and could not initiate selection of a new arbitrator. 
Now in such cases, when the arbitrator fails or loses contact during arbitration, the requesting node immediately fails rather than waiting to time out. (Bug #20538179) The values of the Ndb_last_commit_epoch_server and Ndb_last_commit_epoch_session status variables were incorrectly reported on some platforms. To correct this problem, these values are now stored internally as long long, rather than long. (Bug #20372169) The maximum failure time calculation used to ensure that normal node failure handling mechanisms are given time to handle survivable cluster failures (before global checkpoint watchdog mechanisms start to kill nodes due to GCP delays) was excessively conservative, and neglected to consider that there can be at most number_of_data_nodes / NoOfReplicas node failures before the cluster can no longer survive. Now the value of NoOfReplicas is properly taken into account when performing this calculation. (Bug #20069617, Bug #20069624) References: See also Bug #19858151, Bug #20128256, Bug #20135976. When a data node fails or is being restarted, the remaining nodes in the same nodegroup resend to subscribers any data which they determine has not already been sent by the failed node. Normally, when a data node (actually, the SUMA kernel block) has sent all data belonging to an epoch for which it is responsible, it sends a SUB_GCP_COMPLETE_REP signal, together with a count, to all subscribers, each of which responds with a SUB_GCP_COMPLETE_ACK. When SUMA receives this acknowledgment from all subscribers, it reports this to the other nodes in the same nodegroup so that they know that there is no need to resend this data in case of a subsequent node failure. If a node failed after all subscribers sent this acknowledgement but before all the other nodes in the same nodegroup received it from the failing node, data for some epochs could be sent (and reported as complete) twice, which could lead to an unplanned shutdown. 
The fix for this issue adds to the count reported by SUB_GCP_COMPLETE_ACK a list of identifiers which the receiver can use to keep track of which buckets are completed and to ignore any duplicate report for an already completed bucket. (Bug #17579998) When performing a restart, it was sometimes possible to find a log end marker which had been written by a previous restart, and that should have been invalidated. Now, when searching for the last page to invalidate, the same search algorithm is used as when searching for the last page of the log to read. (Bug #76207, Bug #20665205) When reading and copying transporter short signal data, it was possible for the data to be copied back to the same signal with overlapping memory. (Bug #75930, Bug #20553247) When a bulk delete operation was committed early to avoid an additional round trip, while also returning the number of affected rows, but failed with a timeout error, an SQL node performed no verification that the transaction was in the Committed state. (Bug #74494, Bug #20092754) References: See also Bug #19873609. An ALTER TABLE statement containing comments and a partitioning option against an NDB table caused the SQL node on which it was executed to fail. (Bug #74022, Bug #19667566) Cluster API: When a transaction is started from a cluster connection, Table and Index schema objects may be passed to this transaction for use. If these schema objects have been acquired from a different connection (Ndb_cluster_connection object), they can be deleted at any point by the deletion or disconnection of the owning connection. This can leave a connection with invalid schema objects, which causes an NDB API application to fail when these are dereferenced. 
To avoid this problem, if your application uses multiple connections, you can now set a check to detect sharing of schema objects between connections when passing a schema object to a transaction, using the NdbTransaction::setSchemaObjectOwnerChecks() method added in this release. When this check is enabled, the schema objects having the same names are acquired from the connection and compared to the schema objects passed to the transaction. Failure to match causes the application to fail with an error. (Bug #19785977) Cluster API: The increase in the default number of hashmap buckets (DefaultHashMapSize API node configuration parameter) from 240 to 3480 in MySQL Cluster NDB 7.2.11 increased the size of the internal DictHashMapInfo::HashMap type considerably. This type was allocated on the stack in some getTable() calls which could lead to stack overflow issues for NDB API users. To avoid this problem, the hashmap is now dynamically allocated from the heap. (Bug #19306793) Cluster API: A scan operation, whether it is a single table scan or a query scan used by a pushed join, stores the result set in a buffer. The maximum size of this buffer is calculated and preallocated before the scan operation is started. This buffer may consume a considerable amount of memory; in some cases we observed a 2 GB buffer footprint in tests that executed 100 parallel scans with 2 single-threaded (ndbd) data nodes. This memory consumption was found to scale linearly with additional fragments. A number of root causes, listed here, were discovered that led to this problem: Result rows were unpacked to full NdbRecord format before they were stored in the buffer. If only some but not all columns of a table were selected, the buffer contained empty space (essentially wasted). Due to the buffer format being unpacked, VARCHAR and VARBINARY columns always had to be allocated for the maximum size defined for such columns. 
BatchByteSize and MaxScanBatchSize values were not taken into consideration as a limiting factor when calculating the maximum buffer size. These issues became more evident in NDB 7.2 and later MySQL Cluster release series. This was due to the fact that the buffer size is scaled by BatchSize, and that the default value for this parameter was increased fourfold (from 64 to 256) beginning with MySQL Cluster NDB 7.2.1. This fix causes result rows to be buffered using the packed format instead of the unpacked format; a buffered scan result row is now not unpacked until it becomes the current row. In addition, BatchByteSize and MaxScanBatchSize are now used as limiting factors when calculating the required buffer size. Also as part of this fix, refactoring has been done to separate handling of buffered (packed) from handling of unbuffered result sets, and to remove code that had been unused since NDB 7.0 or earlier. The NdbRecord class declaration has also been cleaned up by removing a number of unused or redundant member variables. (Bug #73781, Bug #75599, Bug #19631350, Bug #20408733)
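The sizing arithmetic behind this fix can be sketched as follows. The formula is an illustration of the described behavior (BatchByteSize and MaxScanBatchSize now cap the preallocation that was previously sized for fully unpacked rows), not the actual NDB code:

```python
def scan_buffer_bytes(batch_size, max_row_bytes, batch_byte_size, max_scan_batch_size):
    """Illustrative upper bound on a per-fragment scan result buffer.

    Pre-fix, the bound was batch_size fully unpacked rows, each reserved
    at its maximum size (VARCHAR/VARBINARY at their declared maximums).
    Post-fix, the byte-size parameters are also applied as limits."""
    unpacked_bound = batch_size * max_row_bytes
    return min(unpacked_bound, batch_byte_size, max_scan_batch_size)
```

With the NDB 7.2 default BatchSize of 256 and a hypothetical 8 KB maximum row, the unpacked bound alone would be 2 MB per fragment, which is how the multi-GB footprints scaled linearly with fragment count; a 64 KB BatchByteSize-style cap reduces that to 65536 bytes.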
2015-05-21Fix some issues raised by the previous updatefhajny4-15/+12
- Never force install the init script, this is done by pkgsrc automatically - Pre-create ${PKG_SYSCONFDIR}/sqlrelay.conf.d that was added in 0.48 - Doesn't really need libiconv directly - Improve SMF manifest - Remove some unneeded definitions. Bump PKGREVISION.
2015-05-21Changes 3.8.10.2:adam5-16/+17
Fix an index corruption issue introduced by version 3.8.7. An index with a TEXT key can be corrupted by an INSERT into the corresponding table if the table has two nested triggers that convert the key value to INTEGER and back to TEXT again.
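The trigger pattern described can be shown schematically with Python's bundled sqlite3 module. The schema below is invented for illustration (it only mirrors the shape of the bug: a TEXT-keyed index plus two nested triggers bouncing the key through INTEGER and back), and any current CPython bundles a SQLite well past 3.8.10.2, so the integrity check passes:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(k TEXT PRIMARY KEY);  -- implicit index on a TEXT key
    CREATE TABLE log(v INTEGER);
    -- tr1 converts the TEXT key to INTEGER; tr2 fires off tr1's insert
    -- and converts it back to TEXT, rewriting the indexed key.
    CREATE TRIGGER tr1 AFTER INSERT ON t BEGIN
        INSERT INTO log VALUES (CAST(new.k AS INTEGER));
    END;
    CREATE TRIGGER tr2 AFTER INSERT ON log BEGIN
        UPDATE t SET k = CAST(new.v AS TEXT);
    END;
""")
con.execute("INSERT INTO t VALUES ('0042')")
# On SQLite 3.8.7 through 3.8.10.1 this kind of INSERT could leave the
# index on t.k corrupt; on a fixed build the integrity check reports ok.
status = con.execute("PRAGMA integrity_check").fetchone()[0]
```

Running the same script against an affected 3.8.7-3.8.10.1 build is how one would check whether a deployed database is exposed to this bug.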
2015-05-20Reset PKGREVISION.ryoon2-4/+2
2015-05-20Update to 0.59ryoon24-201/+381
* Fix build with Ruby 2.2. Changelog: 0.59 - updated docs, removed some Cygwin-specific info added support for login warnings made bind variable buffers dynamic on the client side added maxbindvars parameter on the server side binding a NULL to an integer works with db2 now moved getting started with DB docs into the cloud added a semaphore to ensure that the listener doesn't hand off the client to the connection until the connection is ready, eliminating a race condition on the handoff socket that could occur if the connection timed out waiting for the listener just after the listener had decided to use that connection oracle temp tables that need to be truncated at the end of the session are truncated with "truncate table xxx" now rather than "delete from xxx" oracle temp tables that need to be dropped at the end of the session are truncated first, rather than the connection re-logging in an ora-14452 error (basically indicating that a temp table can only be dropped after being truncated, or if the current session ends) does not automatically trigger a re-login any more updated cachemanager to use directory::read() directly instead of directory::getChildName(index) added cache and opencache commands to sqlrsh made cache ttl a 64-bit number added enabled="yes"/"no" parameter to logger modules updated odbc connection code to use new/delete and rudiments methods rather than malloc/free and native calls retired Ruby DBI driver fixed command line client crash when using -id "instance" with an instance that uses authtier="database" fixed bugs that could make reexecuted db2 selects fail and cause a database re-login loop tweaked spec file to remove empty directories on uninstall fixed typo that could sometimes cause a listener crash postgresql and mdbtools return error code of 1 rather than 0 for all errors now tweaked odbc driver to work with Oracle Heterogeneous Agent (dblinks) fixed bugs related to autocommit with db's that support transaction blocks implemented the 
ODBC driver-manager dialog for windows updated windows installer to install ODBC registry settings ODBC driver copies references now fixed various bugs in sqlrconfigfile that caused sqlr-start with no -id to crash or behave strangely sometimes refactored build process to use nmake and be compatible with many different versions of MS Visual Studio updated the slow query logger to show the date/time that the query was executed consolidated c, c++ and server source/includes down a few levels implemented column-remapping for get db/table/column commands to enable different formats for mysql, odbc, etc. odbc connection correctly returns database/table lists now added support for maxselectlistsize/maxitembuffersize to MySQL connection updated mysql connection to fetch blob columns in chunks and not be bound by maxitembuffersize fixed a misspelling in sqlrelay.dtd swapped order of init directory detection, looking for /etc/init.d ahead of /etc/rc.d/init.d to resolve conflict with dkms on SuSE Enterprise C# api and tests compile and work under Mono on unix/linux now sqlr-start spawns a new window on Windows now added global temp table tracking for firebird added droptemptables parameter for firebird added globaltemptables parameter for oracle and firebird updated mysql connection to allow mysql_init to allocate a mysql struct on platforms that support mysql_init, rather than using a static struct fixed subtle noon/midnight-related bugs in date/time translation updated mysql connection to get affected rows when not using the statement api updated mysql connection not to use the statement API on windows, for now disabled mysql_change_user, for now fixed blob-input binds on firebird 0.58 - updated spawn() calls to detach on windows added support for sqlrelay.conf.d removed support for undocumented ~/.sqlrelay.conf fixed detection of oracle jdk 7 and 8 on debian and ubuntu systems added ini files for PHP and PDO modules added resultsetbuffersize, dontgetcolumninfo and 
nullsasnulls connect string variables to the PHP PDO driver refactored sqlr-status and removed dependency on libsqlrserver cleaned up and refactored server-side classes quite a bit fixed a bug where sqlrsh was losing the timezone when binding dates server-devel headers are now installed removed backupschema script moved triggers, translations, resultsettranslations and parser into separate project blobs work when using fake input binds now replaced sqlr-stop script with a binary (for Windows) preliminary support for server components on Windows sessionhandler="thread" is now forced on Windows added various compile flags for clang's aggressive -Wall added support for sybase 16.0 removed unnecessary -lsybdb/-lsybdb64 for sybase 15+ fixed PQreset, PQresetStart, PQresetPoll in postgresql drop-in replacement lib added debug-to-file support to PHP PDO driver fixed subtle row-fetch bug in sybase/freetds drivers that could cause the total row count to be set to garbage fixed support for older versions of perl (5.00x) fixed a bug in the DB2 connection that caused blob input binds to be truncated at the first null added support for binding streams to output bind blobs in the PHP PDO driver updated PHP PDO guide with notes about bind variable formats integrated Samat Yusup's dbh driver methods for PHP PDO added stmt driver methods for suspending/resuming result sets to the PHP PDO driver added row cache to mysql drop-in replacement library to fix issues on systems with 32-bit pointers fixed subtle db2 output bind bug perl DBI buffers the entire result set by default now implemented an ext_SQLR_Debug database handle attribute for perl DBI added support for type, length, precision, scale bind variable attributes in perl DBI output bind clobs and blobs work in perl DBI now added support for perl DBI ParamValues, ParamTypes and ParamArrays attributes tweaked the odbc driver so it works with the jdbc-odbc bridge and jmeter added custom db/statement attributes to perl DBI for 
DontGetColumnInfo, GetNullsAsEmptyStrings and ResultSetBufferSize added note about JDBC-ODBC bridge removal in Oracle Java 8 made threaded listener the default tweaks to sqlr-connection/sqlr-scaler processes to deal with lack of SIGCHLD/waitpid() on windows the signal on semaphore 2 is now undone manually when sqlr-connections shut down and doesn't rely on semaphore undos for normal operation subtly tweaked freeing of Oracle column-info buffers to work around a crash that could occur after using a cursor bind
2015-05-20Record error message that prompted MAKE_JOBS_SAFE=noabs1-1/+5
2015-05-20Add MAKE_JOBS_SAFE=noabs1-1/+2