Introduces the `FLUSH HASHICORP_KEY_MANAGEMENT_CACHE` command to clear the
cached keys in the HashiCorp Key Management plugin, enabling rotation of
encryption keys without needing to restart the server.
The new `INFORMATION_SCHEMA.HASHICORP_KEY_MANAGEMENT_CACHE` table lists
the key id and key version from the latest version cache. The table's
content can be viewed using `SHOW HASHICORP_KEY_MANAGEMENT_CACHE` or
queried directly.
Executing the `FLUSH` command requires the `RELOAD` privilege, and access to
the INFORMATION_SCHEMA table requires the `PROCESS` privilege.
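A minimal usage sketch (the exact column names of the INFORMATION_SCHEMA table
are not listed here, so SELECT * is used):

  -- Requires the PROCESS privilege: inspect the cached key ids and versions.
  SELECT * FROM INFORMATION_SCHEMA.HASHICORP_KEY_MANAGEMENT_CACHE;
  SHOW HASHICORP_KEY_MANAGEMENT_CACHE;

  -- Requires the RELOAD privilege: drop the cached keys so that rotated
  -- encryption keys are re-read from Vault without a server restart.
  FLUSH HASHICORP_KEY_MANAGEMENT_CACHE;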
Bugfix (squashed):
MDEV-38111: SIGSEGV when multiple servers use the same Vault KV storage for encrypted tables
Problem:
A data race between InnoDB background threads reading the cached keys
and the thread executing the FLUSH command, which cleared the cache without
acquiring a lock. This unsynchronized memory write caused InnoDB threads that
were concurrently reading the cache to access freed memory, leading to
a crash.
Fix:
Acquire the lock before clearing the latest version cache. This
ensures the cache-clearing operation is serialized, preventing
concurrent access and resolving the data race.
With WolfSSL, the plugin is statically compiled and enabled,
and defaults to autogenerating SSL keys, which was left unimplemented.
Thus, it prints [ERROR] messages on every startup.
Fixed by removing a couple of ifdefs. Allowed tcp_nossl to run on
Windows.
As WolfSSL is missing some APIs that take FILE*, use the related APIs that
accept a BIO instead, i.e.:
- BIO_new_file() instead of fopen()
- BIO_free() instead of fclose()
- PEM_write_bio_PrivateKey() instead of PEM_write_PrivateKey()
- etc.
A note about BIO and error reporting:
BIO_new_file() sets errno, therefore the FILE_ERROR macro
produces the expected error messages, while SSL_ERROR unfortunately
produces something incomprehensible. Thus, FILE_ERROR is left in place
where it was used previously (fopen errors).
Curiously, removing the FILE* APIs also solves another bug, MDEV-37343,
where the server on Windows dies with an obscure message when the plugin
tries to use such a function. OpenSSL_Applink is supposed to be the official
solution for such problems, but I could not get it to work properly, no
matter how much I tried. Avoiding FILE* APIs in the first place works best.
Set hashicorp_key_management_cache_version_timeout to 60s by default.
Increase hashicorp_key_management_cache_timeout to 24h by default,
because key values should never change, but we don't want to remove
the variable for compatibility reasons.
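For reference, the effective values can be inspected at runtime (a sketch;
only the variable names come from this commit, the unit of the stored values
is whatever the plugin defines):

  SHOW GLOBAL VARIABLES LIKE 'hashicorp_key_management_cache%timeout';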
Commit 1d80e8e changed lock_operations from a mutex to a rwlock but didn't
update the performance schema instrumentation setup accordingly.
Update the PS setup so the lock is correctly instrumented in
performance_schema.rwlock_instances.
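A quick way to verify the instrumentation (a sketch; the exact instrument
name matched by the pattern is an assumption based on the lock's name):

  SELECT NAME, WRITE_LOCKED_BY_THREAD_ID, READ_LOCKED_BY_COUNT
  FROM performance_schema.rwlock_instances
  WHERE NAME LIKE '%lock_operations%';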
All new code of the whole pull request, including one or several files
that are either new files or modified ones, are contributed under the
BSD-new license. I am contributing on behalf of my employer Amazon Web
Services, Inc.
Thanks to Sergei Golubchik for the idea and a working prototype of this patch.
Problem:
Inside these methods:
- Item_splocal_assoc_array_element::append_for_log()
- Item_splocal_assoc_array_element_field::append_for_log()
an expression like this:
first_names(nick || CONVERT(' ' USING ucs2))
was converted to:
first_names(nick || CONVERT(CONVERT(' ' USING ucs2) USING latin1))
i.e. an automatic CONVERT(... USING latin1) was added, as expected.
At the end of append_for_log(), the destructor of
Item_change_list_savepoint_raii restored the Item changes, so
the automatically added CONVERT(..USING latin1) was removed from
the tree and the tree changed back to:
first_names(nick || CONVERT(' ' USING ucs2))
But all Item_splocal_assoc_array_element* Items were left in the fixed state.
Later, during the INSERT, the concatenation of the SP variable `nick`
and the space character in UCS2 evaluated to 'Michael\x00\x20' instead
of the expected 'Michael\x20', so the assoc array
element with the given key was not found.
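A minimal reproduction sketch of the scenario above (the declarations, table
and element type are assumptions; only the key expression comes from the
original report):

  SET sql_mode=ORACLE;
  CREATE TABLE t1 (a INT);
  DELIMITER /
  CREATE PROCEDURE p1 AS
    TYPE names_t IS TABLE OF INT INDEX BY VARCHAR2(30);
    first_names names_t;
    nick VARCHAR2(20) := 'Michael';
  BEGIN
    first_names('Michael ') := 1;
    -- Before the fix the lookup key evaluated to 'Michael\x00\x20'
    -- instead of 'Michael\x20', so the element was not found.
    INSERT INTO t1 VALUES (first_names(nick || CONVERT(' ' USING ucs2)));
  END;
  /
  DELIMITER ;
  CALL p1;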
Note:
Item_change_list_savepoint_raii was needed to make this DBUG_ASSERT in
sp_lex_keeper::reset_lex_and_exec_core() happy:
DBUG_ASSERT(thd->Item_change_list::is_empty());
The fix:
- Removing Item_change_list_savepoint_raii from the implementations of
Item_splocal_assoc_array_element*::append_for_log()
Removing the class Item_change_list_savepoint_raii as it's not needed
any more.
- Relaxing the DBUG_ASSERT() in sp_lex_keeper::reset_lex_and_exec_core() to:
DBUG_ASSERT(dbug_rqp_are_fixed(instr) || thd->Item_change_list::is_empty());
where dbug_rqp_are_fixed() is a new debug function to check that
all Rewritable_query_parameter's in instr::free_list are fixed.
Fixing a confusing error message:
Unknown column 'assoc_array_var' in 'unknown_method'
to a clearer:
FUNCTION/PROCEDURE assoc_array_var.unknown_method does not exist
Fixes self-assignment issues, e.g.
assoc(1):= assoc(1);
assoc(1):= assoc(2);
assoc(1).field:= assoc(1).field;
assoc(1).field:= assoc(2).field;
assoc:= assoc;
etc.
Fixes NULL element handling and NULL element field handling, e.g.
assoc(1):= NULL;
assoc(1).f:= NULL;
Fixes a crash when a missing assoc array element is accessed in the RHS of an assignment operation.
Bulk tests for self-assignment and subselect tests for associative arrays contributed by Alexander Barkov.
Additional small cleanup (forgotten in the previous commit):
- Item_splocal_assoc_array_element_field::fix_fields()
Fixing the error message for:
SELECT marks('a').name; -- marks is a TABLE OF VARCHAR(10)
From: ERROR 42S22: Unknown column '1' in 'SELECT'
To: ERROR 42S22: Unknown column 'name' in 'marks'
Fixing wrong results of packed_col_length(): it returned an incorrect result for:
- CHAR(N)
- BINARY(N)
- MEDIUMBLOB
- LONGBLOB
Note, this method was not used. Now it's used by the assoc array data type.
Details:
- Cleanup: adding "const" qualifier to:
* Field::pack()
* Field::max_packed_col_length()
* Field::packed_col_length()
- Removing the "max_length" argument from Field::pack().
The caller passed either UINT_MAX or Field::max_data_length() to it.
So it was never used to limit the destination size and only
made the code more complicated and confusing.
- Removing arguments from packed_col_length().
Now it calculates the value for the Field, using its "ptr",
and assuming the entire value will be packed,
without limiting the destination size.
- Fixing Field_blob::packed_col_length(). It worked fine only for
TINYBLOB and BLOB, and did not work for MEDIUMBLOB and LONGBLOB.
- Overriding Field_char::packed_col_length().
Using the inherited method was wrong - it implemented
variable length data behavior.
- Overriding Field_string::max_data_length(). It was also incorrect.
Implementing fixed size behavior.
Moving the old implementation of Field_longstring::max_data_length()
to Field_varstring::max_data_length().
- Fixing class StringPack:
* Removing the "length" argument from StringPack::packed_col_length().
It now assumes that the packed length for the entire data buffer
is needed, without limiting the destination size.
* Fixing StringPack::packed_col_length() to implement fixed data size behavior.
It erroneously implemented VARCHAR-style behavior
(assumed that the length was stored in the leading 1 or 2 bytes).
* Adding a helper method length_bytes(). Reusing it in packed_col_length()
and max_packed_col_length().
* Moving a part of the method pack() into a new method trimmed_length().
Reusing it in pack() and packed_col_length().
* Rewriting the code in trimmed_length() in a more straightforward way.
It was hard to understand what it was doing.
Adding a comment.
- Adding a test sp-assoc-array-pack-debug.test covering packed_col_length()
and pack() for various data types.
- Checking that the key expression is compatible with the INDEX BY data type
for assignment in expressions:
assoc_array_variable(key_expr)
assoc_array_variable(key_expr).field
in all contexts: SELECT, assignment target, INTO target.
Raising an error in case it's not compatible.
- Disallowing non-constant expressions as a key,
as the key is evaluated during the fix_fields() time.
- Disallowing stored functions as a key:
assoc_array(stored_function())
assoc_array(stored_function()).field
The underlying MariaDB code is not ready to call a stored function
during the fix_fields() time. This will be fixed in a separate MDEV.
- Removing Assoc_array_data's move constructor.
Using the usual constructor instead.
- Setting m_key.thread_specific and m_value.thread_specific to true
in the Assoc_array_data constructor. This is needed to get assoc array
element data counted by the @@session.memory_used status variable.
Adding DBUG_ASSERTs to make sure the thread_specific flag never
disappears in Assoc_array_data members.
- Removing my_free(item) from Field_assoc_array::element_by_key.
It was a leftover from an earlier patch version.
In the current patch version all Items behind an assoc array are
created on a mem_root. It's wrong to use my_free() with them.
- Adding a helper method Field_assoc_array::assoc_tree_search()
- Fixing assoc_array_var.delete() to work as a procedure
rather than a function. It does not need SELECT/DO any more
(see the sketch after this list).
- Fixing the crash in a few ctype_xxx tests, caused by the grammar change.
- Fixing compilation failure on Windows
- Adding a new method LEX::set_field_type_udt_or_typedef()
and removing duplicate code from sql_yacc.yy
- Renaming the grammar rule field_type_all_with_composites to
field_type_all_with_typedefs
- Removing the grammar rule assoc_array_index_types.
Changing the grammar to "INDEX_SYM BY field_type".
Removing the grammar rule field_type_all_with_record.
Allow field_type_all_with_typedefs as an assoc array element.
Catching wrong index and element data types has been moved to
Type_handler_assoc_array::Column_definition_set_attributes().
It raises an SQL error on things like:
* assoc array of assoc arrays in TABLE OF
* indexing by a non-supported type in INDEX BY
- Removing four methods:
* sp_type_def_list::type_defs_add_record()
* sp_type_def_list::type_defs_add_composite2()
* sp_pcontext::type_defs_declare_record()
* sp_type_def_list::type_defs_declare_composite2()
Adding two methods instead:
* sp_type_def_list::type_defs_add()
* sp_pcontext::type_defs_add()
This makes it possible to get rid of the duplicate code detecting data type
declarations with the same name in the same sp_pcontext frame.
- Adding new methods:
* LEX::declare_type_assoc_array()
* LEX::declare_type_record()
They create a type-specific sp_type_def_xxx and then call the generic
sp_pcontext::type_defs_add().
- m_key_def.sp_prepare_create_field() inside
Field_assoc_array::create_fields() is now called for all key data types
(not only for integers)
- Removing the assignment of key_def->charset in
Type_handler_assoc_array::sp_variable_declarations_finalize().
The charset is now evaluated in m_key_def.sp_prepare_create_field().
- Fixing Item_assoc_array::get_key() to set the character set of the "key"
to utf8mb3 instead of binary
- Fixing Field_assoc_array::copy_and_convert_key() to set the key length
limit in terms of the character length as specified in
INDEX BY VARCHAR(N), instead of octet length. This is needed to make
keys with multi-byte characters work correctly.
Also, it now raises different errors depending on the reason for the
key conversion failure:
* ER_INVALID_CHARACTER_STRING
* ER_CANNOT_CONVERT_CHARACTER
- Changing the prototype for Type_handler_composite::key_to_lex_cstring() to
virtual LEX_CSTRING key_to_lex_cstring(THD *thd,
const sp_rcontext_addr &var,
Item **key,
String *buffer) const;
* Now it returns a LEX_CSTRING, instead of getting it as an out parameter.
* Gets an sp_rcontext_addr instead of "name" and "def"
* Gets a String buffer which can be used to be passed to val_str(),
or for character set conversion purposes.
- Removing Field_assoc_array::m_key_def, as all required information
is available from Field_assoc_array::m_key_field.
In Field_assoc_array::create_fields(), m_key_def is turned into a local
variable key_def.
- Fixing Field_assoc_array::copy_and_convert_key() to follow the MariaDB coding
style: only constants can be passed by reference, non-constants should
be passed by pointer.
- Adding DBUG_ASSERTs into Type_handler_assoc_array::get_item()
and Type_handler_assoc_array::get_or_create_item() that the passed
key in "name" is well formed according to the charset of INDEX BY.
- Changing the error ER_TOO_LONG_KEY to ER_WRONG_STRING_LENGTH.
The former prints the length limit in bytes, which is not applicable
to INDEX BY values, because their limit is in characters.
Also, the latter is more verbose.
- Fixing the problem that these wrong uses of an assoc array variable:
BEGIN
assoc_var;
assoc_var(1);
END;
raised a weird error message:
ERROR 1054 (42S22): Unknown column 'assoc_var' in '(null)'
Now a more readable parse error is raised.
- Adding a "Duplicate key" warning for the cases when assigning
between two assoc arrays rejects some records due to different
collations in their INDEX BY key definitions.
- Disallow INDEX BY propagation from VARCHAR to TEXT.
The underlying code cannot handle TEXT.
Adding tests.
- Adding a helper class StringBufferKey to pass to val_str() when
a key value is evaluated.
Fixing all val_str() calls to val_str(&buffer), as the former is
not desirable.
- Fixing a wrong use of args[0]->null_value in
Item_func_assoc_array_exists::val_bool()
- Fixing a problem that using TABLE OF TEXT crashed the server.
Thanks to Iqbal Hassan for the proposed patch.
- Changes in Qualified_ident:
* Fixing the Qualified_ident constructors to get all parts as
Lex_ident_cli_st, rather than the first part as Lex_ident_cli_st
with the following parts as Lex_ident_sys.
This makes the code more symmetric.
* Fixing the grammar in sql_yacc.yy accordingly.
* Fixing the data type storing the position in the client query
from "const char *" to Lex_ident_cli.
* Adding a new method Qualified_ident::is_sane().
It allows reducing the code size in sql_yacc.yy.
Thanks to Iqbal Hassan for the idea.
- Replacing qs_append() with append_ulonglong() in:
* Item_method_func::print()
* Item_splocal_assoc_array_element::print()
* Item_splocal_assoc_array_element_field::print()
These methods do not use reserve()/alloc(), so calling qs_append()
was wrong and caused a crash.
- Changing the output formats of these methods:
* Item_splocal_assoc_array_element::print()
* Item_splocal_assoc_array_element_field::print()
so that they do not print the key twice.
Also moving the `@123` part (the variable offset) immediately
after the variable name and before the `[key]` part.
- Fixing a memory leak that happened when trying to insert a duplicate
key into an assoc array. Also adding a new "THD *" parameter to
Field_assoc_array::insert_element(). Thanks to Iqbal Hassan for the fix.
Adding a test into sp-assoc-array-ctype.test.
- In Field_assoc_array::create_fields: m_element_field->field_name is now
set for all element data types (not only for records).
This fixed a wrong variable name in warnings. Adding tests.
- Adding tests:
* Adding tests for assoc array elements in UNIONs.
* Copying from an assoc array with a varchar key
to an assoc array with a shorter varchar key.
* A relatively big associative array.
* Memory usage for x86_64.
* Package variables as assoc array keys.
* Character set conversion.
* TABLE OF TEXT.
* TABLE OF VARCHAR(>64k bytes) propagation to TABLE OF TEXT.
* TEXT element fields in an array of records.
* VARCHAR->TEXT propagation in elements in an array of records.
* Some more tests
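As referenced from the assoc_array_var.delete() item above, a small sketch of
the procedure-style call (the declarations and exact call forms are
assumptions; the no-argument form removing all elements follows the feature
description below):

  SET sql_mode=ORACLE;
  DELIMITER /
  CREATE PROCEDURE p_delete AS
    TYPE marks_t IS TABLE OF VARCHAR2(10) INDEX BY INTEGER;
    marks marks_t;
  BEGIN
    marks(1) := 'A';
    marks(2) := 'B';
    marks.DELETE(1);  -- remove a single element, no SELECT/DO wrapper needed
    marks.DELETE;     -- remove all elements
  END;
  /
  DELIMITER ;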
This patch adds support for associative arrays in stored procedures
for sql_mode=ORACLE.
The syntax follows Oracle's PL/SQL syntax for associative arrays:
TYPE assoc_array_t IS TABLE OF VARCHAR2(100) INDEX BY INTEGER;
or
TYPE assoc_array_t IS TABLE OF record_t INDEX BY VARCHAR2(100);
where record_t is a record type.
The following functions were added for associative arrays:
- COUNT - Retrieve the number of elements within the array
- EXISTS - Check whether the given key exists in the array
- FIRST - Retrieve the first key in the array
- LAST - Retrieve the last key in the array
- PRIOR - Retrieve the key before the given key
- NEXT - Retrieve the key after the given key
- DELETE - Remove the element with the given key or remove all elements
if no key is given
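A usage sketch of these functions under sql_mode=ORACLE (the declarations and
exact call forms are assumptions, and NEXT is assumed to return NULL past the
last key; the method names come from the list above):

  SET sql_mode=ORACLE;
  DELIMITER /
  CREATE PROCEDURE p_walk AS
    TYPE ages_t IS TABLE OF INTEGER INDEX BY VARCHAR2(20);
    ages ages_t;
    k VARCHAR2(20);
  BEGIN
    ages('Alice') := 30;
    ages('Bob') := 25;
    SELECT ages.COUNT, ages.EXISTS('Alice'), ages.FIRST, ages.LAST;
    -- Walk the keys in order using FIRST/NEXT.
    k := ages.FIRST;
    WHILE k IS NOT NULL LOOP
      SELECT k, ages(k);
      k := ages.NEXT(k);
    END LOOP;
  END;
  /
  DELIMITER ;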
The arrays/elements can be initialized with the following methods:
- Constructor
i.e. array:= assoc_array_t('key1'=>1, 'key2'=>2, 'key3'=>3)
- Assignment
i.e. array(key):= record_t(1, 2)
- SELECT INTO
i.e. SELECT x INTO array(key)
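Putting the three initialization forms together (a sketch; the record type
declaration syntax and the table/column names are assumptions):

  SET sql_mode=ORACLE;
  CREATE TABLE t1 (x INT);
  INSERT INTO t1 VALUES (42);
  DELIMITER /
  CREATE PROCEDURE p_init AS
    TYPE rec_t IS RECORD (a INT, b INT);
    TYPE assoc_array_t IS TABLE OF INT INDEX BY VARCHAR2(100);
    TYPE rec_array_t IS TABLE OF rec_t INDEX BY VARCHAR2(100);
    arr1 assoc_array_t;
    arr2 rec_array_t;
  BEGIN
    -- Constructor
    arr1 := assoc_array_t('key1' => 1, 'key2' => 2, 'key3' => 3);
    -- Assignment of a single element
    arr2('key') := rec_t(1, 2);
    -- SELECT INTO an element
    SELECT x INTO arr1('key4') FROM t1;
  END;
  /
  DELIMITER ;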
TODOs:
- Nested tables are not supported yet.
i.e. TYPE assoc_array_t IS TABLE OF other_assoc_array_t INDEX BY INTEGER;
- Associative array comparisons are not supported yet.
* clarify the help text for --server-audit-file-rotate-size
* initialize cn->sync_statement, otherwise a new connection randomly syncs
* and DON'T SPAM syslog
main/statistics_json.result is updated for f8ba5ced55 (MDEV-36099)
The test uses 'delete from t1' in many places and then populates
the table again. The natural order of rows in a MyISAM table is well
defined and the test was implicitly relying on that.
Before f8ba5ced55, DELETE was deleting rows one by one, using
ha_myisam::delete_row(), because the connection was stuck in RBR mode.
This caused rows to be shown in reverse insertion order (because of
the delete link list).
MDEV-36099 fixes this bug and the server now correctly uses
ha_myisam::delete_all_rows(). This makes rows appear in
insertion order, as expected.
Let's always disconnect a user's connections before dropping that user.
MariaDB is traditionally very tolerant of active connections
belonging to a dropped user, which isn't the case for most other databases.
Let's avoid unintentionally spreading incompatible behavior
and disconnect before the drop,
except in cases where a test specifically tests such behavior.
Avoid accessing MDL_lock::key directly from outside of the MDL_lock class;
use MDL_lock::get_key() instead.
This is part of broader cleanup, which aims to make large part of
MDL_lock members private. It is needed to simplify further work on
MDEV-19749 - MDL scalability regression after backup locks.
Nullability is decided in two stages:
1. Based on argument NULL-ness
Problem:
- COALESCE currently uses the generic logic "the result of a function
is nullable if any of the arguments is nullable", which is wrong here.
- IFNULL sets nullability using the second argument alone, which incorrectly
marks the result as nullable even when the first argument is not nullable.
Fix:
- The result of COALESCE and IFNULL is marked nullable only if all arguments
are nullable.
2. Based on type conversion safety of fallback value
Problem:
- The generic `Item_hybrid_func_fix_attributes` logic would mark the
function's result as nullable if any argument involved a type
conversion that could yield NULL.
Fix:
- For COALESCE and IFNULL, the result is marked NOT NULL if the first
non-null argument can be safely converted to the function's target return
type.
- For other functions, if any argument's conversion to the target type could
result in NULL, the function is marked nullable.
Tests included in `mysql-test/main/func_hybrid_type.test`
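A sketch of the intended effect on column metadata (the tables are made up;
the expected nullability follows the rules above):

  CREATE TABLE t1 (a INT NULL, b INT NOT NULL);
  CREATE TABLE t2 AS
    SELECT COALESCE(a, b)    AS c1,  -- b is NOT NULL          -> c1 expected NOT NULL
           IFNULL(a, 0)      AS c2,  -- constant fallback      -> c2 expected NOT NULL
           COALESCE(a, NULL) AS c3   -- all arguments nullable -> c3 stays nullable
    FROM t1;
  SHOW CREATE TABLE t2;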
Fix the AWS SDK build; it has changed substantially since the plugin was
introduced. There is now a bunch of intermediate C libraries, aws-cpp-crt
and others, and for static linking the link dependencies must be declared.
Also support the AWS C++ SDK in the vcpkg package manager.