Search_context is supposed to increase the refcnt of the context
it keeps the pointer to. It was doing so for MHNSW_Share, but not
for MHNSW_Trx that was created inside mhnsw_read_next().
This meant MHNSW_Trx could've been reset (when it overflowed),
even though Search_context still had results inside it.
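A hypothetical sketch of the fix (names and members simplified; not
the actual mhnsw code): whichever context the search keeps a pointer
to, MHNSW_Share or MHNSW_Trx, its refcnt must be taken.

    #include <atomic>

    struct MHNSW_Context               // base of MHNSW_Share and MHNSW_Trx
    {
      std::atomic<int> refcnt{1};
      void acquire() { refcnt++; }
      void release() { if (--refcnt == 0) delete this; }
      virtual ~MHNSW_Context() = default;
    };

    struct Search_context
    {
      MHNSW_Context *ctx;
      explicit Search_context(MHNSW_Context *c) : ctx(c)
      {
        ctx->acquire();                // was missing for the MHNSW_Trx case
      }
      ~Search_context() { ctx->release(); }
    };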
ER_DUP_ENTRY only rolls back the current statement; it doesn't
abort the transaction. Thus MHNSW_Trx has to register for the
"statement transaction" and support "statement rollback".
This patch introduces loop unrolling by a factor of 4 in the
dot_product() function used in vector-based distance calculations.
The goal is to improve SIMD utilization and overall performance during
high-throughput vector operations, particularly in indexing and search
routines that rely on this function.
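A generic C++ sketch of the factor-4 unrolling idea (illustrative
signature; the actual change lives in the PowerPC-specific code path):

    #include <cstddef>

    // four independent accumulators let the compiler keep four
    // multiply-add chains in flight instead of one serial chain
    static float dot_product(const float *v1, const float *v2, size_t len)
    {
      float s0= 0, s1= 0, s2= 0, s3= 0;
      size_t i= 0;
      for (; i + 4 <= len; i+= 4)
      {
        s0+= v1[i+0] * v2[i+0];
        s1+= v1[i+1] * v2[i+1];
        s2+= v1[i+2] * v2[i+2];
        s3+= v1[i+3] * v2[i+3];
      }
      for (; i < len; i++)                 // scalar tail
        s0+= v1[i] * v2[i];
      return (s0 + s1) + (s2 + s3);
    }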
Observations from benchmarking (ann-benchmark):
- Query throughput (QPS) improved by 4–10% across datasets.
- Indexing time reduced by 22–28%.
- Loop unrolling factors of 8 or 16 yielded similar performance to
factor-4 but made the code less readable. Hence, a factor of 4 was
chosen to maintain a balance between performance and code clarity.
This change is architecture-specific (PowerPC) and should not
introduce any behavioral regressions or side effects in unrelated
parts of the codebase.
Signed-off-by: Manjul Mohan <manjul.mohan@ibm.com>
* skip subdist optimization for vectors shorter than 2*192, as before
* for longer vectors do not enable the optimization at first, but
still calculate the extrapolated distance and accumulate statistics
of extrapolated_distance/true_distance ratios.
* calculated for every distance_greater_than() call
* accumulated using the parallel Welford's online algorithm,
tracking only M1 and M2; experiments show that higher-order
central moments are not needed here (see the sketch after this list)
* stored in MHNSW_Share
* after accumulating 10000 (best by test) data points, enable the
subdist optimization if the standard deviation of the ratios (as above)
is below 0.05; this distinguishes applicable datasets from
inapplicable ones with a >99.9% probability
* to be on the safe side, keep calculating statistics (it's cheap)
and disable subdist optimization again if standard deviation
grows above 0.05
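A sketch of the accumulated statistics (the standard Welford/Chan
formulas; names are illustrative, the real counters live in
MHNSW_Share):

    #include <cmath>

    struct RatioStats
    {
      double n= 0, mean= 0, M2= 0;   // count, M1, sum of squared deviations

      void add(double x)             // one extrapolated/true distance ratio
      {
        n+= 1;
        double delta= x - mean;
        mean+= delta / n;
        M2+= delta * (x - mean);
      }

      void merge(const RatioStats &o)  // parallel (Chan et al.) combine
      {
        if (o.n == 0) return;
        double total= n + o.n, delta= o.mean - mean;
        mean+= delta * o.n / total;
        M2+= o.M2 + delta * delta * n * o.n / total;
        n= total;
      }

      double stddev() const { return n > 1 ? std::sqrt(M2 / n) : 0.0; }
    };

With this, the rule above becomes: once n reaches 10000, enable the
subdist optimization while stddev() stays below 0.05.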
* in addition to X.distance_to(Y) operation, introduce
X.distance_greater_than(Y,Z) to be used like
- if (X.distance_to(Y) > Z)
+ if (X.distance_greater_than(Y,Z) > Z)
here we can afford to calculate the distance approximately,
as long as it's greater than Z
* calculate the distance approximately, by looking only at the
first 192 dimensions and extrapolating (see the sketch after this list)
* if it's larger than Z*1.05 (safety margin), assume |X-Y| > Z too
and just return the extrapolated distance
* 192 and 1.05 are best by test
* the optimization is obviously not applicable to vectors shorter than
192 dimensions; let's not apply it to vectors shorter than 2*192
dimensions either, because it needs to interrupt the SIMD calculations
and compute the intermediate distance, which is not cheap
* this works reasonably well for dbpedia-openai and for randomized
(all vectors multiplied by a random orthogonal matrix) variants
of gist and *mnist, but completely destroys recall for
original (non-randomized) gist and *mnist.
* up to 50% speedup for dbpedia-openai-500k, 40% faster run for gist-960
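A sketch of the subdist idea (illustrative; assumes a distance that
accumulates per-dimension terms, like squared euclidean, and a
hypothetical helper squared_l2()):

    #include <cstddef>

    // sum of squared differences over dimensions [0, n)
    float squared_l2(const float *v1, const float *v2, size_t n);

    float distance_greater_than(const float *v1, const float *v2,
                                size_t len, float threshold)
    {
      // the caller guarantees len >= 2*192, see above
      float partial= squared_l2(v1, v2, 192);   // first 192 dimensions
      float extrapolated= partial * len / 192;
      if (extrapolated > threshold * 1.05f)     // 1.05 = safety margin
        return extrapolated;                    // surely > threshold
      return squared_l2(v1, v2, len);           // exact distance
    }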
don't abuse TABLE::used_stat_records; introduce a dedicated
structure Stats for maintaining various graph-related
runtime statistics, and keep it in MHNSW_Share.
also, move diameter and ef_power there
Avoid accessing MDL_lock::key from outside of the MDL_lock class directly;
use MDL_lock::get_key() instead.
This is part of a broader cleanup, which aims to make a large part of
MDL_lock members private. It is needed to simplify further work on
MDEV-19749 - MDL scalability regression after backup locks.
never estimate that a graph search will visit more nodes than there
are in the graph. In fact, let's use the graph size reduced by 30% as
the cap: it'll increase the false positive rate of the bloom filter
by 2% when visiting the whole graph and doesn't affect recall noticeably.
we need to read the shared graph size under a lock, so let's store it
in the thread-local, otherwise unused TABLE::used_stat_records member.
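An illustrative shape of the capped estimate (hypothetical names):

    #include <algorithm>
    #include <cstddef>

    size_t bloom_capacity(size_t estimated_visits, size_t graph_size)
    {
      // cap at 70% of the node count; costs ~2% extra false positives
      // in the worst case of visiting the whole graph
      return std::min(estimated_visits, graph_size - graph_size * 3 / 10);
    }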
This patch optimises the dot_product function by leveraging
vectorisation through SIMD intrinsics. This transformation enables
parallel execution of multiple operations, significantly improving the
performance of dot product computation on supported architectures.
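A minimal sketch of the approach with VSX intrinsics (illustrative;
the actual patch differs in details such as unrolling and alignment):

    #include <altivec.h>
    #include <cstddef>

    static float dot_product(const float *v1, const float *v2, size_t len)
    {
      __vector float acc= vec_splats(0.0f);    // four partial sums
      size_t i= 0;
      for (; i + 4 <= len; i+= 4)              // 4 floats per 128-bit register
        acc= vec_madd(vec_xl(0, v1 + i),       // fused multiply-add
                      vec_xl(0, v2 + i), acc);
      float sum= acc[0] + acc[1] + acc[2] + acc[3];
      for (; i < len; i++)                     // scalar tail
        sum+= v1[i] * v2[i];
      return sum;
    }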
The original dot_product function does undergo auto-vectorisation when
compiled with -O3. However, performance analysis has shown that the
newly optimised implementation performs better on Power10 and achieves
comparable performance on Power9 machines.
Benchmark tests were conducted on both Power9 and Power10 machines,
comparing the time taken by the original (auto-vectorized) code and the
new vectorised code. GCC 11.5.0 with -O3 on RHEL 9.5 was used.
The benchmarks were performed using a sample test code with
a vector size of 4096 and 10⁷ loop iterations. Here are the average
execution times (in seconds) over multiple runs:
Power9:
Before change: ~16.364 s
After change: ~16.180 s
Performance gain is modest but measurable.
Power10:
Before change: ~8.989 s
After change: ~6.446 s
Significant improvement, roughly 28–30% faster.
Signed-off-by: Manjul Mohan <manjul.mohan@ibm.com>
a start node in the shared context must be atomically assigned
to a valid and fully initialized node, otherwise a concurrent
thread might use it before it's safe
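A minimal sketch of the required ordering (std::atomic is used here
for illustration):

    #include <atomic>

    struct FVectorNode;                           // opaque here
    std::atomic<FVectorNode*> start_node{nullptr};

    void publish_start_node(FVectorNode *node)
    {
      // ... fully initialize *node first ...
      start_node.store(node, std::memory_order_release);
    }

    FVectorNode *get_start_node()
    {
      // acquire pairs with the release above: a thread that sees the
      // pointer also sees the fully initialized node behind it
      return start_node.load(std::memory_order_acquire);
    }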
SIMD implementations of bloom filters and dot product calculation.
A microbenchmark shows 1.7x dot product performance improvement compared to
regular -O2/-O3 builds and 2.4x compared to builds with auto-vectorization
disabled.
Performance improvement (microbenchmark) for bloom filters is less exciting,
within the 10-30% ballpark, depending on compiler options and load.
Misc implementation notes:
CalcHash: no _mm256_shuffle_epi8(), use explicit XOR/shift.
CalcHash: no 64bit multiplication, do scalar multiplication.
ConstructMask/Query: no _mm256_i64gather_epi64, access array elements explicitly.
Query: no _mm256_movemask_epi8, accumulate bits manually.
Closes #3671
x86 builds don't use SIMD; fast math and inlining cause distances
to be quite unstable, so that
1) comparison with the threshold no longer works: the distance calculated
twice between the same two vectors comes out differently
2) a bunch of identical vectors get a non-zero distance between
them, and HNSW cross-links them with no outbound links (if there are
more than 2M identical vectors). Let's strengthen the select_neighbors
heuristic to skip neighbors that are too close to each other.
MDEV-35418 suggests a better solution for this.
now with streaming (MDEV-35032) we can no longer free MHNSW_Trx
at the end of the search. We cannot even free it at the end of
mhnsw_insert, because there can be a search running (INSERT ... SELECT).
Let's do reference counting, even though it's a thread-local object.
considering that users don't interact with MariaDB vector search directly,
but primarily use AI frameworks, we should use names familiar
to vector store connector writers and to AI framework users,
that is, the industry-standard M and ef.
mhnsw_cache_size -> mhnsw_max_cache_size
mhnsw_distance_function -> mhnsw_default_distance
mhnsw_max_edges_per_node -> mhnsw_default_m
mhnsw_min_limit -> mhnsw_ef_search
inside CREATE TABLE:
max_edges_per_node -> m
distance_function -> distance
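For example, after the rename an index is declared as (illustrative
DDL): CREATE TABLE t1 (v VECTOR(4) NOT NULL,
VECTOR INDEX (v) M=6 DISTANCE=cosine);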
with streaming implemented, mhnsw no longer needs to know
the LIMIT in advance. let's just cap it to avoid allocating
too much memory for the one-step result set
MHNSW_Trx cannot store a pointer to the TABLE_SHARE for the duration
of a transaction, because the share can be evicted from the cache any
time.
Use a pointer to the MDL_ticket instead; it is stored in the THD
and has a duration of MDL_TRANSACTION, so it won't go away.
When we need a TABLE_SHARE - on commit - we get a new one from tdc.
Normally it'll already be in the cache, so this is fast.
We don't optimize for the edge case when the TABLE_SHARE was evicted.
support SQL semantics for SELECT ... WHERE ... ORDER BY ... LIMIT
* switch from returning k nearest neighbors to returning
as many as needed, in k-neighbor chunks, with increasing distance
(see the sketch after this list)
* make search_layer() skip nodes that are closer than a threshold
* read_next keeps a search context - list of k found nodes,
threshold, ctx, etc.
* when the list of found nodes is exhausted, it repeats the search
starting from the last found nodes and the threshold
* the search context keeps ctx->refcount incremented, so ctx won't go away
* but commit_lock is unlocked between calls, so InnoDB can modify the table
* use the ctx version to detect that, and switch to MHNSW_Trx when it happens
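A hypothetical sketch of the streaming loop (simplified types;
search_from() stands in for the real search):

    #include <cstddef>
    #include <vector>

    struct Node { float dist; };

    // hypothetical: searches again starting from the given nodes,
    // skipping anything closer than the threshold
    std::vector<Node> search_from(const std::vector<Node> &from,
                                  float threshold);

    struct Search_context
    {
      std::vector<Node> chunk;   // current k results, nearest first
      size_t pos= 0;
      float threshold= 0;        // everything closer was already returned
    };

    const Node *read_next(Search_context *sc)
    {
      if (sc->pos == sc->chunk.size())            // chunk exhausted
      {
        if (!sc->chunk.empty())
          sc->threshold= sc->chunk.back().dist;   // farthest returned so far
        sc->chunk= search_from(sc->chunk, sc->threshold);
        sc->pos= 0;
        if (sc->chunk.empty())
          return nullptr;                         // graph exhausted
      }
      return &sc->chunk[sc->pos++];
    }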
bugfix:
* use the correct lock in ha_external_lock() for the graph table
* InnoDB didn't reset locks on ha_external_lock(F_UNLCK), and the previous
LOCK_X leaked into the next statement
make generosity depend on
1. M. Keep small M's fast, increase generosity for larger M's to get
better recall.
2. distance. Keep generosity small when vectors are far from the
target, increase it when the search gets closer. This allows
examining more relevant vectors without wasting time on irrelevant
ones. Particularly important with the cosine metric, where the
distance is bounded.
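A purely illustrative shape of such a dependency (both the form and
the constants are invented for this sketch; only the directions of
the dependency follow the text above):

    #include <cmath>

    float generosity(unsigned M, float dist, float dist_bound)
    {
      float by_m= 1.0f + 0.05f * std::log2((float) M);      // larger M: less greedy
      float by_d= 1.0f + 0.5f * (1.0f - dist / dist_bound); // closer: less greedy
      return by_m * by_d;
    }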
Fixes for ALTER TABLE ... ADD/DROP COLUMN, ALGORITHM=COPY.
Let quick_rm_table() remove high-level indexes along with the original table.
Avoid locking uninitialized LOCK_share for INTERNAL_TMP_TABLEs.
Don't enable bulk insert when altering a table containing a vector index.
InnoDB can't handle the situation where bulk insert is enabled for one
table but disabled for another. We can't do bulk insert on the vector
index as it currently does table updates.
into a separate transaction_participant structure.
handlerton inherits it, so handlerton itself doesn't change.
but entities that only need to participate in a transaction,
like the binlog or the online alter log, use a transaction_participant
and no longer need to pretend to be a full-blown but invisible
storage engine that doesn't support CREATE TABLE.
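A conceptual sketch of the split (member names are illustrative):

    class THD;   // server connection, opaque here

    // only what is needed to take part in a transaction
    struct transaction_participant
    {
      int (*prepare)(THD *thd, bool all);
      int (*commit)(THD *thd, bool all);
      int (*rollback)(THD *thd, bool all);
    };

    // a full storage engine inherits the hooks, so existing code
    // using handlerton doesn't change
    struct handlerton : public transaction_participant
    {
      // DDL entry points, engine flags, etc. (elided)
    };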
* fix the truncate-by-handler variant, used by InnoDB
* test that insert works after truncate, meaning the graph table was emptied
* test that the vector index size is zero after truncate in MyISAM
introduced a generosity factor that makes the search less greedy.
it dramatically improves the recall by making the search a bit slower
(for the same recall one can use half the M and smaller ef).
had to add a Queue::safe_push() method that removes one of the
furthest elements (not necessarily the furthest) from the queue
to keep it from overflowing, as sketched below.
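A sketch of the idea behind safe_push() (a bounded min-heap with the
closest element on top; illustrative, not the actual Queue code):

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    struct BoundedMinHeap
    {
      std::vector<float> heap;   // min-heap of distances
      size_t cap;

      explicit BoundedMinHeap(size_t c) : cap(c) {}

      void safe_push(float dist)
      {
        if (heap.size() == cap)
          // the last slot is a leaf, i.e. no closer than any of its
          // ancestors: overwriting it evicts "one of the furthest,
          // not necessarily the furthest" in O(log n), while finding
          // the true furthest element would cost O(n)
          heap.back()= dist;
        else
          heap.push_back(dist);
        std::push_heap(heap.begin(), heap.end(), std::greater<float>());
      }
    };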