* fix clipping of charts on small devices (eg iPhone)
* Rebuild web UI
---------
Co-authored-by: mgdigital <mgdigital@users.noreply.github.com>
Co-authored-by: Mike Gibson <mike@mgdigital.co.uk>
Also upgraded the Apollo GraphQL client in response to a security advisory about unescaped input in its compiled code. The upgrade didn't resolve the advisory, but I'm keeping it anyway; fairly sure it's a false positive!
This PR adds ordering capability in the API and web UI. There is also some refactoring of the web UI that has significantly improved performance.
To get proper ordering (particularly by seeders/leechers), the index must be reprocessed.
I'm aware that certain combinations of filters and orders can be slow, but performance is acceptable for most. For example, ordering a large filtered result by size ascending seems very slow, though descending seems okay. I've experimented with a few indexing and database tweaks without much luck, so I've decided to leave it as-is; it can possibly be addressed later, or it may be an inherent limitation of Postgres.
* Torrent search: Fix for Safari on iOS.
keyup.enter event does not seem to trigger for Safari on iOS.
Also trigger the update code on the blur event as a workaround.
This fixes https://github.com/bitmagnet-io/bitmagnet/issues/117.
* Update embedded npm content
This PR introduces a significant optimisation of aggregations (counts).
It takes advantage of the fact that a Postgres query plan can tell you the cost of a query up-front, along with a rough estimate of the count based on indexes. All count queries now have a "budget", defaulting to 5,000. If the budget is exceeded according to the query plan, then the estimate will be returned (and the UI will display an estimate symbol `~` next to the associated count), otherwise the query will be executed and an exact count will be returned.
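The budget check can be sketched roughly as follows. This is a minimal illustration, not bitmagnet's actual code: the `planEstimate` struct, `countWithBudget` helper, and `runExact` callback are all hypothetical names, and in practice the cost and row figures come from `EXPLAIN (FORMAT JSON)`.

```go
package main

import "fmt"

// planEstimate holds figures Postgres reports up-front in a query plan:
// the total cost, and a rough estimate of the result count based on indexes.
type planEstimate struct {
	TotalCost float64 // planner's cost estimate for executing the query
	PlanRows  int64   // planner's rough estimate of the count
}

// countResult is either an exact count or a planner estimate.
type countResult struct {
	Count      int64
	IsEstimate bool // when true, the UI renders a "~" next to the count
}

const defaultBudget = 5000 // cost budget before falling back to the estimate

// countWithBudget decides between an exact count and the planner's estimate.
// runExact stands in for actually executing the SELECT count(*) query.
func countWithBudget(plan planEstimate, budget float64, runExact func() int64) countResult {
	if plan.TotalCost > budget {
		// Budget exceeded: return the rough estimate from the query plan.
		return countResult{Count: plan.PlanRows, IsEstimate: true}
	}
	// Cheap enough: execute the query and return an exact count.
	return countResult{Count: runExact(), IsEstimate: false}
}

func main() {
	exact := func() int64 { return 1234 }
	cheap := countWithBudget(planEstimate{TotalCost: 120, PlanRows: 1300}, defaultBudget, exact)
	costly := countWithBudget(planEstimate{TotalCost: 80000, PlanRows: 1300}, defaultBudget, exact)
	fmt.Println(cheap.Count, cheap.IsEstimate)   // 1234 false
	fmt.Println(costly.Count, costly.IsEstimate) // 1300 true
}
```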
The accuracy of the estimate seems to be within 10-20% of the exact count in most cases, though it depends on the selected filter criteria and what is being counted. I've noticed bigger discrepancies, but overall it seems like an acceptable trade-off.
The background cache warmer has been removed and aggregations are now real time again (the cache warmer was at best a short term mitigation while I figured out a better solution). The cache TTL has been reduced to 10 minutes. It was previously increased to allow the cache warmer to be run less frequently.
There are also some adjustments to the indexes that improve performance and the accuracy of estimations. For large indexes the migration may take a while to run: in my tests on 12 million torrents it took 15 minutes.
A rework of the torrent creation workflow: previously, a `Torrent` record was always created with a corresponding `TorrentContent` record, which would usually be empty; a `classify_torrent` queue job would then attempt classification and update the `TorrentContent` record.
With this update, `Torrent` records are always created in isolation, and a `process_torrent` job then runs in the queue. This job not only classifies the torrent, but also performs other tasks such as search reindexing. For torrents that have already been matched to a piece of content, rematching will not occur (unless specified in the CLI command, see below), which saves a significant amount of work.
A new entity type, `TorrentHint`, has been created for providing hints to the classifier (currently used only by the import tool). Previously, hints for the classifier were added directly to the `TorrentContent` record, conflating two different things (the classification result and the hints for the classifier), which is problematic when it comes to reclassification.
Additionally, a new CLI command, `reprocess`, has been added, which will reprocess all torrents, classify them, and update the search index. For already-matched torrents, rematching will only occur when the `--rematch` flag is passed.
A few reasons for this change:
- It will prevent unclassified (but classifiable) torrents from showing at the top of the list in the web UI, which is confusing
- The new search index will require reindexing of all torrents, and the CLI command provides a simple way to do this
- In future, we'll want to hang further steps off the `process_torrent` job, such as rules-based deletion, and this provides the groundwork for that
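The job logic described above can be sketched like this. It's illustrative only: `processTorrent`, the `torrent` struct, and its fields are hypothetical names, not bitmagnet's actual API.

```go
package main

import "fmt"

// torrent is a minimal stand-in for a stored Torrent record.
type torrent struct {
	InfoHash string
	Matched  bool // already matched to a piece of content
}

// processTorrent sketches the queue job: classification runs only for
// unmatched torrents (or when the --rematch flag forces it), while other
// steps such as search reindexing always run. Further steps, e.g.
// rules-based deletion, could later hang off this same pipeline.
func processTorrent(t torrent, rematch bool) []string {
	var steps []string
	if rematch || !t.Matched {
		steps = append(steps, "classify")
	}
	steps = append(steps, "reindex")
	return steps
}

func main() {
	fmt.Println(processTorrent(torrent{InfoHash: "abc", Matched: true}, false)) // [reindex]
	fmt.Println(processTorrent(torrent{InfoHash: "abc", Matched: true}, true))  // [classify reindex]
	fmt.Println(processTorrent(torrent{InfoHash: "def"}, false))                // [classify reindex]
}
```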
Fix https://github.com/bitmagnet-io/bitmagnet/issues/85
- Accurate counts in the web UI will only be shown for the top-level filters; when deep-filtering, the counts will show a less-than-or-equal sign (`≤`), indicating the number is a "maximum possible" rather than an accurate count. I'm still not sure about the UX of showing the `≤`, which could be confusing if you're not sure what it means, but I figured it's better than showing nothing at all - open to suggestions!
- A background process keeps the in-memory query cache warm for the top-level aggregations (by default every 10 minutes), allowing them to be served instantly
- The query cache TTL has been increased to 20 minutes; improved performance is preferable to seeing the most up-to-date information
- Pagination has been refactored to account for not knowing the total number of pages; internally the search engine will request 1 more item than it needs to know if there's a next page to advance to
- Some general refactoring of the web app
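The over-fetch-by-one pagination trick mentioned above can be sketched as follows; a minimal illustration with hypothetical names (`paginate`, `page`, the `fetch` callback), not the actual search-engine code.

```go
package main

import "fmt"

// page holds one page of results plus whether a next page exists.
type page struct {
	Items   []string
	HasNext bool
}

// paginate requests limit+1 items from the underlying search so it can tell
// whether there's a next page without knowing the total number of results.
func paginate(fetch func(limit, offset int) []string, limit, offset int) page {
	items := fetch(limit+1, offset) // over-fetch by one
	hasNext := len(items) > limit
	if hasNext {
		items = items[:limit] // drop the sentinel item before returning
	}
	return page{Items: items, HasNext: hasNext}
}

func main() {
	// Hypothetical backing store of 5 results.
	all := []string{"a", "b", "c", "d", "e"}
	fetch := func(limit, offset int) []string {
		if offset > len(all) {
			offset = len(all)
		}
		end := offset + limit
		if end > len(all) {
			end = len(all)
		}
		return all[offset:end]
	}
	p1 := paginate(fetch, 2, 0)
	p3 := paginate(fetch, 2, 4)
	fmt.Println(p1.Items, p1.HasNext) // [a b] true
	fmt.Println(p3.Items, p3.HasNext) // [e] false
}
```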
- Add a stable bloom filter, stored in the database, for blocked and deleted torrents
- Add GraphQL mutations for blocking and deleting torrents
- Add web UI for bulk actions (tagging and deleting)
- Some minor cosmetic web UI tweaks
- Move database operations to dao package
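To illustrate the membership check behind the blocklist, here is a plain Bloom filter sketch. Note the assumptions: bitmagnet actually uses a *stable* Bloom filter (one that continuously evicts stale bits so it never saturates) serialized into the database, and all names here (`bloom`, `newBloom`, the FNV double-hashing scheme) are illustrative, not the project's real implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a basic Bloom filter: a bit array plus k derived hash positions.
type bloom struct {
	bits []bool
	k    int // number of hash functions
}

func newBloom(m, k int) *bloom { return &bloom{bits: make([]bool, m), k: k} }

// positions derives k bit positions via double hashing of two FNV variants.
func (b *bloom) positions(key string) []int {
	h1 := fnv.New64a()
	h1.Write([]byte(key))
	s1 := h1.Sum64()
	h2 := fnv.New64()
	h2.Write([]byte(key))
	s2 := h2.Sum64() | 1 // keep the second hash odd
	out := make([]int, b.k)
	for i := 0; i < b.k; i++ {
		out[i] = int((s1 + uint64(i)*s2) % uint64(len(b.bits)))
	}
	return out
}

// Add marks a key (e.g. a blocked torrent's info hash) as present.
func (b *bloom) Add(key string) {
	for _, p := range b.positions(key) {
		b.bits[p] = true
	}
}

// Has reports possible membership: false means definitely not blocked;
// true means probably blocked (false positives are possible, so a hit
// would still be confirmed against the database).
func (b *bloom) Has(key string) bool {
	for _, p := range b.positions(key) {
		if !b.bits[p] {
			return false
		}
	}
	return true
}

func main() {
	blocked := newBloom(1<<16, 4)
	blocked.Add("c0ffee00aa")
	fmt.Println(blocked.Has("c0ffee00aa")) // true
	fmt.Println(blocked.Has("deadbeef01")) // false (with high probability)
}
```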