Add a cloud:sync-stripe-subscriptions command to manually check all
subscriptions against Stripe. By default it only reports discrepancies
without making changes; use the --fix flag to actually apply corrections.
This addresses race conditions where subscriptions can be cancelled in
Stripe but remain marked as active in Coolify's database.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
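A minimal sketch of what such a report-only-by-default Artisan command could look like (the discrepancy and correction logic are placeholders, not the actual implementation):

```php
<?php

namespace App\Console\Commands;

use Illuminate\Console\Command;

class SyncStripeSubscriptions extends Command
{
    // Without --fix the command is read-only: it reports and changes nothing.
    protected $signature = 'cloud:sync-stripe-subscriptions
                            {--fix : Apply corrections instead of only reporting}';

    protected $description = 'Check all subscriptions against Stripe and report (or fix) discrepancies';

    public function handle(): int
    {
        foreach ($this->findDiscrepancies() as $id) {
            $this->warn("Subscription {$id}: canceled in Stripe but active locally");

            if ($this->option('fix')) {
                $this->markCanceledLocally($id); // only mutate when --fix is set
                $this->info('  -> corrected');
            }
        }

        return self::SUCCESS;
    }

    /** Placeholder: diff local subscription records against Stripe's API. */
    private function findDiscrepancies(): array
    {
        return [];
    }

    /** Placeholder: flip the local record to canceled. */
    private function markCanceledLocally(string $id): void
    {
    }
}
```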
Builder containers are started with the --rm flag, which automatically removes them when stopped. The explicit docker rm -f is redundant and adds unnecessary steps to deployment logs.
This change adds a skipRemove parameter to graceful_shutdown_container() and sets it to true for builder container shutdowns (uuid-based) while keeping the default behavior for application containers.
Fixes #7566
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
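A sketch of the parameter change described above (the function body is illustrative, not Coolify's actual implementation):

```php
<?php

// Builder containers run with --rm, so Docker removes them on stop and an
// explicit `docker rm -f` is redundant; application containers keep the
// old behavior via the default.
function graceful_shutdown_container(string $containerName, int $timeout = 30, bool $skipRemove = false): array
{
    $commands = ['docker stop --time=' . $timeout . ' ' . escapeshellarg($containerName)];

    if (! $skipRemove) {
        $commands[] = 'docker rm -f ' . escapeshellarg($containerName) . ' 2>/dev/null || true';
    }

    return $commands;
}

// Builder (uuid-based) container shutdown relies on --rm:
print_r(graceful_shutdown_container('abc123deadbeef', skipRemove: true));
```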
Helper containers are started with the --rm flag, which automatically removes the container when it stops. Removed the redundant docker rm commands from graceful_shutdown_container in ApplicationDeploymentJob and replaced docker rm with docker stop in DatabaseBackupJob.
🤖 Generated with Claude Code
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
- Fix status flickering: Track databases in active/transient states (restarting, starting, created, paused), not just running
- Add isActiveOrTransient() helper to distinguish between active states and terminal states (exited, dead)
- Add safeguard: Protect updateNotFoundDatabaseStatus() from marking as exited when containers collection is empty
- Add restart_count tracking: New migration adds restart_count, last_restart_at, last_restart_type to all standalone database tables
- Update 8 database models with $casts for new restart tracking fields
- Update GetContainersStatus to extract RestartCount from Docker and update database models
- Reset restart tracking when database exits completely
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
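A standalone sketch of the state distinction drawn by the helper named above (state lists taken from the bullets; the constant is illustrative):

```php
<?php

// Active/transient states keep a database tracked; only terminal states
// (exited, dead) should flip it to a stopped status.
const ACTIVE_OR_TRANSIENT = ['running', 'restarting', 'starting', 'created', 'paused'];

function isActiveOrTransient(string $containerState): bool
{
    return in_array(strtolower($containerState), ACTIVE_OR_TRANSIENT, true);
}

var_dump(isActiveOrTransient('restarting')); // true: no status flicker
var_dump(isActiveOrTransient('exited'));     // false: terminal state
```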
- Eager load service applications and databases to eliminate N+1 queries
- Replace individual model updates with batch database updates for applications, previews, and services
- Move connectProxyToNetworks to async ConnectProxyToNetworksJob to avoid blocking status updates
- Optimize Server.databases() and applications() methods with efficient database queries
- Use flatMap for cleaner collection transformations
🤖 Generated with Claude Code
Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
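A sketch of the query-level optimizations above (Eloquent; table and column names are assumptions):

```php
<?php

use App\Models\Service;
use Illuminate\Support\Facades\DB;

// N+1 fix: load both relations up front instead of one query per service.
$services = Service::with(['applications', 'databases'])->get();

// flatMap flattens nested sub-resources into a single collection in one pass.
$subResources = $services->flatMap(
    fn ($service) => $service->applications->merge($service->databases)
);

// Batch update: one UPDATE per status group instead of one per model.
$runningIds = $subResources->where('status', 'running')->pluck('id');

DB::table('applications')
    ->whereIn('id', $runningIds)
    ->update(['status' => 'running']);
```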
Remove unused server filtering logic in Kernel.php that was querying servers
but never using the results. Simplify Sentinel update checks in ServerManagerJob
by reusing the $isSentinelEnabled variable and removing unnecessary timezone
parameter for hourly cron execution.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Prevent deployment status from regressing to FAILED after it's marked as FINISHED by:
1. Calling completeDeployment() first in post_deployment() before any operations that could fail
2. Wrapping all post-deployment side effects in try-catch blocks
3. Adding FINISHED to terminal states that cannot be changed
4. Protecting ExecuteRemoteCommand from overwriting FINISHED status
This fixes the issue where a deployment with a healthy container and successful rolling update was still marked as Failed in the UI.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
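A sketch of the terminal-state guard described in point 3 (the enum and transition check are illustrative; FINISHED joining the terminal set is the change):

```php
<?php

enum DeploymentStatus: string
{
    case Queued = 'queued';
    case InProgress = 'in_progress';
    case Finished = 'finished';
    case Failed = 'failed';
    case Cancelled = 'cancelled';
}

// FINISHED now counts as terminal: once set, later failures in
// post-deployment side effects can no longer regress it to FAILED.
function canTransition(DeploymentStatus $from, DeploymentStatus $to): bool
{
    $terminal = [DeploymentStatus::Finished, DeploymentStatus::Failed, DeploymentStatus::Cancelled];

    return ! in_array($from, $terminal, true);
}

var_dump(canTransition(DeploymentStatus::Finished, DeploymentStatus::Failed)); // false
```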
- Move restart counter reset from Livewire to ApplicationDeploymentJob to prevent race conditions with GetContainersStatus
- Remove artificial restart_type=manual tracking (never used in codebase)
- Add Crash Loop Example in seeder for testing restart tracking UI
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add shell escaping with escapeshellarg() for container names in the
docker rm command to prevent command injection. Also add validation
to skip containers with missing names and log a warning.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
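The escaping pattern in question (plain PHP; the validation/logging shape is illustrative):

```php
<?php

function buildRemoveCommands(array $containerNames): array
{
    $commands = [];

    foreach ($containerNames as $name) {
        if (! is_string($name) || trim($name) === '') {
            // Skip containers with missing names and log a warning.
            error_log('Skipping container with missing name');
            continue;
        }

        // escapeshellarg() wraps the name in single quotes and escapes
        // embedded quotes, so a name like "x'; rm -rf /" cannot break out
        // of the docker rm argument.
        $commands[] = 'docker rm -f ' . escapeshellarg($name);
    }

    return $commands;
}

print_r(buildRemoveCommands(['my-app', "evil'; rm -rf /"]));
```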
Add CleanupOrphanedPreviewContainersJob that runs daily to find and remove any PR preview containers that weren't properly cleaned up when their PR was closed.
The job:
- Scans all functional servers for containers with coolify.pullRequestId label
- Checks if the corresponding ApplicationPreview record exists in the database
- Removes containers where the preview record no longer exists (truly orphaned)
- Acts as a safety net for webhook failures or race conditions
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
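A sketch of the orphan check at the heart of the job (the coolify.pullRequestId label and ApplicationPreview model are named in the commit; the applicationId label and column names are assumptions):

```php
<?php

use App\Models\ApplicationPreview;

// A container is a truly orphaned preview when it carries the
// coolify.pullRequestId label but no matching ApplicationPreview row
// exists anymore.
function isOrphanedPreview(array $labels): bool
{
    $pullRequestId = $labels['coolify.pullRequestId'] ?? null;
    $applicationId = $labels['coolify.applicationId'] ?? null; // assumed label

    if ($pullRequestId === null || $applicationId === null) {
        return false; // not a preview container; leave it alone
    }

    return ! ApplicationPreview::where('application_id', $applicationId)
        ->where('pull_request_id', $pullRequestId)
        ->exists();
}
```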
For Docker Compose applications with build directives, inject commit-based
image tags (uuid_servicename:commit) to enable rollback functionality.
Previously these services always used 'latest' tags, making rollback impossible.
- Only injects tags for services with build: but no explicit image:
- Uses pr-{id} tags for pull request deployments
- Respects user-defined image: fields (preserves user intent)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
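A sketch of the injection rule (Symfony YAML; the tag format comes from the commit, the helper shape is illustrative):

```php
<?php

use Symfony\Component\Yaml\Yaml;

// Inject uuid_servicename:commit tags (or :pr-{id} for PR deployments)
// so rollbacks can target a specific build.
function injectImageTags(string $composeYaml, string $uuid, string $commit, ?int $prId = null): string
{
    $compose  = Yaml::parse($composeYaml);
    $services = $compose['services'] ?? [];

    foreach ($services as $name => $service) {
        // Only services with build: and no explicit image: get a tag;
        // user-defined image: fields are preserved.
        if (array_key_exists('build', $service) && ! array_key_exists('image', $service)) {
            $tag = $prId !== null ? "pr-{$prId}" : $commit;
            $services[$name]['image'] = "{$uuid}_{$name}:{$tag}";
        }
    }

    $compose['services'] = $services;

    return Yaml::dump($compose, 10);
}
```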
When multiple scheduled tasks or database backups run concurrently on
the same server, they compete for the same SSH multiplexed connection
socket, causing race conditions and SSH exit code 255 errors.
This fix adds a `disableMultiplexing` parameter to bypass SSH
multiplexing for jobs that may run concurrently:
- Add `disableMultiplexing` param to `generateSshCommand()`
- Add `disableMultiplexing` param to `instant_remote_process()`
- Update `ScheduledTaskJob` to use `disableMultiplexing: true`
- Update `DatabaseBackupJob` to use `disableMultiplexing: true`
- Add debug logging to track execution without multiplexing
- Add unit tests for the new parameter
Each backup and scheduled task now gets an isolated SSH connection,
preventing contention on the shared multiplexed socket.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
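A sketch of what bypassing multiplexing means at the ssh-flag level (the parameter name matches the commit; the flags are standard OpenSSH options, the function shape is illustrative):

```php
<?php

function generateSshCommand(string $host, string $command, bool $disableMultiplexing = false): string
{
    $options = $disableMultiplexing
        // A private connection per invocation: no shared control socket
        // for two concurrent backups to fight over.
        ? '-o ControlMaster=no -o ControlPath=none'
        // Default: reuse one multiplexed connection per host.
        : '-o ControlMaster=auto -o ControlPath=/tmp/ssh_mux_%h_%p_%r -o ControlPersist=60';

    return "ssh {$options} " . escapeshellarg($host) . ' ' . escapeshellarg($command);
}

// Concurrent jobs (scheduled tasks, backups) opt out of multiplexing:
echo generateSshCommand('server1.example.com', 'docker ps', disableMultiplexing: true);
```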
Filter out null and empty environment variables when generating Nixpacks build
configuration to prevent JSON parsing errors. Environment variables with null or
empty values were being passed as `--env KEY=`, which created invalid JSON with
null values, causing deployment failures.
This fix ensures only valid non-empty environment variables are included in both
user-defined and auto-generated COOLIFY_* environment variables.
🤖 Generated with Claude Code
Co-Authored-By: Claude <noreply@anthropic.com>
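A sketch of the filter (plain PHP; the flag-building shape is illustrative):

```php
<?php

// Drop null/empty values before they become `--env KEY=` flags that yield
// invalid JSON in the generated Nixpacks plan.
function buildEnvFlags(array $envs): string
{
    $valid = array_filter($envs, fn ($value) => $value !== null && $value !== '');

    $flags = [];
    foreach ($valid as $key => $value) {
        $flags[] = '--env ' . escapeshellarg("{$key}={$value}");
    }

    return implode(' ', $flags);
}

echo buildEnvFlags(['APP_ENV' => 'production', 'EMPTY' => '', 'UNSET' => null]);
// --env 'APP_ENV=production'
```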
Moved .log-highlight styles from Livewire component views to resources/css/app.css for better separation of concerns and reusability. This follows Laravel and Livewire best practices by keeping styles in the appropriate location rather than inline in component views.
Changes:
- Added .log-highlight styles to resources/css/app.css
- Removed inline <style> tags from deployment/show.blade.php
- Removed inline <style> tags from get-logs.blade.php
- Added XSS security test for log viewer
- Applied code formatting with Laravel Pint
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Extended the maximum allowed timeout for scheduled tasks from 3600 to 36000 seconds (10 hours). Also passes the configured timeout to instant_remote_process() so the SSH command respects the timeout setting.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Allows user-configured backup timeouts > 3600 to be respected. Previously, the SSH process used a hardcoded 3600 second timeout regardless of the job timeout setting. Now the timeout is passed through to instant_remote_process() for all backup operations.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
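Both timeout commits above share the same shape of fix: thread the configured timeout through to the process that actually runs the SSH command instead of relying on a hardcoded default. A generic sketch using Symfony Process (in Coolify the helper is instant_remote_process(); everything else here is illustrative):

```php
<?php

use Symfony\Component\Process\Process;

function runRemote(string $sshCommand, int $timeout = 3600): string
{
    $process = Process::fromShellCommandline($sshCommand);
    $process->setTimeout($timeout); // previously fixed at 3600 regardless of config
    $process->mustRun();

    return $process->getOutput();
}

// A 10-hour scheduled task now passes its configured limit through:
// runRemote($sshCommand, timeout: 36000);
```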
The error "container name already in use" occurred because the container
wasn't fully removed before docker compose up tried to create a new one.
Changes:
- Removed redundant stop/remove logic from START PHASE (was duplicating STOP PHASE)
- Made STOP PHASE more robust:
- Increased wait iterations from 10 to 15
- Added force remove on each iteration in case container got stuck
- Added final verification and force cleanup after the loop
- Added better logging to show removal progress
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Instead of calling StopProxy::run() (synchronous) and then StartProxy::run()
(async), we now build a single command sequence that includes both stop
and start phases. This creates one Activity immediately via remote_process(),
so the UI receives the activity ID right away and can show logs in real-time
from the very beginning of the restart operation.
Key changes:
- Removed dependency on StopProxy and StartProxy actions
- Build combined command sequence inline in buildRestartCommands()
- Use remote_process() directly which returns Activity immediately
- Increased timeout from 60s to 120s to accommodate full restart
- Activity ID dispatched to UI within milliseconds of job starting
Flow is now:
1. Job starts → sets "restarting" status
2. Commands built synchronously (fast, no SSH)
3. remote_process() creates Activity and dispatches CoolifyTask job
4. Activity ID sent to UI immediately via WebSocket
5. UI opens activity monitor with real-time streaming logs
6. Logs show "Stopping proxy..." then "Starting proxy..." as they happen
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Set proxy status to 'restarting' and dispatch ProxyStatusChangedUI event
at the very beginning of handle() method, before StopProxy runs. This
notifies the UI immediately so users know a restart is in progress,
rather than waiting until after the stop operation completes.
Also simplified unit tests to focus on testable job configuration
(middleware, tries, timeout) without complex SchemalessAttributes mocking.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
- Add restartInitiated flag to prevent duplicate "Proxy restart initiated" messages
- Restore ProxyStatusChangedUI dispatch with activityId in RestartProxyJob
- This allows the UI to open the activity monitor and show logs during restart
- Simplified restart message (removed redundant "Monitor progress" text)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When restarting the proxy on localhost (where Coolify is running), the UI becomes inaccessible because the connection is lost. This change makes all proxy restarts run as background jobs with WebSocket notifications, allowing the operation to complete even after connection loss.
Changes:
- Enhanced ProxyStatusChangedUI event to carry activityId for log monitoring
- Updated RestartProxyJob to dispatch status events and track activity
- Simplified Navbar restart() to always dispatch job for all servers
- Enhanced showNotification() to handle activity monitoring and new statuses
- Added comprehensive unit and feature tests
Benefits:
- Prevents localhost lockout during proxy restarts
- Consistent behavior across all server types
- Non-blocking UI with real-time progress updates
- Automatic activity log monitoring
- Proper error handling and recovery
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
When Sentinel is enabled and in sync, ServerStorageCheckJob was being
dispatched from two locations, causing unnecessary duplication:
1. PushServerUpdateJob (every ~30s with real-time filesystem data)
2. ServerManagerJob (scheduled cron check via SSH)
This commit modifies ServerManagerJob to only dispatch ServerStorageCheckJob
when Sentinel is out of sync or disabled. When Sentinel is active and in sync,
PushServerUpdateJob provides real-time storage data, making the scheduled SSH
check redundant.
Benefits:
- Eliminates duplicate storage checks when Sentinel is working
- Reduces unnecessary SSH overhead
- Storage checks still run as fallback when Sentinel fails
- Maintains scheduled checks for servers without Sentinel
Updated tests to reflect new behavior:
- Storage check NOT dispatched when Sentinel is in sync
- Storage check dispatched when Sentinel is out of sync or disabled
- All timezone and frequency tests updated accordingly
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
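The dispatch condition boils down to a single predicate; a runnable sketch (the decision logic follows the commit, the function shape is illustrative):

```php
<?php

// When Sentinel is enabled and in sync, PushServerUpdateJob already
// delivers real-time filesystem data, so the scheduled SSH check is
// redundant; otherwise it remains the fallback.
function shouldDispatchStorageCheck(bool $sentinelEnabled, bool $sentinelInSync): bool
{
    return ! ($sentinelEnabled && $sentinelInSync);
}

var_dump(shouldDispatchStorageCheck(true, true));   // false: skip, Sentinel covers it
var_dump(shouldDispatchStorageCheck(true, false));  // true: out of sync, fall back to SSH
var_dump(shouldDispatchStorageCheck(false, false)); // true: no Sentinel at all
```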
Dispatch ProxyStatusChangedUI event after version check completes so the UI updates in real-time without requiring page refresh.
Changes:
- Add ProxyStatusChangedUI::dispatch() at all exit points in CheckTraefikVersionForServerJob
- Ensures UI refreshes automatically via WebSocket when version check completes
- Works for all scenarios: version detected, using latest tag, outdated version, up-to-date
User experience:
- User restarts proxy
- Warning clears automatically in real-time (no refresh needed)
- Leverages existing WebSocket infrastructure
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add preserveRestarting parameter to ContainerStatusAggregator to allow applications
and service sub-resources to display "Restarting" status instead of being marked as
"Degraded". This gives better visibility into container restart behavior.
- Update ContainerStatusAggregator to accept preserveRestarting parameter (defaults to false)
- Update GetContainersStatus to use preserveRestarting: true for applications and service sub-resources
- Update PushServerUpdateJob to use preserveRestarting: true for applications and service sub-resources
- Add comprehensive documentation explaining the parameter behavior and when to use it
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
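A sketch of the parameter's effect (the aggregation logic is simplified for illustration; only the preserveRestarting default and opt-in match the commit):

```php
<?php

// Default (false) keeps prior behavior: a restarting container rolls the
// resource up as "degraded". Applications and service sub-resources opt
// in to surface the raw "restarting" state instead.
function aggregateStatus(array $containerStates, bool $preserveRestarting = false): string
{
    if (in_array('restarting', $containerStates, true)) {
        return $preserveRestarting ? 'restarting' : 'degraded';
    }

    return in_array('running', $containerStates, true) ? 'running' : 'exited';
}

echo aggregateStatus(['running', 'restarting']) . PHP_EOL;                           // degraded
echo aggregateStatus(['running', 'restarting'], preserveRestarting: true) . PHP_EOL; // restarting
```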
Fixes #7439, where successful deployments were being marked as FAILED due to exceptions during old container cleanup.
Root cause: Commit 97550f406 wrapped stop_running_container() in a try-catch that re-throws ALL exceptions as DeploymentException. When old containers are already removed (a common scenario), the "No such container" error propagates and marks successful deployments as failed.
Solution: Check if deployment has already succeeded (newVersionIsHealthy || force) before re-throwing exceptions from cleanup operations. Cleanup failures are logged but don't fail the deployment.
- Add conditional handling in stop_running_container() catch block
- Log cleanup warnings with hidden: true to avoid UI clutter
- Only re-throw exceptions if deployment hasn't succeeded yet
- Preserves backward compatibility and expected behavior
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
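An illustrative shape of the conditional in the catch block (the flags newVersionIsHealthy and force come from the commit; the function and exception class are sketches):

```php
<?php

class DeploymentException extends \RuntimeException {}

// Once the new version is healthy (or the deployment is forced), a cleanup
// error such as "No such container" is logged as a warning instead of
// failing the already-finished deployment.
function handleCleanupFailure(\Throwable $e, bool $newVersionIsHealthy, bool $force): void
{
    if ($newVersionIsHealthy || $force) {
        error_log('cleanup warning (deployment already succeeded): ' . $e->getMessage());
        return;
    }

    // Deployment had not succeeded yet: keep the original strict behavior.
    throw new DeploymentException($e->getMessage(), 0, $e);
}
```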
Pass the server timezone parameter to the shouldRunNow() call at line 127,
ensuring ServerCheckJob dispatch respects the server's local timezone
instead of falling back to the instance default.
This aligns the behavior with other scheduled tasks in the same method:
- ServerStorageCheckJob (line 137)
- ServerPatchCheckJob (line 144)
- Sentinel restart (line 152)
All scheduled tasks in processServerTasks() now consistently use the
server's configured timezone for cron evaluation.
Added unit test to verify timezone-aware cron schedule evaluation.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Fixed a critical bug where $this->executionTime was being mutated during
the server processing loop, causing incorrect scheduling calculations for
subsequent servers.
The issue occurred at line 123 where subSeconds() was called directly on
the shared executionTime instance. This caused the baseline time to shift
by waitTime seconds with each server iteration, resulting in compounding
scheduling errors (e.g., 1680 seconds drift over 5 servers).
Changed:
- app/Jobs/ServerManagerJob.php:123
Added .copy() before .subSeconds() to prevent mutation
Added comprehensive unit tests that verify:
- Immutability when using .copy()
- Demonstration of the bug without .copy()
- Correct behavior across multiple iterations
This follows the existing pattern in shouldRunNow() (line 167) and aligns
with other jobs in the codebase.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
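A runnable demonstration of the Carbon mutation bug and the fix (values are illustrative):

```php
<?php

use Carbon\Carbon;

$executionTime = Carbon::parse('2024-01-01 12:00:00');
$waitTime = 30;

// Buggy: subSeconds() mutates the shared instance, shifting the baseline
// for every subsequent server iteration (compounding drift):
// $adjusted = $executionTime->subSeconds($waitTime);

// Fixed: copy() first, so the shared baseline stays untouched.
$adjusted = $executionTime->copy()->subSeconds($waitTime);

echo $executionTime . PHP_EOL; // 2024-01-01 12:00:00 (unchanged)
echo $adjusted . PHP_EOL;      // 2024-01-01 11:59:30
```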
Server disk usage checks now run on their configured schedule regardless of Sentinel status, eliminating monitoring blind spots when Sentinel is offline, out of sync, or disabled. Storage checks now respect server timezone settings, consistent with patch checks.
Changes:
- Moved server timezone calculation to top of processServerTasks()
- Extracted ServerStorageCheckJob dispatch from Sentinel conditional
- Fixed default frequency to '0 23 * * *' (11 PM daily)
- Added timezone parameter to storage check scheduling
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
The regex pattern in injectDockerComposeBuildArgs() was too restrictive
and failed to match `docker compose build servicename` commands. Changed
the lookahead from `(?=\s+(?:--|-)|\s+(?:&&|\|\||;|\|)|$)` to the
simpler `(?=\s|$)` to allow any content after the build command,
including service names with hyphens/underscores and flags.
Also improved the ApplicationDeploymentJob to use the new helper function
and added comprehensive test coverage for service-specific builds.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
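A demonstration of the lookahead change (the surrounding pattern is reconstructed from the commit for illustration; only the lookaheads are quoted verbatim):

```php
<?php

// Old lookahead required the build keyword to be followed by a flag,
// a shell operator, or end-of-string, so a bare service name broke it.
$old = '/docker compose build(?=\s+(?:--|-)|\s+(?:&&|\|\||;|\|)|$)/';
// New lookahead accepts any whitespace (or end-of-string) after "build".
$new = '/docker compose build(?=\s|$)/';

$command = 'docker compose build my-service_api';

var_dump((bool) preg_match($old, $command)); // false: missed
var_dump((bool) preg_match($new, $command)); // true: matched
```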
## Changes
- **CheckForUpdatesJob**: Add triple version comparison (CDN vs cache vs running)
- Never allows version downgrade from currently running version
- Uses data_set() for safer nested array mutation
- Prevents incorrect new_version_available flag setting
- **UpdateCoolify**: Add cache validation before fallback
- Validates cache against running version on CDN failure
- Throws exception if cache is corrupted/older than running
- Applies to both manual and automated updates
- **Tests**: Add comprehensive test coverage
- tests/Unit/CheckForUpdatesJobTest.php (5 tests)
- tests/Unit/UpdateCoolifyTest.php (3 tests)
## Impact
- Prevents all downgrade scenarios (CDN rollback, corrupted cache, etc.)
- Maintains backward compatibility
- Provides clear logging for debugging
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
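A sketch of the triple comparison's core invariant (the exact policy is an assumption; version_compare is standard PHP):

```php
<?php

// The CDN version is checked against BOTH the cached and the currently
// running version, so a CDN rollback or stale cache can never trigger a
// downgrade.
function isUpdateSafe(string $cdnVersion, string $cachedVersion, string $runningVersion): bool
{
    return version_compare($cdnVersion, $runningVersion, '>')
        && version_compare($cdnVersion, $cachedVersion, '>=');
}

var_dump(isUpdateSafe('4.1.0', '4.1.0', '4.0.0')); // true: genuine update
var_dump(isUpdateSafe('3.9.0', '4.0.0', '4.0.0')); // false: CDN rolled back
```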