If you host multiple Laravel sites on a single server, or have a server used only for testing several Laravel projects (not for production), and your deployments do heavy work such as generating JS bundles after each deploy, then this script is, at least for me, a life saver!
The biggest problem it solves: when deployment scripts for several sites run simultaneously on the same server, they tend to crash while generating JS files or at some other resource-heavy step. This script uses a lock file to check whether another deployment is already running on the server; if one is, it sleeps for 3 minutes and then triggers the deployment again via the Deployment Trigger URL that Forge generates for the site.
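The core of that check is flock(1) on a file descriptor. Here is a minimal sketch of the pattern, a slightly tightened variant of what the full script below does: gating on flock alone avoids the small gap between testing for the lock file and creating it (the URL is the same placeholder used later):

TRIGGER_URL=https://forge.laravel.com/servers/xxxxxx/sites/xxxxx/deploy/http?token=xxxxxxx
exec 99>/tmp/deploy.lock
if ! flock -n 99; then
    # another deployment holds the lock: wait it out, retrigger, and bail
    sleep 3m
    curl -I "$TRIGGER_URL"
    exit 0
fi
# ... deployment steps run here while fd 99 holds the lock ...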
If you have a public directory for uploads (for example, mine is /public/uploads), you should move that directory outside the project root and symlink /public/uploads to the new location (make sure the site runs under an isolated user, just in case). In my case I don't use S3, so the files live in ~/domain_name_uploads, which is symlinked from ~/project_root/public/uploads.
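As a one-time setup, that looks roughly like this (a hedged sketch, assuming the same ~/domain.com layout the deployment script below uses):

# move uploads out of the project root, then symlink them back into public/
mv ~/domain.com/public/uploads ~/domain.com_uploads
ln -nfs ~/domain.com_uploads ~/domain.com/public/uploads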
Note: the script copies the project root directory, updates the copy, and then swaps it with the live one, so the downtime is roughly one second. That is also why any uploads or other static-file directories must live outside the project root (or on S3 File Storage): otherwise we would lose files uploaded while the deployment script was still working in the copy.
# stop the script as soon as any command fails
set -e
# set your domain / project root directory
domain=domain.com
# set the Deployment Trigger URL that is generated and presented to you by Laravel Forge
TRIGGER_URL=https://forge.laravel.com/servers/xxxxxx/sites/xxxxx/deploy/http?token=xxxxxxx
# location and name of the lock file (it must be writable by any deploying user, so /tmp works just fine)
LOCK_FILE=/tmp/deploy.lock
# check whether the lock file exists
if [ -f "$LOCK_FILE" ]; then
    # another deployment is already running: wait, retrigger, and exit
    echo "$domain - Already running script. Will try again after 3 minutes."
    sleep 3m
    curl -I "$TRIGGER_URL"
    exit 0
fi
# acquire the lock: fd 99 stays open for the lifetime of the script, so the lock is held until exit
exec 99>"$LOCK_FILE"
flock -n 99
# remove old deployment folders
rm -rf ~/deploy_"$domain"
rm -rf ~/backup_"$domain"
cp -R ~/"$domain" ~/deploy_"$domain"
# Update
cd ~/deploy_"$domain"
git stash --include-untracked
git pull origin release/testing
git stash clear
$FORGE_COMPOSER install --no-interaction --prefer-dist --optimize-autoloader
yarn install
yarn prod
if [ -f artisan ]; then
    $FORGE_PHP artisan migrate --force
    $FORGE_PHP artisan cache:clear
    $FORGE_PHP artisan view:clear
fi
# Swap the old and new directories (two renames on the same filesystem, so downtime is a fraction of a second)
mv ~/"$domain" ~/backup_"$domain"
mv ~/deploy_"$domain" ~/"$domain"
# make sure the shared uploads directory exists
mkdir -p ~/"$domain"_uploads
cd ~/"$domain"/public
# recreate the uploads symlink inside the freshly deployed public directory
ln -nfs ../../"$domain"_uploads ./uploads
# Delete source map files in case you generate them to upload to Rollbar or similar
# (find is used because ./**/*.map only recurses if bash's globstar option is enabled)
find . -type f -name '*.map' -delete
# Restart PHP services
sudo -S service $FORGE_PHP_FPM reload
# Reset opcache via the web SAPI (FPM's opcache is separate from the CLI's, so it must be reset through an HTTP request)
echo "<?php opcache_reset(); echo 'opcache reset' . PHP_EOL; ?>" > ~/"$domain"/public/opcachereset.php
curl "https://$domain/opcachereset.php"
rm ~/"$domain"/public/opcachereset.php
# Restart queue workers (they finish their current job, then reload the new code)
$FORGE_PHP ~/"$domain"/artisan queue:restart
# Inform client to update via websockets (optional)
# $FORGE_PHP ~/$domain/artisan check:updates
# clean-up before exit
rm "$LOCK_FILE"