Best practices for deploying Laravel

Below is a summary of best practices when deploying a Laravel application:

  • Have an automated deployment script
  • Automatically run your tests before deployment
  • Build your application in CI/CD (not on your server)
  • Deploy only what is necessary (don't deploy node_modules)
  • Install Composer packages with --no-dev
  • Use a zero-downtime strategy
  • Run all of Laravel's optimization commands
  • Flush OPCache after deployment (ideally by calling opcache_reset())

Have an automated deployment script

Of all the best practices in this article, this is the most important one. If you are currently deploying your application by hand using FTP or SSH, the biggest improvement you can make is switching to an automated deployment script.

Deploying a Laravel application consists of a number of steps, such as restarting the queue and flushing OPCache. If you deploy by hand, you are always running the risk of forgetting a step and breaking your application. You might run a deployment at the end of the day, forget to restart the queue, and wake up to a log full of errors.

A good automated deployment script will always run a perfect deployment without the risk of human error. It saves you a bunch of time and work too.
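
To make this concrete, here is a minimal sketch of what such a script can look like. The server address, paths, and release name are hypothetical, and the `run_remote` helper echoes commands instead of running them over SSH so the sketch runs without an actual server; in a real script it would call `ssh` instead.

```shell
#!/usr/bin/env bash
# Minimal sketch of an automated deployment script.
# Server, paths, and release name are hypothetical -- adapt to your setup.
set -euo pipefail   # stop the whole deployment at the first failed step

SERVER="deploy@example.com"   # hypothetical deploy target
APP_DIR="/var/www/app"        # hypothetical application directory
RELEASE="release-$(date +%Y%m%d%H%M%S)"

run_remote() {
  # In a real script this would be: ssh "$SERVER" "$*"
  # Echoing keeps the sketch runnable without a server.
  echo "[remote] $*"
}

run_remote "mkdir -p $APP_DIR/releases/$RELEASE"
run_remote "php artisan migrate --force"
run_remote "php artisan queue:restart"   # the step that is easiest to forget by hand
echo "deployed $RELEASE"
```

Because every step is written down once, no deployment can ever skip the queue restart again.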

Automatically run your tests before deployment

Deploying broken code to your server is a sure-fire way to lose the trust of your customers. A good way to prevent this is by having a suite of thorough automated tests. To minimize the chance of deploying broken code, run your tests before deploying your application, and cancel the deployment if any test fails.

Running your tests should be part of your automated deployment script. Don't rely on remembering to manually run your tests before deploying.

This might go without saying, but don't run your tests on your production server. Not only does this put unnecessary load on your server, but you also run the risk of the RefreshDatabase trait wiping your production database if you make a configuration mistake. The best place to automatically run your tests is inside a CI/CD pipeline such as Wilson, GitHub Actions, or GitLab CI/CD.
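
The "cancel the deployment if any test fails" behaviour boils down to exit codes. A minimal sketch, where `run_tests` stands in for your real test command (`php artisan test` or `vendor/bin/phpunit`) so the example runs anywhere:

```shell
# With `set -e`, the first command that exits non-zero stops the script,
# so a failing test suite blocks everything after it.
set -e

run_tests() {
  # Stand-in for: php artisan test
  # TESTS_FAIL only exists so the sketch can simulate a failing suite.
  return "${TESTS_FAIL:-0}"
}

deploy() { echo "deploying"; }

run_tests   # a failing suite exits here -- deploy is never reached
deploy
```

The same principle applies in CI/CD runners: a non-zero exit code from the test step fails the pipeline, and the deploy step never runs.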

Build your application in CI/CD (not on your server)

Make sure that the bundle you are deploying to your server is the exact same bundle that passed your test suite. If you build your application locally and your tests pass, that does not guarantee that the build your server creates also passes the tests. Your server might be running a different npm or Composer version, which can cause errors in the build.

As described in the previous section, it is a best practice to automatically run your tests in a CI/CD runner before each deployment. The CI/CD runner builds your application and then runs your tests. The build that passed the tests in CI/CD is the exact build you want to deploy to your server.

Composer and npm are only needed to build your application. This means that if you build in CI/CD, you don't have to install npm or Composer on your production server at all. This is especially useful if you host multiple applications on one server, each requiring a different npm version: you install the specific npm version in your CI/CD pipeline and avoid the headache of managing those versions on the server.
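
A CI build step usually comes down to a few commands. In this sketch, `npm` and `composer` are stubbed with shell functions so the example runs without the tools installed; in a real pipeline you would call the actual binaries (with the pipeline's runner configured for the Node and PHP versions your project needs):

```shell
npm() { echo "npm $*"; }            # stub for illustration only
composer() { echo "composer $*"; }  # stub for illustration only

npm ci          # reproducible install, driven entirely by package-lock.json
npm run build   # produce the production asset bundle
composer install --no-dev --optimize-autoloader   # production PHP packages only
```

`npm ci` rather than `npm install` is the better fit for CI: it installs exactly what the lockfile specifies, so the build is reproducible.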

Install Composer packages with --no-dev

Your composer.json file has two sections: "require" and "require-dev". The require-dev section lists the packages you only need during development, usually testing or debugging packages. These packages should only be used during development and should never end up in production. Installing your development packages in production is a real risk: there is at least one known instance where a website deployed Laravel Debug Bar to production, causing a serious security issue.

You can make Composer only install production packages by running composer install --no-dev. Make sure this is a step in your automated deployment script.

Deploy only what is necessary

As explained previously, it is a best practice to build and deploy your application from a CI/CD pipeline. After your application is built and your tests have passed, the CI/CD runner uploads this build to the server. You can speed up your deployment by not deploying files that your production server doesn't need.

For example, the huge node_modules directory is only required to build your application; it can be excluded when you upload the build to your server. Your tests can also be excluded, since you should never run them on your production server.
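
One common way to do this is to package the build into an archive and exclude the directories the server never needs. A runnable sketch with illustrative directory names (real projects may exclude more, such as .git or storage fixtures):

```shell
set -e
build=$(mktemp -d)   # stand-in for the CI build directory
out=$(mktemp -d)

# Fake build output, including the directories we want to leave behind.
mkdir -p "$build/app" "$build/public/build" "$build/node_modules/pkg" "$build/tests"
echo "compiled" > "$build/public/build/app.js"
: > "$build/node_modules/pkg/index.js"
: > "$build/tests/ExampleTest.php"

# node_modules and tests never reach the server.
tar -czf "$out/release.tar.gz" \
    --exclude=node_modules \
    --exclude=tests \
    -C "$build" .

tar -tzf "$out/release.tar.gz"   # list what would actually be uploaded
```

The resulting archive contains the compiled assets and application code, but neither node_modules nor the test suite, which makes the upload considerably smaller and faster.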

Use a zero-downtime strategy

Deployments should not cause downtime for your users. You can achieve this by using a zero-downtime deployment strategy. In short, this means you want to prepare your new release in a separate directory, and then activate it when it is completely ready.

For example, if you overwrite your production application in place on each deployment, you cause a short window of downtime every time: any user making a request to your server during the deployment can get an error. A zero-downtime strategy prevents these errors.

Another advantage of zero-downtime deployments is that you can run Laravel's optimization commands before your new release is activated, and that your deployment will stop if anything goes wrong. The artisan view:cache command for example will fail if any of your views contain an error. If any of the optimization commands fail during a zero-downtime deployment, then your deployment will stop, and the broken release won't be activated.
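
The core of most zero-downtime setups is a symlink switch: each release lives in its own directory, the web server's document root points at a `current` symlink, and the symlink is only repointed once the new release is fully prepared. A runnable sketch with illustrative paths (dedicated tools additionally use an atomic rename for the switch):

```shell
set -e
root=$(mktemp -d)   # stand-in for e.g. /var/www/app
mkdir -p "$root/releases/v1" "$root/releases/v2"

ln -sfn "$root/releases/v1" "$root/current"   # v1 is live

# ... prepare v2 here: upload files, install packages, run the artisan
# optimization commands, run migrations. If any step fails, we simply
# never execute the switch below, and v1 stays live.
ln -sfn "$root/releases/v2" "$root/current"   # v2 is live

readlink "$root/current"   # shows which release is active
```

Because no request is ever served from a half-written directory, users never see a broken intermediate state.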

Run all of Laravel's optimization commands

Laravel has a handful of optimization commands built-in, such as artisan config:cache, artisan route:cache and artisan view:cache. These commands speed up your application and should be re-run after each deployment.

If you deploy by hand, there is always a risk of forgetting to run these commands. Forgetting to re-run route:cache in particular will cause errors, because your application keeps serving the stale, cached route table. Your automated deployment script should run these commands after each deployment.
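
These are the optimization commands a deployment script typically runs, in order. `php` is stubbed with a shell function here so the sketch runs without a Laravel installation; in a real script the stub is simply absent:

```shell
php() { echo "ran $2"; }   # stub for illustration only

php artisan config:cache   # combine all config files into one cached file
php artisan route:cache    # serialize the route table (fails on closure routes)
php artisan view:cache     # precompile every Blade template (fails on view errors)
php artisan event:cache    # cache the event-to-listener map
```

That some of these commands fail on broken input is a feature: combined with a zero-downtime strategy, a failing cache command stops the deployment before the broken release goes live.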

Flush OPCache

OPCache significantly speeds up your application by caching compiled PHP scripts in memory; having it enabled is a no-brainer. The only downside is that you have to flush the cache after each deployment, because otherwise OPCache keeps serving the old, cached version of your code. Flushing OPCache should be a step in your automated deployment script.

The best practice for flushing OPCache is calling the opcache_reset() PHP function. Without a proper deployment script this is a bit of a hassle: calling opcache_reset() from the CLI only flushes the CLI's own OPCache, not the web OPCache you actually need to flush. To flush the web OPCache, opcache_reset() has to be called during a web request, which you can do with a step in your automated deployment script.

Another way to flush OPCache is restarting the PHP-FPM process. This is generally fine, but not a best practice: restarting PHP-FPM has a chance of dropping in-flight requests, and it temporarily slows down every PHP application on the server. Restarting the process also requires sudo, and since your deployment script can't enter a sudo password, you'll have to configure sudo to allow this command without a password. In my experience, some managed webhosting providers don't allow this sudo exception, which means you can't use this approach at all.
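
From the deployment script's side, the web-request approach can look like the sketch below. The /deploy/flush-opcache route and its token are assumptions: you would add a protected route in your own application whose handler calls opcache_reset(). `curl` is stubbed here so the sketch runs without a server; a real script drops the stub.

```shell
curl() { echo "GET $3"; }   # stub for illustration only -- remove in real use

DEPLOY_TOKEN="secret-token"   # hypothetical shared secret guarding the endpoint
curl --fail --silent "https://example.com/deploy/flush-opcache?token=$DEPLOY_TOKEN"
```

The --fail flag makes curl exit non-zero on an HTTP error response, so a failed flush fails the deployment step instead of passing silently.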