SnailLife does a lot of logging for debugging purposes. Aside from the general laravel.log, I have separate loggers for each snail and also write logs for things like deleted items to separate files.

The problem is all the space this takes up in Laravel’s storage folder on my Digital Ocean droplet. If I leave it for a week or two the disk fills up, and I’ll suddenly find my droplet inaccessible or recurring commands unable to finish properly due to insufficient space.

Instead of culling the logs more aggressively I decided to set up a backup to Amazon S3. With Laravel 5’s filesystems this ended up being a pretty simple process.

First I set up an S3 bucket called snaillife-storage with a user that has GetObject, PutObject, and DeleteObject permissions.

I set the S3 credentials in the .env configuration file:

S3_KEY=blah
S3_SECRET=blah
S3_REGION=website-us-east-1
S3_BUCKET=snaillife-storage

Note that I set the region here just in case, but in reality I don’t use it. In config/filesystems.php I set up the S3 disk using these credentials (the region setting is removed, and I also changed the local disk root to storage_path()):

'local' => [
  'driver' => 'local',
  'root'   => storage_path(),
],

's3' => [
  'driver' => 's3',
  'key'    => env('S3_KEY'),
  'secret' => env('S3_SECRET'),
  'bucket' => env('S3_BUCKET'),
],

{:lang="php"}
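
To sanity-check that the new disk is wired up (this assumes the appropriate league/flysystem S3 adapter package is installed via Composer), something like the following in php artisan tinker should round-trip a test file - the file name here is just a throwaway example:

// Write, check, and remove a test file on the s3 disk.
Storage::disk('s3')->put('connection-test.txt', 'hello from SnailLife');
Storage::disk('s3')->exists('connection-test.txt'); // should return true
Storage::disk('s3')->delete('connection-test.txt');

{:lang="php"}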

Then I made a new artisan command called BackUpLogs:

<?php namespace App\Console\Commands;

use Illuminate\Console\Command;
use Storage;
use Log;
use Carbon\Carbon;
use App;

class BackUpLogs extends Command {

    /**
     * The console command name.
     *
     * @var string
     */
    protected $name = 'BackUpLogs';

    /**
     * The console command description.
     *
     * @var string
     */
    protected $description = 'Back up logs to Amazon S3';

    /**
     * Execute the console command.
     *
     * @return mixed
     */
    public function handle()
    {
        if (!App::isLocal()) {
            $localDisk = Storage::disk('local');
            // Paths returned here are relative to the local disk root (storage_path()).
            $localFiles = $localDisk->allFiles('logs');
            $cloudDisk = Storage::disk('s3');
            // Group this run's files under a timestamped folder in the bucket.
            $pathPrefix = 'snailLogs' . DIRECTORY_SEPARATOR . Carbon::now() . DIRECTORY_SEPARATOR;
            foreach ($localFiles as $file) {
                $contents = $localDisk->get($file);
                $cloudLocation = $pathPrefix . $file;
                $cloudDisk->put($cloudLocation, $contents);
                // Only remove the local copy once it has been uploaded.
                $localDisk->delete($file);
            }
        }
        else {
            Log::info('BackUpLogs not backing up in local env');
        }
    }
}

{:lang="php"}

Note that the directory you specify in $localDisk->allFiles($dir) should be relative to the root path of the local disk - an absolute path does not work.
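
To illustrate, with the local disk root set to storage_path(), the difference looks like this (a quick illustration rather than code from the actual command):

// Correct: a path relative to the local disk root (storage_path()).
$files = Storage::disk('local')->allFiles('logs');

// Does not work: absolute paths are not resolved against the disk root.
$files = Storage::disk('local')->allFiles(storage_path('logs'));

{:lang="php"}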

In Kernel.php I set this to run every hour, at five minutes past:

$schedule->command('BackUpLogs')->cron('5 * * * *');

{:lang="php"}
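
One thing to remember: for $schedule->command('BackUpLogs') to resolve, the command also has to be registered in the kernel’s $commands array. Trimmed down to the relevant parts, app/Console/Kernel.php ends up looking roughly like this sketch:

<?php namespace App\Console;

use Illuminate\Console\Scheduling\Schedule;
use Illuminate\Foundation\Console\Kernel as ConsoleKernel;

class Kernel extends ConsoleKernel {

    // Artisan commands provided by the application.
    protected $commands = [
        'App\Console\Commands\BackUpLogs',
    ];

    // Define the application's command schedule.
    protected function schedule(Schedule $schedule)
    {
        // Back up logs at five minutes past every hour.
        $schedule->command('BackUpLogs')->cron('5 * * * *');
    }
}

{:lang="php"}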

So now every hour all the files in my storage/logs directory are backed up to my S3 bucket and deleted from the local disk.