Task regularly exceeding maxRuntime, skipping beats #52

Open
Swader opened this issue Mar 19, 2016 · 1 comment

Swader commented Mar 19, 2016

I'm running OS X as the host environment, powering a Homestead Improved VM. I installed Jobby into one such environment (namely nofw) as an experiment, skipping the config file and instead hitting the jobby file directly. At this experimental demo stage, the jobby file looks something like this:

<?php

use Dotenv\Dotenv;
use Jobby\Jobby;
use Monolog\Logger;
use Monolog\Handler\StreamHandler;

require __DIR__ . '/../vendor/autoload.php';

$logger = new Logger('cronlog');
$logger->pushHandler(
    new StreamHandler(__DIR__ . '/../logs/cron-error.log', Logger::ERROR)
);

$d = new Dotenv(__DIR__ . '/../');
$d->load();

try {
    $db = new PDO(
        'mysql:host=' . getenv('MAIN_DB_HOST') . ';dbname=' . getenv(
            'MAIN_DB_NAME'
        ), getenv('MAIN_DB_USER'), getenv('MAIN_DB_PASS')
    );

    $settings = $db->query('SELECT * FROM `cron_settings`')->fetchAll(
        PDO::FETCH_ASSOC
    );
    $crons = $db->query(
        'SELECT * FROM `cron` WHERE `status` = "active"'
    )->fetchAll(
        PDO::FETCH_ASSOC
    );
} catch (PDOException $e) {
    $logger->error('Error while fetching DB contents: ' . $e->getMessage());

    return;
}

$jobby = new Jobby(
//    [
//        'mailer' => 'smtp',
//        'smtpUsername' => getenv('MAILGUN_SMTP_LOGIN'),
//        'smtpPassword' => getenv('MAILGUN_SMTP_PASS'),
//        'smtpHost' => getenv('MAILGUN_SMTP_HOST'),
//    ]
);

$settings = [
    'maxRuntime' => 50,
    'recipients' => '[email protected]',
    'output' => 'cron.log',
];

$crons = [
    [
        'id' => 1,
        'name' => 'Job 1',
        'description' => 'Foo bar',
        'status' => 'enabled',
        'schedule' => '* * * * *',
        'maxRuntime' => 10,
        'output' => 'j1.log',
        'environment' => 'dev',
        'command' => 'dateffds',
    ],
    [
        'id' => 2,
        'name' => 'Job 2',
        'description' => 'Foo baz',
        'status' => 'enabled',
        'schedule' => '*/5 * * * *',
        'maxRuntime' => 10,
        'output' => 'j2.log',
        'recipients' => '[email protected]',
        'environment' => 'dev',
        'command' => 'ls',
    ],
];

/** @var array $cron */
foreach ($crons as $cron) {
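    // per-job values override the global defaults; array_filter drops empty entries first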
    $cron = array_merge($settings, array_filter($cron));
    $jobName = $cron['name'];
    unset($cron['name']);
    $cron['output'] = ($cron['output'] ?? false) ? __DIR__ . '/../logs/' . $cron['output'] : __DIR__ . '/../logs/cron.log';
    $jobby->add($jobName, $cron);
}

$jobby->run();

So: two fake jobs and a basic "global" settings array. The DB calls above are just for show (the DB is empty) and that entire part can be removed.
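With the DB part stripped out, the minimal file that reproduces this for me boils down to something like the following (same paths and jobs as above, trimmed by hand):

    <?php

    use Jobby\Jobby;

    require __DIR__ . '/../vendor/autoload.php';

    $jobby = new Jobby();

    // Job 1 from above, with the intentionally bogus command kept in place.
    $jobby->add('Job 1', [
        'schedule'   => '* * * * *',
        'maxRuntime' => 10,
        'output'     => __DIR__ . '/../logs/j1.log',
        'command'    => 'dateffds',
    ]);

    $jobby->run();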

Anyway. You'll notice the first job has a bogus command: dateffds. This throws a normal error whenever it's run on the command line. The jobby file itself does not hang when called directly from the command line, and there's nothing to execute, so it's odd to be constantly getting this from job 1:

[screenshot 2016-03-19: job 1 repeatedly reporting that maxRuntime was exceeded]

I then looked into lockfiles and explored the Helper class, adding a logging call to each if block to pinpoint what fails and when. This is what I got, side by side (inside.log holds the entries logged from within Helper, while j1.log is the output from job 1):

[screenshot 2016-03-20: inside.log and j1.log side by side]

These are the forced logging messages I added to Helper (with Monolog's Logger and StreamHandler imported at the top of the class):

    /**
     * @param string $lockFile
     *
     * @return int
     */
    public function getLockLifetime($lockFile)
    {
        $l = new Logger('foo');
        $l->pushHandler(new StreamHandler(__DIR__.'/../../../../logs/inside.log'));

        if (!file_exists($lockFile)) {
            $l->info("File does not exist");
            return 0;
        }

        $pid = file_get_contents($lockFile);
        if (empty($pid)) {
            $l->info("PID is empty");
            return 0;
        }

        if (!posix_kill(intval($pid), 0)) {
            $l->info("No posix kill");
            return 0;
        }

        $stat = stat($lockFile);

        $l->info("All good");
        return (time() - $stat["mtime"]);
    }
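
To sanity-check those three conditions outside of Jobby, a throwaway probe like this (the lock file path is just an example) runs the same checks by hand:

    <?php
    // Throwaway probe: run the same checks getLockLifetime() does, by hand.
    $lockFile = sys_get_temp_dir() . '/jobby-probe.lock';

    // Simulate what a run leaves behind: a lock file containing a PID.
    file_put_contents($lockFile, getmypid());
    sleep(2);

    $pid = (int) file_get_contents($lockFile);
    var_dump(file_exists($lockFile));            // check 1: the file exists
    var_dump(!empty($pid));                      // check 2: the PID is non-empty
    var_dump(posix_kill($pid, 0));               // check 3: true while that PID is alive
    var_dump(time() - stat($lockFile)['mtime']); // the "lifetime" Jobby would compute

    unlink($lockFile);

If the inside.log entries keep showing "All good", then posix_kill() is returning true for the PID sitting in the real lock file even after a run should have finished, which would explain why the computed lifetime keeps growing past maxRuntime.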

If you have any ideas on what I should test next, please let me know, though you'd probably see it most clearly if you tried it out on Homestead Improved - it's just a basic Ubuntu 14.04 box with PHP 7.

More info:

  • the tmp files are not in a shared folder, so Jobby can't be fighting with the host OS for control over the lockfiles
  • the tmp files are not visible in my IDE, so they're not being fought over on that front either
  • at first, it worked without these timeouts
  • after a reboot of the VM (which kills off the tmp files), the situation resumes as soon as the files are recreated: just one successful call and we're back to the timeouts again
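
For context, this is my mental model of the per-beat decision (my own paraphrase from reading Helper, not Jobby's actual code):

    // Paraphrase of the per-beat check as I understand it - not Jobby's actual code.
    $lifetime = $helper->getLockLifetime($lockFile);
    if ($lifetime > $maxRuntime) {
        // The previous run is treated as still active, the beat is skipped,
        // and a maxRuntime-exceeded message is reported - which matches what I see.
    }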

Swader commented Mar 20, 2016

Note that if I omit the maxRuntime config value, the job runs correctly and doesn't block: the output is accurately generated every minute, as opposed to only every third time, as happens when maxRuntime is still on.
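
For reference, this is the variant that behaves (Job 1 from above with maxRuntime dropped; purely illustrative):

    $jobby->add('Job 1', [
        'schedule' => '* * * * *',
        'output'   => __DIR__ . '/../logs/j1.log',
        'command'  => 'dateffds',
        // 'maxRuntime' => 10, // with this line removed, output shows up every minute
    ]);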
