My guess is that setting cron.php to run every minute has caused tasks to pile on top of each other, i.e., a task does not finish before the next scheduled task of the same (or a similar) nature is executed. Since neither cron.php process is aware of the other's state, the same record could be attempted twice, causing an odd exception that breaks the queued email sends.
With that said, there are Mage::log calls in the exception handlers of the queue mailer, so making sure logging is enabled would be the best first step to determine whether any exceptions are occurring. It may also be wise to run php -f cron.php from the CLI to see whether it throws any exceptions you may not be seeing while it runs in the background.
I would also start with a simple PHP mail() test to make sure you're not running into any spam policies or the like, just to be sure it's not something lower in the stack causing the issue.
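A minimal test along those lines could look like this (the recipient address is a placeholder; use one you control). It bypasses Magento entirely, so a failure here points at the MTA or server configuration rather than the email queue:

```php
<?php
// Which binary PHP hands outgoing mail to (empty/odd values are a red flag):
var_dump(ini_get('sendmail_path'));

// Attempt a bare-bones send, outside of Magento:
$sent = mail('you@example.com', 'Magento mail test', 'Plain PHP mail() test');
var_dump($sent); // false usually means a local MTA or policy problem
```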
Just some speculation, hope it helps!
* EDIT *
Use cron.sh instead of cron.php, as it greps the output of ps to check whether a previous cron process is already running.
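For reference, a typical crontab entry would call the wrapper script rather than cron.php directly (the install path below is an assumption; adjust it to your setup):

```shell
# Run Magento's cron wrapper every 5 minutes; cron.sh checks `ps` for an
# already-running cron.php before starting a new one.
*/5 * * * * /bin/sh /var/www/magento/cron.sh
```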
I'm not sure if you found the solution in the meantime, but I recently ran into the same problem on a client's server.
What happened was indeed the same: the cron job core_email_queue_send_all was sending the same emails multiple times, and each time the same exception you found was added to the exception log.
The cron job was sending the same emails multiple times because the processed_at field did not get saved in the core_email_queue table for the corresponding messages.
I added some logging to the code and looked into how the query for saving the core_email_queue message was built, and why it was missing its SET part (which should have contained the columns to be updated):
UPDATE `core_email_queue` SET WHERE (message_id='3')
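For comparison, a healthy version of that query would carry the changed columns in the SET clause; something like the following (the timestamp value here is purely illustrative):

```sql
UPDATE `core_email_queue`
SET `processed_at` = '2015-01-01 00:00:00'
WHERE (message_id = '3');
```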
In Magento, before a database query is built, the columns used in it are checked against the column descriptions for the respective table, inside the Mage_Core_Model_Resource_Abstract::_prepareDataForTable method, by calling:
$fields = $this->_getWriteAdapter()->describeTable($table);
To avoid executing a DESCRIBE query every time, Magento caches the DDL info for its tables. Inside the Varien_Db_Adapter_Pdo_Mysql::describeTable method, the cache is checked first:
public function describeTable($tableName, $schemaName = null)
{
    $cacheKey = $this->_getTableName($tableName, $schemaName);
    $ddl = $this->loadDdlCache($cacheKey, self::DDL_DESCRIBE);
    if ($ddl === false) {
        $ddl = array_map(
            array($this, 'decorateTableInfo'),
            parent::describeTable($tableName, $schemaName)
        );
        /**
         * Remove bug in some MySQL versions, when int-column without default value is described as:
         * having default empty string value
         */
        $affected = array('tinyint', 'smallint', 'mediumint', 'int', 'bigint');
        foreach ($ddl as $key => $columnData) {
            if (($columnData['DEFAULT'] === '') && (array_search($columnData['DATA_TYPE'], $affected) !== FALSE)) {
                $ddl[$key]['DEFAULT'] = null;
            }
        }
        $this->saveDdlCache($cacheKey, self::DDL_DESCRIBE, $ddl);
    }
    return $ddl;
}
I found that the columns received from the cache for the core_email_queue table were not the ones expected; instead, they were sometimes data, lifetime, expire and priority.
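To see why those wrong column names produce an empty SET clause, here is a simplified sketch of what _prepareDataForTable effectively does (the real method iterates the described fields; the values below are assumptions for illustration): only columns reported by describeTable() survive into the UPDATE, and with the four cache-metadata keys as the "columns", nothing survives.

```php
<?php
// Bad DDL returned from the cache (the TwoLevels metadata keys):
$describedColumns = array('data', 'lifetime', 'expire', 'priority');
// The row the queue actually wants to save:
$rowToSave = array('message_id' => 3, 'processed_at' => '2015-01-01 00:00:00');

// Keep only the keys that appear among the described columns:
$prepared = array_intersect_key($rowToSave, array_flip($describedColumns));
var_dump($prepared); // empty array -> "UPDATE ... SET WHERE ..."
```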
This pointed to a cache problem, and I found that Zend_Cache_Core was saving data into the DDL cache files sometimes by calling Zend_Cache_Backend_File::save() directly and sometimes by calling Zend_Cache_Backend_TwoLevels::save().
Zend's two-level cache uses the _prepareData method to build a serialized array storing the data together with its metadata:
private function _prepareData($data, $lifetime, $priority)
{
    $lt = $lifetime;
    if ($lt === null) {
        $lt = 9999999999;
    }
    return serialize(array(
        'data' => $data,
        'lifetime' => $lifetime,
        'expire' => time() + $lt,
        'priority' => $priority
    ));
}
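The mismatch can be sketched like this (values are assumptions; the real cached payload is the serialized DDL array). An instance using only the plain File backend unserializes the file contents once and takes the result at face value, so it "sees" the TwoLevels wrapper keys instead of the real data:

```php
<?php
// What the TwoLevels backend writes to the cache file: the real payload
// wrapped in a metadata array (cf. _prepareData above):
$realDdl = array('message_id' => array(), 'processed_at' => array()); // simplified DDL
$fileContents = serialize(array(
    'data'     => serialize($realDdl),
    'lifetime' => 7200,
    'expire'   => time() + 7200,
    'priority' => 8,
));

// A plain-File-backend reader unserializes once and mistakes the wrapper
// keys for the cached table description:
$misread = unserialize($fileContents);
var_dump(array_keys($misread)); // data, lifetime, expire, priority
```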
Finally, the issue was that the cron job (which sent the emails) was run from the command line. Comparing a browser request with a command-line request, I found that Mage_Core_Model_Cache::getBackendOptions was returning different options. The server was set up to use the APC cache, yet when the cron ran, ini_get('apc.enabled') was false.
The server had two PHP configuration files: fpm/php.ini, where apc.enabled was 1, and cli/php.ini, where apc.enabled was 0.
The Magento instance run from the command line was thus unable to use the APC cache, so it did not use the two-level cache, and as a result it did not know how to correctly read the data from the cache files.
The FPM Magento instance used APC and the two-level cache, and was saving DDL data into the var/cache folder enclosed in an array with the data, lifetime, expire and priority keys.
When the cron ran and read a DDL cache file, it took the data found there at face value and effectively assumed, for each table, that the columns were data, lifetime, expire and priority.
Changing apc.enabled to 1 in the cli/php.ini configuration file did the trick and solved the issue.
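For reference, the relevant CLI settings look roughly like this (the file path and the apc.enable_cli flag are assumptions for a typical APC setup; APC is disabled for the CLI SAPI by default):

```ini
; cli/php.ini
apc.enabled = 1
; Allow APC for command-line scripts as well:
apc.enable_cli = 1
```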
If you are interested in reading more about how I debugged this issue, have a look at the more detailed explanation I wrote up as a blog post: http://ayg.ro/magento-cron-twolevel-cache-issue-pdoexception-sqlstate-42s22-and-sqlstate-42000
Best Answer
I've been researching this area of Magento 1.9.1, and from what I can see, all new-order and order-update emails are now routed through the queue. For example, this is what the sendNewOrderEmail() function does now, and the queueNewOrderEmail() function forces the email into the queue. To be sure, you can add debug output in the Mage_Core_Model_Email_Template send() function to check that it uses the queue. You can also check the queued email table, core_email_queue. However, based on my research so far, I have not been able to find any other emails that are sent through the queue; contact form emails, for example, appear to send normally, in the same way they used to, which does not seem consistent with the release notes.
I'll post another question on that as I'd like to be sure.
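For reference, a quick way to inspect the queue table mentioned above (the table name comes from the answer; the column names are my assumption based on a Magento 1.9 schema):

```sql
SELECT message_id, entity_type, event_type, created_at, processed_at
FROM core_email_queue
ORDER BY created_at DESC
LIMIT 10;
```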