Segmentation Fault in CLI with Text Over 7000 Bytes – Magento Fix

Tags: ee-1.12, magento-enterprise

I'm using the following to run a direct insert query in a Magento Shell script.

Mage::getSingleton('core/resource')->getConnection('core_read')->query($q)
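
For illustration only, here is a rough sketch of what such a direct insert might look like, using hypothetical table and column names; writes normally go through the core_write connection, and passing the long string as a bound value lets the adapter handle the quoting:

$write = Mage::getSingleton('core/resource')->getConnection('core_write');
// my_table, sku and image_urls are placeholder names for this sketch
$write->query(
    'INSERT INTO my_table (sku, image_urls) VALUES (?, ?)',
    array($sku, $imageUrls)
);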

One of the string values goes into a text column, and when this string exceeds about 7000 bytes, PHP crashes with a Segmentation fault (core dumped) error.

The 7000-byte figure is approximate; I truncated the string in blocks to see at roughly what size the error occurs.

Does anyone have a clue why this happens and how to fix it? The text column accepts far more than 7000 bytes, and when I run $q in SQLyog, it inserts just fine.

The string in question is something like the following (an array of image URLs imploded with a ,):

http://www.mysite.com/media/catalog/product/cache/0/small_image/135x/images/catalog/product/placeholder/small_image.jpg,http://www.mysite.com/media/catalog/product/cache/0/small_image/135x/images/catalog/product/placeholder/small_image.jpg,http://www.mysite.com/media/catalog/product/cache/0/small_image/135x/images/catalog/product/placeholder/small_image.jpg,http://www.mysite.com/media/catalog/product/cache/0/small_image/135x/images/catalog/product/placeholder/small_image.jpg,... (just repeated ~40 times)

Each of these image URLs is obtained via (pseudo code):

foreach ($productIds as $id) {
    $product = Mage::getModel('catalog/product')->load($id);
    $myImgUrl[] = (string)Mage::helper('catalog/image')->init($product, 'small_image')->resize(135);
}

EDIT:
I'd like to add that the segmentation fault occurs whether that particular query is run on the 1st iteration or the x-th iteration of the foreach loop.

I also see that at the end of my shell script, memory_get_usage() reports over 670 MB. The usage climbs on every iteration, starting from about 16 MB before the first one, even though I reuse the same variables to write to the DB on every pass.
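
A quick way to confirm that growth is to log memory inside the loop; $i below is just a hypothetical iteration counter:

Mage::log(sprintf('iteration %d: %d bytes', $i, memory_get_usage(true)), null, 'memory.log');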

Update:

I managed to avoid this problem by doing something like the following instead of ...->load($id). Fooman provided some insight into this problem in his answer in this thread [Link]:

$product = Mage::getModel('catalog/product')->getCollection()
    ->addAttributeToSelect('my_atts')
    ->addAttributeToFilter('entity_id', $id)
    ->getFirstItem();
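
For context, a sketch of how the whole loop might look with this change (assuming the $productIds list from the pseudo code above, and that only the small_image attribute is needed):

$myImgUrl = array();
foreach ($productIds as $id) {
    // Fetch only the needed attribute instead of a full ->load()
    $product = Mage::getModel('catalog/product')->getCollection()
        ->addAttributeToSelect('small_image')
        ->addAttributeToFilter('entity_id', $id)
        ->getFirstItem();
    $myImgUrl[] = (string)Mage::helper('catalog/image')
        ->init($product, 'small_image')
        ->resize(135);
}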

I understand. Your gist was very helpful. Although it's something I've done before, I didn't realize the memory impact. I replaced all of the …->load() calls with your recommendation, …->getCollection(), and everything works well now. Memory usage stays constant at around 16 MB, and the script is noticeably faster.

Best Answer

As the SO threads linked below suggest, you may have run out of memory if you are (as I suspect) running this from the CLI.

To increase your PHP CLI memory_limit to the recommended minimum of 256 MB, you must first locate the INI file loaded for the CLI. To do so:

$ php -i | grep "Loaded Configuration"

This will return something akin to:

Loaded Configuration File => /usr/local/etc/php/5.4/php.ini

Then edit the php.ini file at that location and increase memory_limit to at least 256M.
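
For example, using the path from the output above:

; /usr/local/etc/php/5.4/php.ini
memory_limit = 256M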

After you save and close you can check that it's been updated with:

$ php -i | grep memory_limit

With any luck it should return:

memory_limit => 256M => 256M
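
If you'd rather not change the global INI, you can also raise the limit for a single run with PHP's -d switch (the script path here is just an example):

$ php -d memory_limit=256M shell/my_script.php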

Parting thoughts:

This isn't necessarily Magento's (or even PHP's) fault. Magento routinely handles "direct" inserts many hundreds of kilobytes larger than yours. I suspect there are other factors at play, such as how you're deriving the URLs in the first place; calling Mage::getModel('catalog/product')->load() inside a loop, for instance.

In short, you may have created some memory leaks, instantiated objects unnecessarily, loaded massive objects or collections from the db unnecessarily, and coded your way into this position.

Best of luck.


Sources:

https://stackoverflow.com/questions/12191996/php-in-commandline-segmentation-fault-core-dumped-debug-while-running-phpi

https://stackoverflow.com/questions/15714909/php-what-does-segmentation-fault-core-dumped-error-means