phpThumb/ImageMagick convert performance issue for concurrent requests

We are using phpThumb for image resize, crop, and the other operations phpThumb supports. We have configured phpThumb to use ImageMagick (and, in turn, convert). However, we are observing that the convert command is taking too much time, which degrades performance. Performance gets worse as image file size increases and as the number of concurrent requests on the live/production server grows. We have tried various options to improve performance, but none of them are working. Following is the ImageMagick configuration:

convert -list configure

Path: /usr/local/imagemagick-talos-6.9.1/lib/ImageMagick-6.9.1//config-Q16/configure.xml
CC             gcc -std=gnu99 -std=gnu99
CFLAGS         -I/usr/include/OpenEXR -I/usr/include/freetype2 -g -O2 -Wall -mtune=core2 -fexceptions -pthread -DMAGICKCORE_HDRI_ENABLE=0 -DMAGICKCORE_QUANTUM_DEPTH=16
CODER_PATH     /usr/local/imagemagick-talos-6.9.1/lib/ImageMagick-6.9.1/modules-Q16/coders
CONFIGURE      ./configure  '--prefix=/usr/local/imagemagick-talos-6.9.1' '--disable-openmp' 'PKG_CONFIG_PATH=/usr/lib64/pkgconfig:/usr/share/pkgconfig'
CONFIGURE_PATH /usr/local/imagemagick-talos-6.9.1/etc/ImageMagick-6/
COPYRIGHT      Copyright (C) 1999-2015 ImageMagick Studio LLC
CPPFLAGS       -I/usr/local/imagemagick-talos-6.9.1/include/ImageMagick-6
CXX            g++
CXXFLAGS       -g -O2 -pthread
DEFS           -DHAVE_CONFIG_H
DELEGATES      bzlib djvu mpeg fftw fontconfig freetype jng jpeg lcms openexr pango png ps tiff webp x xml zlib
DISTCHECK_CONFIG_FLAGS --disable-deprecated --with-quantum-depth=16 --with-jemalloc=no --with-umem=no --with-autotrace=no --with-gslib=no --with-fontpath= --with-gvc=no --with-rsvg=no --with-wmf=no --with-perl=no
DOCUMENTATION_PATH /usr/local/imagemagick-talos-6.9.1/share/doc/ImageMagick-6
EXEC-PREFIX    /usr/local/imagemagick-talos-6.9.1
EXECUTABLE_PATH /usr/local/imagemagick-talos-6.9.1/bin
FEATURES       DPC Cipher
FILTER_PATH    /usr/local/imagemagick-talos-6.9.1/lib/ImageMagick-6.9.1/modules-Q16/filters
HOST           x86_64-unknown-linux-gnu
INCLUDE_PATH   /usr/local/imagemagick-talos-6.9.1/include/ImageMagick-6
LDFLAGS        -L/usr/local/imagemagick-talos-6.9.1/lib
LIB_VERSION    0x691
LIB_VERSION_NUMBER 6,9,1,4
LIBRARY_PATH   /usr/local/imagemagick-talos-6.9.1/lib/ImageMagick-6.9.1
LIBS           -llcms2 -ltiff -lfreetype -ljpeg -lpng12 -ldjvulibre -lfftw3 -lfontconfig -lwebp -lXext -lXt -lSM -lICE -lX11 -lbz2 -lIlmImf -lImath -lHalf -lIex -lIlmThread -pthread -lpangocairo-1.0 -lpango-1.0 -lcairo -lgobject-2.0 -lgmodule-2.0 -lgthread-2.0 -lrt -lglib-2.0 -lxml2 -lz -lm
NAME           ImageMagick
PCFLAGS        -DMAGICKCORE_HDRI_ENABLE=0 -DMAGICKCORE_QUANTUM_DEPTH=16
PREFIX         /usr/local/imagemagick-talos-6.9.1
QuantumDepth   16
RELEASE_DATE   2015-08-05
SHARE_PATH     /usr/local/imagemagick-talos-6.9.1/share/ImageMagick-6
SHAREARCH_PATH /usr/local/imagemagick-talos-6.9.1/lib/ImageMagick-6.9.1/config-Q16
SVN_REVISION   18701
TARGET_CPU     x86_64
TARGET_OS      linux-gnu
TARGET_VENDOR  unknown
VERSION        6.9.1
WEBSITE        http://www.imagemagick.org
Path: [built-in]

FEATURES
NAME           ImageMagick
QuantumDepth   16

Please guide us if we are missing anything to improve performance. We are running it on an Apache server. We don't want to compromise image quality by using QuantumDepth=8.

You could probably do with adding more detail, but changing to Imagick may improve speed, although phpThumb may not be able to use Imagick.
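
For anyone weighing that switch, here is a minimal sketch of resizing directly with the Imagick extension (assuming it is installed; the file names are hypothetical placeholders):

<?php
// Minimal Imagick resize sketch; assumes the Imagick PHP extension is
// installed. File names are hypothetical placeholders.
$image = new Imagick('large-photo.jpg');

// Fit within a 230x230 box, preserving aspect ratio (bestfit = true).
$image->thumbnailImage(230, 230, true);

$image->writeImage('thumb-photo.jpg');
$image->clear();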

If you are saving all images as JPEGs you could try adding -define, but again it may not be supported by phpThumb:

Set the size hint of a JPEG image, for example, -define jpeg:size=128x128. It is most useful for increasing performance and reducing the memory requirements when reducing the size of a large JPEG image.
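
If phpThumb cannot pass that flag through, the same hint can be set via the Imagick API before the image is read. A sketch, assuming Imagick is available (file names are hypothetical):

<?php
// Sketch: the jpeg:size hint applied via Imagick. The option must be set
// *before* readImage() so the JPEG decoder can downsample while decoding
// instead of loading the full-resolution image first.
$image = new Imagick();
$image->setOption('jpeg:size', '128x128'); // decode roughly to this size
$image->readImage('large-photo.jpg');      // hypothetical file name
$image->thumbnailImage(128, 128, true);
$image->writeImage('thumb.jpg');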

Have you tried converting an image directly with ImageMagick or Imagick to see if phpThumb adds an overhead?

This is absolutely normal and expected for image manipulation operations: if images are large, converting them will take time. If performance degrades with concurrent requests, that suggests a suboptimal solution on your end. Image conversions should not run concurrently the way you describe; the correct approach is to convert an image once, write the result to a file, and have subsequent requests fetch only the cached file.
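
A minimal cache-first sketch of that approach (paths, dimensions, and the cache-key scheme are hypothetical):

<?php
// Convert once, serve from the cache afterwards. Paths and sizes are
// hypothetical placeholders.
$source   = '/var/www/uploads/photo.jpg';
$cacheKey = md5($source . '|230x230');
$cached   = '/var/www/cache/' . $cacheKey . '.jpg';

if (!file_exists($cached)) {
    // Only the first request pays the conversion cost.
    $image = new Imagick($source);
    $image->thumbnailImage(230, 230, true);
    $image->writeImage($cached);
    $image->clear();
}

header('Content-Type: image/jpeg');
readfile($cached);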

Hey Rubble,
Thanks for your quick reply…
Here are a few more clarifications.
If you are saving all images as JPEGs you could try adding -define, but again it may not be supported by phpThumb:

We are not saving all images as JPEGs; we save each image based on its actual type (JPEG, GIF, PNG, etc.).

Set the size hint of a JPEG image, for example, -define jpeg:size=128x128. It is most useful for increasing performance and reducing the memory requirements when reducing the size of a large JPEG image

We can't put any size limit in place, as we need images of all sizes (smaller as well as bigger).

Have you tried converting an image directly with ImageMagick or Imagick to see if phpThumb adds an overhead?

We suspected this too. However, even when we convert an image directly with ImageMagick, performance is hampered.

Our main objective is to reduce the time convert takes for the intended operation.
Please suggest.

Thanks Lemon for your quick reply…
We have implemented caching at our end and are ensuring that subsequent requests are served from the cache. Also, we have tuned Apache to handle the maximum possible number of concurrent requests. The major issue is that convert is taking too much time. Is there anything we can optimize on that front?

Well, then maybe share with us more details and numbers about the volume of conversions your site needs: how many conversions per minute/hour, how long a single conversion lasts, how many new images to convert are uploaded, etc. Normally, creating a thumb takes up to 2-3 seconds, so if you have server overload problems even while using caching, this would suggest you have a site with an insanely huge number of new images coming in every day.

Hello Lemon,
Currently, we are evaluating performance using siege (a performance testing tool).
Here are a few stats from it:

For image size: 1.53 MB
Concurrency: 50 for 60 seconds (-c50 -t60S)
Transactions: 346 hits
Elapsed time: 59.73 secs
Data transferred: 267.56 MB
Response time: 7.32 secs
Transaction rate: 5.79 trans/sec
Longest transaction: 21.45 seconds
Shortest transaction: 1.02 seconds

Server configuration:
8 CPU cores and 32 GB of memory

Basically, our aim is to increase the transaction rate (currently 5.79 trans/sec) and reduce the longest transaction time (currently 21.45 seconds).

Please suggest.

I was more interested in image numbers than in general server stats, which I'm not able to interpret very well: how many images per time frame, how large the images are, how long a single conversion lasts, etc.

I have noticed --disable-openmp in your configure output; have you tried without it disabled?

Yes Rubble, we have tried it both ways… but no luck so far.

Hello Lemon, to give you more insight on this: image sizes will be greater than or equal to 1.53 MB (approx. 3072 x 2048 pixels). Usually, with this image size, a single transaction takes around 18-20 seconds. The current transaction rate is 5.79 transactions/second; however, we want a rate of about 15 transactions/second. Please suggest.

I had a similar performance problem with creating thumbnails and used PHP curl(…). It solved the memory problems.

thumbs using curl(…)

Please clarify what you mean by a transaction. Is this image conversion? Downsizing to a thumbnail?

If you want to achieve 15 conversions per second then it is 900 conversions per minute, 54,000 conversions per hour, 1,296,000 conversions a day. Do you have this amount of new images being uploaded every day?

Here, by a transaction we mean an operation, e.g. crop, resize, saturation, etc. We might need this many image uploads every day.

So it looks like you are dealing with a site of Facebook's size! With such a huge number of conversions I would consider adding another server to handle the processing volume and offload the main server, or at least upgrading your current machine with a faster CPU, more cores, and more RAM if possible.

Maybe some optimizations are possible now, but I wouldn't expect any spectacular results - still, if you posted the specific ImageMagick commands you are running, maybe we would be able to help optimize them. Also -

We don't want to compromise image quality by using QuantumDepth=8.

In this case I would consider it anyway. Depending on what you do with IM, it may turn out that the quality degradation with QuantumDepth=8 is imperceptible. Try doing a few conversions with QD=8 and higher and compare the results closely.
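
One way to make that comparison concrete: a sketch that measures the pixel difference between a Q8-built and a Q16-built result of the same conversion (file names are hypothetical):

<?php
// Compare outputs produced by a Q8 build and a Q16 build of ImageMagick.
// A very small RMSE suggests the QuantumDepth=8 loss is imperceptible.
$q8  = new Imagick('thumb-q8.jpg');   // hypothetical output from the Q8 build
$q16 = new Imagick('thumb-q16.jpg');  // hypothetical output from the Q16 build

list($diff, $rmse) = $q8->compareImages($q16, Imagick::METRIC_ROOTMEANSQUAREDERROR);
echo "RMSE: {$rmse}\n";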

Maybe some optimizations are possible now, but I wouldn't expect any spectacular results - still, if you posted the specific ImageMagick commands you are running, maybe we would be able to help optimize them.

=> We are using phpThumb for image operations, which internally uses ImageMagick.
The following are two phpThumb image operations:

fltr[]=crop| l | r | t | b
fltr[]=size| x | y | s

While these operations are executing, we see the following ImageMagick command running in the background:

~/imagemagick-talos-6.9.1/bin/convert -density 150 -background #FFFFFF -quality 95 -interlace line ~/<IMAGE_PATH>/e8879e80-2921-478f-b705-a980db6699af.jpg[0] jpeg:/tmp/pThumbbFblWR

Please let us know if you need any other details…
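
For illustration only (phpThumb may not expose this, as noted earlier in the thread), here is how the jpeg:size hint could be added to a call like the one above when the target size is known before decoding. The paths and the 460x460 size are hypothetical stand-ins:

<?php
// Sketch: the same convert call with a jpeg:size decode hint added.
// This is NOT phpThumb's own code; paths and sizes are hypothetical.
$cmd = '~/imagemagick-talos-6.9.1/bin/convert'
     . ' -define jpeg:size=460x460'   // hint: decode roughly to output size
     . ' -density 150 -background "#FFFFFF" -quality 95 -interlace line'
     . ' ~/IMAGE_PATH/source.jpg[0]'
     . ' jpeg:/tmp/pThumb_example';
exec($cmd, $output, $status);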

I am wondering if you are looking at this problem from the wrong angle.

This is a scaling issue, and while it is good that the code/process runs as fast as possible, what you are trying to achieve here should be set up in such a way that you can easily scale by adding more hardware.

In other words, this is an architecture problem, not necessarily one caused by the execution speed of the code. This becomes especially true if you think long term.

Assuming the system is not already set up this way, I would recommend the following:

Make the thumbnail conversion asynchronous and use a queue system to handle the requests. The initial request saves the uploaded image to a shared cache and then creates a queue message. The first available thumbnail worker (server) then takes the request and creates the thumbnail. By queue system I mean RabbitMQ, Amazon SQS, etc. (There are a lot of alternatives, so make certain to do a proper review to find the best fit for your use case.)

The benefit of this system is that you do not have a bottleneck when you need to scale fast. All you need to do is add more servers to create the workers you need. This way you can rapidly scale the operation from creating 100 thumbnails a minute to 10,000 thumbnails a minute without having to change a line of source code.

And not to mention, many times it is more effective to have several smaller servers do the job than one large one. If you then take redundancy into the calculation, it is a no-brainer.
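
To make the queue idea concrete, a minimal sketch of the worker side. QueueClient is a hypothetical stand-in; replace it with the SDK of whichever queue you pick (RabbitMQ, Amazon SQS, etc.):

<?php
// Hypothetical worker loop for the queue architecture described above.
interface QueueClient
{
    /** Block until a job message is available, then return it. */
    public function receive(): array;

    /** Acknowledge a finished job so it is not redelivered. */
    public function acknowledge(array $job): void;
}

function runWorker(QueueClient $queue): void
{
    while (true) {
        $job = $queue->receive();

        // The message carries paths into the shared cache plus the
        // requested output dimensions.
        $image = new Imagick($job['source']);
        $image->thumbnailImage($job['width'], $job['height'], true);
        $image->writeImage($job['target']);
        $image->clear();

        $queue->acknowledge($job);
    }
}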
