==Varnish Overview==
"Varnish Cache is a "web application accelerator also known " that acts as a caching an HTTP reverse proxy. You install it in , basically Varnish is the front of any server that speaks HTTP the line for serving requests. Apache sits behind Varnish and configure it serves requests only to cache Varnish if the content is not already in the contents. Varnish Cache is reallycache, really fastor cannot be cached for whatever reason. It typically speeds up delivery with Images, css, html, and a factor lot of 300 - 1000xother stuff can be '''[http://wiki.mikejung.biz/Category:Caching cached]''' by Varnish which means Apache does less work, depending on your architecturewhich generally means a happier server and a fast website."
Any time you access something from RAM, the access time for that file becomes faster; since Varnish can store a large amount of files in RAM, it can significantly improve '''[http://wiki.mikejung.biz/Category:Performance performance]'''. I've seen boosts of 10x to 100x simply by sticking Varnish in front of WordPress. If your site is slow and you have some static content, installing Varnish will work wonders. There is a [https://www.varnish-cache.org/docs/4.0/reference/vcl.html VCL] language that you can use to set rules about what to do with a certain type of request, what files to cache, what files not to cache, and for how long to cache said file.
If you like what you see so far, then continue reading to find out how to make the cloud faster, or something.
Installing Varnish is a rather simple process. Configuration can sometimes be a little more tricky, especially with the default.vcl file. While you can create some very complex VCL rules and complex configurations, you can also just leave the default.vcl alone and run with the defaults. It may not be optimal, but it'll still probably be faster than just sticking with '''[[Apache]]'''!
If you aren't already using something like '''[[Php-fpm]]''' or '''[[HHVM]]''' for PHP, you may still notice slow loading times. Varnish can only help so much if your database server is slow or you are using something like SuPHP to run PHP.
Varnish can also be used to load balance HTTP requests between multiple backend servers, which makes it function like a basic load balancer that caches items. Varnish is obviously RAM hungry, the whole point of caching is to use as much RAM as possible, so you should give Varnish enough RAM to fit most of your www/ content into its cache. Varnish is not very CPU intensive unless you are getting large amounts of requests; Varnish running with a single core on a VPS can handle at least 100 requests/s, if not much more. If you are looking for more information about '''[http://wiki.mikejung.biz/Varnish_Processes how Varnish handles its processes and threads]''', this page may be for you!

==Varnish Management and Child Processes==
Varnish runs with two main processes:

===Management Process===
*'''Management Process''' - The Varnish management process handles configuration changes such as VCL rules, or how much RAM Varnish can use. The management process also compiles VCL, monitors the child process, and restarts the child process if there is an issue or no response after a second or so. The management process also handles logging, which is done by syslog.

===Child Process and Threads===
*'''Child Process''' - The Varnish child process consists of many threads, which handle various tasks such as accepting new connections, handling current connections (worker threads), and evicting old items from the cache.

In order to reduce contention between all the threads, Varnish uses workspaces, which allow each thread to use or modify memory independently of other threads, avoiding locking. The most important workspace is the session workspace, which handles and modifies session data. An example of this would be removing www. from a domain in order to save space in the cache. If this type of modification doesn't happen, you could end up with double the cache, since www.domain.com and domain.com would both have their own cached content. This is obviously a waste of space, so the Varnish child process and threads really do a lot of work to improve efficiency.

Keep in mind that even if you have, say, 100 threads running and a session workspace size of 1MB, your server usually won't use all 100MB of RAM. Keep this in mind if you are looking at the total virtual memory usage for Varnish; the number will usually appear very high, but in most cases Varnish is not actually using all of that space.

The Varnish child process uses a log which is accessed from the file system and resides in shared memory. Each thread logs whatever it needs to the shared memory log by obtaining a lock, logging the event, and then releasing the lock. All of the worker threads keep a cache of this log to reduce contention; that way each thread can read from its own cached log without having to wait on a lock, and if a thread needs to write something it can do so quickly and populate the cached log so all threads are aware of what is going on.

The log file is about 80MB in size and split into two parts. Half of the log file handles counters and stats, the other half logs what data has been requested. The log is meant to be written to in RAM, not on disk, so the information in it can be very verbose since writing to RAM has much lower overhead than writing to disk. You can use utilities to parse this log data later on, or use something like varnishstat to view real time activity.
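A few of the standard tools that ship with Varnish read directly from this shared memory log, so you can watch activity in real time without generating any extra disk IO. Run them on the Varnish server itself; the ReqURL tag name below assumes Varnish 4.
<pre>
varnishstat            # real time counters (hits, misses, threads, storage usage)
varnishlog             # full request / response log, straight from the shm-log
varnishtop -i ReqURL   # continuously updated list of the most requested URLs
</pre>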
==Varnish Storage Backends==
Varnish has three storage options for the '''[http://wiki.mikejung.biz/Category:Caching cache]''': file, malloc, and persistent. The most common type is malloc. Varnish mentions that the persistent storage option is still experimental, so there are really only two options to use in a production environment. Regardless of whether you use the file or malloc method, keep in mind that there is an approximate overhead of 1KB of memory per object stored in the Varnish cache. If you had 10,000 objects in the cache, each with 1KB of overhead, there would be a total of 10MB of overhead. This means that even if you set the cache size to 100MB, Varnish will go over this limit if the cache ever gets completely full. This shouldn't be a huge issue in most cases, but keep it in mind if you notice a Varnish server swapping a lot.

Varnish uses about 100MB of RAM even if it has nothing in its cache. This is relatively small, but still something to keep in mind if you are using a server without a lot of memory.

===malloc===
Varnish will request the entire size of the cache with a malloc call. Every object stored by Varnish will be stored in RAM. Keep in mind that if you do not limit RAM usage, Varnish might use up all available RAM, forcing the OS to start swapping, at which point performance may degrade. Be sure to size the Varnish malloc allocation accordingly.

Each object that is stored in Varnish will consume about 1KB of RAM regardless of its size, so many small files may start to bloat memory usage in a hurry. It's best to set a limit when using malloc. Use malloc if you have enough free memory to store the entire cache. For instance, if you have a WordPress blog with only, say, 128MB of images and other static content, and you have 512MB to spare on your server, I'd tell Varnish to use 128MB of cache using the malloc method. If you configure Varnish to use malloc, it will request all of the memory at once and then spread the cache between memory and disk by swapping out the less used objects and swapping the most requested objects into memory. The malloc method allows Varnish to be very fast and very efficient by only keeping objects in RAM that are accessed frequently.

The malloc method does not retain cached data after a restart, so if you stop Varnish or have to reboot the server it will take some time for the cache to warm up again. For persistent caches, use the persistent storage option. Malloc is the fastest option to use with Varnish.

You can configure Varnish to use malloc as backend storage by modifying '''/etc/sysconfig/varnish'''. You can either define the variable and pass it to DAEMON_OPTS, or just define DAEMON_OPTS directly.
<pre>
vim /etc/sysconfig/varnish
</pre>
<pre>
VARNISH_STORAGE="malloc,100M"

*OR*

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,100M \
             -u www-data \
             -g www-data"
</pre>
Then restart Varnish
<pre>
service varnish restart
</pre>
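If you want to double check how full the malloc cache actually is, varnishstat exposes the storage counters. The SMA.s0 prefix below assumes the default, unnamed malloc storage; if you gave your storage backend an explicit name the prefix will differ.
<pre>
varnishstat -1 | grep "SMA.s0"
# SMA.s0.g_bytes is the number of bytes currently in use
# SMA.s0.g_space is how much of the malloc limit is still free
</pre>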
===file===
Varnish creates a file on the filesystem to contain the entire cache. It then tells the OS via mmap() to map the entire file into memory if possible. Keep in mind that if you are still using spinning disks, you may see lower performance than with malloc due to write latency. If you are using SSDs, or have a large RAID that can handle some extra IO, the file option may be a good choice. You can change the file location to whatever you want, so if you have a spare disk, create the file there and update '''VARNISH_STORAGE_FILE''' with the full path to the file.

*Choose file if you have a large cache that will not fit entirely into RAM AND you have fast disks (SSDs).

Use the file storage method if you don't have a lot of free memory, or have a large amount of files to cache. For example, if you have a '''[[WordPress Optimization|WordPress]]''' blog with 20GB of images, and your server only has 4GB of RAM, using the file method would make more sense. If you configure Varnish to use the file option, a file is created on the file system which holds everything in the cache, and Varnish will then try to get the OS to map the entire file into memory if there is space.

The file storage method does not retain cached data after a restart, so if you stop Varnish or have to reboot the server it will take some time for the cache to warm up again. For persistent caches, use the persistent storage option.

You can configure Varnish to use file as backend storage by modifying '''/etc/sysconfig/varnish'''. You can either define the variables and pass them to DAEMON_OPTS, or just define DAEMON_OPTS directly. Note that the file storage string takes the path first and then the size.
<pre>
vim /etc/sysconfig/varnish
</pre>
<pre>
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},100M"

*OR*

DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s file,/var/lib/varnish/varnish_storage.bin,100M \
             -u www-data \
             -g www-data"
</pre>
Then restart Varnish
<pre>
service varnish restart
</pre>

==Varnish Shared Memory Log==
The Varnish log, sometimes known as the "shm-log", should always be stored in memory and not on disk, otherwise it can cause large amounts of IO. You should ensure that this log really is in memory by placing it on a tmpfs mount. The shmlog is usually located in /var/lib/varnish, and you can feel free to remove any data found in this directory if needed. Varnish suggests making sure this location is mounted on tmpfs, using /etc/fstab so that it gets mounted again on server reboot.

According to the official Varnish documentation, some Linux distros will try to place the cache in the same directory as the shm-log, which Varnish does not recommend. If this is the case for you, you should move one of them so that the cache and the log file are not located in the same directory.

'''Shared memory log''' -- Not much needs to be done with this besides making sure the log is stored in RAM (/dev/shm).
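If /var/lib/varnish is not already a tmpfs mount on your distro, an /etc/fstab entry along these lines will keep the shm-log in RAM across reboots. The 128m size is only an example value; it needs to be big enough for the roughly 80MB log plus some headroom.
<pre>
# /etc/fstab -- keep the Varnish shm-log on tmpfs (size is an example value)
tmpfs   /var/lib/varnish   tmpfs   defaults,noatime,size=128m   0 0
</pre>
Mount it (mount /var/lib/varnish) and restart Varnish so the log gets recreated on the tmpfs.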
==Varnish 4.0 Configuration Files==
When you change any of these files you need to run service varnish restart for the changes to take effect.

===Ubuntu and Debian Varnish Server Config===
On most Debian and Ubuntu servers you can find the main configuration file for Varnish in
<pre>
/etc/default/varnish
</pre>

===CentOS 6 and 7 Varnish Server Config===
On most CentOS 6.x servers you can find the main configuration file for Varnish in
<pre>
/etc/sysconfig/varnish
</pre>
For CentOS 7 the file is located in
<pre>
/etc/varnish/varnish.params
</pre>

===Varnish 4.0.3 CentOS 7 Main Configuration Options===
If you want to have Varnish use malloc storage you can edit '''/etc/varnish/varnish.params''' and change the storage option. By default Varnish will use the file backend; in this example I am using the malloc storage option with 1GB of memory.
<pre>
RELOAD_VCL=1
VARNISH_VCL_CONF=/etc/varnish/default.vcl
VARNISH_LISTEN_ADDRESS=$IP_Varnish_Should_Listen_On
VARNISH_LISTEN_PORT=80
VARNISH_ADMIN_LISTEN_ADDRESS=127.0.0.1
VARNISH_ADMIN_LISTEN_PORT=6082
VARNISH_SECRET_FILE=/etc/varnish/secret
VARNISH_STORAGE="malloc,1G"
VARNISH_TTL=120
VARNISH_USER=varnish
VARNISH_GROUP=varnish
</pre>
Leaving VARNISH_LISTEN_ADDRESS blank means that Varnish will listen on all interfaces. If this is the front end server you will usually want it listening on the public IP, and you will more than likely want VARNISH_LISTEN_PORT set to 80.

==Varnish 3 Configuration==
===VARNISH_MIN_THREADS===
For Varnish 3.0 the default value is 2 (according to the man page). If you have a very busy server, you may want to raise this value. Keep in mind that this is only the minimum value, and it really only matters if you get frequent bursts of traffic. This can be found and configured in '''/etc/sysconfig/varnish'''. Varnish 4.0 seems to use a value of 200 threads, which should be fine for most servers.
<pre>
VARNISH_MIN_THREADS=200
</pre>

===VARNISH_MAX_THREADS===
For Varnish 3.0 the default value is 500 (according to the man page). If you have a very busy server, you may want to raise this value. Raising it to very high levels may cause some issues, so watch the server after you change this value to make sure Varnish doesn't go all crazy on you. This can be found and configured in '''/etc/sysconfig/varnish'''
<pre>
VARNISH_MAX_THREADS=500
</pre>

===VARNISH_THREAD_TIMEOUT===
For Varnish 3.0 the default value is 300 seconds (according to the man page). If you prefer to kill your threads while they are young, you can lower the value. If you want your threads to have a long, fulfilling life and experience all the static files they can, raise this value. This can be found and configured in '''/etc/sysconfig/varnish'''
<pre>
VARNISH_THREAD_TIMEOUT=300
</pre>

===VARNISH_TTL===
For Varnish 3.0 the default value is 120 seconds (according to the man page). This is the default TTL assigned to new objects that nobody cared about or specified a TTL for; it is a minimum, hard time for objects to live in the cache. If you want to clear out the cache more frequently, and don't like to specify TTLs elsewhere, then lowering or raising this value is your best bet. This can be found and configured in '''/etc/sysconfig/varnish'''
<pre>
VARNISH_TTL=120
</pre>

==Varnish 4.0 Configuration==
===thread_pools===
By default there are 2 thread pools, and the minimum value is 1. This setting sets the number of worker pools that Varnish will use. You can increase this value on the fly without a restart, but you must restart the Varnish server if you decrease it. Technically this is still flagged as an experimental setting, so be careful if you decide to raise the value.

If you have many CPUs, then raising this value to the number of CPUs probably makes sense, because the more thread pools there are, the less locking and contention Varnish has to deal with. Raising thread_pools too high, or well above the number of CPUs, is not a good idea and could end up hurting performance and wasting resources. Increasing this to a sane value will reduce lock contention and therefore improve performance.
<pre>
thread_pools=2
</pre>

===thread_pool_max===
The Varnish '''thread_pool_max''' value is the maximum number of worker threads per pool. The default value is 5000 and the minimum value is 100. You may not want to raise this value at all, as more threads can cause threads to start to step on each other's toes. Do you really need more than 5000 threads? Do you?
<pre>
thread_pool_max=5000
</pre>

===thread_pool_min===
The Varnish '''thread_pool_min''' value is the minimum number of worker threads per pool. The default value is 100 and the maximum value is 5000. This setting works together with '''thread_pool_max'''; you cannot set the minimum value higher than the max value. The default setting is probably fine in most cases, and you may find better performance by creating more pools with fewer workers, or vice versa. Varnish is already pretty damn fast.
<pre>
thread_pool_min=100
</pre>
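You can also look at and change these thread parameters on a running instance with varnishadm instead of editing the startup options. Changes made this way only affect the running daemon, so anything you want to keep across restarts still needs to go into the startup config. A quick sketch:
<pre>
# show the current values
varnishadm param.show thread_pools
varnishadm param.show thread_pool_min

# bump the minimum number of threads per pool on the fly (example value)
varnishadm param.set thread_pool_min 200
</pre>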
==Varnish backend Misc section==
To configure Varnish to use the '''file''' backend, do nothing, it's the default. However, you can change the size of the cache by changing the "1GB" value to whatever is appropriate.
<pre>
vim /etc/sysconfig/varnish

# # Cache file location
VARNISH_STORAGE_FILE=/var/lib/varnish/varnish_storage.bin
#
# # Cache file size: in bytes, optionally using k / M / G / T suffix,
# # or in percentage of available disk space using the % suffix.
VARNISH_STORAGE_SIZE=1GB
#
# # Backend storage specification
VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
</pre>
To configure Varnish to use the '''malloc''' backend, comment out the original VARNISH_STORAGE setting and create a new one that specifies "malloc,$size_of_cache"
<pre>
# # Backend storage specification
#VARNISH_STORAGE="file,${VARNISH_STORAGE_FILE},${VARNISH_STORAGE_SIZE}"
VARNISH_STORAGE="malloc,256MB"
</pre>

==default.vcl==
*This contains your VCL and backend definitions. After changing this, you can run either service varnish reload, which will not restart Varnish, or service varnish restart, which empties the cache.
<pre>
/etc/varnish/default.vcl
</pre>
<pre>
backend default {
    .host = "localhost";
    .port = "8080";
}
</pre>
*A backend server is the server providing the content Varnish will accelerate. '''.host''' is the host that Varnish talks to to get content. If Varnish is installed on the same server that '''[[Apache]]''' is running on, this would be localhost or 127.0.0.1. If Apache is on another server, you would put in the IP of that server; a private network IP is preferable, but a public IP will work as well. '''.port''' is the port that Varnish connects to to get content to cache. If Apache is listening on port 80, then you would set this to 80; if Apache is listening on port 8080, then you would set this to 8080.

==Varnish Commands==
===Restart Varnish===
This will restart Varnish, obviously. If you prefer a more graceful approach and just want to update VCL without clearing the cache, try the reload command.
<pre>
service varnish restart
</pre>

===Reload Varnish===
Reloads the VCL file; the cache is not affected, which is obviously preferred over a full restart.
<pre>
service varnish reload
</pre>

===Varnishstat===
Shows a TON of useful information such as cache hits, misses, requests, backend connections and lots of other info.
<pre>
varnishstat
</pre>

===Varnish Command Line Options===
<pre>
Listen address
-a <[hostname]:port>

Specifies the vcl file location
-f <filename>

Set the tunable parameters
-p <parameter=value>

Authentication secret for management
-S <secretfile>

Management interface
-T <hostname:port>

Where and how to store objects
-s <storagetype,options>
</pre>
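One more command worth knowing: you can invalidate objects from the cache at runtime with varnishadm's ban command, which saves you from doing a full restart just to drop stale content. The patterns below are only examples; a more specific regex against req.url is usually what you want.
<pre>
# ban (invalidate) every object in the cache
varnishadm "ban req.url ~ ."

# ban only objects under /images/
varnishadm "ban req.url ~ ^/images/"
</pre>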
==Install Varnish 4.0.3 on CentOS 7==
Installing Varnish 4.0.3 on CentOS 7 can be tricky since the repos don't always have up to date packages. You can grab all 3 Varnish rpms from Varnish's el7 repo and manually install them. I've found that as of Feb 18th 2015 this is the easiest way to install the latest version of Varnish.

You will more than likely need to install the latest version of jemalloc, otherwise the Varnish installation will fail. If you are curious about what jemalloc is and does, please check out the 2006 pdf -- https://www.bsdcan.org/2006/papers/jemalloc.pdf
<pre>
yum install gcc
wget https://dl.fedoraproject.org/pub/epel/6/x86_64/Packages/j/jemalloc-3.6.0-1.el6.x86_64.rpm
wget https://repo.varnish-cache.org/redhat/varnish-4.0/el7/x86_64/varnish/varnish-4.0.3-1.el7.centos.x86_64.rpm
wget https://repo.varnish-cache.org/redhat/varnish-4.0/el7/x86_64/varnish/varnish-libs-4.0.3-1.el7.centos.x86_64.rpm
wget https://repo.varnish-cache.org/redhat/varnish-4.0/el7/x86_64/varnish/varnish-libs-devel-4.0.3-1.el7.centos.x86_64.rpm
rpm -iv jemalloc-3.6.0-1.el6.x86_64.rpm
rpm -iv varn*.rpm
</pre>
Over time, the versions will be updated, so please make sure the versions are still current by visiting this link - https://repo.varnish-cache.org/redhat/varnish-4.0/el7/x86_64/varnish/ If there are newer packages than what I have listed, simply replace the wget commands with the newer packages and the installation process should remain the same.

==Install Varnish 3.0.5 On CentOS 6.5==
Download and install the official repo from Varnish. The repo below is for Varnish 3; if you want to use Varnish 4, please use the next repo down this page.
<pre>
rpm --nosignature -i https://repo.varnish-cache.org/redhat/varnish-3.0.el6.rpm
</pre>
 
 
Install Varnish
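With the varnish-3.0 repo added above, the package install itself should just be a yum install (a sketch; the package name is what the Varnish repos ship):
<pre>
yum install varnish
</pre>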
Edit '''/etc/sysconfig/varnish''' so that Varnish listens on port 80 on the IP you want to serve traffic from:
<pre>
##change to
VARNISH_LISTEN_ADDRESS=$IP_you_want_varnish_to_listen_on
VARNISH_LISTEN_PORT=80
</pre>
==Install Varnish 4.0.2 On CentOS 6.5==
Download and Install the Repo from Varnish
<pre>
rpm --nosignature -i https://repo.varnish-cache.org/redhat/varnish-4.0.el6.rpm
</pre>
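With the varnish-4.0 repo installed, the package install should again just be a yum install. The exact version you end up with depends on what the repo currently ships; varnishd -V will confirm what got installed.
<pre>
yum install varnish
varnishd -V
</pre>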
==Upgrade Varnish 3.0.5 to Varnish 4.0.2 CentOS 6.5 ==
This is not exactly seamless, or at least not as simple as it sounds. All you should really have to do is grab the new repo, install it, remove Varnish 3, and install Varnish 4; however, if you don't have a few things in place, your site is going to be down.
'''Install jemalloc''', which Varnish 4 depends on (the rpm can be grabbed from EPEL as shown in the CentOS 7 install section above), and make sure the varnish-4.0 repo from the previous section is installed.
<pre>
rpm -i jemalloc-3.6.0-1.el6.x86_64.rpm
</pre>
 
'''Backup''' existing config files. A backup is made when you remove Varnish 3, but do it anyway! Varnish 4 is going to get installed with default configs, so it has no idea what the backends are, or were, and all the VCL is going to be gone, so having the original configs to fall back on is nice.
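Something as simple as the commands below is enough; the paths assume the CentOS locations used throughout this guide.
<pre>
cp -a /etc/varnish /etc/varnish.varnish3-backup
cp /etc/sysconfig/varnish /etc/sysconfig/varnish.varnish3-backup
</pre>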
'''Remove Varnish'''. If you don't want any downtime, you should update '''[[Apache]]''' or Nginx to listen on port 80, stop Varnish, restart the web server, and then remove Varnish. Or, if you don't care about downtime and want to do it live, just remove Varnish.
<pre>
yum remove varnish
</pre>
'''Install Varnish 4''' (yum install varnish again, which will now pull from the 4.0 repo), then copy your backed up config files over the new defaults. At this point you can start Varnish back up and you should be running again. Keep in mind that the VCL syntax changed in Varnish 4, so a Varnish 3 default.vcl with a ton of crazy VCL will probably cause Varnish to fail to start; test it out before you do this live.
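A quick way to check whether your old VCL will even compile under Varnish 4, without touching the running daemon, is varnishd's compile-only mode:
<pre>
varnishd -C -f /etc/varnish/default.vcl
</pre>
If it prints the generated C code and exits cleanly, the VCL is syntactically fine; if not, the error message points at the offending line.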
==Troubleshooting==
===nf_conntrack===
If you notice odd issues with Varnish, check dmesg. If you see messages like the one below, you should raise the nf_conntrack limit.
<pre>
nf_conntrack: table full, dropping packet
</pre>
To raise the limit, you can modify the '''[[Sysctl tweaks|sysctl.conf]]''' file as follows:
<pre>
#See what the current limit is and note it:
sysctl net.netfilter.nf_conntrack_max

#Raise the limit by setting a higher value in /etc/sysctl.conf, for example:
net.netfilter.nf_conntrack_max = 131072

#Apply the new setting:
sysctl -p
</pre>
==Varnish Benchmarks==
===Setup===
*Running on CentOS 6
*Apache 2.2 is being used behind Varnish on the same server.
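Each run below is 10,000 requests at a concurrency of 10. If you want to reproduce a similar comparison, ApacheBench (ab) is one easy way to do it; the URLs and ports below are placeholders, point one run at Varnish and one directly at Apache.
<pre>
# against Varnish (port 80)
ab -n 10000 -c 10 http://127.0.0.1/

# directly against Apache (port 8080 in this setup)
ab -n 10000 -c 10 http://127.0.0.1:8080/
</pre>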
===Varnish===
10,000 requests, 10 concurrency
<pre>
</pre>
===Apache===
10,000 requests, 10 concurrency
<pre>
</pre>
==How to Configure Varnish for MediaWiki==
If you want to use Varnish with MediaWiki, this VCL should work for Varnish 4.0 and above. I have found that using '''[[Apache|Apache Event]]''' and '''[[GooglePageSpeed|mod_pagespeed]]''' provided better page load times and lower application response times compared to using Varnish.
<pre>
vim /etc/varnish/default.vcl
</pre>
This is an example VCL file
<pre>
# Default backend definition. Set this to point to your content server.
backend default {
    .host = "$back_end_IP";
    .port = "8080";
}

sub vcl_recv {
    set req.http.X-Forwarded-For = client.ip;
    set req.backend_hint = default;

    if (req.method != "GET" &&
        req.method != "HEAD" &&
        req.method != "PUT" &&
        req.method != "POST" &&
        req.method != "TRACE" &&
        req.method != "OPTIONS" &&
        req.method != "DELETE") {
        return(pipe);
    }

    if (req.method != "GET" && req.method != "HEAD") {
        return(pass);
    }

    if (req.http.Authorization || req.http.Cookie) {
        return(pass);
    }

    if (req.http.If-None-Match) {
        return(pass);
    }

    if (req.http.Accept-Encoding) {
        if (req.http.User-Agent ~ "MSIE 6") {
            unset req.http.Accept-Encoding;
        } elsif (req.http.Accept-Encoding ~ "gzip") {
            set req.http.Accept-Encoding = "gzip";
        } elsif (req.http.Accept-Encoding ~ "deflate") {
            set req.http.Accept-Encoding = "deflate";
        } else {
            unset req.http.Accept-Encoding;
        }
    }

    return(hash);
}

sub vcl_backend_response {
    set beresp.grace = 120s;

    if (beresp.ttl < 48h) {
        set beresp.uncacheable = true;
        return (deliver);
    }

    if (beresp.ttl <= 0s) {
        set beresp.uncacheable = true;
        return (deliver);
    }

    if (beresp.http.Set-Cookie) {
        set beresp.uncacheable = true;
        return (deliver);
    }
}

sub vcl_deliver {
    # Happens when we have all the pieces we need, and are about to send the
    # response to the client.
    #
    # You can do accounting or modifying the final object here.
}
</pre>

==Varnish and mod_pagespeed==
I've found that Varnish and '''[[GooglePageSpeed|mod_pagespeed]]''' don't work too well together unless you are willing to do a lot of configuration with Varnish VCL. Because mod_pagespeed wants to rewrite and optimize requests on the fly, putting a cache in front of pagespeed can actually do more harm than good in some cases. I suggest raising mod_pagespeed's LRU and shared memory caches before trying to use pagespeed and Varnish together.

==Varnish req.http.X-Forwarded-For Guide with Apache Example==
Please visit the link below to view the page on how to configure Varnish to pass along IP header info to Apache. This will make it so that the actual IP is logged and not 127.0.0.1
[http://wiki.mikejung.biz/index.php?title=How_To_Forward_IP_Header_Varnish_Apache How_To_Forward_IP_Header_Varnish_Apache]
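The short version, as a sketch (the linked page has the full walkthrough): the example VCL above already sets req.http.X-Forwarded-For, so on the Apache side you mostly just need a LogFormat that logs that header instead of %h, roughly like this:
<pre>
# httpd.conf -- log the client IP passed along by Varnish instead of 127.0.0.1
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" varnishcombined
CustomLog logs/access_log varnishcombined
</pre>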
 
[[Category:Varnish]]
[[Category:Optimization]]
[[Category:Caching]]
[[Category:Performance]]
[[Category:Linux]]
[[Category:CentOS]]
[[Category:Ubuntu]]