DELAY POOL PARAMETERS
Usage
|
delay_pools number
|
Description This specifies the number of delay pools to be used. For example, if you have one class 2 delay pool and one class 3 delay pool, you have a total of 2 delay pools. Delay pools allow you to limit traffic for clients or client groups, with various features. Objects retrieved from the cache will not be delayed; only objects fetched from origin servers will be delayed.
Example delay_pools 2 # 2 Delay pools
Caution To enable this option, you must use --enable-delay-pools with the configure script.
|
Usage
|
delay_class pool-number class-number
|
Description This defines the class of each delay pool. There must be exactly one delay_class line for each delay pool. For example, to define two delay pools, one of class 2 and one of class 3, the settings will be as given in the example. For details on the delay pool classes, see the Glossary.
Example delay_pools 2 # 2 delay pools
delay_class 1 2 # pool 1 is a class 2 pool
delay_class 2 3 # pool 2 is a class 3 pool
Caution To enable this option, you must use --enable-delay-pools with the configure script.
|
Usage
|
delay_access pool-number allow|deny acl-name
|
Description This is used to determine which delay pool a request falls into. The first matched delay pool is always used; that is, if a request falls into delay pool number one, no further delay pools are checked, otherwise the rest are checked in order of their delay pool number until they have all been checked. For example, if you want pool_1_acl in delay pool 1 and pool_2_acl in delay pool 2, see the example below.
Example To specify which pool a client falls into, create ACLs which specify the IP ranges for each pool, and use the following:
delay_access 1 allow pool_1_acl
delay_access 1 deny all
delay_access 2 allow pool_2_acl
delay_access 2 deny all
Caution To enable this option, you must use --enable-delay-pools with the configure script.
|
Tag Name
|
delay_parameters
|
Usage
|
delay_parameters pool aggregate (for class 1 delay pools)
delay_parameters pool aggregate individual (for class 2 delay pools)
delay_parameters pool aggregate network individual (for class 3 delay pools)
|
Description This defines the parameters for a delay pool. Each delay pool has a number of "buckets" associated with it, as explained in the description of delay_class. The syntax for class 1, 2, and 3 delay pools is given in the Usage section. For a glossary of terms related to delay pools, see the Glossary.
Example 1:
acl tech src 192.168.0.1-192.168.0.20/32
acl no_hotmail url_regex -i hotmail
acl all src 0.0.0.0/0.0.0.0
delay_pools 1 # number of delay pools: 1
delay_class 1 1 # pool 1 is a class 1 pool
delay_parameters 1 100/100
delay_access 1 allow no_hotmail !tech
In the above example, hotmail users are limited to the rate specified in delay_parameters (100 bytes per second), while IPs in the tech ACL are allowed the normal bandwidth. You can see bandwidth usage through cachemgr.cgi.
Example 2:
acl all src 0.0.0.0/0.0.0.0 # might already be defined
delay_pools 1
delay_class 1 1
delay_access 1 allow all
delay_parameters 1 64000/64000 # 512 kbit/s == 64 kbytes per second
The above example limits Squid's aggregate bandwidth to 512 kbit/s (64 kbytes per second). For details on ACLs, see the ACL section.
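Example 3: a class 2 pool that also rations bandwidth per client IP. This is only an illustrative sketch; the ACL name lan and the rates chosen here are assumptions, not recommended values:
acl lan src 192.168.1.0/255.255.255.0
delay_pools 1
delay_class 1 2 # pool 1 is a class 2 pool
delay_parameters 1 32000/32000 8000/8000 # 32 kbytes/s aggregate, 8 kbytes/s per client host
delay_access 1 allow lan
delay_access 1 deny all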
Caution To enable this option, you must use --enable-delay-pools with the configure script.
|
Tag Name
|
delay_initial_bucket_level (percent, 0-100)
|
Usage
|
delay_initial_bucket_level percent (0-100)
|
Description The initial bucket percentage determines how much is put into each bucket when Squid starts, is reconfigured, or first notices a host accessing it (in class 2 and class 3 pools, individual hosts and networks only have buckets associated with them once they have been "seen" by Squid).
Default
|
delay_initial_bucket_level 50 (percent)
|
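Example For instance, to start each bucket three-quarters full after a restart or reconfigure (the value 75 is purely illustrative):
delay_initial_bucket_level 75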
Caution This option is only available if Squid is rebuilt with the --enable-delay-pools option.
|
Tag Name
|
incoming_icp_average
incoming_http_average
incoming_dns_average
min_icp_poll_cnt
min_dns_poll_cnt
min_http_poll_cnt
|
Description This describes the algorithms used for the above tags.
INCOMING sockets are the ICP and HTTP ports. We need to check these fairly regularly, but how often? When the load increases, we want to check the incoming sockets more often. If we have a lot of incoming ICP, then we need to check these sockets more than if we just have HTTP. The variables 'incoming_icp_interval' and 'incoming_http_interval' determine how many normal I/O events to process before checking incoming sockets again. Note that we store the incoming_interval multiplied by a factor of (2^INCOMING_FACTOR) to have some pseudo-floating-point precision.
The variables 'icp_io_events' and 'http_io_events' count how many normal I/O events have been processed since the last check on the incoming sockets. When io_events > incoming_interval, it is time to check the incoming sockets.
Every time we check incoming sockets, we count how many new messages or connections were processed. This is used to adjust the incoming_interval for the next iteration. The new incoming_interval is calculated as the current incoming_interval plus what we would like to see as an average number of events minus the number of events just processed.
incoming_interval = incoming_interval + target_average - number_of_events_processed.
There are separate incoming_interval counters for both HTTP and ICP events. You can see the current values of the incoming_interval, as well as a histogram of 'incoming_events' by asking the cache manager for 'comm_incoming', e.g.:
% ./client mgr:comm_incoming
Default
|
incoming_icp_average 6
incoming_http_average 4
incoming_dns_average 4
min_icp_poll_cnt 8
min_dns_poll_cnt 8
min_http_poll_cnt 8
|
Caution -We have MAX_INCOMING_INTEGER as a magic upper limit on incoming_interval for both types of sockets. At the largest value the cache will effectively be idling.
-The higher the INCOMING_FACTOR, the slower the algorithm will respond to load spikes/increases/decreases in demand. A value between 3 and 8 is recommended.
|
Tag Name
|
max_open_disk_fds
|
Usage
|
max_open_disk_fds number
|
Description This specifies the maximum number of disk file descriptors Squid will keep open. To avoid having the disk become the I/O bottleneck, Squid can optionally bypass the on-disk cache while more than this number of disk file descriptors are open.
A value of 0 indicates no limit.
Default
|
max_open_disk_fds 0
|
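Example A minimal sketch only; the limit of 64 is an assumed value, not a recommendation:
max_open_disk_fds 64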
|
Usage
|
offline_mode on|off
|
Description Enable this option and Squid will never try to validate cached objects. offline_mode gives access to more cached information than the proposed feature would allow (stale cached versions, where the origin server should have been contacted).
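Example A minimal sketch of serving only what is already cached, never validating objects against origin servers:
offline_mode on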
|
Usage
|
uri_whitespace options
|
Description This tag decides what to do with requests that have whitespace characters in the URI. Available options:
strip: The whitespace characters are stripped out of the URL. This is the behavior recommended by RFC2616.
deny: The request is denied. The user receives an "Invalid Request" message.
allow: The request is allowed and the URI is not changed. The whitespace characters remain in the URI. Note the whitespace is passed to redirector processes if they are in use.
encode: The request is allowed and the whitespace characters are encoded according to RFC 1738. This could be considered a violation of the HTTP/1.1 RFC because proxies are not allowed to rewrite URIs.
chop: The request is allowed and the URI is chopped at the first whitespace. This might also be considered a violation.
Default
|
uri_whitespace strip
|
Example uri_whitespace chop
|
Usage
|
broken_posts allow|deny acl-name
|
Description A list of ACL elements which, if matched, causes Squid to send an extra CRLF pair after the body of a PUT/POST request. Some HTTP servers have broken implementations of PUT/POST, and rely on an extra CRLF pair sent by some WWW clients.
Example acl buggy_server url_regex ^https://....
broken_posts allow buggy_server
|
Usage
|
mcast_miss_addr address
|
Description If you enable this option, every "cache miss" URL will be sent out on the specified multicast address. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_STREAM option.
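Example An illustrative sketch only; the multicast group address below is an assumption, not a recommended value:
mcast_miss_addr 239.255.0.1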
Default
|
mcast_miss_addr 255.255.255.255
|
Caution This option should be enabled only after a careful understanding of multicast. See the multicast section.
|
Usage
|
mcast_miss_ttl time-units
|
Description This is the time-to-live value for packets multicast when multicasting of cache miss URLs is enabled. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_TTL option.
Default
|
mcast_miss_ttl 16
|
|
Usage
|
mcast_miss_port port no
|
Description This is the port number to be used in conjunction with 'mcast_miss_addr'. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_TTL option.
Default
|
mcast_miss_port 3135
|
Caution This tag is used only when you enable mcast_miss_addr
|
Tag Name
|
mcast_miss_encode_key
|
Usage
|
mcast_miss_encode_key key
|
Description The URLs that are sent in the multicast miss stream are encrypted. This is the encryption key. This option is only available if Squid is rebuilt with the -DMULTICAST_MISS_STREAM option.
Default
|
mcast_miss_encode_key XXXXXXXXXXXXXXX
|
|
Tag Name
|
nonhierarchical_direct
|
Usage
|
nonhierarchical_direct on|off
|
Description By default, Squid will send any non-hierarchical requests (matching hierarchy_stoplist or not cacheable request type) direct to origin servers. If you set this to off, then Squid will prefer to send these requests to parents. Note that in most configurations, by turning this off you will only add latency to this request without any improvement in global hit ratio. If you are inside a firewall then see never_direct instead of this directive.
Default
|
nonhierarchical_direct on
|
|
Usage
|
prefer_direct on|off
|
Description Normally Squid tries to use parents for most requests. If for some reason you would like Squid to first try going direct and only use a parent if going direct fails, then set this to on. By combining nonhierarchical_direct off and prefer_direct on you can set up Squid to use a parent as a backup path if going direct fails.
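Example As described above, a minimal sketch of using a parent purely as a backup path for when going direct fails:
nonhierarchical_direct off
prefer_direct on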
Default
|
prefer_direct off
|
|
Tag Name
|
strip_query_terms
|
Usage
|
strip_query_terms on|off
|
Description By default, Squid strips query terms from requested URLs before logging them; this protects your users' privacy. The query parameters are still forwarded to the origin server verbatim. If you want the full URLs, including query strings, to appear in the logs, set strip_query_terms to off.
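Example A minimal sketch for logging full query strings (note the privacy implications mentioned above):
strip_query_terms off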
Default
|
strip_query_terms on
|
|
Usage
|
coredump_dir directory
|
Description By default Squid leaves core files in the first cache_dir directory. If you set 'coredump_dir' to a directory that exists, Squid will chdir() to that directory at startup and core dump files will be left there.
Example coredump_dir /usr/local
|
Tag Name
|
redirector_bypass
|
Usage
|
redirector_bypass on|off
|
Description When this is 'on', a request will not go through the redirector if all redirectors are busy. If this is 'off' and the redirector queue grows too large, Squid will exit with a FATAL error and ask you to increase the number of redirectors. You should only enable this if the redirectors are not critical to your caching system. If you use redirectors for access control and you enable this option, then users may have access to pages that they should not be allowed to request.
Default
|
redirector_bypass off
|
|
Tag Name
|
digest_generation
|
Usage
|
digest_generation on|off
|
Description This controls whether the server will generate a Cache Digest of its contents. By default, Cache Digest generation is enabled if Squid is compiled with USE_CACHE_DIGESTS defined. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_generation on
|
|
Tag Name
|
ignore_unknown_nameservers
|
Usage
|
ignore_unknown_nameservers on|off
|
Description By default Squid checks that DNS responses are received from the same IP addresses that they are sent to. If they don't match, Squid ignores the response and writes a warning message to cache.log. You can allow responses from unknown nameservers by setting this option to 'off'.
Default
|
ignore_unknown_nameservers on
|
|
Tag Name
|
digest_bits_per_entry
|
Usage
|
digest_bits_per_entry number
|
Description This is the number of bits of the server's Cache Digest, which will be associated with the Digest entry for a given HTTP Method and URL (public key) combination. The default is 5. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_bits_per_entry 5
|
|
Tag Name
|
digest_rebuild_period
|
Usage
|
digest_rebuild_period time-units
|
Description This is the number of seconds between Cache Digest rebuilds. By default the server's Digest is rebuilt every hour. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_rebuild_period 1 hour
|
|
Tag Name
|
digest_rewrite_period
|
Usage
|
digest_rewrite_period time-units
|
Description This is the number of seconds between Cache Digest writes to disk. By default the server's Digest is written to disk every hour. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_rewrite_period 1 hour
|
|
Tag Name
|
digest_swapout_chunk_size
|
Usage
|
digest_swapout_chunk_size bytes
|
Description This is the number of bytes of the Cache Digest to write to disk at a time. It defaults to 4096 bytes (4KB), the Squid default swap page. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_swapout_chunk_size 4096 bytes
|
|
Tag Name
|
digest_rebuild_chunk_percentage
|
Usage
|
digest_rebuild_chunk_percentage %(0 to 100)
|
Description This is the percentage of the Cache Digest to be scanned at a time. By default it is set to 10% of the Cache Digest. This option is only available if Squid is rebuilt with the --enable-cache-digests option.
Default
|
digest_rebuild_chunk_percentage 10
|
|
Usage
|
chroot directory
|
Description By default Squid does not fully drop root privileges, because they may be required during reconfigure. Use this directive to have Squid do a chroot() to the given directory while initializing; this also causes Squid to fully drop root privileges after initializing. Squid only drops all root privileges when chroot is used; without it, Squid runs as root with effective user nobody. This means, for example, that if you use an HTTP port less than 1024 and try to reconfigure, you will get an error.
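Example A minimal sketch, assuming /var/squid/chroot is a pre-built chroot jail containing everything Squid needs at run time (the path is purely illustrative):
chroot /var/squid/chroot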
|
Tag Name
|
client_persistent_connections
|
Usage
|
client_persistent_connections on|off
|
Description Persistent connection support for clients and servers. By default, Squid uses persistent connections (when allowed) with its clients and servers. You can use this option and server_persistent_connections to disable persistent connections with clients and/or servers.
Related information :
When a browser talks to a web server directly, whether the connection stays open after a request is controlled by the KeepAlive directive in the Apache configuration file. The same control is available in Squid through the client_persistent_connections and server_persistent_connections directives.
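Example A minimal sketch for disabling persistent connections on both the client and server sides (whether this is desirable depends entirely on your environment):
client_persistent_connections off
server_persistent_connections off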
Default
|
client_persistent_connections on
|
|
Tag Name
|
pipeline_prefetch
|
Usage
|
pipeline_prefetch on|off
|
Description To boost the performance of pipelined requests, and to more closely match the behaviour of a non-proxied environment, Squid tries to fetch up to two requests in parallel from a pipeline.
Default
|
pipeline_prefetch on
|
|
Tag Name
|
extension_methods
|
Usage
|
extension_methods request method
|
Description Squid only knows about standard HTTP request methods. Unknown methods are denied, unless you add them to this list. You can add up to 20 additional "extension" methods here.
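Example An illustrative sketch only; the WebDAV-style method names below are examples, not a required or recommended set:
extension_methods REPORT MERGE MKACTIVITY CHECKOUT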
|
Tag Name
|
high_response_time_warning
|
Usage
|
high_response_time_warning msec
|
Description If the one-minute median response time exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in milliseconds.
Default
|
high_response_time_warning 0
|
|
Tag Name
|
high_page_fault_warning
|
Usage
|
high_page_fault_warning number
|
Description If the one-minute average page fault rate exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention. The value is in page faults per second.
Default
|
high_page_fault_warning 0
|
|
Tag Name
|
high_memory_warning
|
Usage
|
high_memory_warning bytes
|
Description If the memory usage (as determined by mallinfo) exceeds this value, Squid prints a WARNING with debug level 0 to get the administrator's attention.
Default
|
high_memory_warning 0
|
|
Tag Name
|
store_dir_select_algorithm
|
Usage
|
store_dir_select_algorithm algorithm type
|
Description Squid currently supports two algorithms for selecting cache directories for new objects: least-load and round-robin. The default is least-load; set this to 'round-robin' as an alternative.
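Example A minimal sketch of selecting the alternative algorithm named above:
store_dir_select_algorithm round-robin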
Default
|
store_dir_select_algorithm least-load
|
|
Tag Name
|
ie_refresh
|
Usage
|
ie_refresh on|off
|
Description Microsoft Internet Explorer up until version 5.5 Service Pack 1 has an issue with transparent proxies, in that it is impossible to force a refresh. Turning this on provides a partial fix to the problem, by causing all IMS-REFRESH requests from older IE versions to check the origin server for fresh content. This reduces the hit ratio by some amount (~10%), but allows users to actually get fresh content when they want it. Note that because Squid cannot tell if the user is using 5.5 or 5.5SP1, the behavior of 5.5 is unchanged from old versions of Squid (i.e. a forced refresh is impossible). Newer versions of IE will, hopefully, continue to have the new behavior and will be handled based on that assumption. This option defaults to the old Squid behavior, which is better for hit ratios but worse for clients using IE, if they need to be able to force fresh content.
Default
|
ie_refresh off
|
|
|