Main Page
Contents
- 1 Links to other informative wiki pages
- 2 for loop example script
- 3 Sed
- 4 Linux commands to check for DDoS and excessive connections
- 5 Web Permissions | Files
- 6 Modules / Directives
- 7 How To Optimize WordPress
- 8 PHP
- 9 Email
- 10 FTP
- 11 Nginx
- 12 cPanel Tips and Tricks
- 13 DNS
- 14 NFS
- 15 IPTABLES
- 16 Kernel Stuff
- 17 Postfix
- 18 Benchmarking Tools
- 19 ZFS
- 20 View CPU Temps in Cent 6.5
- 21 Linux Memory Usage Overview
- 22 Storm and LiquidWeb API
- 23 Docker run command line examples
- 24 Linux Kernel Networking
Links to other informative wiki pages
Check out this HHVM wiki! Also fresh is Sysdig
Windows 10
- Windows 10 Tech Preview - Currently testing out Windows 10 Tech Preview, Build 9926. This wiki page will eventually have tons of information and updates on Windows 10.
GPU and Video Decoding Stuff
- DyingLight - Some PC screenshots of DyingLight, using Nvidia DSR with a GTX 970. DSR does improve quality, but is very costly in terms of performance.
- MadVR Image Doubling 720p - 720p madVR Image Doubling screenshot comparison page for 720p video playback
- Windows 8.1 MPC-HC and MadVR Setup Guide - A Windows 8.1 based guide on how to properly configure MPC-HC (media player classic home cinema) to work with MadVR as the video renderer. This setup guide includes images for each of the steps and explanations of the main mpc-hc and madvr settings.
- CUDA - GPUs are on track to control the global population. CUDA is how the matrix started. It's pretty cool though, CUDA allows a GPU to accelerate some types of processing that previously only the CPU could compute.
- DXVA2 - A quick article that explains what DXVA2 is and how it interacts with a GPU during the video playback process.
- MadVR - MadVR MPC-HC wiki containing optimization tips and benchmarks for Chroma Upscaling, Image Doubling, Image Upscaling, Image downscaling and many other configuration settings.
- PotPlayer - PotPlayer wiki about how to install, configure and optimize potplayer using MadVR, CUDA and GPU magic
- PotPlayer Advanced Configuration - Slightly more detailed than the main PotPlayer wiki.
- DirectShow - What is directshow? What does directshow do? Want to learn more about directshow? Then please visit this page!
- MadVR Chroma Upscaling - MadVR Chroma Upscaling performance results and general information on the best scaling algorithm to use to upscale Chroma with MadVR.
cPanel Stuff
- CloudLinux -- Overview of what CloudLinux is and the types of resources it limits for cPanel users.
Webserver Stuff
- Litespeed - Information about the litespeed webserver installation process and how to correctly configure litespeed on a cpanel server.
- Apache - Do you like websites? You can thank Apache! It's the most common webserver around. Nginx is gaining some steam, but Apache is still pretty awesome!
MySQL, PHP and Caching
- Memcached - Caching makes everything faster! I like fast things, so I use memcached a lot and you should use memcached too! I'll show you how to use memcached to improve website load time and reduce latency when connecting to a database! All of this can be done if you know how to tame the mythical beast known as memcached.
- PHP_OPcache - Do you like fast things? Want to make PHP faster? Use opcode caching. For your health!
- fcgid - FastCGI will make your blog faster! Maybe, if you know how to configure Apache to use FastCGI to proxy PHP requests to a dedicated PHP process! If you want to learn more about using the FCGI handler on cPanel, please check out this page!
- Php-fpm - Speaking of awesome...PHP-FPM is here. Are you still using mod_php and wondering why Apache is slow? It's because you are doing PHP wrong! Check out this page for information on how to install, tune, and optimize PHP-FPM with Apache.
Monitoring and Analysis
- Newrelic - Newrelic is pretty awesome. They offer a free tier which lets you monitor server resources for 24 hours. You can also utilize APM, an application monitoring service that shows the response time of your application and database. If you are looking for common Newrelic agent commands or need help troubleshooting Newrelic's agents, check out this page!
- Sysdig - Looking for a utility that will provide insight into application and Linux performance? Sysdig is your tool! I really like it and find it pretty useful, so I made a wiki!
- Sysstat - Sysstat contains sar, which is used to record server resource usage over the course of each day. Sar is really helpful if you care about server performance, so knowing how to view data like swap-in and swap-out activity is critical.
Browser and Front End
- Chrome - A list of tweaks (flags) that you can enable in the Chrome and Chromium web browsers which can help to speed up performance. Useful if you notice slow, laggy websites and want to speed up your browser.
- HTTP 2.0 - Still a work in progress, this wiki will eventually contain all kinds of information on the new HTTP 2.0 protocol.
- S3 Browser - S3 Browser is similar to an FTP client, but it speaks to a REST endpoint of an S3 compatible Object Storage service. AWS S3 is supported, but other compatible Object Storage services are also supported.
Benchmarking and Performance Tuning Stuff
- Benchmarking -- A linux benchmarking reference wiki with many example commands and explanations for sysbench, fio, iozone and ioping tests.
- Sysbench -- Similar to the benchmarking wiki, but with 100% focus on sysbench and how to benchmark VPS and cloud servers.
- OS Tuning - You can't tune an application until you tune the operating system. Check out my OS tuning wiki for tips and tricks on speeding up your slow CentOS server.
- Performance_Troubleshooting_Methodologies - http://wiki.mikejung.biz/Performance_Troubleshooting_Methodologies
- Dmcache - Caching, SSDs, what could be better? What about using SSDs to cache your slow as balls HDDs? Learn how by checking out this dmcache wiki!
- NUMA - http://wiki.mikejung.biz/NUMA
Storage and File System Stuff
- LVM Commands - LVM command reference guide. Explains what logical volumes and volume groups are all about and how to create an LVM volume.
- Ceph - Ceph is a distributed storage system that powers the open cloud and internet of things. Just kidding, it doesn't do all that, but it is still pretty awesome technology!
- LSI - LSI makes RAID cards. They have been around for a long time and were recently bought out by Seagate. LSI cards are nice, but sometimes slow if you do not configure RAID for performance. If you want to add some performance to your RAID, make sure you configure the card correctly!
- DRBD - Distributed Replicated Block Device, aka DRBD, has been a heavyweight in the cloud storage wars for a while now. You've got Ceph in one corner, OCFS2 in another, RAID (for backups), and DRBD. DRBD can be tricky to configure, and even if you get it to work it might still be somewhat slow. I have created a wiki that covers some basic performance tuning for DRBD.
- Big Data - Main page that links to topics like Cassandra and Hadoop.
Other Stuff
- Load Balancing - Learn more about the Stingray / Riverbed Traffic Manager! It's pretty cool and has a ton of options. If you are looking for some load balancing information, check out the wiki!
- bashmarks - A simple tool that allows you to save directory locations and later return to them using extremely simple commands that even tab complete!
- Cassandra - Cassandra is a NoSQL-like DB from Apache. This wiki contains general information about what Cassandra is, how it works, and details on the topology.
- Hadoop - Also NoSQL-like, Hadoop is great for running batch jobs against a large amount of data.
- Gcc_CentOS - Why is GCC always old on CentOS? Why does CentOS always ship old software? I do not know, but I can show you how to update GCC on CentOS if you visit the GCC CentOS wiki!
- MySQL_Optimization - http://wiki.mikejung.biz/MySQL_Optimization
- http://wiki.mikejung.biz/Processor
- http://wiki.mikejung.biz/Ubuntu
- http://wiki.mikejung.biz/ApacheTheory
- http://wiki.mikejung.biz/Logs
- http://wiki.mikejung.biz/Modpagespeed
- http://wiki.mikejung.biz/Sysstat
- http://wiki.mikejung.biz/Sysctl_tweaks
for loop example script
- These are just basic examples of what you can do with for loops.
Locate files and do things with them
find /location/of/files -name 'file' > somelist
for each in `cat somelist` ; do something $each ; done
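Another minimal sketch along the same lines (the /var/log path, the .log extension, and the wc -l action are just examples, not anything specific to this wiki) that loops over a glob instead of a temporary list file:
for logfile in /var/log/*.log ; do
    # print each file name and its line count; swap in whatever action you actually need
    echo "$logfile: $(wc -l < "$logfile") lines"
done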
Sed
Add a word to the beginning of a line
sed "s/^/$Wordtoadd/" original.txt > sorted_original.txt
(Double quotes are used so the $Wordtoadd variable actually expands.)
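A related sketch, assuming GNU sed (the words and the file name are placeholders): replace a word everywhere in a file, editing it in place.
sed -i 's/oldword/newword/g' original.txt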
Linux commands to check for DDoS and excessive connections
Check for a basic DoS, or heavy traffic:
netstat -tn 2>/dev/null | grep ':80 ' | awk '{print $5}' | cut -f1 -d: | sort | uniq -c | sort -rn | head
Check for SYN Floods
netstat -nap | grep SYN | wc -l
To display the IPs that have the most SYN connections to the server
netstat -tn 2>/dev/null | grep SYN | awk '{print $5}' | cut -f1 -d: | sort | uniq -c | sort -rn | head
Website connections and stats
--- site connections
/usr/bin/lynx -dump -width 500 http://127.0.0.1/whm-server-status | awk 'BEGIN { FS = " " } ; { print $12 }' | sed '/^$/d' | sort | uniq -c
--- busiest site
/usr/bin/lynx -dump -width 500 http://127.0.0.1/whm-server-status | grep GET | awk '{print $12}' | sort | uniq -c | sort -rn | head
--- busiest script
/usr/bin/lynx -dump -width 500 http://127.0.0.1/whm-server-status | grep GET | awk '{print $14}' | sort | uniq -c | sort -rn | head
One-liner that shows connections to all domains during a certain hour. Change the "hour" variable to the hour you want to search (16 = 4PM).
cd /usr/local/apache/domlogs
hour=16
for domain in $(cat /etc/userdomains | grep -v nobody | cut -d':' -f1); do
  if [ -e "$domain" ]; then
    for minute in $(seq 10 59); do
      count=$(cat $domain | grep "$hour:$minute" | wc -l)
      if [ "$count" -gt 1 ]; then echo "$domain : $hour:$minute : $count" >> /home/domlogreport.$hour; fi
    done
    echo; echo
  fi
done
cat /home/domlogreport.$hour | sort -g -k 3
The report is written to /home/domlogreport.$hour
Apache Status
/usr/bin/lynx -dump -width 500 http://127.0.0.1/whm-server-status | less
Apache connection
/usr/bin/lynx -dump -width 500 http://127.0.0.1/whm-server-status | awk '{print $11" "$12}' | awk NF | grep '[0-9]\.[0-9]\.[0-9]\.[0-9]' | sort | uniq -c | sort -n | tail -50
Get a list of top IPs accessing the server (some false positives)
tail -n50000 access_log | grep -o "[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}" | sort -n | uniq -c | sort -n
Web Permissions | Files
Default Web Permissions. NOTE: MAKE SURE YOU ARE IN A public_html DIRECTORY before running these!
find . -type f -exec chmod 644 {} \;
find . -type d -exec chmod 755 {} \;
Find all users' php.ini files.
find /home/*/public_html/* -name php.ini
Modules / Directives
speling
mod_speling.c
Once added via Easy Apache, you can simply add these directives to a .htaccess file
CheckCaseOnly On
CheckSpelling On
How To Optimize WordPress
For a detailed guide, please visit my WordPress Optimization Guide.
PHP
Install the ssh2 PECL extension
yum install libssh2 libssh2-devel
pecl install ssh2
# You may need to update the channel first; if so:
pecl channel-update pecl.php.net
# Then enable the extension:
vim /etc/php.ini
extension=ssh2.so
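To confirm the extension actually loaded (a quick check, assuming the CLI reads the same php.ini that was just edited):
php -m | grep ssh2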
Upload issues
You may need to check two files: the first is the global php.ini file, the next is the mod_security configuration (if applicable).
vim /usr/local/lib/php.ini
upload_tmp_dir = /tmp
session.save_path = /tmp

vim /usr/local/apache/conf/modsec2/custom.conf
SecUploadDir /tmp
SecTmpDir /tmp
Parse Error
Parse error: syntax error, unexpected T_STRING
Check the file and remove <?xml version="1.0" encoding="utf-8"?>
Force PHP5
Add to .htaccess:
AddType application/x-httpd-php5 .html .htm
Email
How to enable DKIM for a cPanel account
- DomainKeys Identified Mail (DKIM) defines a mechanism by which email messages can be cryptographically signed, permitting a signing domain to claim responsibility for the introduction of a message into the mail stream. Message recipients can verify the signature by querying the signer's domain directly to retrieve the appropriate public key, and thereby confirm that the message was attested to by a party in possession of the private key for the signing domain.
- To verify that everything is set up correctly, you can send an email from an email account on that domain to [email protected] No need to have a subject or body. This service will then reply with a message stating the verification of DKIM, DomainKeys, SPF, SpamAssassin, and Sender-ID. Great tool to test all kinds of email verification systems.
To install on a cPanel server:
/usr/local/cpanel/bin/dkim_keys_install <username>
or, for all accounts:
for i in `ls /var/cpanel/users`; do /usr/local/cpanel/bin/dkim_keys_install $i; done
- Add the Policy Record
_domainkey IN TXT "t=y; o=~; n=Interim Sending Domain Policy; [email protected]"
General webmail and email permission guidelines for cPanel servers
Below are some baseline permissions that should be used with Exim and Dovecot:
/home/user/etc/
The domain.com file should have:
permissions: 750
ownership: username:mail
/home/user/etc/domain.com/
passwd - permissions: 640, ownership: user:mail
quota - permissions: 640, ownership: user:mail
shadow - permissions: 640, ownership: user:user
/home/user/mail/
cur/ - 700 user:user
domain.com/ - 751 user:user
anything else - 700 user:user
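A quick sketch of applying that baseline by hand ($user and $domain are placeholders for the real account and domain; the paths follow the layout shown above):
chown $user:mail /home/$user/etc/$domain
chmod 750 /home/$user/etc/$domain
chown $user:mail /home/$user/etc/$domain/passwd /home/$user/etc/$domain/quota
chown $user:$user /home/$user/etc/$domain/shadow
chmod 640 /home/$user/etc/$domain/passwd /home/$user/etc/$domain/quota /home/$user/etc/$domain/shadow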
If email accounts are not showing up in cPanel for a specific cPanel user, be sure to check /home/$user/etc to make sure the passwd file and shadow file have proper permissions, and also make sure they are located in
/home/user/etc/domain.com/
If all the permissions are correct and the directories are owned by the user, try restarting the cPanel mail services to see if this resolves the issue.
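On a cPanel server the mail services can usually be restarted with the restartsrv scripts, roughly like this (script names can vary a bit between cPanel versions and mail server choices):
/scripts/restartsrv_exim
/scripts/restartsrv_dovecot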
If you run into a Roundcube error like "unable to connect to database", the best thing to do is to drop the database and then re-install Roundcube, which automatically re-creates the db. Make sure you back up the database before you drop it, or else you risk data loss.
cd /home/temp
mysqldump roundcube > roundcube.sql
mysql -e "drop database roundcube;"
/usr/local/cpanel/bin/update-roundcube --force
If you are running into spam issues, you can run the command below to find the top sending IPs in the Exim logs:
grep "SMTP connection from" /var/log/exim_mainlog |grep "connection count" |awk '{print $7}' |cut -d ":" -f 1 |cut -d "[" -f 2 |cut -d "]" -f 1 |sort -n |uniq -c | sort -n
Find authenticated users who may be spamming:
find /var/spool/exim/input/ -name '*-H' | xargs grep 'auth_id'
Spam coming from scripts:
grep cwd=\/home\/ /var/log/exim_mainlog| cut -d' ' -f4 | sort | uniq -c | sort -n
Removing all queued messages at once in a safe way:
exim -bp | awk '/^ *[0-9]+[mhd]/{print "exim -Mrm " $3}' | sh
Or you can do the same from the mail queue manager in WHM.
APF SMTP tweak enables mail to be sent only from the mail or mailman GID, and blocks all outbound SMTP, except through the sendmail binary.
Add the /scripts/smtpmailgidonly line to /etc/init.d/apf, right underneath the start) case, so it looks like this:
/usr/local/sbin/apf --start >> /dev/null 2>&1
/scripts/smtpmailgidonly on
echo_success
FTP
If you are having issues with ProFTPD connections or authentication, check the ProFTPD configuration file below and make sure that "AuthPAM" is actually on.
vim /etc/proftpd.conf
AuthPAM on
If you want to make sure PureFTP is using FTPES, edit /etc/pure-ftpd.conf and uncomment (enable) the PassivePortRange line, like below.
# Port range for passive connections replies. - for firewalling.
PassivePortRange 30000 50000
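FTPES itself is toggled by the TLS directive in /etc/pure-ftpd.conf; a minimal sketch (1 accepts both plain FTP and TLS sessions, while 2 would require TLS for every client):
TLS 1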
APF - /etc/apf/conf.apf
# Common ingress (inbound) TCP ports
IG_TCP_CPORTS="20,21,22,25,53,80,110,143,443,465,993,995,2082,2083,2084,2086,2087,2095,2096,3306,6666,7786,30000_50000"
# Common egress (outbound) TCP ports
EG_TCP_CPORTS="21,25,80,443,43,30000_50000"
CSF - /etc/csf/csf.conf
# Allow incoming TCP ports
TCP_IN = "20,21,22,25,53,80,110,143,443,465,953,993,995,2077,2078,2082,2083,2086,2087,2095,2096,30000:50000"
# Allow outgoing TCP ports
TCP_OUT = "20,21,22,25,37,43,53,80,110,113,443,587,873,953,2087,2089,2703,30000:50000"
If you are encountering vsftpd timeout issues or strange DNS-like issues with vsftpd, check the vsftpd configuration file and make sure that reverse_lookup_enable is set to NO.
/etc/vsftpd/vsftpd.conf:
reverse_lookup_enable=NO
Nginx
Common configuration settings
- The main configuration file to edit is /etc/nginx/nginx.conf, which by default also reaches out to include any additional configuration files in the conf.d directory and any virtual host files in the sites-enabled directory.
- worker_processes in /etc/nginx/nginx.conf. This should be equal to the number of CPU cores the server has.
worker_processes $CPUs;
- worker_connections defines how many connections each worker process is allowed to handle.
- worker_processes x worker_connections gives the maximum number of HTTP connections possible at any moment; see the sketch below.
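A minimal sketch of how those two directives fit together in /etc/nginx/nginx.conf (the values are just an example for a 4-core server, not a recommendation):
worker_processes 4;
events {
    worker_connections 1024;
}
# 4 workers x 1024 connections = roughly 4096 possible concurrent connections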
File cache settings
http {
[...]
##
# File Cache Settings
##
open_file_cache max=5000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;
Gzip
This will compress content at the expense of a little extra CPU, but it will save a lot of bandwidth.
gzip on;
gzip_disable "msie6";
gzip_min_length 1100;
gzip_vary on;
gzip_proxied any;
gzip_buffers 16 8k;
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/rss+xml text/javascript image/svg+xml application/x-font-ttf font/opentype application/vnd.ms-fontobject;
Conflicting Server Name Error
Check for duplicates/system users:
grep -i domain.com /var/cpanel/users/*
If there is a domain entry owned by "system" remove this file:
rm /var/cpanel/users/system
Then run:
/scripts/rebuildnginxvhost
cPanel Tips and Tricks
httpd.conf domain errors?
info [rebuildhttpdconf] Unable to determine group for $username, skipping domain $domain.com
Check /var/cpanel/userdata/$user/$domain.com and make sure group: is set correctly, then run:
/scripts/rebuildhttpdconf
service httpd restart
Exclude files from being updated.
vim /etc/cpanelsync.exclude
Then add the absolute path for the file. An example would be Roundcube webmail settings:
/usr/local/cpanel/base/3rdparty/roundcube/config/main.inc.php
Databases listed in cPanel that do not actually exist
Check the following files and remove any users / dbs that do not exist:
/var/cpanel/databases/
$user.cache
$user.yaml
spamd issues
/scripts/perlinstaller IO::Socket::IP --force
DNS
Disable zone transfers with named.conf
acl can_axfr { 127.0.0.1; };
options {
    allow-recursion { trusted; };
    allow-transfer { can_axfr; };
};
WARNING: key file (/etc/rndc.key)
service named stop
mv /etc/rndc.conf /etc/rndc.conf.OLD
service named start
NFS
yum install nfs*
mkdir /$whatever/you/want/to/share
vim /etc/exports
Add:
/$whatever/you/want/to/share $IPADDY/Subnetmask(rw,no_root_squash,subtree_check)
/etc/init.d/nfs start
/etc/init.d/nfslock start
/etc/init.d/rpcbind start
/etc/init.d/rpcidmapd restart
vim /etc/idmapd.conf
Uncomment / add:
Domain = $local.domain.com
chkconfig rpcbind on
chkconfig rpcidmapd on
chkconfig nfs on
chkconfig nfslock on
Make sure port 2049 is open as well.
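On the client side, a minimal mount sketch (the server IP and the /mnt/nfs mount point are placeholders):
mkdir -p /mnt/nfs
mount -t nfs $nfs_server_ip:/$whatever/you/want/to/share /mnt/nfs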
IPTABLES
This is an example of a default IPTABLES set of rules:
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:RH-Firewall-1-INPUT - [0:0]
:TRUSTED - [0:0]
-A INPUT -j RH-Firewall-1-INPUT
-A FORWARD -j RH-Firewall-1-INPUT
### Add trusted IPs / hosts / IP blocks here, for example:
-A TRUSTED -s 192.168.0.0/24 -j ACCEPT
-A TRUSTED -s $myhomeIP -j ACCEPT
-A TRUSTED -s $someotherserver -j ACCEPT
### END TRUSTED HOSTS SECTION
-A RH-Firewall-1-INPUT -i lo -j ACCEPT
-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
-A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
-A RH-Firewall-1-INPUT -p udp -m udp --dport 53 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
### EXAMPLE FOR ACTIVE/PASSIVE FTP ACCESS FOR TRUSTED HOSTS
-A RH-Firewall-1-INPUT -p tcp --dport 21 -j TRUSTED
-A RH-Firewall-1-INPUT -p tcp --dport 20 -j TRUSTED
-A RH-Firewall-1-INPUT -p tcp --dport 30000:50000 -j TRUSTED
### END FTP EXAMPLE
-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
Kernel Stuff
Tools and Utilities used to build a kernel
gcc --version
- Used to compile the kernel
ld -v
- The linker (part of binutils), used to assist when compiling the kernel
make --version
- Used to determine which files need to be rebuilt and to drive the kernel compile
Tools and Utilities to use the kernel
fdformat --version
- Part of util-linux, which provides the disk and mount handling utilities
depmod -V
- Part of the module tools (module-init-tools / kmod) used to load and remove kernel modules
File System Tools
tune2fs
- Used to view and adjust tunable parameters on ext file systems such as ext4
Command to see what modules are loaded:
lsmod
See all modules, even if they are not loaded:
modprobe -l
Get detailed information on a module:
modinfo $module
Remove a module (assuming no other dependents are using it):
modprobe -r $module
See all kernel settings
sysctl -a
TCP_FIN_TIMEOUT This setting determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. During this TIME_WAIT state, reopening the connection to the client costs less than establishing a new connection. By reducing the value of this entry, TCP/IP can release closed connections faster, making more resources available for new connections. Adjust this in the presence of many connections sitting in the TIME_WAIT state:
# echo 30 > /proc/sys/net/ipv4/tcp_fin_timeout (default: 60 seconds, recommended 15-30 seconds)
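To make the change persist across reboots, the equivalent sysctl setting can be added to /etc/sysctl.conf (a minimal sketch using the 30 second value from above):
net.ipv4.tcp_fin_timeout = 30
Then apply it with:
sysctl -p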
Steps to compile and customize a kernel
The steps below will download the kernel source, decompress it, and then generate a default kernel configuration.
mkdir $place_to_put_the_kernel
cd $place_to_put_the_kernel
wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.9.tar.xz
xz -d linux-3.9.tar.xz
tar -xvf linux-3.9.tar
cd linux-3.9/
make defconfig
From here, we can customize the kernel further.
make menuconfig
Options when using menuconfig:
[*] = Selected, if no star then not selected
<Y> = Select module to be built into the kernel
<M> = Select module to be built as a loadable module, but not built into the kernel
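Once the configuration is saved, the remaining build and install steps usually look roughly like this (a sketch; the exact packages required and the boot loader update vary by distro):
make -j$(nproc)
make modules_install
make install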
Postfix
Log location:
/usr/local/psa/var/log/maillog
Some one-liners to figure out what is in the queue and how to remove bullshit emails.
mailq | grep ^[A-Z\|0-9] | awk '{print $7}' | cut -d '@' -f2 | sort | uniq -c | sort -rn | head -15
Once you figure out senders or whatever, you can do something like this to either delete the email or put it in the hold queue
Put in hold queue
mailq | grep $someshittydomain.com | awk '{print $1}' | postsuper -h -
Delete the emails
mailq | grep $someshittydomain.com | awk '{print $1}' | postsuper -d -
If these commands don't remove all the emails, you might need to use cut or tr to get rid of the "!" or "*" which sometimes get placed at the end of the email ID.
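For example, a sketch that strips any trailing "*" or "!" from the queue IDs before handing them to postsuper (here using tr rather than cut, with $someshittydomain.com still a placeholder):
mailq | grep $someshittydomain.com | awk '{print $1}' | tr -d '*!' | postsuper -d -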
Benchmarking Tools
Please visit this page for more up to date information
ZFS
This section is based on an excellent guide by Ars Technica.
Creating ZFS Pool
This will list available devices to use
ls -l /dev/disk/by-id
Once you determine what devices to use, this command will create the pool
zpool create -o ashift=12 $name $raidz_type /dev/disk/by-id/$disk1 /dev/disk/by-id/$disk2 /dev/disk/by-id/$disk3
NOTE
- -o ashift=12 means "use 4K blocksizes instead of the default 512 byte blocksizes," which is appropriate on almost all modern drives.
ZFS Commands
This will display raw capacity status
zpool list
This will display usable status
zfs list
You can create "filesystems" which are much like pre-formated paritions or folders.
zfs create $zfs_vol/$folder_name
You can and should create multiple filesystems so that you can manage each one individually. If you have groups of content that you already separate, such as images, movies, and text files, then it makes sense to create multiple filesystems. By doing this you can take advantage of ZFS's per-filesystem settings.
zfs set compression=on $zfs_vol/textfiles
zfs set quota=200G $zfs_vol/jpegs
View CPU Temps in Cent 6.5
For most new CPUs and motherboards this should be pretty simple to do. For this example, I'm using a newer SuperMicro motherboard.
## Install the package
yum -y install lm_sensors
## Detect the sensors; it should be fine to say YES to all the questions
sensors-detect
## If everything installed correctly, you should see all the CPU core temps
sensors
Example output (this server has an Intel E5-1650v2):
coretemp-isa-0000
Adapter: ISA adapter
Physical id 0:  +47.0°C  (high = +80.0°C, crit = +90.0°C)
Core 0:         +47.0°C  (high = +80.0°C, crit = +90.0°C)
Core 1:         +44.0°C  (high = +80.0°C, crit = +90.0°C)
Core 2:         +41.0°C  (high = +80.0°C, crit = +90.0°C)
Core 3:         +40.0°C  (high = +80.0°C, crit = +90.0°C)
Core 4:         +40.0°C  (high = +80.0°C, crit = +90.0°C)
Core 5:         +39.0°C  (high = +80.0°C, crit = +90.0°C)
Linux Memory Usage Overview
- http://virtualthreads.blogspot.com/2006/02/understanding-memory-usage-on-linux.html
- http://stackoverflow.com/questions/7880784/what-is-rss-and-vsz-in-linux-memory-management
There are two commonly displayed values for Linux RAM usage. When using a tool like ps, you will often see VSZ and RSS.
VSZ: "VSZ is the Virtual Memory Size. It includes all memory that the process can access, including memory that is swapped out and memory that is from shared libraries. "
RSS: "RSS is the Resident Set Size and is used to show how much memory is allocated to that process and is in RAM. It does not include memory that is swapped out. It does include memory from shared libraries as long as the pages from those libraries are actually in memory. It does include all stack and heap memory.
- RSS and VSZ do not accurately represent the real RAM usage for a process; they report the total RAM the process would use if it were the only process running, but many processes share memory if they use the same shared libraries (see the quick ps sketch after these notes).
- Shared libraries like libc are commonly used by many different applications, Linux is able to load the library once into RAM, and then multiple processes can re-use the same library at the same time without having to duplicate the library which would use more RAM. Linux is very efficient because of its ability to share libraries among many processes.
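To see the VSZ and RSS columns for one specific process, a quick sketch ($PID is a placeholder for the process ID you care about):
ps -o pid,vsz,rss,comm -p $PID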
You can use pmap to get more specific memory usage information from a process.
pmap -d $PID
An example command is:
pmap -d 15441
Address           Kbytes Mode   Offset           Device    Mapping
....
....
00007f574e0a4000      8 rw---  0000000000003000 0fc:00003 cStringIO.so
00007f574e0a6000     20 r-x--  0000000000000000 0fc:00003 stropmodule.so
00007f574e0ab000   2044 -----  0000000000005000 0fc:00003 stropmodule.so
00007f574e2aa000      8 rw---  0000000000004000 0fc:00003 stropmodule.so
00007f574e2ac000     12 r-x--  0000000000000000 0fc:00003 timemodule.so
00007f574e2af000   2048 -----  0000000000003000 0fc:00003 timemodule.so
00007f574e4af000      8 rw---  0000000000003000 0fc:00003 timemodule.so
00007f5754477000    540 rw---  0000000000000000 000:00000 [ anon ]
00007f5754507000     12 rw---  0000000000000000 000:00000 [ anon ]
00007fff09ca1000    112 rw---  0000000000000000 000:00000 [ stack ]
00007fff09dff000      4 r-x--  0000000000000000 000:00000 [ anon ]
ffffffffff600000      4 r-x--  0000000000000000 000:00000 [ anon ]
mapped: 196340K    writeable/private: 9372K    shared: 0K
- The lines that have "r-x--" are considered the code segments.
- The lines that have "rw---" are considered the data segments.
- The important information here is the "writeable/private" value, which is the incremental cost of the process once you remove all the other shared libraries that were already loaded / can be used by other processes.
Using an Apache process for another example:
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
nobody   22696  0.0  4.9 649624 49548 ?        Sl   17:45   0:01  \_ /usr/local/apache/bin/httpd -k start -DSSL
- VSZ reports 649624K, or about 634MB
- RSS reports 49548K, or about 48MB
Running pmap on that PID we see:
pmap -d 22696
....
....
mapped: 649624K    writeable/private: 63292K    shared: 184140K
- writeable/private: 63292K, or around 63MB. Comparing that to RSS and VSZ, you can see that much of this process's memory comes from shared libraries.
Storm and LiquidWeb API
You can find API documentation at the link listed below.
If you have issues using the Liquid Web API, the first step is to run a simple curl command to make sure you can connect to the API and that you are using the correct username and password. Replace $API_USER and $API_PASS with your credentials, and $API_HOST with the API endpoint hostname from the documentation. PLEASE be aware that this is not the most secure way to test: the command (and your credentials) will end up in the server's shell history, so you may want to put the command into a file and run it that way instead. You can also create a temporary API user just to test, then remove the user or update the password.
curl https://$API_USER:$API_PASS@$API_HOST/v1/utilities/info/ping.json
Docker run command line examples
This command will run a container in interactive mode and will put you in the container as soon as it is started.
docker run -i -t -p $IP:$HostPort:$ContainerPort -v $HostDirectory:$ContainerDirectory $Image $Command
An example command: run a container with Apache that listens on port 80 inside the container and port 9000 on the host. We will also have the container use a directory on the host so that data persists even if the container is stopped or killed.
docker run -p 8.8.8.8:9000:80 -v /partition1:/parition1 doge/apache:latest /usr/sbin/apache2ctl -D FOREGROUND
Quick and Dirty script to KILL off all containers
for each in `docker ps | awk '{print $1}'` ; do docker kill $each ; done
Quick and Dirty script to STOP all containers, this is slower than the above command
for each in `docker ps | awk '{print $1}'` ; do docker stop $each ; done
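A shorter equivalent for both of the above, assuming a Docker client that supports the -q flag (which prints only container IDs and skips the header line):
docker kill $(docker ps -q)
docker stop $(docker ps -q)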
Linux Kernel Networking
A really good article that explains how networking performance in the Linux kernel will need some improvements in the near future. - https://lwn.net/Articles/629155/