Day 57: Tmux, Stayin' alive

When you execute long-running tasks on remote servers, the SSH connection may break (poor network conditions, dead laptop battery…). Because the running process is attached to your session, it gets interrupted at the same time.

To avoid losing your current work when the terminal disconnects, you can use screen (old) or tmux (modern) on the remote server.

Usage

  • Open a new tmux session with $ tmux
  • List current tmux sessions with $ tmux list-sessions
  • Attach current terminal to existing tmux session: $ tmux attach

Tmux runs a daemon on the remote machine. Any command executed inside tmux is attached to the daemon, not to your tty.

Example

$ ssh 1.2.3.4
[root] $ tmux
[root ~ tmux] $ sleep 3600

Then kill your terminal or open a new one!

$ ssh 1.2.3.4
[root] $ tmux list-sessions
0: 1 windows (created Fri Sept 29 16:42:42 2017) [181x79]
[root] $ tmux attach

# And you see this:
[root ~ tmux] $ sleep 3600

Please note that you didn’t need to execute sleep 3600 again: it is still running \o/

Day 56: mastering sed with regexps

For a basic use of sed, please read the 2 previous posts ;)

$ cat foobar.txt
foo
bar

$ cat foobar.txt | sed -e 's/o$/f/'
fof
bar

$ cat foobar.txt | sed -e 's/^/- /g'
- foo
- bar

$ cat foobar.txt | sed -e 's#\(a\|o\)##g'
f
br

Variables

You can capture each group of the regexp and reinject it with \1, \2, … \9.

(Numbering starts at 1, not 0…)

$ cat foobar.txt
foo
bar

# Insert spaces between characters and swap the 2 first letters:
$ cat foobar.txt | sed 's#\(.\)\(.\)\(.\)#\2 \1 \3#g'
o f o
a b r

Do not forget the backslashes (\)!

:bulb: When playing with sed and escaping characters with \, I often use # as a separator to save my eyes ;)
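Most sed implementations (GNU and BSD at least) also support extended regexps with -E, which lets you drop the backslashes on groups entirely. The same swap as above:

```shell
# Same swap, using -E: groups are written ( ) instead of \( \)
echo foo | sed -E 's#(.)(.)(.)#\2 \1 \3#'
# → o f o
```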

Day 55: Removing line with sed

For a basic use of sed, please read the previous post ;)

2 ways for removing a line with sed:

With line number

$ cat foobar.txt
foo
bar
baz

# Removes line 2
$ cat foobar.txt | sed '2d'
foo
baz

:bulb: Please note that the first line is the line 1 (not 0).

With pattern

$ cat foobar.txt
foo
bar
baz

# Removes lines containing 'ba'
$ cat foobar.txt | sed '/ba/d'
foo
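Line addresses also accept ranges, which is standard sed behaviour. For example:

```shell
# Recreate the sample file, then remove lines 1 to 2
printf 'foo\nbar\nbaz\n' > foobar.txt
sed '1,2d' foobar.txt
# → baz
```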

Day 54: Basic replaces with sed

One of the most powerful unix/linux commands for text processing is sed.

The “find and replace” command has the following syntax:

sed 's/<before>/<after>/' <file>

Find and replace:

$ cat foobar.txt
foobar
barfoo
$ sed 's/foobar/barfoo/' foobar.txt
barfoo
barfoo

Separator

The / separator is very popular on the internet, but did you know that almost any character can be used as a separator?

$ sed 's#foobar#barfoo#' foobar.txt
barfoo
barfoo

$ sed 's?foobar?barfoo?' foobar.txt
barfoo
barfoo

Multiple occurrences

Just add g at the end of the sed command:

$ sed 's/bar/424242/' foobar.txt
foo424242
barfoo

$ sed 's/bar/424242/g' foobar.txt
foo424242
424242foo

Write changes on disk

This can be done with the -i argument:

# overwrite foobar.txt
$ sed -i 's/foobar/barfoo/' foobar.txt

$ cat foobar.txt
barfoo
barfoo
# overwrite foobar.txt and back up the previous file
$ sed -i.bak 's/foobar/barfoo/' foobar.txt

$ cat foobar.txt
barfoo
barfoo

$ cat foobar.txt.bak
foobar
barfoo

Day 53: /var/run/*.pid

Most daemons you start on a server create a file in /var/run/<name>.pid, such as `/var/run/mysqld.pid`, `/var/run/sshd.pid`...

It contains the PID of the running process. (see day-18 https://iadvize.github.io/devops-tip-of-the-day/tips/day-18-ps-command).

This is useful for 2 reasons:

  • It is used as a lock to avoid running the same daemon twice (/etc/init.d/mysqld start will not start until /var/run/mysqld.pid is removed).
  • It is also used for sending signals to a running process. For example, /etc/init.d/nginx reload reads the PID of the main nginx process from /var/run/nginx.pid, then sends a SIGHUP signal to that PID, to make it reload its configuration. (see day-21 https://iadvize.github.io/devops-tip-of-the-day/tips/day-21-unix-signals-most-used/)
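The reload logic of such an init script can be sketched as a small shell function (a sketch only: real init scripts hardcode their daemon's pidfile path, and the messages here are illustrative; kill -0 is the standard "does this PID exist" check):

```shell
# reload_daemon: send SIGHUP to the process recorded in a pidfile.
reload_daemon() {
    pidfile="$1"
    [ -f "$pidfile" ] || { echo "not running"; return 1; }
    pid="$(cat "$pidfile")"
    # kill -0 sends no signal: it only checks that the process exists
    if kill -0 "$pid" 2>/dev/null; then
        kill -HUP "$pid"            # ask the daemon to reload its config
        echo "sent SIGHUP to $pid"
    else
        echo "stale pidfile"        # the process died without cleaning up
        return 1
    fi
}
```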

:bulb: What happens if you force-remove the /var/run/nginx.pid file and then execute /etc/init.d/nginx stop?

Day 52: Cloud pattern: database read-write splitting

When your database handles a large amount of queries, the traditional pattern is to set up more database servers, increasing QPS by spreading the load across many instances. We basically redirect writes to the master and reads to the slave(s).

In the best scenario, database instances hold identical data, whatever your replication method (mostly master/slave). But this is not always true: most master/slave replications are asynchronous, and replication can break or slow down for many reasons.

R/W splitting strategies:

  • Always read from the master for queries that need strict consistency.
  • Always read from the slave for queries that do not need strict consistency.
  • Before read operations, ask the slave for its replication lag. If it is too long, fall back on the master (:warning: this doesn’t fix scaling issues if every request goes to the master).
  • Check the replication binlog position on the master after a write operation, then compare it with the position on the slave before reading. This tells you whether the change has been committed on the slave.
  • Set up a proxy that filters queries and redirects read or write operations to the right server (such as ProxySQL for MySQL).
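The lag-based fallback strategy can be sketched as a tiny routing function. The hostnames and the 5-second threshold below are made-up examples; in a real MySQL setup the lag would come from SHOW SLAVE STATUS (the Seconds_Behind_Master column) on the replica:

```shell
# choose_db: pick the server to read from, given the replication lag
# in seconds. Hostnames and the 5s threshold are illustrative only.
choose_db() {
    lag="$1"
    if [ "$lag" -le 5 ]; then
        echo "replica.example.com"   # fresh enough: read from the slave
    else
        echo "master.example.com"    # lagging: fall back on the master
    fi
}

choose_db 2    # → replica.example.com
choose_db 60   # → master.example.com
```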

:warning: Combining the proxy method with a magic ORM can be dangerous.

:warning: With the proxy method, please test your transaction model carefully: queries inside a transaction must all go to the same database server.

:warning: With the proxy method, if you need to read right after a write operation, put your read query in the same transaction, to avoid request splitting and nasty race conditions.

Day 51: Cloud pattern: Rolling-update deployment

Rolling-update deployment is the process of progressively upgrading the version of service instances. Example: automatically upgrade 1 instance every 30 seconds.

Database schema migrations must be done before the rolling update and stay compatible with the initial release.

Sometimes, we keep 2 releases running at the same time (90% of nodes on v1 and 10% on v1.1). Monitoring can then show failures before general availability of v1.1. In that case, using a load balancer with sticky routing may be necessary (a user always reaches the same instance).

Day 50: Cloud pattern: Blue/Green deployment

Blue/green deployment is a deployment process without downtime that makes rollback simple:

  • v1.0.0 is running on 10 instances
  • start 10 instances of v1.0.1
  • when every new instance is up, ask the load balancer to route the traffic to v1.0.1
  • if everything is working well, destroy the old instances (v1.0.0); otherwise, ask the load balancer to route the traffic back to v1.0.0

Migrations should be run before deploying v1.0.1, and the database schema must stay compatible with v1.0.0.

Day 49: Cloud pattern: CDN (Content Delivery Network)

The role of a CDN is to:

  • serve static content everywhere on earth, fast and at a consistent speed,
  • decrease bandwidth costs,
  • cache heavy static content: video, sound…

CDNs are cache servers distributed around the globe as close as possible to the end-user.

CDNs deliver cachable content. They won’t improve the performance of requests that must reach the origin server, such as requests with side effects (POST/PUT/PATCH/DELETE… HTTP requests).

DNS anycast is responsible for routing the request to the closest CDN point of presence.

In case of a DDoS attack, a CDN is a good way to protect your servers (it actually depends on your infrastructure).

CDNs can be used:

  • as a storage bucket
  • as a reverse proxy

Storage bucket

The cache is flushed by pushing new content right into the bucket.

Reverse proxy

A whole endpoint can be routed through this kind of CDN. Cached answers are served by the CDN from the second request onwards. All other calls are proxied to the origin server.

Depending on the CDN provider, caching can be based on layer 7 rules, such as: HTTP verbs, response status code, path, query-string or headers (ETag, Cache-Control, Origin…).

Flushing can be manual (in a web admin console) or automatic (ETag, max-age, no-cache, TTL…)

Famous CDN providers:

  • Cloudflare
  • CloudFront (AWS)
  • Akamai
  • EdgeCast
  • Fastly

Day 48: Cloud pattern: circuit breakers

In a large cloud infrastructure with lots of instances of a service, a circuit breaker helps to detect unhealthy nodes when errors rise.

For example, we can implement a circuit breaker in a load balancer by tracking the number (or ratio) of 5xx HTTP errors a service returns, or its average response time.

Associating circuit breakers with a deployment process allows auto-reverting when bad releases are deployed to production. \o/
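As a toy sketch of the error-ratio check (the 50% threshold and the raw counters are illustrative assumptions; real implementations like Hystrix use sliding windows):

```shell
# circuit_open: succeeds (exit 0) when more than 50% of requests failed.
circuit_open() {
    errors="$1"; total="$2"
    [ $((errors * 100 / total)) -gt 50 ]
}

if circuit_open 6 10; then
    echo "circuit open: stop routing traffic to this node"
fi
```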

Implementation examples and papers

  • Netflix: https://github.com/Netflix/Hystrix/wiki/How-it-Works
  • Uber: https://github.com/uber/tchannel/blob/master/docs/circuit-breaking.md
  • Martin Fowler: https://martinfowler.com/bliki/CircuitBreaker.html

Day 47: Generate password

The pwgen command helps you generate random passwords.

$ pwgen
se7fohR8 oosh4Ahv Iogein9L Diinah0v tohtei9O jie8aeXe ai1buPh8 aasoo1Oa
yei8bu5D ooC9aePo Obo0Aush thaeV8ph ieCoh4sh quoob4Ae uCoopho8 Echaoz2u
...

More complex:

$ pwgen -y
J}5cC#H& /1'\K.F, @2=&(p2R Q=&w_0Kk htJ$=z"6 lL>gM-7i v]^h*7Vs p[g^Cg93
:AVa{Fw9 R?M64Dzi E(8,t}s~ \7H<D8Dg .$5c'M4{ C7bV=QI_ g:,3ur<K 2c[6KD;B
...

One password of 15 characters:

$ pwgen -y 15 1
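If pwgen is not installed, a rough fallback with standard tools is to read the kernel's random source:

```shell
# One 15-character alphanumeric password from /dev/urandom
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 15; echo
```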

Install on Debian

$ apt-get update
$ apt-get install pwgen

Day 46: Nmap, ip range scanning

Since yesterday, we know how to get the list of open ports on a machine.

Sometimes, it is useful to scan a specific port across a larger IP range, to find a machine with a dynamic IP (in a NAT configuration, for example).

# Return a list of running machines with the SSH port (22) open:

$ nmap -sS -p 22 10.42.1.0/24

Starting Nmap 6.47 ( http://nmap.org ) at 2017-07-26 09:31 UTC
Nmap scan report for 10.42.1.1
Host is up (0.00019s latency).
PORT   STATE    SERVICE
22/tcp filtered ssh
MAC Address: 06:44:0C:F3:80:93 (Unknown)

Nmap scan report for 10.42.1.89
Host is up (-0.076s latency).
PORT   STATE  SERVICE
22/tcp closed ssh
MAC Address: 06:89:6A:F5:0B:75 (Unknown)

Nmap scan report for 10.42.1.112
Host is up (-0.076s latency).
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: 06:57:41:2F:0D:23 (Unknown)

Nmap scan report for 10.42.1.121
Host is up (-0.076s latency).
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: 06:2D:E4:27:3E:2B (Unknown)

Nmap scan report for 10.42.1.150
Host is up (-0.076s latency).
PORT   STATE  SERVICE
22/tcp closed ssh
MAC Address: 06:72:D8:4B:71:45 (Unknown)

Nmap scan report for 10.42.1.201
Host is up (-0.076s latency).
PORT   STATE SERVICE
22/tcp open  ssh
MAC Address: 06:30:ED:68:BB:2F (Unknown)

Nmap scan report for 10.42.1.209
Host is up (0.00022s latency).
PORT   STATE  SERVICE
22/tcp closed ssh
MAC Address: 06:60:87:D5:14:FD (Unknown)

Nmap scan report for 10.42.1.101
Host is up (0.000066s latency).
PORT   STATE SERVICE
22/tcp open  ssh

Nmap done: 256 IP addresses (8 hosts up) scanned in 4.14 seconds

Of course, nmap can be used for scanning public IP ranges. Please don’t do it: it may be prohibited by your ISP, or by some local laws: https://nmap.org/book/legal-issues.html

Day 45: Port scanning

The best way to check the open ports on a server is to use the nmap binary:

$ nmap api.iadvize.com

Starting Nmap 7.40 ( https://nmap.org ) at 2017-07-26 11:19 CEST
Nmap scan report for api.iadvize.com (35.157.221.55)
Host is up (0.022s latency).
Other addresses for api.iadvize.com (not scanned): 54.93.167.107
rDNS record for 35.157.221.55: ec2-35-157-221-55.eu-central-1.compute.amazonaws.com
Not shown: 998 filtered ports
PORT    STATE SERVICE
80/tcp  open  http
443/tcp open  https

Nmap done: 1 IP address (1 host up) scanned in 4.96 seconds

In this example, we can see that only 2 ports are open on the iAdvize load balancer: 80 and 443.

Install

On Debian:

$ apt-get update
$ apt-get install nmap

Day 44: command execution time

Just write time before your command. The time binary will execute your command and display the execution time:

$ time sleep 1

real    0m1.009s
user    0m0.004s
sys    0m0.007s

Day 43 - Tcpdump: traffic analysis

If you need to analyse network traffic at a low level, you can use the tcpdump unix command.

(It must be executed with root permissions.)

$ tcpdump
tcpdump: listening on en0, link-type EN10MB (Ethernet), capture size 262144 bytes
# In a different terminal
$ curl api.iadvize.com

You will see the following output:

16:06:12.711242 IP (tos 0x0, ttl 64, id 8023, offset 0, flags [DF], proto TCP (6), length 64)
    172.16.17.14.50466 > ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http: Flags [S], cksum 0xcda3 (correct), seq 3596701967, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 659243688 ecr 0,sackOK,eol], length 0
16:06:12.732035 IP (tos 0x0, ttl 243, id 0, offset 0, flags [DF], proto TCP (6), length 60)
    ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http > 172.16.17.14.50466: Flags [S.], cksum 0xa122 (correct), seq 3251349329, ack 3596701968, win 26847, options [mss 1460,sackOK,TS val 51079274 ecr 659243688,nop,wscale 8], length 0
16:06:12.732144 IP (tos 0x0, ttl 64, id 14042, offset 0, flags [DF], proto TCP (6), length 52)
    172.16.17.14.50466 > ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http: Flags [.], cksum 0x28a5 (correct), seq 1, ack 1, win 4117, options [nop,nop,TS val 659243709 ecr 51079274], length 0
16:06:12.732537 IP (tos 0x0, ttl 64, id 63996, offset 0, flags [DF], proto TCP (6), length 131)
    172.16.17.14.50466 > ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http: Flags [P.], cksum 0xbcd9 (correct), seq 1:80, ack 1, win 4117, options [nop,nop,TS val 659243709 ecr 51079274], length 79: HTTP, length: 79
	GET / HTTP/1.1
	Host: api.iadvize.com
	User-Agent: curl/7.51.0
	Accept: */*

16:06:12.753267 IP (tos 0x0, ttl 243, id 53026, offset 0, flags [DF], proto TCP (6), length 52)
    ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http > 172.16.17.14.50466: Flags [.], cksum 0x37fc (correct), seq 1, ack 80, win 105, options [nop,nop,TS val 51079280 ecr 659243709], length 0
16:06:12.775466 IP (tos 0x0, ttl 243, id 53027, offset 0, flags [DF], proto TCP (6), length 494)
    ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http > 172.16.17.14.50466: Flags [P.], cksum 0xe54b (correct), seq 1:443, ack 80, win 105, options [nop,nop,TS val 51079285 ecr 659243709], length 442: HTTP, length: 442
	HTTP/1.1 200 OK
	Accept-Ranges: bytes
	Content-Type: text/html
	Date: Tue, 30 May 2017 14:06:12 GMT
	ETag: "56cc643e-9f"
	Last-Modified: Tue, 23 Feb 2016 13:53:02 GMT
	Server: openresty
	Vary: Accept-Encoding
	X-Powered-By: iSystemize
	Content-Length: 159
	Connection: keep-alive

	<html>
	    <head>
	        <title>Welcome on iAdvize!</title>
	    </head>
	    <body>
	        <h1>You should not be getting here buddy!</h1>
	    </body>
	</html>
16:06:12.775552 IP (tos 0x0, ttl 64, id 1139, offset 0, flags [DF], proto TCP (6), length 52)
    172.16.17.14.50466 > ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http: Flags [.], cksum 0x2676 (correct), seq 80, ack 443, win 4103, options [nop,nop,TS val 659243750 ecr 51079285], length 0
16:06:12.775807 IP (tos 0x0, ttl 64, id 45582, offset 0, flags [DF], proto TCP (6), length 52)
    172.16.17.14.50466 > ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.http: Flags [F.], cksum 0x2675 (correct), seq 80, ack 443, win 4103, options [nop,nop,TS val 659243750 ecr 51079285], length 0

In this dump, we can see my host (172.16.17.14) opening a new TCP connection to ec2-35-158-49-198.eu-central-1.compute.amazonaws.com.

The local port is 50466.

The sizes of the TCP/IP packets are 64, 60, 52… bytes.

The HTTP request is 79 bytes long. The answer is 442 bytes long.

Outgoing IP packets to AWS leave with a TTL of 64.

More about tcpdump tomorrow ;)

Day 42 - The answer

Today is day 42 \o/

For this very special day, I just made a little quiz => http://bit.ly/TipOfTheDay-Day42

Enjoy!

Day 41 - Xargs

For complex scripts, you sometimes need to pass the previous command’s output to a command, as an argument.

xargs, followed by a command, turns everything it reads on its input into arguments for that command.

$ echo /etc/passwd | xargs wc -l

# is equal to:

$ wc -l /etc/passwd

The wrong way:

# Print every *.js file of a directory (and sub-directories)
$ cat $(find . -name '*.js')

The clean way:

# Print every *.js file of a directory (and sub-directories)
$ find . -name '*.js' | xargs cat

:warning: xargs buffers all of its inputs before running the command.

Running a command for each input

Using the -I argument followed by a pattern makes your command run once for each input.

$ find . -name '*.js' | xargs -I {} wc -l {}

# is equal to

$ wc -l foobar.js
$ wc -l barfoo.js
$ wc -l foofoo.js
$ wc -l barbar.js
...

:bulb: “{}” can be replaced by any character/pattern and must be identical everywhere in your command.

Example:

# Fetch every repo on your disk before going offline.
ls ~/projects | xargs -I {} sh -c 'cd ~/projects/{} && git fetch'
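Besides -I, the -n argument controls how many inputs are passed to each invocation (standard xargs behaviour):

```shell
# Run echo once per group of 2 arguments:
printf 'a\nb\nc\nd\n' | xargs -n 2 echo
# → a b
# → c d
```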

Day 40 - s3cmd

If you want to browse AWS S3 without going through the website, 2 methods are available:

  • a GUI: my favorite is DragonDisk
  • a command line tool: s3cmd

Here, I will talk about s3cmd, which I use pretty often for scripting.

$ apt-get install s3cmd
# Display available functions
$ s3cmd -h

# List buckets
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo ls

# List first-level files and directories inside the idz-backups bucket
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo ls s3://idz-backups

# Create a bucket
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo mb s3://idz-test

# Download an s3 file to a local directory
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo get s3://idz-backups/postgresql/dump-2017.05.19.tgz /tmp/dump-2017.05.19.tgz

# Upload a local file to an s3 bucket
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo put /tmp/dump-2017.05.19.tgz s3://idz-backups/postgresql/dump-2017.05.19.tgz

# Delete a file from an s3 bucket
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo del s3://idz-backups/tmp.sql

# Put a file with a ttl (expires in 1 year)
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo \
        put \
        --add-header="Expires:`date -u +"%a, %d %b %Y %H:%M:%S GMT" --date "+1 years"`" \
        /tmp/dump-2017.05.19.tgz s3://idz-backups/postgresql/dump-2017.05.19.tgz

# Put an entire directory
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo put --recursive large-directory/ s3://idz-backups/postgresql/large-directory

# Move files from bucket A to bucket B
$ s3cmd --region=eu-central-1 --access_key=foobar --secret_key=barfoo mv --recursive s3://idz-backups-postgresql/ s3://idz-backups/postgresql

Please note that you may get some 403 (access denied) errors, due to the limited permissions given to your access token.

Multipart upload is enabled by default. \o/

Day 37 - Grep: matching with regexp

Grep argument of the day: -E.

It matches lines against an extended regular expression \o/

$ git clone git@github.com:iadvize/devops-tip-of-the-day.git
# Displays every grep command in the repo (lines starting with "$ grep").
$ grep -nr -E '^\$ grep' devops-tip-of-the-day/
./_posts/2017-05-10-day-34-grep-recursive.markdown:14:$ grep -n -r 'grep' devops-tip-of-the-day/ | grep -v \.git
./_posts/2017-05-11-day-35-grep-case-insensitive.markdown:20:$ grep foobar foobar.txt
./_posts/2017-05-11-day-35-grep-case-insensitive.markdown:22:$ grep -i foobar foobar.txt
./_posts/2017-05-12-day-36-grep-excluding-files.markdown:17:$ grep -nr grep devops-tip-of-the-day/
./_posts/2017-05-12-day-36-grep-excluding-files.markdown:27:$ grep -nr -I --exclude-dir={.bzr,CVS,.git,.hg,.svn} grep devops-tip-of-the-day/
./_posts/2017-05-15-day-37-grep-regexp.markdown:16:$ grep -nr -E '^$ grep' devops-tip-of-the-day/
# Displays Tips of March and May
$ tree -Cfi | grep -E '2017-(03|05)'
22:./_posts/2017-03-10-day-01-shebang.markdown
23:./_posts/2017-03-13-day-02-docker-week-aufs-layers.markdown
24:./_posts/2017-03-14-day-03-docker-week-pid-1.markdown
25:./_posts/2017-03-15-day-04-docker-week-dockerignore.markdown
26:./_posts/2017-03-16-day-05-docker-week-debugging-docker.markdown
27:./_posts/2017-03-17-day-06-docker-week-docker-entrypoint-and-cmd.markdown
28:./_posts/2017-03-20-day-07-human-readable-outputs.markdown
29:./_posts/2017-03-21-day-08-tree-command.markdown
30:./_posts/2017-03-22-day-09-cd-git-checkout-behind.markdown
31:./_posts/2017-03-23-day-10-ssh-config-basis.markdown
32:./_posts/2017-03-24-day-11-ssh-config-pattern-maching.markdown
33:./_posts/2017-03-27-day-12-ssh-config-bastion-pattern.markdown
34:./_posts/2017-03-28-day-13-ssh-config-multiple-identities.markdown
35:./_posts/2017-03-29-day-14-scp-transporting-files-through-ssh.markdown
36:./_posts/2017-03-30-day-15-ssh-tunnelling-port-forwarding.markdown
37:./_posts/2017-03-31-day-16-ssh-config-tunnelling.markdown
52:./_posts/2017-05-04-day-31-grep-exclude.markdown
53:./_posts/2017-05-05-day-32-grep-count-occurence.markdown
54:./_posts/2017-05-09-day-33-grep-line-number.markdown
55:./_posts/2017-05-10-day-34-grep-recursive.markdown
56:./_posts/2017-05-11-day-35-grep-case-insensitive.markdown
57:./_posts/2017-05-12-day-36-grep-excluding-files.markdown
58:./_posts/2017-05-15-day-37-grep-regexp.markdown

Day 36 - Grep: excluding files

Grep arguments of the day: --exclude and --exclude-dir.

They exclude some files or directories from the search.

The -I argument excludes binary files.

$ git clone git@github.com:iadvize/devops-tip-of-the-day.git

$ grep -nr grep devops-tip-of-the-day/
../.git/hooks/commit-msg.sample:16:# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
../.git/hooks/commit-msg.sample:20:test "" = "$(grep '^Signed-off-by: ' "$1" |
../.git/hooks/pre-push.sample:44:		commit=`git rev-list -n 1 --grep '^WIP' "$range"`
../.git/hooks/prepare-commit-msg.sample:36:# grep -qs "^$SOB" "$1" || echo "$SOB" >> "$1"
Binary file ../.git/index matches
../_posts/2017-03-21-day-08-tree-command.markdown:61:Pretty usefull for grep parsing:
../_posts/2017-03-21-day-08-tree-command.markdown:64:$ tree -Cfi | grep .js$
...

$ grep -nr -I --exclude-dir={.bzr,CVS,.git,.hg,.svn} grep devops-tip-of-the-day/
../_posts/2017-03-21-day-08-tree-command.markdown:61:Pretty usefull for grep parsing:
../_posts/2017-03-21-day-08-tree-command.markdown:64:$ tree -Cfi | grep .js$
...
# ~/.bashrc
alias grep='grep -I --exclude-dir={.bzr,CVS,.git,.hg,.svn}'

Day 35 - Grep: case insensitive

Grep argument of the day: -i

It makes your search case-insensitive.

$ cat > foobar.txt <<EOF
foobar
FOOBAR
EOF
$ grep foobar foobar.txt
foobar
$ grep -i foobar foobar.txt
foobar
FOOBAR
# ~/.bashrc
alias grep='grep -i'

Day 39 - Fake S3

Some of you may be using AWS S3 as distributed storage. Did you know you can set up a local server with exactly the same API, for dev or continuous integration?

It’s called Fake S3.

https://github.com/jubos/fake-s3

$ docker run -v $(pwd)/data:/var/data/fakes3 -p 3128:3128 leogamas/fakes3

[2017-05-17 12:30:22] INFO  WEBrick 1.3.1
[2017-05-17 12:30:22] INFO  ruby 1.9.3 (2013-11-22) [x86_64-linux]
[2017-05-17 12:30:22] INFO  WEBrick::HTTPServer#start: pid=1 port=3128

Day 38 - Exit code

In a shell, you can get the exit code of the last command executed, using the variable $? (or ${?}).

Examples:

$ bash -c 'exit 42'
$ echo $?
42

$ ls /foobar
ls: /foobar: No such file or directory
$ echo $?
1

$ pwd
/Users/samuelberthe/project/github.com/devops-tip-of-the-day/_posts
$ echo $?
0
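Exit codes are also what drives the shell’s && and || operators, which makes for handy one-liners:

```shell
# && runs the next command only when the previous one exited with 0
true && echo "previous command succeeded"
# || runs it only on a non-zero exit code
false || echo "previous command failed"
```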

Day 34 - Recursive grep

Grep argument of the day => -r

It searches a pattern through sub-directories.

$ git clone git@github.com:iadvize/devops-tip-of-the-day.git
$ grep -n -r 'grep' devops-tip-of-the-day/ | grep -v \.git
6:devops-tip-of-the-day//_posts/2017-03-21-day-08-tree-command.markdown:61:Pretty usefull for grep parsing:
7:devops-tip-of-the-day//_posts/2017-03-21-day-08-tree-command.markdown:64:$ tree -Cfi | grep .js$
8:devops-tip-of-the-day//_posts/2017-04-04-day-18-ps-command.markdown:57:$ ps aux | grep sleep
9:devops-tip-of-the-day//_posts/2017-04-04-day-18-ps-command.markdown:59:root     14096  0.0  0.0 2423376 272  ?        R+   11:35AM 0:00.00 grep --color -i sleep
10:devops-tip-of-the-day//_posts/2017-04-04-day-18-ps-command.markdown:61:$ ps aux | grep sleep
11:devops-tip-of-the-day//_posts/2017-04-04-day-18-ps-command.markdown:62:root     14096  0.0  0.0 2423376 272  ?        R+   11:35AM 0:00.00 grep --color -i sleep
12:devops-tip-of-the-day//_posts/2017-04-06-day-20-unix-signals-fork-behavior.markdown:11:$ ps -axfo pid,ppid,tid,comm | grep nginx
13:devops-tip-of-the-day//_posts/2017-04-06-day-20-unix-signals-fork-behavior.markdown:31:$ ps -axfo pid,ppid,tid,comm | grep nginx
14:devops-tip-of-the-day//_posts/2017-05-04-day-31-grep-exclude.markdown:3:title:  "Day 31 - grep - Exclude results"
15:devops-tip-of-the-day//_posts/2017-05-04-day-31-grep-exclude.markdown:8:Grep is a powerful command. You can reverse the grep command, by excluding a pattern with -v argument:
16:devops-tip-of-the-day//_posts/2017-05-04-day-31-grep-exclude.markdown:23:$ cat foobar.txt | grep cd
17:devops-tip-of-the-day//_posts/2017-05-04-day-31-grep-exclude.markdown:30:$ cat foobar.txt | grep -v cd
18:devops-tip-of-the-day//_posts/2017-05-04-day-31-grep-exclude.markdown:38:$ cat foobar.txt | grep a | grep -v cd
19:devops-tip-of-the-day//_posts/2017-05-05-day-32-grep-count-occurence.markdown:3:title:  "Day 32 - Grep: counting occurences"
20:devops-tip-of-the-day//_posts/2017-05-05-day-32-grep-count-occurence.markdown:8:The next useful grep argument is -c. It counts the number of time a pattern is matched:
21:devops-tip-of-the-day//_posts/2017-05-05-day-32-grep-count-occurence.markdown:12:$ cat /etc/passwd | grep -c /bin/bash
22:devops-tip-of-the-day//_posts/2017-05-05-day-32-grep-count-occurence.markdown:15:$ cat /etc/passwd | grep /bin/bash
23:devops-tip-of-the-day//_posts/2017-05-09-day-33-grep-line-number.markdown:3:title:  "Day 33 - Grep: line number"
24:devops-tip-of-the-day//_posts/2017-05-09-day-33-grep-line-number.markdown:8:Grep argument of the day => -n.
25:devops-tip-of-the-day//_posts/2017-05-09-day-33-grep-line-number.markdown:15:$ cat /etc/passwd | grep -n /bin/bash
26:devops-tip-of-the-day//_posts/2017-05-10-day-34-grep-recursive.markdown:3:title:  "Day 34 - Recursive grep"
27:devops-tip-of-the-day//_posts/2017-05-10-day-34-grep-recursive.markdown:8:Grep argument of the day => -r

Day 33 - Grep: line number

Grep argument of the day => -n.

It displays results with line number.

Example:

$ cat /etc/passwd | grep -n /bin/bash
1:root:x:0:0:root:/root:/bin/bash
28:samber:x:1000:1000::/home/samber:/bin/bash