Linux – Adams Bros Blog

Docker Building – Caching
Mon, 26 Jun 2017

So you're interested in creating your own Docker image? You keep finding it takes several minutes per run, which is a real pain. No problem, cache everything!

  1. Wander over to Docker Squid and start using their squid in a docker image.
  2. Once the docker container is up and running, modify its /etc/squid3/squid.conf to change the cache_dir to allow 10G of storage and to cache large objects, such as 200M (the default is only 512KB). There is an existing cache_dir line, so search for it with the regex "^cache_dir". Then change the entire line to something like "cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000", which results in 10G of cache with 200M objects being cacheable.
  3. Change your Dockerfile to use a RUN line with exports for http_proxy, https_proxy, and ftp_proxy pointing at your host's IP and port 3128 (see the sketch below). Don't use ENV, as that would permanently bake the proxy settings into the image. My line starts with "RUN export http_proxy=http://192.168.2.5:3128; export https_proxy=http://192.168.2.5:3128; export ftp_proxy=http://192.168.2.5:3128; \".
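
Here is a minimal Dockerfile sketch of the idea. The base image and the apt-get commands are placeholders for whatever your build actually does; the host IP and port come from step 3. Because the exports live and die inside a single RUN, the proxy settings never end up in the final image:

FROM ubuntu:16.04
# Placeholder build step; substitute your real package installs.
# The exports only exist for the duration of this one RUN.
RUN export http_proxy=http://192.168.2.5:3128; \
    export https_proxy=http://192.168.2.5:3128; \
    export ftp_proxy=http://192.168.2.5:3128; \
    apt-get update && \
    apt-get install -y build-essential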

Doing this took my docker builds from 6-7 minutes down to about 1.5 minutes.

Linux Disk Usage Exclude Dot Directories
Sun, 13 Dec 2015

I'm always searching for how to do this because I keep forgetting. So here it is, no more searching. The .[!.]* glob matches the dot files and dot directories while skipping the . and .. entries, and -mcs gives a per-item total in megabytes for everything in the home directory, plus a grand total, which sort -n then orders.

du -mcs /home/user/* /home/user/.[!.]* | sort -n
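
If you prefer human-readable sizes, the same trick works with du -h and GNU sort's -h flag, which sorts human-readable numbers (assuming GNU coreutils):

du -hcs /home/user/* /home/user/.[!.]* | sort -h
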
Making New Linux Disk Visible Without a Reboot
Mon, 16 Nov 2015

I was having trouble today getting Linux to see new partition space that I added in vSphere without rebooting the host. The new disk space was made visible by re-scanning the SCSI bus, and the new partition was then made visible with the partprobe command (both shown below).

I asked VMware to provision the disk larger and then asked Linux to refresh the kernel's view of it:

 $ echo 1 > /sys/class/scsi_device/0\:0\:3\:0/device/rescan 
 $ dmesg
 sdd: Write Protect is off 
 sdd: Mode Sense: 61 00 00 00 
 sdd: cache data unavailable 
 sdd: assuming drive cache: write through 
 sdd: detected capacity change from 171798691840 to 343597383680
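
That rescan path targets one specific SCSI device, which is right for a disk that has grown. If you've attached an entirely new disk instead, a common approach (a sketch; the "- - -" wildcards mean any channel, target, and LUN) is to ask every SCSI host to scan for new devices:

for host in /sys/class/scsi_host/host*/scan; do
    echo "- - -" > "$host"
done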


I added another partition and then tried to get LVM to use it:

$ fdisk /dev/sdd
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3

But LVM couldn’t see it:

 $ pvcreate /dev/sdd3
 Device /dev/sdd3 not found (or ignored by filtering).
 $ pvcreate -vvvv /dev/sdd3
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory
 #metadata/metadata.c:3546 
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory

The solution was to use partprobe to inform the OS of partition table changes:

 $ partprobe /dev/sdd
 $ pvcreate /dev/sdd3
 dev_is_mpath: failed to get device for 8:51
 Writing physical volume data to disk "/dev/sdd3"
 Physical volume "/dev/sdd3" successfully created
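
From there it's ordinary LVM work. A sketch of what typically follows, using a hypothetical volume group vg0 and logical volume lv_data (substitute your own names, sizes, and file system resize tool):

vgextend vg0 /dev/sdd3                 # add the new physical volume to the volume group
lvextend -L +100G /dev/vg0/lv_data     # grow the logical volume
resize2fs /dev/vg0/lv_data             # grow an ext3/ext4 file system to match
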
Backups Made Simple
Mon, 31 Aug 2015

I've had complex backup solutions in the past, which I wrote myself with rsync and bash. I recently got sick and tired of the issues that come up with them now and then, so I decided to keep it extremely simple and opted for a couple of rsync-only shell scripts. I get an email every time they run, courtesy of cron.

The cron jobs look like this…

0 8-19 * * 1 /home/trenta/backup/monday-rsync.sh
0 8-19 * * 2-5 /home/trenta/backup/daily-rsync.sh

monday-rsync.sh…

#!/bin/bash
# /home/trenta/backup/monday-rsync.sh
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt /home/trenta/ /media/backup/trenta-home/

daily-rsync.sh…

#!/bin/bash
# /home/trenta/backup/daily-rsync.sh
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt --link-dest=/media/backup/trenta-home/ /home/trenta/ /media/backup/trenta-home-$(/bin/date +'%w')/
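
The --link-dest option is what keeps this cheap: files unchanged since Monday's full copy are hard-linked rather than copied, so each daily snapshot only costs space for what actually changed. The daily-excludes.txt file isn't shown above; a hypothetical example of what such a file might contain (one rsync pattern per line):

.cache/
.thumbnails/
Downloads/
*.tmp
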
OpenLDAP SSHA Salted Hashes By Hand
Tue, 09 Jun 2015

I needed a way to verify that the OpenLDAP server had the correct hash recorded. That is, an SSHA hash generator that I could run from the command line was in order. After fiddling through it, I thought it would be worth documenting in a blog post.

We need to find the salt (the last four bytes of the decoded hash), take the SHA1 hash of PASSWORD+SALT, append the salt to that digest, convert to base64, prepend {SSHA}, and then finally base64 encode that whole string again (the :: in ldapsearch output means the attribute value is base64 encoded).

Here is what ldapsearch says is the password hash:

userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=

The first step is to retrieve the (binary) salt, which is stored at the end of the double base64 encoded hash. I wrote a Perl SSHA salt extractor for that:

$ cat getSalt.pl
#!/usr/bin/perl -w
use strict;
use MIME::Base64;

# The hash is encoded as base64 twice.
my $hash = decode_base64($ARGV[0]);
$hash =~ s/^\{SSHA\}//;
$hash = decode_base64($hash);

# The salt is the last four bytes of the decoded hash.
my $salt = substr($hash, -4);

# Print each salt byte as a human readable hexadecimal number.
foreach my $byte (split(//, $salt)) {
    print uc(unpack("H*", $byte));
}
print "\n";
$ ./getSalt.pl e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=
68C9E8F9

That is the four salt bytes rendered as human readable hexadecimal. Another way to do the same thing straight from the shell:

echo "e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=" | base64 -d | sed 's/^{SSHA}//' | base64 -d | tail -c4 | od -tx1

Now that we have extracted the salt, we can regenerate the salted hash with what we think the password should be.

$ cat openldap-sha.pl
#!/usr/bin/perl
use strict;
use Digest::SHA1;
use MIME::Base64;

my $p = $ARGV[0];  # the password to test
my $s = $ARGV[1];  # the salt, in hexadecimal

# Convert the salt from hex back to binary.
my $s_bin = pack('H*', $s);

# Hash PASSWORD+SALT.
my $digest = Digest::SHA1->new;
$digest->add($p);
$digest->add($s_bin);

# Append the salt to the digest, encode as base64, and prepend {SSHA}.
my $hashedPasswd = '{SSHA}' . encode_base64($digest->digest . $s_bin, '');

# Encode as base64 again, the way ldapsearch displays it.
print 'userPassword:: ' . encode_base64($hashedPasswd, '') . "\n";

$ ./openldap-sha.pl correcthorsebatterystaple 68C9E8F9
userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=
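
For completeness, here is a rough pure-shell equivalent of openldap-sha.pl, assuming bash (for printf's \x escapes), openssl, and coreutils base64; the salt bytes are the ones extracted earlier. Given the right password, it should print the same userPassword:: value:

PASS=correcthorsebatterystaple
SALT='\x68\xc9\xe8\xf9'
# base64( SHA1(password+salt) + salt ), i.e. the inner encoding...
HASH=$( { printf '%s' "$PASS"; printf "$SALT"; } \
        | openssl dgst -sha1 -binary \
        | { cat; printf "$SALT"; } \
        | base64 )
# ...then prepend {SSHA} and base64 the whole thing again.
printf '{SSHA}%s' "$HASH" | base64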

And there you have it.

Tora Install on Linux Mint 17
Mon, 05 Jan 2015

We need to start by installing the Oracle Instant Client appropriate for your architecture. Grab the Oracle Instant Client RPMs from Oracle's site. Then run…

sudo alien --to-deb oracle-instantclient11.2-*.rpm
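
alien only converts the RPMs into .deb packages; it doesn't install them, so follow up with:

sudo dpkg -i oracle-instantclient11.2-*.deb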

Patch

Solution found from a tora bug report.

diff -aur tora-2.1.3/src/toextract.h tora-2.1.3patched/src/toextract.h
--- tora-2.1.3/src/toextract.h	2010-02-02 10:25:43.000000000 -0800
+++ tora-2.1.3patched/src/toextract.h	2012-06-22 21:58:45.026286147 -0700
@@ -53,6 +53,7 @@
 #include 
 //Added by qt3to4:
 #include 
+#include 
 
 class QWidget;
 class toConnection;

Build

svn export https://svn.code.sf.net/p/tora/code/tags/tora-2.1.3
cd tora-2.1.3
apt install cmake libboost-system1.55-dev libqscintilla2-dev g++
cmake -DORACLE_PATH_INCLUDES=/usr/include/oracle/11.2/client64 \
    -DCMAKE_BUILD_TYPE=Release -DENABLE_PGSQL=false -DENABLE_DB2=false .

patch -p1 # paste the above patch in on stdin

make
sudo tar --strip-components 2 -C /usr/local/bin/ -xzvf tora-2.1.3-Linux.tar.gz

Enumerate All Block Devices
Sun, 07 Dec 2014

I found a great little utility that can enumerate all of the block devices, including your LVM volumes, crypt devices, etc.

It’s found in the util-linux-ng package, and is called lsblk.  Here’s an example run…

[04:04:06 root@tdanas ~]
$ lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                       8:0    0 111.8G  0 disk  
├─sda1                    8:1    0  1000M  0 part  /boot
└─sda2                    8:2    0 110.8G  0 part  
  ├─nas-root (dm-0)     253:0    0  19.5G  0 lvm   /
  ├─nas-swap (dm-1)     253:1    0  1000M  0 lvm   [SWAP]
  ├─nas-log (dm-3)      253:3    0   9.8G  0 lvm   /var/log
  └─nas-tmp (dm-4)      253:4    0   9.8G  0 lvm   /tmp
sdb                       8:16   0   2.7T  0 disk  
├─sdb1                    8:17   0 931.3G  0 part  
│ └─md10                  9:10   0   500G  0 raid1 
│   └─pv0 (dm-5)        253:5    0   500G  0 crypt 
│     └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data
└─sdb2                    8:18   0   1.8T  0 part  
sdc                       8:32   0 931.5G  0 disk  
└─sdc1                    8:33   0 931.5G  0 part  
  └─md10                  9:10   0   500G  0 raid1 
    └─pv0 (dm-5)        253:5    0   500G  0 crypt 
      └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data
sdd                       8:48   0   2.7T  0 disk  
└─sdd1                    8:49   0   2.7T  0 part  
  └─3TBak_crypt (dm-7)  253:7    0   2.7T  0 crypt 
    └─3TB-safe (dm-8)   253:8    0   1.5T  0 lvm   /media/cvsafe
sde                       8:64   0 931.5G  0 disk  
└─sde1                    8:65   0   500G  0 part  
  └─md10                  9:10   0   500G  0 raid1 
    └─pv0 (dm-5)        253:5    0   500G  0 crypt 
      └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data

You can also specify the -f flag if you want the UUIDs of the block devices for recovery information.
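
If you want full control over the columns, -o takes an explicit list; for example (NAME, FSTYPE, UUID, and MOUNTPOINT are all standard lsblk columns):

lsblk -f
lsblk -o NAME,FSTYPE,UUID,MOUNTPOINT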

No Space Left On Device – Linux
Sun, 23 Nov 2014

So, perhaps you've seen this message, but when you check the space usage, it's just fine. That's usually because your file system has run out of inodes rather than blocks. If you're using your drive space for backups, and those backups consume a lot of inodes, you'll exhaust the file system very quickly.
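
You can confirm that it's inodes rather than blocks with df -i, which reports inode usage per file system; an IUse% of 100% alongside plenty of free blocks is the telltale sign:

df -i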

Inodes are the pointers at the beginning of the disk that point to each data block. They also contain extra information such as file permissions, user, and group.

Well, let’s create a file system that has more inodes, so that this doesn’t happen as quickly.

Solution

For my backups, I use rsync to do incremental, differential, and full backups in a single pass. It does this via hard links, which use up inodes but very little actual block space. I believe this happens because directories cannot be hard-linked, so every directory in every snapshot costs a fresh inode. So, we need a lot more inodes if we're going to do that. The -i option to mkfs sets the bytes-per-inode ratio; when you reduce the space each inode covers, you get an inversely proportional increase in inodes. But this can also reduce throughput and increase fragmentation, so I don't recommend doing this for standard file systems. I do this only for my backup drive.

mkfs.ext4 -b 2048 -i 2048 /dev/3GBackup2/safe

Background

Basically, after reading this, you should be creating file systems based on your needs. If you are running software that creates huge numbers of small files, then you're going to need smaller data blocks and more inodes. If you are running software that creates very large files, then lots of inodes just waste space, so you should use much larger -b and -i options to mkfs.

Let's say you have a file system that has 8192 data blocks, and those data blocks are 1024 bytes each. That's an 8 MB file system. Creating it also reserves an inode table, which on a file system this size comes to roughly 2048 inodes with the defaults. Every file needs an inode to point at it, so that table caps you at about 2048 files no matter how tiny they are, and the table itself implicitly eats some of the disk; you never really get the full 8 MB. Now let's look at some scenarios. The default command will create such a file system.

mkfs.ext4 /dev/nas/test

Scenario 1 Lots of small files

If you create a bunch of files no bigger than the bytes-per-inode ratio, you'll run out of inodes at roughly the same time as you run out of data blocks.

Scenario 2 Large files

You can also create a single large file and use only one of the inodes. This wastes space in the inode section at the beginning of the disk, which isn't really that big of a deal, but you should try to optimize your settings based on your needs. The command below keeps writing until the disk is full, resulting in a single large file that uses only one extra inode. If you use dumpe2fs to dump the inode usage count before and after, you can see the difference.

dd if=/dev/zero of=blah.bin
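
To see the inode accounting directly, dump the superblock before and after; dumpe2fs -h prints the Inode count and Free inodes fields, among others (device path from the example above):

dumpe2fs -h /dev/nas/test | grep -iE 'inode count|free inodes'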

Scenario 3 Hard links

Hard links use virtually no space. From what I can tell, I was able to create 64999 links to one file (ext4 caps a file at 65000 links) before ln started failing…

dd if=/dev/zero of="0.bin" bs=2048 count=12
count=1
while ln "0.bin" "$count.bin"; do
    ((count++))
done
ls -1tr | sort -n

Linux Software RAID
Sun, 16 Nov 2014

One of the drives in my RAID died, so I went and bought a 3TB drive as a replacement. The RAID is only 1TB, so I'll use the rest of the drive for something else.

It drives me nuts that every site out there describes in great detail how to do different things. Anyone can read the man page, or issue parted's "help mkpart", for example; what I usually want is a quick start. So, here's a quick rundown of how I re-set up the RAID drive. At the bottom is a full rundown of how I think I originally set up the RAID, but don't quote me on it. 😀

We run through the basic use of parted and mdadm.

Create and Add RAID Mirror

parted /dev/sdb
mklabel gpt
mkpart primary 1 1T
set 1 raid on
align-check
mdadm --manage /dev/md10 -a /dev/sdb1
cat /proc/mdstat


RAID Setup

# create a mirror with two devices
mdadm --create /dev/md10 --force --level=1 --raid-devices=2 /dev/sdb1 /dev/sde1

That’s really all there is to it. From there, you can just use /dev/md10 as you would any other drive or partition. You can use it as a raw disk with partitions, or turn it into an encrypted volume, or a physical volume for LVM, etc.


RAID Cheat Sheet

# create mirror with only one device
mdadm --create /dev/md10 --force --level=1 --raid-devices=1 /dev/sdb3

# create a mirror with two devices
mdadm --create /dev/md10 --force --level=1 --raid-devices=2 /dev/sdb3 /dev/sdd1

# grow and add a mirror instantly.
mdadm --grow /dev/md10 --raid-devices=2 -a /dev/sdd1

# fail and remove a device
mdadm --manage /dev/md10 --fail /dev/sdd1
mdadm --manage /dev/md10 -r /dev/sdd1

# turn a mirror with one disk into a striped array.
mdadm --grow /dev/md10 --level=raid5
mdadm --grow /dev/md10 --raid-devices=2 -a /dev/sdd1
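
One follow-up worth noting: for the array to re-assemble automatically at boot, most distributions read /etc/mdadm/mdadm.conf (or /etc/mdadm.conf, depending on the distro), and you can generate the required line from the running array:

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
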
Git Setup with SSH
Tue, 11 Nov 2014

This is just a simple rundown of how to set up git properly for SSH use. SSH-specific details, such as how to connect using SSH keys, are not within the scope of this post. I will update this as I go.

1. To ensure that you are creating repositories with the correct permissions, you need to set the git config variable core.sharedRepository on the server. You need --system so that it gets stored in /etc/gitconfig.

 git config --system core.sharedRepository group

2. Set up your git commit notifications.

After that, it's just a matter of chowning your repo to the correct group on the server. If you didn't originally create your repos with the proper group permissions, you're going to have to fix that:

chown -R :group /home/git/myrepo
find /home/git/ -type d | while read -r dir; do chmod g+s "$dir"; done
find /home/git/ -type f | while read -r file; do chmod g+rw "$file"; done
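
For new repositories you can skip that fix-up entirely: with the --shared flag (or core.sharedRepository set as above), git applies the group write and setgid bits itself when the repository is created. A sketch, with a hypothetical path:

git init --bare --shared=group /home/git/newrepo.git
chown -R :group /home/git/newrepo.git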