Adams Bros Blog (Wed, 15 May 2019 01:45:31 +0000)

javascript async and await Fri, 11 May 2018 02:28:08 +0000 I scratched my head for a while when switching from promises to async/await. Hopefully this post helps you avoid scratching yours! 😀

Async/await is just promises underneath. It's syntactic sugar that makes asynchronous code look synchronous: the javascript engine sees the "await" and suspends the function there for you automatically until the promise is resolved or rejected.

I find the javascript promise with '.then()' and '.catch()' much nicer if you have really simple things to do with the result and the error. But as soon as you start getting into a bunch of conditionals on your results, you tend to end up with a lot of nested promise call statements. So, to save yourself from promise hell, you label your function as async and use the await expression, so that you don't have to be too concerned with promise management. Then, if what you do with the final result is simple, you can go back to plain promises at the call site. See the testPromise() and testAsync() calls below for examples.

/**
 * @param fail if true, reject with an error.  If false, resolve
 * a delayed response with a promise.
 * @returns {Promise<*>} Always a promise resolve or reject.
 */
function testPromise(fail) {
    if (fail) {
        return Promise.reject(new Error('failed'));
    } else {
        // The original resolve value was lost in the feed; resolving with a
        // placeholder string after the delay.
        return new Promise(resolve => setTimeout(function () {
            resolve('success');
        }, 1000));
    }
}

/**
 * Test calls with promises, awaits, and async.
 * @param fail if true, fail with error.  If false, succeed normally.
 * @returns {Promise<*>} always a promise, as async/await use promises as
 * does Promise.reject and Promise.resolve.
 */
async function testAsync(fail) {
    let result = await testPromise(fail);
    return result;
}

testAsync(true)
    .then(v => console.log('fail: ', v))
    .catch(e => console.log('fail: ', e.message));
testPromise(true)
    .then(v => console.log('fail w/promise: ', v))
    .catch(e => console.log('fail w/promise: ', e.message));

testAsync(false)
    .then(v => console.log('succeed: ', v))
    .catch(e => console.log('succeed: ', e.message));
testPromise(false)
    .then(v => console.log('succeed w/promise: ', v))
    .catch(e => console.log('succeed w/promise: ', e.message));

/**
 * This shows that within an async function you can try/catch instead of
 * then/catch, as long as you use "await".
 * @returns {Promise}
 */
async function testWithTry() {
    try {
        let result = await testPromise(true);
        console.log('fail w/try: ', result);
    } catch (e) {
        console.log('fail w/try: ', e.message);
    }
    try {
        let result = await testPromise(true);
        console.log('fail w/try w/promise: ', result);
    } catch (e) {
        console.log('fail w/try w/promise: ', e.message);
    }

    try {
        let result = await testPromise(false);
        console.log('succeed w/try: ', result);
    } catch (e) {
        console.log('succeed w/try: ', e.message);
    }
    try {
        let result = await testPromise(false);
        console.log('succeed w/try w/promise: ', result);
    } catch (e) {
        console.log('succeed w/try w/promise: ', e.message);
    }
}

Docker Building – Caching Mon, 26 Jun 2017 04:20:12 +0000 So you’re interested in creating your own docker image?  You keep finding it takes several minutes per run, which is a real pain.  No problem, cache everything!

  1. Wander over to Docker Squid and start using their squid in a docker image.
  2. Once the docker container is up and running, modify its /etc/squid3/squid.conf: change the cache_dir line to allow 10G of storage and to cache large objects, say up to 200M (the default is only 512kb).  There is an existing line, so search with the regex "^cache_dir", then change the entire line to something like "cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000", which allows 10G of cache and makes objects up to 200M cacheable.
  3. Change your Dockerfile to use a RUN line with exports for http_proxy, https_proxy, and ftp_proxy pointing at your host's IP and port 3128.  Don't use ENV, as that would permanently set those for the container once it's created and running. My line looks like "RUN export http_proxy=; export https_proxy=; export ftp_proxy=; \
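Step 2 above can be scripted with sed; here's a sketch against a throwaway copy of the config (the sample config lines below are made up — inside the container you'd run the same sed against /etc/squid3/squid.conf, e.g. via docker exec, and then restart squid):

```shell
# Create a stand-in squid.conf with a default-style cache_dir line.
printf 'cache_mem 256 MB\ncache_dir ufs /var/spool/squid3 100 16 256\n' > /tmp/squid.conf

# Replace the whole cache_dir line: 10G cache, objects up to ~200M cacheable.
sed -i 's|^cache_dir.*|cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000|' /tmp/squid.conf

cat /tmp/squid.conf
```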

Doing this took my docker builds from 6-7 minutes down to about 1.5 minutes.

Spring Building a RESTful Web Service – revisited Thu, 28 Jan 2016 19:33:39 +0000 I frequently find that Spring documentation reads more like a novel than technical documentation. You can sometimes spend many minutes, or even hours, just wading through material to find out how to do something that should take 5-10 minutes. Spring's REST framework is relatively straightforward to use, but there doesn't seem to be a good quick start on its use.

So, we endeavour to have you up and running with their "Building a RESTful Web Service" tutorial in under 5 minutes, assuming you have a basic java development environment that meets their requirements. Please quickly review the first two sections, "What you'll build" and "What you'll need", at "Building a RESTful Web Service", then come back here.

We have one additional requirement. It’s assumed you’re able to develop from a Linux command line. If you’re not using Linux as a development platform, you really should be.

git clone

cd gs-rest-service/complete

mvn clean install

java -jar target/gs-rest-service-0.1.0.jar

# on a separate terminal
curl http://localhost:8080/greeting

The key components of this tutorial, each needing only a brief explanation, are…

  • – starts the application with an embedded tomcat server
  • – binds the REST url and handles the request
  • – serializable object for the framework to return JSON
  • pom.xml
    • spring-boot-starter-parent – a dependency management parent project
    • spring-boot-starter-web – includes things like the mvc framework and the rest framework
    • For a WAR working in an existing tomcat instance…
      • include spring-boot-starter-tomcat dependency
      • add a pom “packaging” element with a value of “war”
      • make your Application class extend SpringBootServletInitializer


Example for the spring tomcat starter dependency.
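(The feed stripped the XML here; per the Spring Boot documentation, the tomcat starter dependency is declared like this, with scope provided so the embedded container stays out of the WAR:)

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-tomcat</artifactId>
    <scope>provided</scope>
</dependency>
```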

Linux Disk Usage Exclude Dot Directories Sun, 13 Dec 2015 21:26:36 +0000 I’m always searching for how to do this because I keep forgetting. So here it is, no more searching.

du -mcs /home/user/* /home/user/.[!.]* | sort -n
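The second glob is the important part: `.[!.]*` matches dot entries while skipping `.` and `..`, so hidden files and directories get counted without du recursing into the parent. A quick sanity check on a scratch directory (names made up):

```shell
# Create a mix of hidden and visible entries.
rm -rf /tmp/du-demo
mkdir -p /tmp/du-demo/.hidden-dir /tmp/du-demo/visible
touch /tmp/du-demo/.dotfile
cd /tmp/du-demo

# The glob picks up .dotfile and .hidden-dir, but not "." or "..".
ls -d .[!.]* *
```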
Making New Linux Disk Visible Without a Reboot Mon, 16 Nov 2015 19:28:56 +0000 I was having trouble today getting Linux to see my new partition space that I added in vSphere without rebooting the host. The new disk space was made visible by re-scanning the SCSI bus (below) and then the new partition was made visible by using the partprobe command (below).


I asked VMWare to provision my disk to be larger and then asked Linux to refresh the kernel info:

 $ echo 1 > /sys/class/scsi_device/0\:0\:3\:0/device/rescan 
 $ dmesg
 sdd: Write Protect is off 
 sdd: Mode Sense: 61 00 00 00 
 sdd: cache data unavailable 
 sdd: assuming drive cache: write through 
 sdd: detected capacity change from 171798691840 to 343597383680
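Those dmesg byte counts check out — the capacity doubled from 160 GiB to 320 GiB:

```shell
# Convert the dmesg capacities from bytes to GiB (2^30 bytes).
echo "$((171798691840 / 1024 / 1024 / 1024)) GiB -> $((343597383680 / 1024 / 1024 / 1024)) GiB"
# prints: 160 GiB -> 320 GiB
```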


I added another partition and then tried to get LVM to use it:

$ fdisk /dev/sdd
 Command (m for help): n
 Command action
 e extended
 p primary partition (1-4)
 Partition number (1-4): 3

But LVM couldn’t see it:

 $ pvcreate /dev/sdd3
 Device /dev/sdd3 not found (or ignored by filtering).
 $ pvcreate -vvvv /dev/sdd3
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory

The solution was to use partprobe to inform the OS of partition table changes:

 $ partprobe /dev/sdd
 $ pvcreate /dev/sdd3
 dev_is_mpath: failed to get device for 8:51
 Writing physical volume data to disk "/dev/sdd3"
 Physical volume "/dev/sdd3" successfully created
git text graph Tue, 08 Sep 2015 15:46:39 +0000 I've been looking for a really good command for making a textual graph of my repo, showing the various branches, where they come from, and so on. The reason I like this one is that it's not only a graph: it also shows branch names like 'gitk --all' does, but only for commits tied to tags or branches.

git config --global alias.graph \
'log --graph --pretty=oneline --abbrev-commit --simplify-by-decoration --decorate --all'
git graph
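A quick demo on a throwaway repo (repo path, identity, and commit messages below are made up):

```shell
# Build a tiny repo with a branch, then run the log command behind the alias.
rm -rf /tmp/graph-demo
git init -q /tmp/graph-demo
cd /tmp/graph-demo
git config user.email demo@example.com
git config user.name demo
echo one > file.txt && git add file.txt && git commit -qm 'first'
git checkout -qb feature
echo two >> file.txt && git commit -qam 'second'

# --simplify-by-decoration keeps only commits pointed at by branches/tags.
git log --graph --pretty=oneline --abbrev-commit --simplify-by-decoration --decorate --all
```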
Backups Made Simple Mon, 31 Aug 2015 21:57:03 +0000 I've had complex backup solutions in the past, which I wrote myself with rsync and bash.  I recently got sick and tired of the issues that come up with them now and then, so I decided to keep it extremely simple and opt for a couple of rsync-only shell scripts. I get emails every time they run, as part of cron.

So, the cronjobs look like this…

0 8-19 * * 1 /home/trenta/backup/
0 8-19 * * 2-5 /home/trenta/backup/…

# /home/trenta/backup/
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt /home/trenta/ /media/backup/trenta-home/…

# /home/trenta/backup/
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt --link-dest=/media/backup/trenta-home/ /home/trenta/ /media/backup/trenta-home-$(/bin/date +'%w')/
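The --link-dest option is what keeps the dated copies cheap: files unchanged since the full backup are hard links into it, not fresh copies. A sketch on scratch directories (all paths made up):

```shell
# Make a source tree and take a "full" backup of it.
rm -rf /tmp/bk
mkdir -p /tmp/bk/src
echo data > /tmp/bk/src/file.txt
rsync -aH /tmp/bk/src/ /tmp/bk/full/

# Take a dated copy, hard-linking unchanged files against the full backup.
rsync -aH --link-dest=/tmp/bk/full /tmp/bk/src/ /tmp/bk/day1/

# Both backups share one inode, so the link count is 2.
stat -c '%h' /tmp/bk/day1/file.txt
```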
SOAP vs REST Thu, 06 Aug 2015 00:12:29 +0000 I was recently asked the question about the conditions under which I would choose SOAP vs REST for writing a Web Service.  I was thoroughly intrigued by the question, because I was curious in which way the discussion would go, as that would tell me a lot about the other developer.

My reply essentially focused on the idea that some developer, or group of developers, decided to write a Web Service framework and do some programming for the sake of programming, instead of actually solving a problem.  I've always been curious as to why anyone would actually want to do that, but hey, some people like work more than I do apparently.  I enjoy designing software masterpieces, not working.  In other words, I don't like work, I like to create art.

You know, people love creating solutions before there’s a problem.  Well, more specifically in this case, the problem of needing a way to deliver Web Services was already solved with a protocol that was created in the early 1990s called HTTP.  SOAP was just a useless wrapper for generic requests/responses, when HTTP was essentially already a superior wrapper protocol for the same.  I then proceeded to talk about how when REST came out I kind of chuckled, because I was thinking “Gee, you mean the standard web protocol HTTP is really more than enough? Who would have thunk?”

Apparently he liked that reply, lol.

OpenLDAP SSHA Salted Hashes By Hand Tue, 09 Jun 2015 20:07:40 +0000 I needed a way to verify that the OpenLDAP server had the correct hash recorded.  That is, a SSHA Hash Generator that I could run off the command line was in order.  After fiddling through it, I thought it would be worth documenting in a blog post.

We need to find the salt (the last four bytes of the decoded hash), concatenate PASSWORD+SALT, take the SHA-1 hash, append the salt, convert to base64, prepend {SSHA}, and then finally base64 encode that whole string again (which is why ldapsearch prints the value with a double colon).

Here is what ldapsearch says is the password hash:

userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=

The first step is to retrieve the (binary) salt which is stored at the end of a double base64 encoded hash.  I wrote a perl SSHA Salt Extractor for that:

$ cat
#!/usr/bin/perl -w

my $hash=$ARGV[0];
# The hash is encoded as base64 twice:
use MIME::Base64;
$hash = decode_base64($hash);
$hash = decode_base64($hash);

# The salt length is four (the last four bytes).
my $salt = substr($hash, -4);

# Split the salt into an array.
my @bytes = split(//,$salt);

# Convert each byte from binary to a human readable hexadecimal number.
foreach my $byte (@bytes) {
    $byte = uc(unpack "H*", $byte);
    print "$byte";
}
print "\n";

$ ./ e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=
68C9E8F9

See, that is the four-byte salt represented as human readable hexadecimal. Another way to do this is:

echo "e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=" | base64 -d | sed 's/^{SSHA}//' | base64 -d | tail -c4 | od -tx1

Now that we have extracted the salt, we can regenerate the salted hash with what we think the password should be.

$ cat
#!/usr/bin/perl -w

use Digest::SHA1;
use MIME::Base64;

my $p=$ARGV[0];
my $s=$ARGV[1];

# Convert the salt from hex to binary.
my $s_bin=pack 'H*', $s;

# Hash the password concatenated with the salt.
my $digest = Digest::SHA1->new;
$digest->add($p . $s_bin);

# Encode HASH+SALT as Base64 and prepend the scheme.
my $hashedPasswd = '{SSHA}' . encode_base64($digest->digest . $s_bin, '');

# Encode as Base64 again:
print 'userPassword:: ' . encode_base64($hashedPasswd) . "\n";

$ ./ correcthorsebatterystaple 68C9E8F9
userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=
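If perl isn't handy, the same double encoding can be sketched with openssl and coreutils; the password and the four salt bytes (68 C9 E8 F9, written as octal escapes below) are the values recovered above:

```shell
# Rebuild: base64( "{SSHA}" + base64( SHA1(pass+salt) + salt ) )
PASS=correcthorsebatterystaple
SALT=$(printf '\150\311\350\371')   # salt bytes 68 C9 E8 F9

# Inner value: SHA-1 of password+salt, with the salt appended, base64 encoded.
INNER=$({ printf '%s%s' "$PASS" "$SALT" | openssl dgst -sha1 -binary; printf '%s' "$SALT"; } | base64)

# Outer value: prepend the scheme and base64 encode once more.
printf 'userPassword:: %s\n' "$(printf '{SSHA}%s' "$INNER" | base64)"
```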

And there you have it.

Git Recover Deleted or Staged Files Mon, 09 Feb 2015 21:23:36 +0000 I ran into a situation where I accidentally staged a file I didn't want to stage, and when I ran "git reset --hard" it was wiped out. After a simple google search (git recover staged files), recovering the file was simple.  I put together a loop which checks each dangling object that git fsck reports, looking for a string, "responsive", which I know is in the file.

git fsck --lost-found | cut -f 3 -d ' ' | \
  while read -r commit; do 
    git show $commit | grep responsive; 
    if [ $? -eq 0 ]; then 
      echo $commit; 
    fi
  done
The response was…

rm -rf web/src/main/webapp/responsive/.sass-cache/

I then just showed the commit blob with…

git show 53fcd5f117f8e19243dfcde24c14524dc99b4c1e

I then restored the rest from a backup mount and cleaned up…

cp -r /media/mount/* web/src/main/webapp/
rm -rf web/src/main/webapp/lost+found/
rm -rf web/src/main/webapp/responsive/.sass-cache/
find web/ -name '._*' | while read -r filename; do 
  rm -f "$filename"
done
rm -rf web/src/main/webapp/META-INF/
git status
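The whole scenario is easy to reproduce on a scratch repo (path, identity, and file names below are made up):

```shell
# Stage a file, wipe it with reset --hard, then recover the dangling blob.
rm -rf /tmp/recover-demo
git init -q /tmp/recover-demo
cd /tmp/recover-demo
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m 'init'

echo 'responsive content' > lost.txt
git add lost.txt          # staged, never committed
git reset -q --hard       # lost.txt is gone from disk and index

# The blob still exists as a dangling object; print its contents.
blob=$(git fsck --lost-found | grep 'dangling blob' | cut -f 3 -d ' ')
git cat-file -p "$blob"   # prints the recovered contents
```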