Adams Bros Blog (http://blog.adamsbros.org)

javascript async and await (Fri, 11 May 2018)
http://blog.adamsbros.org/2018/05/10/javascript-async-and-await/

I scratched my head for a while when switching from promises to async/await. Hopefully this post helps you not need to scratch yours! 😀

Async/await is the same as using promises under the hood. It’s just syntactical sugar to make asynchronous code look synchronous. When the javascript engine sees an “await”, it suspends that async function at that point until the promise is resolved or rejected, then resumes with the result.
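
To make the sugar concrete, here is a minimal sketch of the same logic written both ways (fetchUser() is a hypothetical stand-in for anything that returns a promise):

// Hypothetical stand-in; any function returning a promise works here.
const fetchUser = () => Promise.resolve({ name: 'sam' });

// With .then()/.catch()...
fetchUser()
    .then(user => console.log('user: ', user))
    .catch(e => console.log('error: ', e.message));

// ...and the same logic with async/await.
async function showUser() {
    try {
        const user = await fetchUser();
        console.log('user: ', user);
    } catch (e) {
        console.log('error: ', e.message);
    }
}
showUser();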

I find the javascript promise with ‘.then()’ and ‘.catch()’ much nicer when you have really simple things to do with the result and the error. But as soon as your result handling involves a bunch of conditionals, you tend to end up with a lot of nested promise call statements. To save yourself from that promise hell, you just label your function as async and use the await expression, so that you don’t have to be as concerned with promise management. Then, hopefully, whatever you do with the final result is simple enough that you can fall back to a plain .then()/.catch() on the async function. See the testPromise() calls below for examples.

/**
 * @param fail if true, return a rejected promise.  If false, resolve
 * a delayed response with a promise.
 * @returns {Promise<*>} Always a promise resolve or reject.
 */
function testPromise(fail) {
    if (fail) {
        return Promise.reject(new Error('failed'));
    }
    else {
        return new Promise(resolve => setTimeout(function () {
            resolve('success')
        }, 1000));
    }
}

/**
 * Test calls with promises, awaits, and async.
 *
 * @param fail if true, fail with error.  If false, succeed normally.
 * @returns {Promise<*>} always a promise, as async/await use promises, as
 * do Promise.reject and Promise.resolve.
 */
async function testAsync(fail) {
    let result = await testPromise(fail);
    return result;
}

testAsync(true)
    .then(v => console.log('fail: ', v))
    .catch(e => console.log('fail: ', e.message));
testPromise(true)
    .then(v => console.log('fail w/promise: ', v))
    .catch(e => console.log('fail w/promise: ', e.message));

testAsync(false)
    .then(v => console.log('succeed: ', v))
    .catch(e => console.log('succeed: ', e.message));
testPromise(false)
    .then(v => console.log('succeed w/promise: ', v))
    .catch(e => console.log('succeed w/promise: ', e.message));

/**
 * This shows that within an async function you can try/catch instead of
 * then/catch, as long as you use "await".
 *
 * @returns {Promise}
 */
async function testWithTry() {
    try {
        let result = await testPromise(true);
        console.log('fail w/try: ', result);
    } catch (e) {
        console.log('fail w/try: ', e.message)
    }
    try {
        let result = await testPromise(true);
        console.log('fail w/try w/promise: ', result);
    } catch (e) {
        console.log('fail w/try w/promise: ', e.message)
    }

    try {
        let result = await testPromise(false);
        console.log('succeed w/try: ', result);
    } catch (e) {
        console.log('succeed w/try: ', e.message)
    }
    try {
        let result = await testPromise(false);
        console.log('succeed w/try w/promise: ', result);
    } catch (e) {
        console.log('succeed w/try w/promise: ', e.message)
    }
}

testWithTry();
Docker Building – Caching (Mon, 26 Jun 2017)
http://blog.adamsbros.org/2017/06/25/docker-building-caching/

So you’re interested in creating your own docker image?  You keep finding it takes several minutes per run, which is a real pain.  No problem, cache everything!

  1. Wander over to Docker Squid and start using their squid in a docker image.
  2. Once the docker container is up and running, modify their /etc/squid3/squid.conf to change the cache_dir, allowing 10G of storage and caching of large objects, e.g. up to 200M (the default is only 512KB).  There is an existing line, so search with the regex “^cache_dir”.  Then change the entire line to something like “cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000”, which results in 10G of cache and 200M objects being cachable.
  3. Change your Dockerfile to use a RUN line with exports for http_proxy, https_proxy, and ftp_proxy, pointing at your host’s IP and port 3128 (see the Dockerfile sketch after these steps).  Don’t use ENV, as that would permanently set those for the container once it’s created and running. My line looks like “RUN export http_proxy=http://192.168.2.5:3128; export https_proxy=http://192.168.2.5:3128; export ftp_proxy=http://192.168.2.5:3128; \“
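
For context, here is a minimal sketch of what such a Dockerfile ends up looking like (the base image and the apt-get line are placeholders; the host IP and port are the ones from step 3):

FROM ubuntu:16.04

# The exports live only for the duration of this one RUN step; unlike
# ENV, they leave nothing behind in the finished image or container.
RUN export http_proxy=http://192.168.2.5:3128; \
    export https_proxy=http://192.168.2.5:3128; \
    export ftp_proxy=http://192.168.2.5:3128; \
    apt-get update && apt-get install -y build-essential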

Doing this took my docker builds from 6-7 minutes down to about 1.5 minutes.

Spring Building a RESTful Web Service – revisited (Thu, 28 Jan 2016)
http://blog.adamsbros.org/2016/01/28/spring-building-a-restful-web-service-revisited/

I frequently find that Spring documentation reads more like a novel than technical documentation. You can sometimes spend many minutes, or even hours, just wading through material to find out how to do something that should have taken 5-10 minutes. Spring’s REST framework is relatively straightforward to use, but there doesn’t seem to be a good quick start on its use.

So, we endeavour to have you up and running with their “Building a RESTful Web Service” tutorial in under 5 minutes, assuming you have a basic java development environment going that meets their requirements. Please quickly review the “What you’ll build” and “What you’ll need” sections at “Building a RESTful Web Service“, then come back here.

We have one additional requirement. It’s assumed you’re able to develop from a Linux command line. If you’re not using Linux as a development platform, you really should be.

git clone https://github.com/spring-guides/gs-rest-service.git

cd gs-rest-service/complete

mvn clean install

java -jar target/gs-rest-service-0.1.0.jar

# on a separate terminal
curl http://localhost:8080/greeting
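
If everything worked, the curl call should print JSON along the lines of {"id":1,"content":"Hello, World!"} (the exact payload comes from Greeting.java).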

The key components of this tutorial, each of which needs only a one-line explanation, are…

  • Application.java – starts the application with an embedded tomcat server
  • GreetingController.java – binds the REST url and handles the request (sketched below)
  • Greeting.java – serializable object for the framework to return JSON
  • pom.xml
    • spring-boot-starter-parent – a dependency management parent project
    • spring-boot-starter-web – includes things like the mvc framework and the rest framework
    • For a WAR working in an existing tomcat instance…
      • include the spring-boot-starter-tomcat dependency
      • add a pom “packaging” element with a value of “war”
      • make your Application class extend SpringBootServletInitializer
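
As a reference point, the controller is roughly the following (a sketch from the guide; exact details may vary between guide revisions):

import java.util.concurrent.atomic.AtomicLong;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

    private static final String template = "Hello, %s!";
    private final AtomicLong counter = new AtomicLong();

    // Binds GET /greeting and returns a Greeting, which the framework
    // serializes to JSON automatically.
    @RequestMapping("/greeting")
    public Greeting greeting(@RequestParam(value = "name", defaultValue = "World") String name) {
        return new Greeting(counter.incrementAndGet(), String.format(template, name));
    }
}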


Example for the spring tomcat starter dependency.

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
Linux Disk Usage Exclude Dot Directories (Sun, 13 Dec 2015)
http://blog.adamsbros.org/2015/12/13/linux-disk-usage-exclude-dot-directories/

I’m always searching for how to do this because I keep forgetting. So here it is, no more searching.

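# The .[!.]* glob matches dot-files and dot-directories while skipping
# the special . and .. entries; -mcs gives per-entry totals in MB.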
du -mcs /home/user/* /home/user/.[!.]* | sort -n
Making New Linux Disk Visible Without a Reboot (Mon, 16 Nov 2015)
http://blog.adamsbros.org/2015/11/16/making-new-linux-disk-visible-without-a-reboot/

I was having trouble today getting Linux to see the new partition space that I added in vSphere without rebooting the host. The new disk space was made visible by re-scanning the SCSI bus (below), and the new partition was then made visible with the partprobe command (below).


I asked VMWare to provision my disk to be larger and then asked Linux to refresh the kernel info:

 $ echo 1 > /sys/class/scsi_device/0\:0\:3\:0/device/rescan 
 $ dmesg
 sdd: Write Protect is off 
 sdd: Mode Sense: 61 00 00 00 
 sdd: cache data unavailable 
 sdd: assuming drive cache: write through 
 sdd: detected capacity change from 171798691840 to 343597383680
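
As an aside, if you have added an entirely new disk rather than grown an existing one, the rescan happens at the host adapter instead. A sketch (host0 is an assumption; the hostN number varies per system, check /sys/class/scsi_host/):

 # "- - -" means rescan all channels, targets, and LUNs on this adapter.
 $ echo "- - -" > /sys/class/scsi_host/host0/scan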


I added another partition and then tried to get LVM to use it:

$ fdisk /dev/sdd
Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 3

But LVM couldn’t see it:

 $ pvcreate /dev/sdd3
 Device /dev/sdd3 not found (or ignored by filtering).
 $ pvcreate -vvvv /dev/sdd3
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory
 #metadata/metadata.c:3546 
 #device/dev-cache.c:578 /dev/sdd3: stat failed: No such file or directory

The solution was to use partprobe to inform the OS of partition table changes:

 $ partprobe /dev/sdd
 $ pvcreate /dev/sdd3
 dev_is_mpath: failed to get device for 8:51
 Writing physical volume data to disk "/dev/sdd3"
 Physical volume "/dev/sdd3" successfully created
git text graph (Tue, 08 Sep 2015)
http://blog.adamsbros.org/2015/09/08/git-text-graph/

I’ve been looking for a really good command for making a textual graph of my repo, showing the various branches, where they come from, etc. The reason I like this one is that it’s not only a graph; it also shows branch and tag names like ‘gitk --all’ does, but only for commits that are tied to tags or branches.

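# --simplify-by-decoration keeps only commits referenced by a branch or
# tag, so the graph stays readable even on a large repository.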
git config --global alias.graph \
'log --graph --pretty=oneline --abbrev-commit --simplify-by-decoration --decorate --all'
git graph
Backups Made Simple (Mon, 31 Aug 2015)
http://blog.adamsbros.org/2015/08/31/backups-made-simple/

I’ve had complex backup solutions in the past, which I wrote myself with rsync and bash.  I recently got sick and tired of the issues that come up with them now and then, so I decided to keep it extremely simple and opted for a couple of rsync-only shell scripts. I get emails every time they run, as part of cron.

So, the cronjobs look like this…

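# Run hourly from 08:00 to 19:00: a full sync on Mondays (day 1), and
# hard-linked daily snapshots Tuesday through Friday (days 2-5).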
0 8-19 * * 1 /home/trenta/backup/monday-rsync.sh
0 8-19 * * 2-5 /home/trenta/backup/daily-rsync.sh

monday-rsync.sh…

#!/bin/bash
# /home/trenta/backup/monday-rsync.sh
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt /home/trenta/ /media/backup/trenta-home/

daily-rsync.sh…

#!/bin/bash
# /home/trenta/backup/daily-rsync.sh
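# --link-dest hard-links unchanged files against Monday's full copy, so
# each day-of-week snapshot only consumes space for files that changed.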
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt --link-dest=/media/backup/trenta-home/ /home/trenta/ /media/backup/trenta-home-$(/bin/date +'%w')/
SOAP vs REST (Thu, 06 Aug 2015)
http://blog.adamsbros.org/2015/08/05/soap-vs-rest/

I was recently asked the question about the conditions under which I would choose SOAP vs REST for writing a Web Service.  I was thoroughly intrigued by the question, because I was curious which way the discussion would go, as that would tell me a lot about the other developer.

My reply essentially focused on the idea that some developer, or group of developers, decided they would write a Web Service framework and do some programming for the sake of programming, instead of actually solving a problem.  I’ve always been curious as to why anyone would actually want to do that, but hey, some people like work more than I do, apparently.  I enjoy designing software masterpieces, not working.  In other words, I don’t like work; I like to create art.

You know, people love creating solutions before there’s a problem.  Well, more specifically in this case, the problem of needing a way to deliver Web Services was already solved with a protocol that was created in the early 1990s called HTTP.  SOAP was just a useless wrapper for generic requests/responses, when HTTP was essentially already a superior wrapper protocol for the same.  I then proceeded to talk about how when REST came out I kind of chuckled, because I was thinking “Gee, you mean the standard web protocol HTTP is really more than enough? Who would have thunk?”

Apparently he liked that reply, lol.

OpenLDAP SSHA Salted Hashes By Hand (Tue, 09 Jun 2015)
http://blog.adamsbros.org/2015/06/09/openldap-ssha-salted-hashes-by-hand/

I needed a way to verify that the OpenLDAP server had the correct hash recorded.  That is, an SSHA hash generator that I could run off the command line was in order.  After fiddling through it, I thought it would be worth documenting in a blog post.

We need to find the salt (the last four bytes of the decoded hash), take the SHA-1 hash of PASSWORD+SALT, append the salt to the digest, convert to base64, prepend {SSHA}, and then finally base64 encode that whole string again (LDIF base64 encodes the value a second time, which is what the double colon after userPassword indicates).

Here is what ldapsearch says is the password hash:

userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=

The first step is to retrieve the (binary) salt which is stored at the end of a double base64 encoded hash.  I wrote a perl SSHA Salt Extractor for that:

$ cat getSalt.pl
#!/usr/bin/perl -w

my $hash = $ARGV[0];

# The hash is encoded as base64 twice:
use MIME::Base64;
$hash = decode_base64($hash);
$hash =~ s/{SSHA}//;
$hash = decode_base64($hash);

# The salt length is four (the last four bytes).
my $salt = substr($hash, -4);

# Split the salt into an array.
my @bytes = split(//, $salt);

# Convert each byte from binary to a human readable hexadecimal number.
foreach my $byte (@bytes) {
    $byte = uc(unpack "H*", $byte);
    print "$byte";
}
$ ./getSalt.pl e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=
68C9E8F9

See, that is the four-byte salt rendered as human readable hexadecimal. Another way to do this is like this:

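# base64 -d twice undoes the double encoding, sed strips the {SSHA}
# prefix, tail -c4 grabs the 4-byte salt, and od renders it as hex.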
echo "e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=" | base64 -d | sed 's/^{SSHA}//' | base64 -d | tail -c4 | od -tx1

Now that we have extracted the salt, we can regenerate the salted hash with what we think the password should be.

$ cat openldap-sha.pl
#!/usr/bin/perl

use Digest::SHA1;
use MIME::Base64;

my $p=$ARGV[0];
my $s=$ARGV[1];

# Convert from hex to binary.
$s_bin=pack 'H*', $s;

# Hash the password with the salt.
$digest = Digest::SHA1->new;
$digest->add("$p");
$digest->add("$s_bin");

# Prepend {SSHA} to the Base64 of DIGEST+SALT.
$hashedPasswd = '{SSHA}' . encode_base64($digest->digest . "$s_bin" ,'');

# Encode as Base64 again:
print 'userPassword:: ' . encode_base64($hashedPasswd) . "\n";

$ ./openldap-sha.pl correcthorsebatterystaple 68C9E8F9
userPassword:: e1NTSEF9OWZ1MHZick15Tmt6MnE3UGh6eW94SDMrNmhWb3llajU=

And there you have it.

Git Recover Deleted or Staged Files (Mon, 09 Feb 2015)
http://blog.adamsbros.org/2015/02/09/git-recover-deleted-or-staged-files/

I ran into a situation where I accidentally staged a file I didn’t want to stage, and when I ran “git reset --hard” it was wiped out. After a simple google search (git recover staged files), recovering the file was simple.  I’ve put together a loop, which checks each dangling object and looks for the string “responsive”, which I know is in the file.

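# git fsck --lost-found lists dangling objects, cut isolates each hash,
# and the loop prints any hash whose content mentions "responsive".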
git fsck --lost-found | cut -f 3 -d ' ' | \
  while read -r commit; do 
    git show $commit | grep responsive; 
    if [ $? -eq 0 ]; then 
      echo $commit; 
    fi; 
  done

The response was…

rm -rf web/src/main/webapp/responsive/.sass-cache/
53fcd5f117f8e19243dfcde24c14524dc99b4c1e

I then just showed the dangling blob with…

git show 53fcd5f117f8e19243dfcde24c14524dc99b4c1e
#!/bin/bash

cp -r /media/mount/* web/src/main/webapp/
rm -rf web/src/main/webapp/lost+found/
rm -rf web/src/main/webapp/responsive/.sass-cache/
find web/ -name '._*' | while read -r filename; do 
 rm -f "$filename"
done
rm -rf web/src/main/webapp/META-INF/
git status