Trenton – Adams Bros Blog – http://blog.adamsbros.org

javascript async and await – Fri, 11 May 2018
http://blog.adamsbros.org/2018/05/10/javascript-async-and-await/

I scratched my head for a while when switching from promises to async/await. Hopefully this post saves you from scratching yours! 😀

Async/await is the same as using promises; it's just syntactic sugar that makes the code look synchronous. The JavaScript engine sees the "await" and suspends that function for you automatically until the promise is resolved or rejected.

I find plain promises with '.then()' and '.catch()' are much nicer when you only have simple things to do with the result and the error. But as soon as you start adding a bunch of conditionals based on your results, you tend to end up with a lot of nested promise calls. So, to save yourself from promise hell, just mark your function as async and use the await expression, so that you don't have to be too concerned with promise management. Then, if what you do with the final result is simple, you can go back to plain '.then()' and '.catch()' on the async function's return value. See the testPromise() and testAsync() calls below for examples.

/**
 * @param fail if true, throw an error with a promise.  If false, resolve
 * a delayed response with a promise.
 * @returns {Promise<*>} Always a promise resolve or reject.
 */
function testPromise(fail) {
    if (fail) {
        return Promise.reject(new Error('failed'));
    }
    else {
        return new Promise(resolve => setTimeout(function () {
            resolve('success')
        }, 1000));
    }
}

/**
 * Test calls with promises, awaits, and async.
 *
 * @param fail if true, fail with error.  If false, succeed normally.
 * @returns {Promise<*>} always a promise, as async/await use promises as
 * does Promise.reject and Promise.resolve.
 */
async function testAsync(fail) {
    let result = await testPromise(fail);
    return result;
}

testAsync(true)
    .then(v => console.log('fail: ', v))
    .catch(e => console.log('fail: ', e.message));
testPromise(true)
    .then(v => console.log('fail w/promise: ', v))
    .catch(e => console.log('fail w/promise: ', e.message));

testAsync(false)
    .then(v => console.log('succeed: ', v))
    .catch(e => console.log('succeed: ', e.message));
testPromise(false)
    .then(v => console.log('succeed w/promise: ', v))
    .catch(e => console.log('succeed w/promise: ', e.message));

/**
 * This shows that within an async function you can try/catch instead of
 * then/catch, as long as you use "await".
 *
 * @returns {Promise}
 */
async function testWithTry() {
    try {
        let result = await testPromise(true);
        console.log('fail w/try: ', result);
    } catch (e) {
        console.log('fail w/try: ', e.message)
    }
    try {
        let result = await testPromise(true);
        console.log('fail w/try w/promise: ', result);
    } catch (e) {
        console.log('fail w/try w/promise: ', e.message)
    }

    try {
        let result = await testPromise(false);
        console.log('succeed w/try: ', result);
    } catch (e) {
        console.log('succeed w/try: ', e.message)
    }
    try {
        let result = await testPromise(false);
        console.log('succeed w/try w/promise: ', result);
    } catch (e) {
        console.log('succeed w/try w/promise: ', e.message)
    }
}

testWithTry();

Docker Building – Caching – Mon, 26 Jun 2017
http://blog.adamsbros.org/2017/06/25/docker-building-caching/

So you're interested in creating your own docker image? You keep finding it takes several minutes per run, which is a real pain. No problem, cache everything!

  1. Wander over to Docker Squid and start using their squid in a docker image.
  2. Once the docker container is up and running, modify its /etc/squid3/squid.conf to change the cache_dir so that it allows 10G of storage and caches large objects, e.g. 200M (default is only 512kb). There is an existing line, so search with a regex for "^cache_dir", then change the entire line to something like "cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000", which results in 10G of cache and 200M objects being cacheable (see the sketch after this list).
  3. Change your Dockerfile to use a RUN line that exports http_proxy, https_proxy, and ftp_proxy, pointing at your host's IP and port 3128. Don't use ENV, as that would permanently bake those variables into the image and every container created from it. My line looks like "RUN export http_proxy=http://192.168.2.5:3128; export https_proxy=http://192.168.2.5:3128; export ftp_proxy=http://192.168.2.5:3128; \"
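
For step 2, here's a minimal sketch of that squid.conf change; the container name "squid" is an assumption (the config path is the one mentioned above), so adjust both to match the image you actually use:

# Rewrite the existing cache_dir line for 10G of cache and 200M max object size,
# then restart the container so squid picks up the new settings.
docker exec squid sed -i 's|^cache_dir.*|cache_dir ufs /var/spool/squid3 10000 16 256 max-size=200000000|' /etc/squid3/squid.conf
docker restart squid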

Doing this took my docker builds from 6-7 minutes down to about 1.5 minutes.

Spring Building a RESTful Web Service – revisited – Thu, 28 Jan 2016
http://blog.adamsbros.org/2016/01/28/spring-building-a-restful-web-service-revisited/

I frequently find that Spring documentation reads more like a novel than technical documentation. You can sometimes spend many minutes, or even hours, wading through material just to find out how to do something that should have taken 5-10 minutes. Spring's REST framework is relatively straightforward to use, but there doesn't seem to be a good quick start on its use.

So, we endeavour to have you up and running with their "Building a RESTful Web Service" tutorial in under 5 minutes, assuming you have a basic java development environment that meets their requirements. Please quickly review the "What you'll build" and "What you'll need" sections of "Building a RESTful Web Service", then come back here.

We have one additional requirement. It’s assumed you’re able to develop from a Linux command line. If you’re not using Linux as a development platform, you really should be.

git clone https://github.com/spring-guides/gs-rest-service.git

cd gs-rest-service/complete

mvn clean install

java -jar target/gs-rest-service-0.1.0.jar

# on a separate terminal
curl http://localhost:8080/greeting

The key components of this tutorial, each needing only a brief explanation, are…

  • Application.java – starts the application with an embedded tomcat server
  • GreetingController.java – binds the REST url and handles the request
  • Greeting.java – serializable object for the framework to return JSON
  • pom.xml
    • spring-boot-starter-parent – a dependency management parent project
    • spring-boot-starter-web – includes things like the mvc framework and the rest framework
    • For a WAR working in an existing tomcat instance…
      • include spring-boot-starter-tomcat dependency
      • add a pom “packaging” element with a value of “war”
      • make your Application class extend SpringBootServletInitializer

 

Example for the spring tomcat starter dependency.

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
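
If you go the WAR route described above, the deploy step might look something like this sketch; the CATALINA_HOME location and the WAR file name are assumptions, so substitute your own tomcat install and artifact name.

# Build the WAR (requires the "war" packaging element and the provided-scope
# spring-boot-starter-tomcat dependency shown above).
mvn clean package

# Drop the WAR into the existing tomcat instance (path and name are hypothetical).
cp target/gs-rest-service-0.1.0.war "$CATALINA_HOME/webapps/"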

Linux Disk Usage Exclude Dot Directories – Sun, 13 Dec 2015
http://blog.adamsbros.org/2015/12/13/linux-disk-usage-exclude-dot-directories/

I'm always searching for how to do this because I keep forgetting. So here it is, no more searching.

du -mcs /home/user/* /home/user/.[!.]* | sort -n
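
If you prefer human-readable sizes, the same idea works with du's -h flag and GNU sort's -h option (a small variant of the command above; requires GNU coreutils):

du -hcs /home/user/* /home/user/.[!.]* | sort -h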

git text graph – Tue, 08 Sep 2015
http://blog.adamsbros.org/2015/09/08/git-text-graph/

I've been looking for a really good command for making a textual graph of my repo, showing the various branches, where they come from, etc. The reason I like this one is that it's not only a graph; it also shows ref names like 'gitk --all' does, but only for commits that are pointed to by a tag or branch.

git config --global alias.graph \
'log --graph --pretty=oneline --abbrev-commit --simplify-by-decoration --decorate --all'
git graph

Backups Made Simple – Mon, 31 Aug 2015
http://blog.adamsbros.org/2015/08/31/backups-made-simple/

I've had complex backup solutions in the past, which I wrote myself with rsync and bash. I recently got sick and tired of the issues that come up with them now and then, so I decided to keep it extremely simple and opted for a couple of rsync-only shell scripts. I get emails every time they run, as part of cron.

So, the cronjobs look like this…

0 8-19 * * 1 /home/trenta/backup/monday-rsync.sh
0 8-19 * * 2-5 /home/trenta/backup/daily-rsync.sh

monday-rsync.sh…

#!/bin/bash
# /home/trenta/backup/monday-rsync.sh
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt /home/trenta/ /media/backup/trenta-home/

daily-rsync.sh…

#!/bin/bash
# /home/trenta/backup/daily-rsync.sh
/usr/bin/rsync -avH --exclude-from=/home/trenta/backup/daily-excludes.txt --link-dest=/media/backup/trenta-home/ /home/trenta/ /media/backup/trenta-home-$(/bin/date +'%w')/
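
Because of --link-dest, unchanged files in the weekday snapshots are hardlinks back into the Monday copy, so they take almost no extra space. A quick sanity check, assuming the backup paths above, is to let du count each hardlinked file only once across all the directories:

# The total reported here is the real on-disk cost of the Monday copy
# plus all weekday snapshots combined (du counts hardlinked inodes once).
du -shc /media/backup/trenta-home/ /media/backup/trenta-home-*/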

SOAP vs REST – Thu, 06 Aug 2015
http://blog.adamsbros.org/2015/08/05/soap-vs-rest/

I was recently asked under what conditions I would choose SOAP vs REST for writing a web service. I was thoroughly intrigued by the question, because I was curious which way the discussion would go, as that would tell me a lot about the other developer.

My reply essentially focused on the idea that some developer, or group of developers, decided to write a web service framework and do some programming for the sake of programming, instead of actually solving a problem. I've always been curious as to why anyone would actually want to do that, but hey, some people apparently like work more than I do. I enjoy designing software masterpieces, not working. In other words, I don't like work; I like to create art.

You know, people love creating solutions before there's a problem. More specifically in this case, the problem of needing a way to deliver web services was already solved by a protocol created in the early 1990s called HTTP. SOAP was just a useless wrapper for generic requests and responses, when HTTP was already a superior wrapper protocol for the same thing. I then talked about how, when REST came out, I kind of chuckled, thinking, "Gee, you mean the standard web protocol HTTP is really more than enough? Who would have thunk?"

Apparently he liked that reply, lol.

Git Recover Deleted or Staged Files – Mon, 09 Feb 2015
http://blog.adamsbros.org/2015/02/09/git-recover-deleted-or-staged-files/

I ran into a situation where I accidentally staged a file I didn't want to stage, and when I ran "git reset --hard" it was wiped out. After a simple google search (git recover staged files), recovering the file was simple. I've put together a loop that checks each object reported by git fsck for the string "responsive", which I know is in the file.

git fsck --lost-found | cut -f 3 -d ' ' | \
  while read -r commit; do 
    git show $commit | grep responsive; 
    if [ $? -eq 0 ]; then 
      echo $commit; 
    fi; 
  done

The response was…

rm -rf web/src/main/webapp/responsive/.sass-cache/
53fcd5f117f8e19243dfcde24c14524dc99b4c1e

I then just showed the blob, which turned out to be my lost script, with…

git show 53fcd5f117f8e19243dfcde24c14524dc99b4c1e
#!/bin/bash

cp -r /media/mount/* web/src/main/webapp/
rm -rf web/src/main/webapp/lost+found/
rm -rf web/src/main/webapp/responsive/.sass-cache/
find web/ -name '._*' | while read -r filename; do 
 rm -f "$filename"
done
rm -rf web/src/main/webapp/META-INF/
git status
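
To actually restore the file, redirect that same git show output into a file; the name recovered-script.sh below is just a placeholder, since the original path is whatever you lost:

# Write the recovered blob back out to disk (placeholder filename).
git show 53fcd5f117f8e19243dfcde24c14524dc99b4c1e > recovered-script.sh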

Tora Install on Linux Mint 17 – Mon, 05 Jan 2015
http://blog.adamsbros.org/2015/01/05/tora-install-on-linux-mint-17/

We need to start by installing the oracle client appropriate for your architecture. Grab the oracle instant client rpms from oracle's site. Then run…

sudo alien --to-deb oracle-instantclient11.2-*.rpm
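
alien only converts the RPMs to .deb packages; you still need to install the resulting packages, which should look something like this (package names will vary with the client version you downloaded):

sudo dpkg -i oracle-instantclient11.2-*.deb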

Patch

Solution found from a tora bug report.

diff -aur tora-2.1.3/src/toextract.h tora-2.1.3patched/src/toextract.h
--- tora-2.1.3/src/toextract.h	2010-02-02 10:25:43.000000000 -0800
+++ tora-2.1.3patched/src/toextract.h	2012-06-22 21:58:45.026286147 -0700
@@ -53,6 +53,7 @@
 #include 
 //Added by qt3to4:
 #include 
+#include 
 
 class QWidget;
 class toConnection;

Build

svn export https://svn.code.sf.net/p/tora/code/tags/tora-2.1.3
cd tora-2.1.3
sudo apt install cmake libboost-system1.55-dev libqscintilla2-dev g++
cmake -DORACLE_PATH_INCLUDES=/usr/include/oracle/11.2/client64 \
  -DCMAKE_BUILD_TYPE=Release -DENABLE_PGSQL=false -DENABLE_DB2=false .

patch -p1 # paste the above patch in

make
sudo tar --strip-components 2 -C /usr/local/bin/ -xzvf tora-2.1.3-Linux.tar.gz

Enumerate All Block Devices – Sun, 07 Dec 2014
http://blog.adamsbros.org/2014/12/07/enumerate-all-block-devices/

I found a great little utility that can enumerate all of the block devices, including your LVM volumes, crypt devices, etc.

It’s found in the util-linux-ng package, and is called lsblk.  Here’s an example run…

[04:04:06 root@tdanas ~]
$ lsblk
NAME                    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                       8:0    0 111.8G  0 disk  
├─sda1                    8:1    0  1000M  0 part  /boot
└─sda2                    8:2    0 110.8G  0 part  
  ├─nas-root (dm-0)     253:0    0  19.5G  0 lvm   /
  ├─nas-swap (dm-1)     253:1    0  1000M  0 lvm   [SWAP]
  ├─nas-log (dm-3)      253:3    0   9.8G  0 lvm   /var/log
  └─nas-tmp (dm-4)      253:4    0   9.8G  0 lvm   /tmp
sdb                       8:16   0   2.7T  0 disk  
├─sdb1                    8:17   0 931.3G  0 part  
│ └─md10                  9:10   0   500G  0 raid1 
│   └─pv0 (dm-5)        253:5    0   500G  0 crypt 
│     └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data
└─sdb2                    8:18   0   1.8T  0 part  
sdc                       8:32   0 931.5G  0 disk  
└─sdc1                    8:33   0 931.5G  0 part  
  └─md10                  9:10   0   500G  0 raid1 
    └─pv0 (dm-5)        253:5    0   500G  0 crypt 
      └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data
sdd                       8:48   0   2.7T  0 disk  
└─sdd1                    8:49   0   2.7T  0 part  
  └─3TBak_crypt (dm-7)  253:7    0   2.7T  0 crypt 
    └─3TB-safe (dm-8)   253:8    0   1.5T  0 lvm   /media/cvsafe
sde                       8:64   0 931.5G  0 disk  
└─sde1                    8:65   0   500G  0 part  
  └─md10                  9:10   0   500G  0 raid1 
    └─pv0 (dm-5)        253:5    0   500G  0 crypt 
      └─dat-home (dm-6) 253:6    0   500G  0 lvm   /data

You can also specify the ‘-f’ flag, if you want UUIDs of the block devices for recovery information.
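
For example, either of these should work with any reasonably recent util-linux (the column list is just one possible selection):

# Filesystem type, label, and UUID per device.
lsblk -f

# Or choose the columns explicitly.
lsblk -o NAME,SIZE,FSTYPE,UUID,MOUNTPOINT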
