Category: Application Development

  • AppDev: Zookeeper Port Forwarding to all servers from local machine

    To simplify testing with Zookeeper on a remote Kafka cluster, one must connect to the client application ports on the backend. When the remote Kafka cluster has multiple nodes and sits behind a firewall and an SSH jump server, the complexity is fairly high. Note, the SSH jump server is the permitted man in the middle. The client must expose application access to Zookeeper on Kafka – listening locally. Current techniques forward a single port hosted on the developer's machine – for instance, 2181 listening on the local machine – to a single remote server. This approach is not reliable: servers are taken out of service, added back, fail, or reroute to the master (another separate server).

    Port      Description
    88/tcp    Kerberos
    2181/tcp  zookeeper.property.clientPort

    A typical connection looks like: 

    ssh -J jump-server kafka-1 -L 2181:kafka-1:2181 "while true; do echo 'waiting'; sleep 180; done"

      I worked around this by developing a small proxy. Setup the hosts file:

    1 – Edit /etc/hosts
    2 – Add the entries to the hosts file

    127.0.0.1 kafka-1
    127.0.0.2 kafka-2
    127.0.0.3 kafka-3
    127.0.0.4 kafka-4
    127.0.0.5 kafka-5

    3 – Save the hosts file
    4 – Setup the available loopback interfaces (one for each unique service); 127.0.0.1 is already up and in use, so you only need to add the extras

    sudo ifconfig lo0 alias 127.0.0.2 up
    sudo ifconfig lo0 alias 127.0.0.3 up
    sudo ifconfig lo0 alias 127.0.0.4 up
    sudo ifconfig lo0 alias 127.0.0.5 up

    5 – Setup the port forwarding to the jump server: ssh -L 30991:localhost:30991 jump-server
    6 – Forward to the Kafka server: ssh -L 30991:localhost:2181 kafka-1
    7 – Loop while on the Kafka server: while true; do echo "waiting"; sleep 180; done
    8 – Repeat for each Kafka server, increasing the port by 1 (refer to the ports section for the mapping)
    9 – Setup the Terminal – node krb5-tcp.js
    10 – Setup the Terminal – node proxy_socket.js

    echo stats | nc kafka-1 2181
    Zookeeper version: 3.4.6-IBM_4-1, built on 06/17/2016 01:58 GMT
    Clients:
    /192.168.12.47:50404[1](queued=0,recved=1340009,sent=1360508)
    /192.168.12.46:48694[1](queued=0,recved=1348346,sent=1368936)
    /192.168.12.48:39842[1](queued=0,recved=1341655,sent=1362178)
    /0:0:0:0:0:0:0:1:39644[0](queued=0,recved=1,sent=0)

    Latency min/avg/max: 0/0/2205
    Received: 4878752
    Sent: 4944171
    Connections: 4
    Outstanding: 0
    Zxid: 0x1830001944e
    Mode: follower
    Node count: 442

    11 – Use your code to access the Zookeeper server

    References

    https://github.com/nodejitsu/node-http-proxy

    sudo ifconfig lo0 alias 127.0.0.6 up
    sudo ifconfig lo0 alias 127.0.0.7 up
    sudo ifconfig lo0 alias 127.0.0.8 up

    Configuration

    {
      "2181": {
        "type": "socket",
        "members": [
          { "hostname": "kafka-1", "port": 30991 },
          { "hostname": "kafka-2", "port": 30992 },
          { "hostname": "kafka-3", "port": 30993 },
          { "hostname": "kafka-4", "port": 30994 },
          { "hostname": "kafka-5", "port": 30995 }
        ]
      }
    }
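
    Below is a minimal sketch of the socket proxy idea, assuming the configuration above is saved as config.json next to the script; the file name and loading here are assumptions, not the original proxy_socket.js. For each member, it listens on the loopback alias that /etc/hosts maps to the hostname and pipes the bytes to the SSH-forwarded local port.

    // proxy_socket sketch: listen on each 127.0.0.x alias and pipe to the forwarded port
    const net = require('net');
    const config = require('./config.json'); // the JSON configuration shown above

    Object.keys(config).forEach((listenPort) => {
      config[listenPort].members.forEach((member) => {
        // member.hostname (e.g. kafka-1) resolves to a 127.0.0.x alias via /etc/hosts
        const server = net.createServer((client) => {
          const upstream = net.connect(member.port, 'localhost');
          client.pipe(upstream).pipe(client); // shuttle bytes in both directions
          client.on('error', () => upstream.destroy());
          upstream.on('error', () => client.destroy());
        });
        server.listen(Number(listenPort), member.hostname);
      });
    });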
    Jaas Configuration
    ./kerberos/src/main/java/demo/kerberos/jaas.conf
    TestClient {
    com.sun.security.auth.module.Krb5LoginModule required
    principal="ctest4@test.COM"
    debug=true
    useKeyTab=true
    storeKey=true
    doNotPrompt=false
    keyTab="/Users/paulbastide/tmp/kerberos/test.headless.keytab"
    useTicketCache=false;
    };

    Java – App.java

    package demo.kerberos;

    import javax.security.auth.*;
    import javax.security.auth.login.*;
    import javax.security.auth.callback.*;
    import javax.security.auth.kerberos.*;
    import java.io.*;

    public class App {
        public static void main(String[] args) {
            System.setProperty("java.security.auth.login.config",
                    "/Users/paulbastide/tmp/kerberos/src/main/java/demo/kerberos/jaas.conf");
            System.setProperty("java.security.krb5.conf", "/Users/paulbastide/tmp/kerberos/krb5.conf");

            Subject mysubject = new Subject();
            LoginContext lc;

            try {
                lc = new LoginContext("TestClient", mysubject, new MyCallBackHandler());
                lc.login();
            } catch (LoginException e) {
                e.printStackTrace();
            }
        }
    }

    Java - MyCallBackHandler.java
    package demo.kerberos;

    import javax.security.auth.*;
    import javax.security.auth.login.*;
    import javax.security.auth.callback.*;
    import javax.security.auth.kerberos.*;
    import java.io.*;

    public class MyCallBackHandler implements CallbackHandler {
        public void handle(Callback[] callbacks)
                throws IOException, UnsupportedCallbackException {
            for (int i = 0; i < callbacks.length; i++) {
                System.out.println(callbacks[i]);
            }
        }
    }
  • AppDev: Forwarding DGram in node.js

    For a project I am working on, I needed to rewrite a DGram (UDP datagram) port – moving datagrams that arrive on one port over to another. While moving the ports around, I put together a few quick tests.

    Testing with NC

    my-machine:~$ echo -n "data-message" | nc -v -4u -w1 localhost 88
    found 0 associations
    found 1 connections:
    1: flags=82<CONNECTED,PREFERRED>
    outif (null)
    src 127.0.0.1 port 53862
    dst 127.0.0.1 port 88
    rank info not available
    Connection to localhost port 88 [udp/radan-http] succeeded!
    

    Rewriting incoming datagrams to another port
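
    A minimal sketch of such a forwarder in Node.js follows; the target port (8888) and one-way forwarding are assumptions for illustration, not the exact sample from the project.

    // dgram forwarder sketch: resend datagrams arriving on port 88 to another local port
    const dgram = require('dgram');

    const server = dgram.createSocket('udp4');
    const forwarder = dgram.createSocket('udp4');

    server.on('listening', () => {
      const addr = server.address();
      console.log(`server listening ${addr.address}:${addr.port}`);
    });

    server.on('message', (msg, rinfo) => {
      console.log(`server got: ${msg} from ${rinfo.address}:${rinfo.port}`);
      forwarder.send(msg, 8888, '127.0.0.1'); // rewrite to the assumed target port
    });

    server.bind(88); // binding a port below 1024 requires root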

    You can run the sample and get results such as:

    server listening 0.0.0.0:88
    server got: j��0����
    
  • Testing: Dynamic Test-NG Tests

    In my last few projects, I have used Test-NG. Uniquely, in my current project, I had to generate tests programmatically. Instead of writing one test for each element in the project, I am able to generate a bunch at will using the following pattern:

    Factory

    package test;
    
    import org.testng.annotations.Factory;
    
    public class DynamicTestFactory {
    
    
        @Factory
        public Object[] createInstances() {
            Object[] result = new Object[10];
            for (int i = 0; i < 10; i++) {
                result[i] = new ExampleProcessorTest(Integer.toString(i * 10) + "A", Integer.toString(i * 10) + "B");
            }
            return result;
        }
        
    }

    Test

    
    package test;
    
    import static org.testng.Assert.assertTrue;
    
    import org.testng.annotations.Test;
     
    public class ExampleProcessorTest {
    
        private String a; 
        private String b; 
        
        public ExampleProcessorTest(String a, String b) {
            this.a = a;
            this.b = b;
            
        }
        
        @Test
        public void testServer() {
            System.out.println("TEST");
            assertTrue(true);
        }
    }

    Running Code

    [RemoteTestNG] detected TestNG version 6.9.10
    [TestNG] Running:
    /private/var/folders/07/sw3n5r3170q202d5j4tx8fhw0000gn/T/testng-eclipse--1754065412/testng-customsuite.xml
    
    TEST
    TEST
    TEST
    TEST
    TEST
    TEST
    TEST
    TEST
    TEST
    TEST
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    PASSED: testServer
    
    ===============================================
    Default test
    Tests run: 10, Failures: 0, Skips: 0
    ===============================================
    
    
    ===============================================
    Default suite
    Total tests run: 10, Failures: 0, Skips: 0
    ===============================================
    
    [TestNG] Time taken by org.testng.reporters.EmailableReporter2@76a4d6c: 9 ms
    [TestNG] Time taken by [FailedReporter passed=0 failed=0 skipped=0]: 5 ms
    [TestNG] Time taken by org.testng.reporters.JUnitReportReporter@32cf48b7: 4 ms
    [TestNG] Time taken by org.testng.reporters.jq.Main@130f889: 23 ms
    [TestNG] Time taken by org.testng.reporters.XMLReporter@6e2c9341: 8 ms
    [TestNG] Time taken by org.testng.reporters.SuiteHTMLReporter@58a90037: 10 ms

    I can also trigger this using testng.xml:

    <class name="DynamicTestFactory" />
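
    For reference, a minimal testng.xml sketch might look like the following (the suite and test names mirror the defaults in the run output above; the package prefix is an assumption):

    <!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
    <suite name="Default suite">
      <test name="Default test">
        <classes>
          <class name="test.DynamicTestFactory" />
        </classes>
      </test>
    </suite>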

  • Gatsby & Carbon: Build with Github Action

    As some of you know, I work on the IBM FHIR Server, and with my colleagues I have started automating some of the actions we take – Build, Test, Deploy, and Deploy our website.

    More specific to the "Deploy our website" automation, our website uses technologies such as Gatsby, Carbon, and the Gatsby Carbon Theme. Fundamentally a static site generation technology, like Jekyll, Gatsby uses Node, Yarn and some nice React code.

    To build our site with GitHub actions, I built out a site workflow.  The key elements to this workflow are:

    • Triggers
    • Node.js and Ubuntu Images
    • Build
    • Add, Commit and Push to GH Pages
    • Debugging and Replicating Locally

    Triggers

    For the triggers, I recommend limiting the site generation to the master branch. The master branch filter on push limits redeployment, and the paths filter keeps your site building only on docs/** changes.

    on:
      push:
        paths:
          - "docs/**"
        branches:
          - master
     
    There is a subtlety: the websites are cached for 10 minutes, confirmed on the site – Caching assets in website served from GitHub pages.

    Node.js and Ubuntu Images

    I opted to use Ubuntu with Node.js.
     
    jobs:
      build:
        runs-on: ubuntu-latest

        strategy:
          matrix:
            node-version: [12.x]
     
    The important thing is ubuntu-latest, which has some incompatibilities with Gatsby Carbon's build (see the workaround below).

    Build

    I build the system as follows:

    Checkout the repo to a folder

    steps:
    - name: Grab the Master Branch
      uses: actions/checkout@v1
      with:
        working-directory: fhir
        ref: refs/heads/master
        fetch-depth: 1
        path: fhir
     
    Activate Node

    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
     
    Setup the build
     
    echo "Check on Path"
    pwd
    cd docs/
    npm install -g gatsby-cli
    gatsby telemetry --disable
     
    Install the packages. Note, fsevents is not used on Linux images, so use --no-optional (these plugins are suspect).
     
    npm install --no-optional --save react react-copy-to-clipboard react-dom react-ga classnames carbon @carbon/addons-website carbon-components carbon-components-react carbon-addons-cloud carbon-icons gatsby gatsby-theme-carbon-starter markdown-it gatsby-plugin-manifest gatsby-plugin-slug gatsby-plugin-sitemap gatsby-plugin-sharp
     
    With Ubuntu, you can't use gatsby build directly per https://github.com/gatsbyjs/gatsby/issues/17557, so I use the suggestion there as a workaround for path issues in the gatsby component dependency on fsevents.
     
    npm --prefix-paths run build
    cp -R public/ ../../public/
     
    Grab the GH-Pages branch
     
    - name: Grab the GH Pages Branch
      uses: actions/checkout@v1
      with:
        working-directory: gh-pages
        ref: refs/heads/gh-pages
        fetch-depth: 1
        path: docs
        token: ${{ secrets.GITHUB_TOKEN }}

    Per Bypassing Jekyll on GitHub Pages, be sure to add the .nojekyll file to the root of the gh-pages branch. I added a guard in the shell script to check if the file is there, and create the file if it does not exist.

    If you need environment variables, you should add them to the step, as in the sketch below.
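
    For example, a build step with an environment variable might look like this (the step name and variable are illustrative, not from the original workflow):

    - name: Build the Site
      run: npm --prefix-paths run build
      env:
        GATSBY_TELEMETRY_DISABLED: 1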

    Add, Commit and Push to GH Pages

    I add the .gitignore and .nojekyll files while removing any cached files, before moving in the new files.

    I also like to make sure when this runs there is a build.txt file to trace when the site is built. (This file contains the build time, e.g. Thu Nov 21 19:39:49 UTC 2019.)

    I then use the GitHub environment variables passed in to push the contents to the repo the branch is from.

    - name: Commit and Add GH Pages
      run: |
        echo "cleaning up the prior files on the branch"
        if [ ! -f .nojekyll ]
        then
          touch .nojekyll
          rm -f _config.yml
        fi
        rm -f *.js webpack.stats.json styles-*.js styles-*.js.map webpack-runtime-*.js.map webpack-runtime-*.js manifest.webmanifest component---*.js* app-*.js*
        rm -rf docs/node_modules docs/public docs/.cache
        echo "Moving the files around for gh-pages"
        cp -Rf ../public/* ./
        find .
        date > build.txt
        git config --global user.email "${{ secrets.GITHUB_ACTOR }}@users.noreply.github.com"
        git config --global user.name "Git Hub Site Automation"
        git add .
        git commit -m "Update to GH-Pages"

    - name: Push changes to GH Pages
      run: |
        echo "Push Changes"
        git branch
        remote_repo="https://${GITHUB_ACTOR}:${GITHUB_TOKEN}@github.com/${GITHUB_REPOSITORY}.git"
        git push "${remote_repo}" HEAD:gh-pages
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        GITHUB_REPOSITORY: ${{ secrets.GITHUB_REPOSITORY }}
        GITHUB_ACTOR: ${{ secrets.GITHUB_ACTOR }}
        CI: true

    Debugging and Replicating Locally

    If you are troubleshooting, you can use a couple of approaches: 

    1 – Create a Docker Image

    Create the Image

    docker run -itd --name gatsby-node -v docs:/data node:latest

    Copy the Files

    docker cp ~/FHIR/docs 6d810efb3b586739932166d424641003ee9b238de506543fcdd47eb7e7d41699:/data

    Launch the shell and try the build

    npm install --no-optional --save react react-copy-to-clipboard react-dom react-ga classnames carbon @carbon/addons-website carbon-components carbon-components-react carbon-addons-cloud carbon-icons gatsby gatsby-theme-carbon-starter markdown-it gatsby-plugin-manifest gatsby-plugin-slug gatsby-plugin-sitemap gatsby-plugin-sharp

    Run the gatsby build

    npm --prefix-paths run build

    2 – If you want complicated navigation, refer to https://github.com/gatsbyjs/gatsby/blob/master/www/src/data/sidebars/doc-links.yaml; however, gatsby-theme-carbon's sidebar links use only the to value, not the href value.

    3 – If you have an issue with your deployment, check a couple of things:

    Check your deployed environment. You should see a deployment in the last few seconds.

    Check your Settings. You should see no issues; else, investigate the site locally on the gh-pages branch, and check Troubleshooting Jekyll build errors for GitHub Pages sites.

    Best of luck with your build!

  • Migrating Source Code Git-to-Git

    Migrating source code is a pain in the butt, I know. There are about 9 million variations, and one is of interest to me – git to github.com.

    There are a number of tools to clean up your git history and prepare to move.

    • Git and Scripting
    • BFG Repo Cleaner
    • Git-Python
    • JGit

    I found Git-Python a bit cumbersome, BFG Repo Cleaner more than I needed/wanted, and Git / Scripting too much work. After some prototyping, I opted for JGit from Eclipse and some Git knowhow.

    First, I switched to the source Git Repo branch I wanted to migrate and exported the commit list.

    git rev-list HEAD > commits.txt

    which results in

    7452e8eb1f287e2ad2d8c2d005455197ba4183f2
    baac5e4d0ce999d983c016d67175a898f50444b3
    2a8e2ec7507e05555e277f214bf79119cda4f025

    This commits.txt is useful down the line.

    I am a Maven disciple, so I created a Maven Java project with Java 1.8 and the following dependencies:

    <dependency>
        <groupId>org.eclipse.jgit</groupId>
        <artifactId>org.eclipse.jgit</artifactId>
        <version>${jgit.version}</version>
    </dependency>
    <dependency>
        <groupId>com.google.guava</groupId>
        <artifactId>guava</artifactId>
        <version>20.0</version>
    </dependency>
    <dependency>
        <groupId>org.slf4j</groupId>
        <artifactId>slf4j-nop</artifactId>
        <version>1.7.25</version>
    </dependency>

    I used JGit to check the list of commits (note, the REPO path here must end with .git).

    try (Git git = Git.open(new File(SOURCE_GIT_REPO))) {
        printHeaderLine();
        System.out.println("Starting Branch is " + git.getRepository().getBranch());
        Iterator<RevCommit> iter = git.log().call().iterator();
        while (iter.hasNext()) {
            RevCommit commit = iter.next();
            String binSha = commit.name();
            commits.add(binSha);
        }
    }

    I flipped it around, so I can process OLDEST to NEWEST:

    Collections.reverse(commits);

    I used the git log (LogCommand in JGit) to find out all the times a FILE was modified, and do custom processing:

    try (Git git = Git.open(new File(REPO))) {
        LogCommand logCommand = git.log().add(git.getRepository().resolve(Constants.HEAD)).addPath(fileName.replace(REPO, ""));
        Set<String> years = new HashSet<>();
        for (RevCommit revCommit : logCommand.call()) {
            Instant instant = Instant.ofEpochSecond(revCommit.getCommitTime());
            // YOUR PROCESSING
        }
    }

    To find out the files in the HEAD of the repo, this gets the files and paths and puts them in a list:

    try (Git git = Git.open(new File("test/.git"))) {

                Iterator<RevCommit> iter = git.log().call().iterator();

                if (iter.hasNext()) {

                    RevCommit commit = iter.next();

                    try (RevWalk walk = new RevWalk(git.getRepository());) {

                        RevTree tree = walk.parseTree(commit.getId());

                        try (TreeWalk treeWalk = new TreeWalk(git.getRepository());) {

                            treeWalk.addTree(tree);

                            treeWalk.setRecursive(true);

                            while (treeWalk.next()) {

                                headFiles.add(treeWalk.getPathString());

                            }

                        }

                    }

                } 

            }

    }

    I built a history of changes.

    try (Git git = Git.open(new File("test/.git"))) {

                Iterator<RevCommit> iter = git.log().call().iterator();

                while (iter.hasNext()) {

                    RevCommit commit = iter.next();

                    try (DiffFormatter df = new DiffFormatter(DisabledOutputStream.INSTANCE);) {

                        df.setRepository(git.getRepository());                    df.setDiffComparator(RawTextComparator.DEFAULT);

                        df.setDetectRenames(true);


                        CommitHistoryEntry.Builder builder =                            CommitHistoryEntry.builder().binsha(commit.name()).commitTime(commit.getCommitTime()).authorEmail(commit.getAuthorIdent().getEmailAddress()).shortMessage(commit.getShortMessage()).fullMessage(commit.getFullMessage());

                        RevCommit[] parents = commit.getParents();

                        if (parents != null && parents.length > 0) {

                            List<DiffEntry> diffs = df.scan(commit.getTree(), parents[0]);

                            builder.add(diffs);

                        } else {

                            builder.root(true);

                            try (RevWalk walk = new RevWalk(git.getRepository());) {

                                RevTree tree = walk.parseTree(commit.getId());

                                try (TreeWalk treeWalk = new TreeWalk(git.getRepository());) {

                                    treeWalk.addTree(tree);

                                    treeWalk.setRecursive(true);

                                    while (treeWalk.next()) {                                   

    builder.file(treeWalk.getPathString());

                                    }

                                }

                            }

                        }

                        entries.add(entry);

                    }

                }

    I implemented the Visitor pattern to apply the modifications to the commit details and clean up any bad identity mappings (folks had many emails and names, which I unified) and cleaned up the email addresses, as sketched below.
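
    A sketch of that idea, with names assumed for illustration (the real visitor and builder methods are not shown in this post):

    import java.util.HashMap;
    import java.util.Map;

    // visitor sketch: unify each alias email to a single identity before committing
    public class IdentityCleanupVisitor {
        private static final Map<String, String> UNIFIED = new HashMap<>();
        static {
            UNIFIED.put("pbastide@alias.example", "pbastide@unified.example"); // placeholder mapping
        }

        public void visit(CommitHistoryEntry.Builder entry) {
            String email = entry.getAuthorEmail().trim().toLowerCase();
            entry.authorEmail(UNIFIED.getOrDefault(email, email));
        }
    }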

    Next, I created a destination git (all fresh and ready to go):

    try (Git thisGit = Git.init().setDirectory(new File(REPO_DIR)).call()) {
        git = thisGit;
    }

    One should make sure the path exists, and it doesn’t matter if you have files in it.

    Commit the files in the git directory… you can commit without FILES!

    CommitCommand commitCommand = git.commit();
    // Setup the Identity and date (1000L avoids int overflow converting seconds to millis)
    Date aWhen = new Date(entry.getCommitTime() * 1000L);
    PersonIdent authorIdent =
            new PersonIdent(entry.getAuthorName(), entry.getAuthorEmail(), aWhen, TimeZone.getDefault());
    commitCommand.setCommitter(authorIdent);
    commitCommand.setAllowEmpty(true);
    commitCommand.setAuthor(authorIdent);
    commitCommand.setMessage(entry.getShortMessage());
    commitCommand.setNoVerify(true);
    commitCommand.setSign(false);
    commitCommand.call();

    Note, you can set the commit to almost any point in time. As long as you don't sign it, it'll be OK. I don't recommend this as a general practice.

    To grab the file, you can do a tree walk, and resolve to the object ID.

    try (TreeWalk treeWalk = new TreeWalk(git.getRepository())) {
        treeWalk.addTree(tree);
        treeWalk.setRecursive(true);
        int localCount = 0;
        while (treeWalk.next()) {
            String fileName = treeWalk.getPathString();
            ObjectId objectId = treeWalk.getObjectId(0);
            ObjectLoader loader = git.getRepository().open(objectId);
            String fileOutput = GIT_OUTPUT + "/" + binSha + "/" + fileNameWithRelativePath;
            int last = fileOutput.lastIndexOf('/');
            String fileOutputDir = fileOutput.substring(0, last);
            File dir = new File(fileOutputDir);
            dir.mkdirs();
            // and then one can use the loader to read the file
            try (FileOutputStream out =
                    new FileOutputStream(GIT_OUTPUT + "/" + binSha + "/" + fileNameWithRelativePath)) {
                byte[] bytes = loader.getBytes();
                if (hasBeenModified(bytes, fileNameWithRelativePath)) {
                    loader.copyTo(out);
                    count++;
                    result = true;
                }
            }
        }
    }

    Note, I did check whether the file was a duplicate; it saved a couple of steps.

    If you want to add files, you can set:

    commitCommand.setAll(addFiles);
    git.add().addFilepattern(file).call();

    Git in the background builds the DIFFs for any file that is not specially treated as binary in the .gitattributes file.
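
    A sample .gitattributes might contain entries like these (the patterns are examples; the built-in binary attribute disables text diffs and eol conversion):

    # treat these patterns as binary content
    *.jar binary
    *.png binary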

    For each commit, I loaded the file – checking for stop-words, checking the copyright header, checking the file type, and comparing it against the HEAD.

    Tip: if you need to reset from a bad test:

    git reset --hard origin # reset the branch
    rm -rf .git             # reset the repo (also be sure to remove the files)

    To move the branch, you can execute:

    cd <GIT_REPO>
    git checkout <BRANCH_TO_MIGRATE>
    git reset --hard origin
    git pull
    git gc --aggressive --prune=now
    git push git@github.com:<MyOrg>/<DEST_REPO>.git <BRANCH_TO_MIGRATE>:master

    Note, I did rename the master branch in the repo prior. Voila – a 550M+ repo moved and cleaned up.

    The repo is now migrated, and up-to-date.  I hope this helps you.

    References

    Rename Branch in Git
    https://multiplestates.wordpress.com/2015/02/05/rename-a-local-and-remote-branch-in-git/

    Rewrite History
    https://help.github.com/en/articles/removing-sensitive-data-from-a-repository
    https://stackoverflow.com/questions/tagged/git-rewrite-history

    BFG Repo Cleaner
    https://github.com/rtyley/bfg-repo-cleaner
    https://rtyley.github.io/bfg-repo-cleaner/

    JGit
    https://www.vogella.com/tutorials/JGit/article.html
    https://github.com/eclipse/jgit
    http://wiki.eclipse.org/JGit/User_Guide#Repository
    https://www.programcreek.com/java-api-examples/?class=org.eclipse.jgit.revwalk.RevWalk&method=parseCommit
    https://www.eclipse.org/forums/index.php/t/213979/
    https://stackoverflow.com/questions/46727610/how-to-get-the-list-of-files-as-a-part-of-commit-in-jgit
    https://github.com/centic9/jgit-cookbook/blob/master/src/main/java/org/dstadler/jgit/porcelain/ListNotes.java
    https://stackoverflow.com/questions/9683279/make-the-current-commit-the-only-initial-commit-in-a-git-repository
    https://stackoverflow.com/questions/40590039/how-to-get-the-file-list-for-a-commit-with-jgit
    https://doc.nuxeo.com/blog/jgit-example/
    https://github.com/centic9/jgit-cookbook/blob/master/src/main/java/org/dstadler/jgit/api/ReadFileFromCommit.java
    https://github.com/eclipse/jgit/blob/master/org.eclipse.jgit.test/tst/org/eclipse/jgit/api/AddCommandTest.java
    https://stackoverflow.com/questions/12734760/jgit-how-to-add-all-files-to-staging-area
    https://github.com/centic9/jgit-cookbook/blob/master/src/main/java/org/dstadler/jgit/porcelain/DiffFilesInCommit.java

  • Code Graph showing the Layout of the Code base

    I’ve been mixing data analysis and Java programming recently.  I wrote a tool to do the analysis (Maven/Python).

    Here is the obfuscated output of the analysis, showing the hotspots. I opted to show a thumbnail of the image here to protect the confidentiality of the project. The generated image was also 78 megabytes (a bit much, but you can zoom right in).

    Complicated Graph

    If you use a smaller set of classes and imports, the Maven plugin generates a reasonable diagram.csv file using

    mvn example:generate-diagram:99-SNAPSHOT:generate-diagram -f ./myproj/pom.xml

    You then see the output diagram.csv.

    To generate the layout of the dependencies between classes in your project, use the snippets and the Jupyter Notebook – https://github.com/prb112/examples/blob/master/code-graph/code-graph.ipynb – and view it at https://nbviewer.jupyter.org/github/prb112/examples/blob/master/code-graph/code-graph.ipynb
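
    As a minimal sketch (assuming the two-column layout written by generateImageFile in the DiagramImpl code later in this post: the imported name, then a quoted comma-separated list of importing classes), the hotspots can be ranked without the notebook:

    # rank hotspots in diagram.csv by how many classes import each name
    import csv

    fan_in = {}
    with open('diagram.csv') as f:
        for row in csv.reader(f):
            importers = [c for c in row[1].split(',') if c]
            fan_in[row[0]] = len(importers)

    # top 10 most-imported names
    for name, count in sorted(fan_in.items(), key=lambda kv: -kv[1])[:10]:
        print(count, name)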

    Simple Graph

    Reference

    Code Graph on Git https://github.com/prb112/examples/tree/master/code-graph

    CSV Sample Data diagram.csv

    POM

    <project xmlns="http://maven.apache.org/POM/4.0.0"
    	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    	<modelVersion>4.0.0</modelVersion>
    
    	<groupId>example</groupId>
    	<artifactId>generate-diagram</artifactId>
    	<version>99-SNAPSHOT</version>
    	<packaging>maven-plugin</packaging>
    
    	<name>generate-diagram</name>
    
    	<properties>
    		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    		<maven.compiler.source>1.8</maven.compiler.source>
    		<maven.compiler.target>1.8</maven.compiler.target>
    		<version.roaster>2.21.0.Final</version.roaster>
    	</properties>
    
    
    
    	<build>
    		<pluginManagement>
    			<plugins>
    				<plugin>
    					<artifactId>maven-jar-plugin</artifactId>
    					<version>2.6</version>
    					<executions>
    						<execution>
    							<goals>
    								<goal>test-jar</goal>
    							</goals>
    						</execution>
    					</executions>
    					<configuration>
    						<archive>
    							<manifest>
    								<addDefaultImplementationEntries>true</addDefaultImplementationEntries>
    								<addDefaultSpecificationEntries>true</addDefaultSpecificationEntries>
    							</manifest>
    						</archive>
    					</configuration>
    				</plugin>
    				<plugin>
    					<groupId>org.apache.maven.plugins</groupId>
    					<artifactId>maven-plugin-plugin</artifactId>
    					<version>3.6.0</version>
    					<configuration>
    						<skipErrorNoDescriptorsFound>true</skipErrorNoDescriptorsFound>
    					</configuration>
    					<executions>
    						<execution>
    							<id>mojo-descriptor</id>
    							<goals>
    								<goal>descriptor</goal>
    							</goals>
    							<phase>process-classes</phase>
    							<configuration>
    								<skipErrorNoDescriptorsFound>true</skipErrorNoDescriptorsFound>
    							</configuration>
    						</execution>
    					</executions>
    				</plugin>
    			</plugins>
    		</pluginManagement>
    
    		<plugins>
    			<plugin>
    				<!-- Embeds the dependencies in fhir-tools into the jar. -->
    				<groupId>org.apache.maven.plugins</groupId>
    				<artifactId>maven-shade-plugin</artifactId>
    				<version>3.2.1</version>
    				<executions>
    					<execution>
    						<phase>package</phase>
    						<goals>
    							<goal>shade</goal>
    						</goals>
    						<configuration>
    							<artifactSet>
    								<excludes>
    									<exclude>org.testng:testng</exclude>
    									<exclude>org.apache.maven:lib:tests</exclude>
    									<exclude>org.apache.maven</exclude>
    								</excludes>
    							</artifactSet>
    						</configuration>
    					</execution>
    				</executions>
    			</plugin>
    		</plugins>
    	</build>
    
    	<dependencies>
    		<dependency>
    			<groupId>org.jboss.forge.roaster</groupId>
    			<artifactId>roaster-api</artifactId>
    			<version>${version.roaster}</version>
    		</dependency>
    		<dependency>
    			<groupId>org.jboss.forge.roaster</groupId>
    			<artifactId>roaster-jdt</artifactId>
    			<version>${version.roaster}</version>
    			<scope>runtime</scope>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven</groupId>
    			<artifactId>maven-plugin-api</artifactId>
    			<version>3.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven.plugin-tools</groupId>
    			<artifactId>maven-plugin-annotations</artifactId>
    			<version>3.6.0</version>
    			<optional>true</optional>
    			<scope>provided</scope>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven</groupId>
    			<artifactId>maven-core</artifactId>
    			<version>3.6.1</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven</groupId>
    			<artifactId>maven-artifact</artifactId>
    			<version>3.6.0</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven</groupId>
    			<artifactId>maven-model</artifactId>
    			<version>3.6.0</version>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven</groupId>
    			<artifactId>maven-compat</artifactId>
    			<version>3.6.1</version>
    			<scope>test</scope>
    		</dependency>
    		<dependency>
    			<groupId>org.apache.maven.plugin-testing</groupId>
    			<artifactId>maven-plugin-testing-harness</artifactId>
    			<version>3.3.0</version>
    			<scope>test</scope>
    		</dependency>
    	</dependencies>
    </project>
    
    package demo;
    
    import java.io.File;
    import java.util.Properties;
    
    import org.apache.maven.execution.MavenSession;
    import org.apache.maven.plugin.AbstractMojo;
    import org.apache.maven.plugin.MojoExecutionException;
    import org.apache.maven.plugin.MojoFailureException;
    import org.apache.maven.plugins.annotations.Execute;
    import org.apache.maven.plugins.annotations.LifecyclePhase;
    import org.apache.maven.plugins.annotations.Mojo;
    import org.apache.maven.plugins.annotations.Parameter;
    import org.apache.maven.plugins.annotations.ResolutionScope;
    import org.apache.maven.project.MavenProject;
    
    import com.ibm.watsonhealth.fhir.tools.plugin.diagram.DiagramFactory;
    import com.ibm.watsonhealth.fhir.tools.plugin.diagram.impl.IDiagramGenerator;
    
    /**
     * This class coordinates the calls to the Diagram generation plugin
     * 
     * The phase is initialize. To find a list of phases -
     * https://maven.apache.org/guides/introduction/introduction-to-the-lifecycle.html#Lifecycle_Reference
     * 
     * Run the following to setup the plugin: <code>
     * mvn clean package install -f generate-diagram/pom.xml
     * </code>
     * 
     * Run the following to setup the classes in fhir-model: <code> 
     * mvn example:generate-diagram:99-SNAPSHOT:generate-diagram -f ./myproj/pom.xml
     * </code>
     * 
     * @author PBastide
     * 
     * @requiresDependencyResolution runtime
     *
     */
    @Mojo(name = "generate-diagram", //$NON-NLS-1$
            requiresProject = true, requiresDependencyResolution = ResolutionScope.RUNTIME_PLUS_SYSTEM, requiresDependencyCollection = ResolutionScope.RUNTIME_PLUS_SYSTEM, defaultPhase = LifecyclePhase.GENERATE_SOURCES, requiresOnline = false, threadSafe = false, aggregator = true)
    @Execute(phase = LifecyclePhase.GENERATE_SOURCES)
    public class DiagramPlugin extends AbstractMojo {
    
        @Parameter(defaultValue = "${project}", required = true, readonly = true) //$NON-NLS-1$
        protected MavenProject mavenProject;
    
        @Parameter(defaultValue = "${session}")
        private MavenSession session;
    
        @Parameter(defaultValue = "${project.basedir}", required = true, readonly = true) //$NON-NLS-1$
        private File baseDir;
    
        @Override
        public void execute() throws MojoExecutionException, MojoFailureException {
            if (baseDir == null || !baseDir.exists()) {
                throw new MojoFailureException("The Base Directory is not found.  Throwing failure. ");
            }
    
            // Grab the Properties (the correct way)
            // https://maven.apache.org/plugin-developers/common-bugs.html#Using_System_Properties
            Properties userProps = session.getUserProperties();
            String useTestsDirectoryStr = userProps.getProperty("useTestsDirectory", "false");
    
            // Converts Limit value to boolean value.
            boolean useTestsDirectory = Boolean.parseBoolean(useTestsDirectoryStr);
    
            // Grab the right generator and set it up.
            IDiagramGenerator generator = DiagramFactory.getDiagramGenerator();
    
            // Set the use of tests directory
            generator.use(useTestsDirectory);
    
            // Get the base directory .
            generator.setTargetProjectBaseDirectory(baseDir.getAbsolutePath() + "/target");
    
            // Passes the Log to the implementation code.
            generator.setLog(getLog());
    
            // Add the project
            generator.add(mavenProject);
    
            // Builds the Diagram
            generator.generateDiagram();
        }
    }
    package example.impl;
    
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.StringJoiner;
    import java.util.stream.Collectors;
    
    import org.apache.maven.plugin.logging.Log;
    import org.apache.maven.project.MavenProject;
    import org.jboss.forge.roaster.Roaster;
    import org.jboss.forge.roaster.model.JavaType;
    import org.jboss.forge.roaster.model.source.Import;
    import org.jboss.forge.roaster.model.source.JavaAnnotationSource;
    import org.jboss.forge.roaster.model.source.JavaClassSource;
    import org.jboss.forge.roaster.model.source.JavaEnumSource;
    import org.jboss.forge.roaster.model.source.JavaInterfaceSource;
    import org.jboss.forge.roaster.model.source.JavaSource;
    import org.w3c.dom.DOMImplementation;
    import org.w3c.dom.Document;
    import org.w3c.dom.Element;
    
    public class DiagramImpl implements IDiagramGenerator {
    
        private String absolutePath = null;
        private Log log = null;
        private Boolean useTestFiles = false;
    
        private List<MavenProject> projects = new ArrayList<>();
    
        private List<String> sourceDirectories = new ArrayList<>();
    
        private List<String> sourceFiles = new ArrayList<>();
    
        private List<String> countWithNested = new ArrayList<>();
    
        private Map<String, List<String>> sourceFileImports = new HashMap<>();
    
        @Override
        public void add(MavenProject mavenProject) {
            if (mavenProject == null) {
                throw new IllegalArgumentException("no access to the maven project's plugin object");
            }
    
            log.info("Projects added...");
            projects = mavenProject.getCollectedProjects();
    
        }
    
        @Override
        public void use(Boolean useTestFiles) {
            this.useTestFiles = useTestFiles;
    
        }
    
        @Override
        public void setLog(Log log) {
            this.log = log;
        }
    
        @Override
        public void generateDiagram() {
            if (absolutePath == null) {
                throw new IllegalArgumentException("Bad Path " + absolutePath);
            }
    
            if (log == null) {
                throw new IllegalArgumentException("Unexpected no log passed in");
            }
    
            for (MavenProject project : projects) {
    
                List<String> locations = project.getCompileSourceRoots();
                for (String location : locations) {
                    log.info("Location of Directory -> " + location);
    
                }
                sourceDirectories.addAll(locations);
    
                if (useTestFiles) {
                    log.info("Adding the Test Files");
                    sourceDirectories.addAll(project.getTestCompileSourceRoots());
                }
    
            }
    
            // Find the Files in each directory.
            // Don't Follow links.
            for (String directory : sourceDirectories) {
                findSourceFiles(directory);
            }
    
            processCardinalityMap();
    
            printOutCardinality();
    
            log.info("Total Number of Java Files in the Project are: " + sourceFiles.size());
            log.info("Total Number of Classes/Interfaces/Enums in the Project are: " + countWithNested.size());
    
            generateImageFile();
    
        }
    
        private void generateImageFile() {
            Comparator<String> byName = (name1, name2) -> name1.compareTo(name2);
    
            try(FileOutputStream fos = new FileOutputStream("diagram.csv");) {
                
                for (String key : sourceFileImports.keySet().stream().sorted(byName).collect(Collectors.toList())) {
                    
                    StringJoiner joiner = new StringJoiner(",");
                    for(String val : sourceFileImports.get(key)) {
                        joiner.add(val);
                    }
                    
                    String line = key + ",\"" + joiner.toString() + "\"";
                    fos.write(line.getBytes());
                    fos.write("\n".getBytes());
                }
    
            } catch (Exception e) {
                log.warn("Issue processing", e);
            }
    
        }
    
        public void printOutCardinality() {
    
            Comparator<String> byName = (name1, name2) -> name1.compareTo(name2);
    
            //
            log.info("Cardinality count - imports into other classes");
            for (String key : sourceFileImports.keySet().stream().sorted(byName).collect(Collectors.toList())) {
                log.info(key + " -> " + sourceFileImports.get(key).size());
            }
    
        }
    
        public void processCardinalityMap() {
            // Import > List<Classes>
            // Stored in -> sourceFileImports
    
            for (String source : sourceFiles) {
                File srcFile = new File(source);
                try {
                    JavaType<?> jtFile = Roaster.parse(srcFile);
    
                    String parentJavaClass = jtFile.getQualifiedName();
    
                    if (jtFile instanceof JavaClassSource) {
                        countWithNested.add(parentJavaClass);
                        log.info("[C] -> " + parentJavaClass);
                        JavaClassSource jcs = (JavaClassSource) jtFile;
    
                        helperImports(parentJavaClass, jcs.getImports());
    
                        for (JavaSource<?> child : jcs.getNestedTypes()) {
    
                            String childLoc = child.getQualifiedName();
                            countWithNested.add(childLoc);
                            log.info("  [CC] -> " + childLoc);
                        }
                    }
    
                    else if (jtFile instanceof JavaEnumSource) {
                        log.info("[E] -> " + parentJavaClass);
                        countWithNested.add(parentJavaClass);
                        JavaEnumSource jes = (JavaEnumSource) jtFile;
    
                        helperImports(parentJavaClass, jes.getImports());
    
                        for (Object child : jes.getNestedTypes()) {
    
                            String childLoc = child.getClass().getName();
                            countWithNested.add(childLoc);
                            log.info("  [EC] -> " + childLoc);
                        }
    
                    } else if (jtFile instanceof JavaInterfaceSource) {
                        countWithNested.add(parentJavaClass);
    
                        log.info("[I] -> " + parentJavaClass);
                        JavaInterfaceSource jis = (JavaInterfaceSource) jtFile;
    
                        helperImports(parentJavaClass, jis.getImports());
    
                        for (Object child : jis.getNestedTypes()) {
    
                            String childLoc = child.getClass().getName();
                            countWithNested.add(childLoc);
                            log.info("  [IC] -> " + childLoc);
                        }
                    } else if (jtFile instanceof JavaAnnotationSource) {
                        countWithNested.add(parentJavaClass);
    
                        log.info("[A] -> " + parentJavaClass);
                        JavaAnnotationSource jis = (JavaAnnotationSource) jtFile;
    
                        helperImports(parentJavaClass, jis.getImports());
                    }
    
                    else {
                        log.info("[O] -> " + parentJavaClass);
                    }
    
                } catch (IOException e) {
                    log.info("unable to parse file " + srcFile);
                }
    
            }
            log.info("Parsed the Cardinality Map:");
    
        }
    
        private void helperImports(String parentJavaClass, List<Import> imports) {
            // sourceFileImports
            List<String> importOut = sourceFileImports.get(parentJavaClass);
            if (importOut == null) {
                sourceFileImports.put(parentJavaClass, new ArrayList<String>());
            }
    
            for (Import importX : imports) {
                String importXStr = importX.getQualifiedName();
                importOut = sourceFileImports.get(importXStr);
                if (importOut == null) {
                    importOut = new ArrayList<>();
                    sourceFileImports.put(importXStr, importOut);
                }
    
                importOut.add(parentJavaClass);
    
            }
    
        }
    
        public static void main(String... args) {
        DiagramImpl impl = new DiagramImpl(); // the class defined in this post
            String proc = "Test.java";
            Log log = new Log() {
    
                @Override
                public boolean isDebugEnabled() {
                    return false;
                }
    
                @Override
                public void debug(CharSequence content) {
    
                }
    
                @Override
                public void debug(CharSequence content, Throwable error) {
    
                }
    
                @Override
                public void debug(Throwable error) {
    
                }
    
                @Override
                public boolean isInfoEnabled() {
                    return false;
                }
    
                @Override
                public void info(CharSequence content) {
    
                }
    
                @Override
                public void info(CharSequence content, Throwable error) {
    
                }
    
                @Override
                public void info(Throwable error) {
    
                }
    
                @Override
                public boolean isWarnEnabled() {
                    return false;
                }
    
                @Override
                public void warn(CharSequence content) {
    
                }
    
                @Override
                public void warn(CharSequence content, Throwable error) {
    
                }
    
                @Override
                public void warn(Throwable error) {
    
                }
    
                @Override
                public boolean isErrorEnabled() {
                    return false;
                }
    
                @Override
                public void error(CharSequence content) {
    
                }
    
                @Override
                public void error(CharSequence content, Throwable error) {
    
                }
    
                @Override
                public void error(Throwable error) {
    
                }
            };
            impl.setLog(log);
            impl.addSourceFile(proc);
            impl.processCardinalityMap();
        }
    
        private void addSourceFile(String proc) {
            sourceFiles.add(proc);
    
        }
    
        public void findSourceFiles(String directory) {
            File dir = new File(directory);
            if (dir.exists()) {
    
                File[] listFiles = dir.listFiles((file, name) -> {
                    return name.endsWith(".java");
                });
    
                // Add to source directory
                if (listFiles != null) {
                    for (File file : listFiles) {
                        sourceFiles.add(file.getAbsolutePath());
                        log.info(" File Added to Processing: " + file.getAbsolutePath());
                    }
                }
    
                File[] listFilesFolders = dir.listFiles((file, name) -> {
                    return file.isDirectory();
                });
    
                if (listFilesFolders != null) {
                    for (File tmpDir : listFilesFolders) {
                        findSourceFiles(tmpDir.getAbsolutePath());
                    }
                }
    
            } else {
                log.warn("Directory does not exist " + directory);
            }
        }
    
        @Override
        public void setTargetProjectBaseDirectory(String absolutePath) {
            this.absolutePath = absolutePath;
    
        }
    
    }
    
  • Raspberry Pi: Setting up backup

    I have a Raspberry Pi providing household automation and productivity services – WebDav, Backups and Calendar. I always worry about a jolt of power, a failed byte and something that is unrecoverable. Time for a Backup solution.

    I plugged in a 64GB USB stick and immediately checked that the file system is there and visible as sda (unmounted).

    pi@raspberrypi:~# sudo su - 
    
    root@raspberrypi:~# lsblk 
     NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
     sda           8:0    1 58.2G  1 disk 
     └─sda1        8:1    1 58.2G  1 part 
     mmcblk0     179:0    0 14.9G  0 disk 
     ├─mmcblk0p1 179:1    0 43.9M  0 part /boot
     └─mmcblk0p2 179:2    0 14.8G  0 part /

    I check to see which device is assigned to the SD card slot (mmc); I really don't want to reformat Raspbian. I see the USB stick is on /dev/sda. All of my subsequent commands use /dev/sda as part of the command.

    root@raspberrypi:~# parted -ls
    Warning: The driver descriptor says the physical block size is 2048 bytes, but Linux says it is 512 bytes.
     Model: JetFlash Transcend 64GB (scsi)
     Disk /dev/sda: 62.5GB
     Sector size (logical/physical): 512B/512B
     Partition Table: unknown
     Disk Flags: 
    
     Model: SD SC16G (sd/mmc)
     Disk /dev/mmcblk0: 15.9GB
     Sector size (logical/physical): 512B/512B
     Partition Table: msdos
     Disk Flags: 
     Number  Start   End     Size    Type     File system  Flags
      1      4194kB  50.2MB  46.0MB  primary  fat32        lba
      2      50.3MB  15.9GB  15.9GB  primary  ext4

    If you don't see the relevant information on your drive or run into issues formatting it, install hdparm and check with hdparm -r0 /dev/sda.

    TIP: I did run into an issue with an ISO written to a USB drive, which locks the partition table and makes it unwriteable.

    root@raspberrypi:~# apt-get install hdparm
     Reading package lists… Done
     Building dependency tree       
     Reading state information… Done
     The following package was automatically installed and is no longer required:
       realpath
     Use 'apt autoremove' to remove it.
     The following additional packages will be installed:
       powermgmt-base
     Suggested packages:
       apmd
     The following NEW packages will be installed:
       hdparm powermgmt-base
     0 upgraded, 2 newly installed, 0 to remove and 148 not upgraded.
     Need to get 114 kB of archives.
     After this operation, 278 kB of additional disk space will be used.
     Do you want to continue? [Y/n] y
     Get:1 http://raspbian.mirror.constant.com/raspbian stretch/main armhf hdparm armhf 9.51+ds-1+deb9u1 [105 kB]
     Get:2 http://raspbian-us.ngc292.space/raspbian stretch/main armhf powermgmt-base all 1.31+nmu1 [9,240 B]
     Fetched 114 kB in 0s (120 kB/s)           
     Selecting previously unselected package hdparm.
     (Reading database … 135688 files and directories currently installed.)
     Preparing to unpack …/hdparm_9.51+ds-1+deb9u1_armhf.deb …
     Unpacking hdparm (9.51+ds-1+deb9u1) …
     Selecting previously unselected package powermgmt-base.
     Preparing to unpack …/powermgmt-base_1.31+nmu1_all.deb …
     Unpacking powermgmt-base (1.31+nmu1) …
     Setting up powermgmt-base (1.31+nmu1) …
     Setting up hdparm (9.51+ds-1+deb9u1) …
     Processing triggers for man-db (2.7.6.1-2) …
    root@raspberrypi:~# hdparm -r0 /dev/sda
     /dev/sda:
      setting readonly to 0 (off)
      readonly      =  0 (off)

    Now that I know the drive is writeable, I need to create the partition. I used cfdisk:

    cfdisk /dev/sda

    Navigate through the menu and select the maximum size.

    Disk: /dev/sda
    Size: 58.2 GiB, 62495129600 bytes, 122060800 sectors
    Label: dos, identifier: 0x00000000

        Device     Boot  Start  End        Sectors    Size   Id  Type
    >>  /dev/sda1        2048   122060799  122058752  58.2G  83  Linux

    Once you see “Syncing disks.”, you can format the disk. I formatted the partition sda1 with ext4 (I may want to encrypt in the future). Unmount and then format.

    root@raspberrypi:~# umount /dev/sda1
    root@raspberrypi:~# mkfs.ext4 /dev/sda1
    mke2fs 1.43.4 (31-Jan-2017)
    Found a dos partition table in /dev/sda1
    Proceed anyway? (y,N) y
    Creating filesystem with 2828032 4k blocks and 707136 inodes
    Filesystem UUID: 363f1b4a-b0f5-4c7b-bf91-66f3823032d6
    Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

    Allocating group tables: done
    Writing inode tables: done
    Creating journal (16384 blocks): done
    Writing superblocks and filesystem accounting information: done

    root@raspberrypi:~#

    Make the backup directory, edit fstab and mount the directory. Insert into fstab with your UUID: "UUID=363f1b4a-b0f5-4c7b-bf91-66f3823032d6 /backups auto nosuid,nodev,nofail 0 0". The second-to-last field (dump) set to 0 skips dump backups, and the last field (pass) set to 0 skips fsck on reboot.

    root@raspberrypi:~# blkid 
    /dev/mmcblk0p1: LABEL="boot" UUID="DDAB-3A15" TYPE="vfat" PARTUUID="b53687e8-01"
    /dev/mmcblk0p2: LABEL="rootfs" UUID="5fa1ec37-3719-4b25-be14-1f7d29135a13" TYPE="ext4" PARTUUID="b53687e8-02"
    /dev/mmcblk0: PTUUID="b53687e8" PTTYPE="dos"
    /dev/sdb: UUID="363f1b4a-b0f5-4c7b-bf91-66f3823032d6" TYPE="ext4"
    root@raspberrypi:~# mkdir /backups 
    root@raspberrypi:~# vim /etc/fstab
    root@raspberrypi:~# mount -a
    root@raspberrypi:~# mount
    UUID=363f1b4a-b0f5-4c7b-bf91-66f3823032d6 /backups auto nosuid,nodev,nofail 0 0


    You should see backups listed. (Note: I bricked my Raspberry Pi with a bad FSTAB entry, and mounted it on my Mac using Paragon and removed the bad fstab entry. )

    Update Crontab with daily backups.

    crontab -e

    Setup an editor for crontab.

    root@raspberrypi:~# crontab -e
    no crontab for root - using an empty one

    Select an editor. To change later, run 'select-editor'.
    1. /bin/ed
    2. /bin/nano <---- easiest
    3. /usr/bin/vim.basic
    4. /usr/bin/vim.tiny

    Choose 1-4 [2]: 3
    crontab: installing new crontab

    I added this line and copied it to /etc/cron.daily/

    0 1 * * * /usr/bin/rsync -r /data/ /backups/`date +%w-%A`

    crontab -l > pi-backup
    mv /root/pi-backup /etc/cron.daily
    run-parts /etc/cron.daily

    Note, I had to add #!/bin/bash at the top after I copied it, and remove the timing fields of the job, as sketched below.
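
    The resulting /etc/cron.daily/pi-backup script would look something like this (reconstructed from the crontab line above):

    #!/bin/bash
    # daily backup: rsync /data into a folder named for the day of the week
    /usr/bin/rsync -r /data/ /backups/`date +%w-%A`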

    Also, check that rsync is installed with which rsync, and install it with apt-get install rsync if needed.

    This enables backups on a daily basis rotating every 7 days.

    Check back on the following day to see your backups:

    root@raspberrypi:~# /usr/bin/rsync -r /data/ /backups/`date +%w-%A`
    root@raspberrypi:~# find /backups
    /backups
    /backups/lost+found
    /backups/0-Sunday
    /backups/0-Sunday/startup.sh

    Good luck, I hope this helps you with your Raspberry Pi.

  • Jupyter Notebook: Email Analysis to a Lotus Notes View

    I wanted to do an analysis of my emails since I joined IBM, and see the flow of messages in-and-out of my inbox.

    With my preferences for Jupyter Notebooks, I built a small notebook for analysis.

    Steps
    Open IBM Lotus Notes Rich Client

    Open the Notes Database with the View you want to analyze.

    Select the view you are interested in, for instance the All Documents view – like my inbox, *obfuscated* with a purpose.

    Click File > Export

    Enter a file name – email.csv

    Select the format "Comma Separated Value"

    Click Export

    Upload the Notebook to your Jupyter server

    The notebook describes the flow through my process. If you encounter ValueError: ('Unknown string format:', '12/10/2018 08:34 AM'), you can refer to https://stackoverflow.com/a/8562577/1873438, and clean the file first:

    iconv -c -f utf-8 -t ascii email.csv > email.csv.clean

    You can break the data into year-month-day columns with the sketch below, and peek at the results with df_emailA.head()
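
    A minimal sketch of that breakdown, assuming the exported CSV has a date column named 'Date' (your column name may differ):

        # split the export's date column into year/month/day for grouping
        import pandas as pd

        df_emailA = pd.read_csv('email.csv.clean')
        dates = pd.to_datetime(df_emailA['Date'])  # 'Date' is an assumed column name
        df_emailA['year'] = dates.dt.year
        df_emailA['month'] = dates.dt.month
        df_emailA['day'] = dates.dt.day
        df_emailA.head()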

    When you run the final cell, the code generates a Year-Month-Day count as a bar graph.

        # Title: Volume in Months when emails are sent.
        # Plots volume based on known year-mm-dd
        # to be included in the list, one must have data in those years.
        # Kind is a bar graph, so that the (Y - YYYY,MM can be read)
        y_m_df = df_emailA.groupby(['year','month','day']).year.count()
        y_m_df.plot(kind="bar")
    
        plt.title('Numbers submitted By YYYY-MM-DD')
        plt.xlabel('Year-Month-Day')  # the grouped dates run along the x axis
        plt.ylabel('Email Flow')      # the count of emails per group
        plt.autoscale(enable=True, axis='both', tight=False)
        plt.rcParams['figure.figsize'] = [20, 200]

    You’ll see the trend of emails I receive over the years.

    Trends of Email
  • CURL and LDAPS – How to Search and Debug

    I hit an issue where I needed to search LDAP from a machine I didn't have access to install new RPMs on. I found this cool article on CURL and LDAP Search. I had to make some minor modifications to get it to work with a secure connection (--insecure, the ldaps:// scheme, and port 636). I also added -v to diagnose some connection problems.

    curl "ldaps://127.0.0.1:636/DC=IBM.COM?cn,objectClass?sub?(objectClass=)" -u "cn=user1,ou=test_org3,o=dr,DC=IBM.COM" --insecure -v
    Enter host password for user 'cn=user1,ou=test_org3,o=dr,DC=IBM.COM':
    * Trying 127.0.0.1...
    * Connected to 127.0.0.1 (127.0.0.1) port 636 (#0)
    * LDAP local: LDAP Vendor = OpenLDAP ; LDAP Version = 20428
    * LDAP local: ldaps://127.0.0.1:636/DC=IBM.COM?cn,objectClass?sub?(objectClass=*)
    * LDAP local: trying to establish encrypted connection
    DN: dc=ibm.com
    objectClass: domain
    objectClass: top

    DN: o=dr,dc=ibm.com
    objectClass: organization
    objectClass: top

    DN: ou=test_org3,o=dr,dc=ibm.com
    objectClass: organizationalunit
    objectClass: top

    You can then find the userids you need quickly. I left them off the output intentionally.

    If you see connected, but no results, I suggest changing to the top level of the ldap, and using this string – ldaps://127.0.0.1:636/DC=IBM.COM?cn,objectClass?sub?(objectClass=*)

  • OAuth 2.0 Flow – A Metaphor

    The following video helps explain the OAuth 2.0 Flow and authorization. This video was originally shared as part of a prior project on Social Application Development.