2018 SAP TechEd Live Interview

I had a nice Live Studio interview with Britt Womelsdorf at the 2018 SAP TechEd event, chatting about the SAP Cloud Platform SDKs. Check it out here:

(The video has since been taken down, but here are a few photos).

Machine Learning Definitions

  • Artificial Intelligence (when machines mimic intelligent human behavior)
  • Machine Learning (programming machines to learn how to learn)
  • Deep Learning (networks and algorithms inspired by the functions of the brain)
  • Neural Networks (interconnected networks, also based on the human brain)
  • Algorithms (a set of rules that defines computational operations)
  • read more

    GSVU Student Presentation

    Mentoring, coaching and sharing are things I am always willing to go the extra mile for. When one of my friends, Sihma, from Grand Valley State University (a fellow SAP mentor and part of the SAP University Alliance Program) asked me to do a personal presentation for their Computer Science students, I took an untraditional approach and shared the presentation below.

    Student-@-GSVU (download)

    Framework vs Library – what’s the diff?

    How are frameworks and libraries different from each other?

  • Inversion of Control is the key characteristic that makes a framework different from a library. When we call a method from a library, we are in control; with a framework the control is inverted, and the framework calls our code (e.g. a GUI framework calls our code through event handlers). A minimal sketch after this list illustrates the difference.
  • A library is essentially a set of functions (well-defined operations, often organized into classes) that we can call. Each does some work and then returns control to the client.
  • A framework embodies some abstract design with more behavior built in. In order to use it, we need to insert our behavior into various places in the framework, either by subclassing or by plugging in our code. The framework code then calls our code at these points.
  • A framework can also be considered a skeleton where the application fills in the meat of the operation. The skeleton still has code to link up the parts.
  • read more
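    To make the inversion of control concrete, here is a minimal shell sketch (the function names and events are made up purely for illustration). In the library case our script decides when to call the function; in the framework case the framework owns the loop and calls our handler back.

        #!/bin/bash
        # Library style: our code is in control and calls the library function.
        to_upper() {                      # a "library" function: one well-defined operation
          tr '[:lower:]' '[:upper:]'
        }
        echo "we decide when to call it" | to_upper

        # Framework style: the framework owns the control flow and calls our code back.
        run_framework() {
          local handler="$1"              # our behavior is plugged into the skeleton
          for event in startup request shutdown; do
            "$handler" "$event"           # inversion of control: the framework calls us
          done
        }
        my_handler() {                    # our code, invoked at points the framework chooses
          echo "handling event: $1"
        }
        run_framework my_handler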

    DevOps/CI/CD for SAP HANA

    DevOps, and particularly continuous integration (CI) and continuous delivery (CD), are terms which have been widely adopted over the past few years in the software development industry. Their use is currently creeping outside the scope of development and into various departments wanting to decrease long delivery cycles and increase product iteration. For those unfamiliar with the terms in the software space, or the premise behind the idea: CI/CD is essentially the automation of the delivery of your software development projects into production, and the broader goal is to bring Development and Operations closer together. In this blog we will mainly focus on CD as it pertains to HANA XS development.

    Inside SAP IT, like a lot of other IT departments, we are trying to increase and simplify the deployment of our HANA XS projects and move to a more agile approach in delivering solutions for our internal customers.

    Firstly, some background: as many XS developers out there know, the HANA repository is not necessarily the easiest "file system" to work with. The fact that the data is stored in the DB, in a proprietary format, and that each file needs to be activated, makes it tough to automate basic operations like moving or copying. To work around this, we decided that all of our code deployments would be done using HANA's preferred native "packaged" file type, known as delivery units (DUs). These contain the entire active code base of a project (or a subset of a project), changed as well as unchanged files.

    In the past we manually deployed code to each of our instances individually; this required manual intervention and developer/operations availability, which we hoped we could streamline. The DU import process we decided to use is a feature introduced in SPS09 through a command-line tool called HDBALM. This allows any PC which has the HANA client installed to import packages and delivery units to a HANA server. While there are options to commit and activate individual files from a traditional file-system folder structure (using the FILE API), we felt the benefits of a DU were better suited to our use case (for example, hdb* objects which may need to be created in a specific order).
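    For orientation, here is a minimal sketch of the two hdbalm calls that the full script further below is built around. The hostname, port, user, certificate path, delivery unit name and vendor are placeholder values, not our real ones.

        # hdbalm reads the password for the XS user from this environment variable
        export HDBALM_PASSWD="secret"

        # Query the currently installed delivery unit (prints nothing if it is not installed yet)
        hdbalm -s -y -h myhana.example.com -p 4300 -u DEPLOY_USER -c /path/to/server.crt \
          du get MYPROJECT_DU example.com

        # Import a delivery unit archive that was exported from the development system
        hdbalm -s -y -h myhana.example.com -p 4300 -u DEPLOY_USER -c /path/to/server.crt \
          import MYPROJECT_DU.tgz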

    Now that we had the ability to deploy our code to our various servers using HDBALM, we needed something to actually run the job. We used our internal instance of Atlassian Bamboo. We use this server for our HCP HTML5 and Java apps, which makes it a logical choice to keep our projects grouped together. Our build agents are Red Hat distros and have the HANA client installed. We also copy over the SSL certificate, since our HANA servers use HTTPS and the certificate is needed by hdbalm.

    In this example our landscape is relatively simple, with dedicated Dev, Test and Production HANA HCP instances.

    (Screenshot: Dev, Test and Production HANA HCP landscape)

    We store our "ready for deployment" delivery unit on our internal GitHub instance; we do this so that the file is version controlled and also visible and accessible to our dev team. The premise is that the dev team pushes a version of the delivery unit after their sprint, and it is then ready for deployment to Test. This could easily be a file system as well; however, we like to use the push/tag/release of our GitHub repo to trigger the build deployment.

    FYI: Bamboo is a great tool with a nearly zero cost barrier ($10). If you are considering a build server with a quick and easy installation (*nix) and setup, I would highly recommend it.

    Now that we have gone through the logical details, here are the technical details covering the topic:

    As mentioned previously, our build process is triggered by a new delivery unit being committed to our GitHub repo. Our Bamboo build agent picks the build up, saves it on the build server and deploys it to our Test instance. An email is then sent to our dev and test teams with the results. Before the import we check the version identifier of the delivery unit already on the server, and after the import we check it again, comparing the two (along with the results of the import statement) to decide whether the import was successful.

    (Keep in mind the commands below include $bamboo_ variables, but they would work just fine with those replaced by actual values.)

    You can find the code in a GitHub Gist (where it will be maintained).

    # Deployment script run by the Bamboo build agent (uses $bamboo_ variables)
    export HDBALM_PASSWD="$bamboo_DestXSPassword"
    export https_proxy=http://proxy.wdf.sap.corp:8080

    echo " "
    echo " "
    echo " "

    # Check which version of the delivery unit is currently installed on the target system
    preversion="$(/home/i847772/sap/hdbclient/hdbalm -s -y -h $bamboo_DestHostname -p $bamboo_DestHostPort -u $bamboo_DestXSUsername -c $bamboo_DestSSLCert du get $bamboo_DeliveryUnitName $bamboo_DeliveryUnitVendor)"
    if [[ "$preversion" == "" ]]; then
      echo "Initial install of the DU"
      preinstallversion="0.0.0"
    else
      preinstallversion=$(echo "$preversion" | grep -Po '(?<=version:)[^r]+' | xargs)
    fi
    echo "Pre Install version: $preinstallversion"

    # Import the delivery unit and capture the location of the import log
    IMPORT_LOG="$(/home/i847772/sap/hdbclient/hdbalm -s -y -j -h $bamboo_DestHostname -p $bamboo_DestHostPort -u $bamboo_DestXSUsername -c $bamboo_DestSSLCert import "$bamboo_DeliveryUnitFilename")"

    # Check the installed version again after the import
    postversion="$(/home/i847772/sap/hdbclient/hdbalm -s -y -h $bamboo_DestHostname -p $bamboo_DestHostPort -u $bamboo_DestXSUsername -c $bamboo_DestSSLCert du get $bamboo_DeliveryUnitName $bamboo_DeliveryUnitVendor)"
    if [[ "$postversion" == "" ]]; then
      echo "Unable to query installed delivery unit version"
      postinstallversion="-1"
    else
      postinstallversion=$(echo "$postversion" | grep -Po '(?<=version:)[^r]+' | xargs)
    fi
    echo "Post Install version: $postinstallversion"

    export HDBALM_PASSWD=""
    LOG="${IMPORT_LOG##* }"

    # Compare the pre- and post-import versions and report the result
    if grep -q "Successfully imported delivery units" "$LOG" && [[ "$postinstallversion" == "$preinstallversion" ]]; then
      echo " "
      echo " "
      echo "******************************************************* Import of the DU completed, but the version has not changed *******************************************************"
      echo " "
      echo "It's possible you have not incremented the version numbers"
      echo " "
      echo "******************************************************* Log File $LOG *******************************************************"
      echo " "
      echo " "
      if [ "$LOG" != "" ]; then
        cat "$LOG"
      else
        echo "No log file, ensure the job is running on a machine with HDBALM"
      fi
      echo " "
      echo " "
      echo "******************************************************* //Log File *****************************************************"
      echo " "
      echo " "
      exit 0
    elif [ "$postinstallversion" == "-1" ]; then
      echo " "
      echo " "
      echo "******************************************************* Import of the DU has failed *******************************************************"
      echo " "
      echo "******************************************************* Log File *******************************************************"
      echo " "
      echo " "
      if [ "$LOG" != "" ]; then
        cat "$LOG"
      else
        echo "No log file, ensure the job is running on a machine with HDBALM"
      fi
      echo " "
      echo " "
      echo "******************************************************* //Log File *****************************************************"
      echo " "
      echo " "
      exit 1
    else
      echo " "
      echo " "
      echo "******************************************************* Import of the DU has completed successfully *******************************************************"
      echo " "
      echo "Installation successful"
      echo " "
      echo " "
      exit 0
    fi
    exit 0

    read more

    My initial thoughts on SAP HANA SPS11

    Having been a long-time follower of and developer on the last 5 releases of HANA, I don’t believe I have ever been more excited to get my hands on a HANA release than SPS11. The inclusion of an entirely new XS runtime (XSA) and the ability to use Node.js and Java were just some of the features I was looking forward to. After figuring out that the SPS11 download was available in the support/download portal on the 23rd of November, I set out to get my machine up and running as quickly as I could …

    After getting my SLES 11 SP4 machine installed and configured (I used the SAP SLES 11 SP4 image from SUSE), I downloaded the HANA SPS11 files, but after multiple failed attempts (even using the SAP Download Tool) the archives did not uncompress correctly and subsequently resulted in installation errors. After re-downloading the files no less than 5 times, I had a clean uncompress, and the install was simple, straightforward and worked like it should. I initially installed just a basic HANA instance, but struggled to subsequently install XSA. I ended up logging out of the root user account and logging in as my *adm account, which got the install to complete successfully. While it’s been mentioned before, I will still reference the statement below from the documentation, since it will affect many admins wanting to install SPS11 on SLES SP3 or lower: read more

    Migrating a BTP MaxDB Database to SAP HANA

    Overview: This blog describes migrating a simple MaxDB database from HCP to a dedicated HCP HANA instance. It also has some details about XSImport – a free, open-source native XS app for importing CSV files into HANA.

    (Diagram: HCP MaxDB to HANA migration)

    In the Enterprise Mobility team at SAP we are currently working on an exciting project, migrating one of our existing applications to a HANA backend while going through a major code rewrite. During this process I had the task of migrating an HCP-based MaxDB over to a dedicated HCP HANA DB. The dataset I was working with was not particularly large (~1.3 GB), with the majority of the data residing in just a few tables. (For reference, this is a standalone Java HCP app.) Since remote tunnel access to production HCP databases is not available, the only way to get the raw data was through an HCP ticket; the support team was helpful and responsive, and after a couple of hours I had my exported dataset. read more