Connecting to an Amazon EMR cluster from Domino


Domino supports connecting to an Amazon EMR cluster through the addition of cluster-specific binaries and configuration files to your Domino environment.

At a high level, the process is as follows:

  1. Connect to the EMR Master Node and gather the required binaries and configuration files, then download them to your local machine.
  2. Upload the gathered files into a Domino project to allow access by the Domino environment builder.
  3. Create a new Domino environment that uses the uploaded files to enable connections to your cluster.
  4. Enable YARN integration for the Domino projects that you want to use with the EMR cluster.


Domino supports the following types of connections to an EMR cluster:




Gathering the required binaries and configuration files

You will find the necessary files for setting up your Domino environment on the EMR Master Node. To get started, connect to your Master Node via SSH, then follow the steps below.

  1. Create a directory named hadoop-binaries-configs at /tmp.
    mkdir /tmp/hadoop-binaries-configs
  2. Create a subdirectory named configs in /tmp/hadoop-binaries-configs.
    mkdir /tmp/hadoop-binaries-configs/configs
  3. Copy the contents of the hive, spark, and hadoop directories from /etc to /tmp/hadoop-binaries-configs/configs.
    cp -R /etc/hadoop /tmp/hadoop-binaries-configs/configs/
    cp -R /etc/hive /tmp/hadoop-binaries-configs/configs/
    cp -R /etc/spark /tmp/hadoop-binaries-configs/configs/
  4. Add the following property inside the <configuration> element of the newly copied file at /tmp/hadoop-binaries-configs/configs/hadoop/conf/hdfs-site.xml. This is necessary because the Domino executor will not be able to connect to HDFS on the same private IPs that the Master Node uses. Note that the property must sit inside the <configuration> element; appending it after the closing </configuration> tag would produce invalid XML. One way to insert it in place is with sed:
    cd /tmp/hadoop-binaries-configs/configs/hadoop/conf/
    sed -i 's|</configuration>|  <property>\n    <name>dfs.client.use.datanode.hostname</name>\n    <value>true</value>\n  </property>\n</configuration>|' hdfs-site.xml
  5. (Optional) If your EMR cluster uses Kerberos authentication, create a subdirectory named kerberos at /tmp/hadoop-binaries-configs.
    mkdir /tmp/hadoop-binaries-configs/kerberos
    Then copy the Kerberos configuration file krb5.conf from /etc to /tmp/hadoop-binaries-configs/kerberos.
    cp /etc/krb5.conf /tmp/hadoop-binaries-configs/kerberos/
  6. Once you've copied and edited all of the above files into /tmp/hadoop-binaries-configs, create a compressed archive of the directory for transfer to your local machine.
    cd /tmp
    tar -zcf hadoop-binaries-configs.tar.gz hadoop-binaries-configs
    Then use SCP from your local machine to download the archive. Refer back to the AWS documentation on connecting to a Master Node via SSH for credentialing and address information.
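    As a concrete sketch, the SCP download might look like the following. The key file path and hostname are placeholders; use the same key pair and public DNS name you used for the SSH connection, and note that hadoop is the default SSH user on EMR Master Nodes.

    ```
    # Run from your local machine, not the Master Node.
    # Placeholders: substitute your own key file and Master Node public DNS.
    scp -i ~/mykeypair.pem \
        hadoop@<your-master-public-dns>:/tmp/hadoop-binaries-configs.tar.gz .
    ```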




Uploading the binaries and configuration files to Domino

Use the following procedure to upload the files you retrieved in the previous step to a public Domino project. This will make the files available to the Domino environment builder.

  1. Log in to Domino, then create a new public project.


  2. Open the Files page for the new project, then click to browse for files and select the archive of binaries and configuration files you downloaded from the EMR Master Node. Then click Upload.


  3. After your upload has completed, click the gear menu next to the uploaded file, right-click Download, and click Copy Link Address. Save this URL in your notes, as you will need it in a later step.


    Once you have recorded the download URL of the binaries and configuration files archive, you're ready to build a Domino environment for connecting to EMR.
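    Before moving on, you can optionally sanity-check the recorded URL from a terminal. The URL below is a placeholder; substitute the link address you copied. A successful download confirms that the environment builder will also be able to fetch the file.

    ```
    # Placeholder URL; substitute the link address you copied above.
    wget --no-check-certificate "<paste-your-domino-file-download-url-here>" -O /tmp/check.tar.gz
    tar -tzf /tmp/check.tar.gz | head
    ```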




Creating a Domino environment for connecting to EMR


  1. First, you need to visit the Spark downloads page to copy a download URL for the Spark binaries. Use the dropdown menus to select the correct version of the binaries for your EMR cluster, then right click the download link and click Copy Link Address. Record the copied URL for use in a later step.
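    As an illustration only, for a cluster running Spark 2.1.0 on Hadoop 2.6 the archived release is available from the Apache archive; match the version numbers in the URL to your own cluster.

    ```
    # Example only -- choose the release that matches your EMR cluster's
    # Spark and Hadoop versions.
    wget https://archive.apache.org/dist/spark/spark-2.1.0/spark-2.1.0-bin-hadoop2.6.tgz
    ```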


  2. Click Environments from the Domino main menu, then click Create Environment.


  3. Give the environment an informative name, then choose a base environment that includes the version of Python that is installed on the nodes of your EMR cluster. Most Linux distributions ship with Python 2.7 by default, so you will see the Domino Analytics Distribution for Python 2.7 used as the base image in the following examples. Click Create when finished.


  4. After creating the environment, click Edit Definition. Copy the below example into your Dockerfile Instructions, then be sure to edit it wherever necessary with values specific to your deployment and cluster.


    In this Dockerfile, wherever you see a placeholder enclosed in angle brackets like <paste-your-domino-download-url-here>, be sure to replace it with the corresponding value you recorded in previous steps.

    You may also need to edit commands that follow to match downloaded filenames.

    USER root

    ### Give ubuntu user ability to sudo as any user including root.
    RUN echo "ubuntu ALL=(ALL:ALL) NOPASSWD: ALL" >> /etc/sudoers

    ### Set up directories.
    RUN mkdir /tmp/domino-hadoop-downloads

    ### Download the binaries and configs gzip from the Domino project.
    ### This downloaded gzip archive should contain a configs directory with
    ### hadoop, hive, and spark subdirectories.
    ### Make sure the URL is edited to reflect where you uploaded your configs.
    ### You should have this saved from previous steps.
    RUN wget --no-check-certificate <paste-your-domino-file-download-url-here> -O /tmp/domino-hadoop-downloads/hadoop-binaries-configs.tar.gz && \
    tar xzf /tmp/domino-hadoop-downloads/hadoop-binaries-configs.tar.gz -C /tmp/domino-hadoop-downloads/

    ### Install Spark binaries from Spark repository.
    ### Update the download URL and subsequent commands based on your Spark version.
    ### The below example assumes the download URL points to a file for Spark 2.1.0 and Hadoop 2.6.
    RUN rm -rf /opt/spark-*
    RUN \
    wget <paste-your-spark-download-url-here> && \
    tar xvzf spark-2.1.0-bin-hadoop2.6.tgz && \
    mv spark-2.1.0-bin-hadoop2.6 /opt && \
    rm spark-2.1.0-bin-hadoop2.6.tgz && \
    chown -R ubuntu:ubuntu /opt/spark-2.1.0-bin-hadoop2.6

    ### Copy hadoop, hive, and spark configurations
    RUN cp -r /tmp/domino-hadoop-downloads/hadoop-binaries-configs/configs/hadoop /etc/hadoop && \
    cp -r /tmp/domino-hadoop-downloads/hadoop-binaries-configs/configs/hive /etc/hive && \
    cp -r /tmp/domino-hadoop-downloads/hadoop-binaries-configs/configs/spark /etc/spark

    ### Update SPARK and HADOOP environment variables.
    ### The py4j zip must be on PYTHONPATH explicitly; make sure its file name
    ### matches the version that ships with your Spark install.
    RUN \
    echo 'export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.6' >> /home/ubuntu/.domino-defaults && \
    echo 'export PYTHONPATH=${PYTHONPATH:-}:${SPARK_HOME:-}/python/' >> /home/ubuntu/.domino-defaults && \
    echo 'export PYTHONPATH=${PYTHONPATH:-}:${SPARK_HOME:-}/python/lib/py4j-0.10.4-src.zip' >> /home/ubuntu/.domino-defaults && \
    echo 'export PATH=${PATH:-}:${SPARK_HOME:-}/bin' >> /home/ubuntu/.domino-defaults
  5. Click Build when finished editing the Dockerfile instructions. If the build completes successfully, you are ready to try using the environment.
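    Once the build succeeds, one way to sanity-check the environment is to open a workspace or Run that uses it and confirm the Spark binaries and configuration are visible. These commands assume the SPARK_HOME and PATH settings written to .domino-defaults above.

    ```
    # Run inside a Domino workspace that uses the new environment.
    spark-submit --version                          # should report the Spark version you installed
    spark-shell --master yarn --deploy-mode client  # should start a shell with YARN as the master
    ```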




Configuring a Domino project for use with the EMR cluster


  1. Open the Domino project you want to use with your EMR cluster, then click Settings from the project menu.

  2. On the Integrations tab, click to select YARN integration from the Apache Spark panel.

  3. Use root as the Hadoop user name.

  4. If your EMR cluster is in the same AWS VPC as your Domino deployment, you do not need to list the hosts in the Custom /etc/hosts entries field. If your Domino deployment is in a separate network from the EMR cluster, list the hostnames of the nodes in your cluster.



    If your work with the cluster generates many warnings about missing Java packages, you can suppress these by adding the following to Spark Configuration Options.

    Key: spark.hadoop.yarn.timeline-service.enabled
    Value: false
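    For reference, this key/value pair is equivalent to the following line in a spark-defaults.conf file. It is shown here only to clarify the format; in Domino you enter the key and value through the Spark Configuration Options fields rather than editing a file.

    ```
    spark.hadoop.yarn.timeline-service.enabled false
    ```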

  5. After inputting your YARN configuration, click Save.

  6. On the Hardware & Environment tab, change the project default environment to the one you built earlier with the binaries and configuration files.


You are now ready to start Runs from this project that interact with your EMR cluster.
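As a first end-to-end test, you can submit one of the example jobs bundled with Spark to the cluster from a Run or workspace in this project. The jar path below assumes the Spark 2.1.0 install from the environment above; adjust the file name to match your Spark version.

```
# Submits the SparkPi example to YARN; adjust the jar name to your Spark version.
spark-submit --master yarn --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    ${SPARK_HOME}/examples/jars/spark-examples_2.11-2.1.0.jar 10
```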
