Running local Spark on a Domino executor

Using a local Spark cluster


Typically, users interested in Hadoop and Spark have data volumes and workloads that demand the power of cluster computing. However, some users value Spark for its expressive API even when their data volumes are small or medium. Because Domino lets you run code on powerful VM infrastructure, with up to 32 cores on AWS, you can use Domino to create a local Spark cluster and easily parallelize your tasks across all of those cores.
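
As a minimal sketch of what this looks like in practice, the PySpark snippet below starts a local Spark session with `local[*]`, which tells Spark to use one worker thread per available core, and parallelizes a simple computation across them. The application name is an arbitrary placeholder.

```python
# Minimal PySpark sketch: start Spark in local mode using every core
# on the executor, then parallelize a simple CPU-bound computation.
from pyspark.sql import SparkSession

# "local[*]" runs Spark in local mode with one worker thread per
# available core (e.g., all 32 cores on a large Domino executor).
spark = (
    SparkSession.builder
    .master("local[*]")
    .appName("local-spark-example")  # placeholder name
    .getOrCreate()
)
sc = spark.sparkContext

# The work below is distributed across all local cores.
total = sc.parallelize(range(1_000_000)).map(lambda x: x * x).sum()
print(total)

spark.stop()
```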


Configuring Spark in Local mode


To configure Spark integration in Local mode, open your project and go to "Project settings." Under "Integrations," choose the "Local mode" option for Apache Spark, then click "Save."
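
Once Local mode is enabled, you can verify the setting from inside a run. A quick check, assuming the integration supplies the local master configuration to Spark sessions in your project:

```python
# Confirm Spark is running in local mode and report how many cores
# it can use. defaultParallelism reflects the local[*] setting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.sparkContext.master)              # e.g., "local[*]"
print(spark.sparkContext.defaultParallelism)  # number of local cores
spark.stop()
```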
