Splice Machine Best Practices for Importing Data

This section contains best practices and troubleshooting information for importing data into our On-Premise Database product.


Using Bulk Import on a KMS-Enabled Cluster

If you are a Splice Machine On-Premise Database customer and want to use bulk import on a cluster with Cloudera Key Management Service (KMS) enabled, you must complete these extra configuration steps:

  1. Make sure that the bulkImportDirectory is in the same encryption zone as HBase.
  2. Add these properties to hbase-site.xml to load secure Apache BulkLoad and to put its staging directory in the same encryption zone as HBase:

    Replace <YourStagingDirectory> with the path to your staging directory, and make sure that directory is in the same encryption zone as HBase; for example:


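The property snippet referenced above is missing from this copy of the page. As a hedged sketch, stock HBase enables secure bulk load through the `SecureBulkLoadEndpoint` coprocessor and the `hbase.bulkload.staging.dir` property; verify the exact names against your CDH release, since they may differ by version. The `hbase-site.xml` additions might look like:

```xml
<!-- Sketch only: verify these property names against your HBase/CDH release. -->
<!-- Staging directory for secure bulk load. The example path /hbase/staging
     stands in for <YourStagingDirectory> and must be in the same encryption
     zone as HBase. -->
<property>
  <name>hbase.bulkload.staging.dir</name>
  <value>/hbase/staging</value>
</property>
<!-- Coprocessor that performs the secure (permission-checked) bulk load. -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint</value>
</property>
```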
For more information about KMS, see https://www.cloudera.com/documentation/enterprise/latest/topics/cdh_sg_kms.html.

Bulk Import of Very Large Datasets with Spark

When using Splice Machine with Spark on Cloudera, bulk imports of very large datasets can fail due to excessive direct (off-heap) memory usage. Use the following settings to resolve this issue:

Update Shuffle-to-Mem Setting

Modify the following setting in the Cloudera Manager’s Java Configuration Options for HBase Master:


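The option itself has been lost from this copy of the page. As a sketch, the Spark property `spark.maxRemoteBlockSizeFetchToMem` caps how large a remote shuffle block may be before it is streamed to disk instead of being buffered in direct memory, which is the usual remedy for direct-memory failures during large shuffles. Assuming Splice Machine's convention of passing Spark properties to the HBase Master with a `splice.` prefix, the Java option might look like:

```
-Dsplice.spark.maxRemoteBlockSizeFetchToMem=100m
```

The `100m` threshold is illustrative, not a recommended value: shuffle blocks larger than it are fetched to disk rather than held in direct memory, so tune it to your cluster's direct-memory limits.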
Update the YARN User Classpath

Modify the following settings in the Cloudera Manager’s YARN (MR2 Included) Service Environment Advanced Configuration Snippet (Safety Valve):
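The environment snippet itself is missing from this copy of the page. As a sketch, Hadoop's YARN launch scripts honor the `YARN_USER_CLASSPATH` and `YARN_USER_CLASSPATH_FIRST` environment variables, so the safety-valve entries might look like the following (the parcel path is an assumed example and must match your actual Splice Machine installation):

```
# Assumed install path; substitute your Splice Machine jar directory.
YARN_USER_CLASSPATH=/opt/cloudera/parcels/SPLICEMACHINE/lib/*
# Place the user classpath ahead of the default Hadoop classpath.
YARN_USER_CLASSPATH_FIRST=true
```

After saving the snippet in Cloudera Manager, redeploy the client configuration and restart the YARN service so the new classpath takes effect.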