Using the Splice Machine Native Spark DataSource

This topic provides general information about the Splice Machine Native Spark DataSource (also known as the Splice Machine Spark Adapter).

The other topics in this section provide additional information about the Native Spark DataSource.

About the Splice Machine Native Spark DataSource

The Splice Machine Native Spark DataSource allows you to directly connect Spark DataFrames and Splice Machine database tables. You can efficiently insert, upsert, select, update, and delete data in your Splice Machine tables directly from Spark in a transactionally consistent manner.

To use the adapter, you simply instantiate a SplicemachineContext object in your Spark code. You can run Spark applications that interface with your Splice Machine database interactively in the Spark shell or in Zeppelin notebooks, or you can launch a Spark app with our Spark Submit script.

You can craft applications that use Spark and our Native Spark DataSource in Scala, Python, and Java.
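Here's a minimal Scala sketch of that flow; the JDBC URL, schema, and table names are placeholders, and the adapter package must be on your application's classpath:

import org.apache.spark.sql.SparkSession
import com.splicemachine.spark.splicemachine._

// Placeholder URL; substitute your own host, port, and credentials.
val dbUrl = "jdbc:splice://myhost:1527/splicedb;user=myUserName;password=myPswd"

val spark = SparkSession.builder
  .appName("NativeSparkDataSourceExample")
  .getOrCreate()

// Instantiate the adapter's context for your database.
val splicemachineContext = new SplicemachineContext(dbUrl)

// Select from a Splice Machine table into a Spark DataFrame
// (MYSCHEMA.MYTABLE is a placeholder table name).
val df = splicemachineContext.df("SELECT * FROM MYSCHEMA.MYTABLE")

// Insert the DataFrame's rows into another table, transactionally.
splicemachineContext.insert(df, "MYSCHEMA.MYTABLE_COPY")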

Native Spark DataSource Access to Database Objects

By default, Native Spark DataSource queries execute in the Spark application, which is highly performant and allows access to almost all Splice Machine features. However, when your Native Spark DataSource application uses our Access Control List (ACL) feature, there is a restriction on permission checking.

The specific problem is that the Native Spark DataSource cannot check permissions at the view level or column level; instead, it checks permissions on the base table. This means that if your Native Spark DataSource application doesn't have access to the table underlying a view or column, it will not have access to that view or column; as a result, a query against the view or column fails and throws an exception.

The workaround for this problem is to tell the Native Spark DataSource to use internal access to the database, by setting the JDBC_INTERNAL_QUERIES option to true (see the next section); this enables view- and column-level permission checking, at a slight cost in performance. With internal access, the adapter runs queries in Splice Machine and temporarily persists data in HDFS while running the query.

The ACL feature is enabled by setting the splice.authentication.token.enabled property to true.

Native Spark DataSource JDBC Options

To specify optional properties for your JDBC connection, build a Map of SpliceJDBCOptions properties, and then create your SplicemachineContext with that map. For example:

val options = Map(
  JDBCOptions.JDBC_URL -> "jdbc:splice://<jdbcUrlString>",
  SpliceJDBCOptions.JDBC_INTERNAL_QUERIES -> "true"
)

val spliceContext = new SplicemachineContext(options)

A typical JDBC URL looks like this:

jdbc:splice://myhost:1527/splicedb;user=myUserName;password=myPswd

The SpliceJDBCOptions properties that you can currently specify are:

JDBC_INTERNAL_QUERIES (default value: false)

A string with value true or false, which indicates whether or not to run queries internally by default.

JDBC_TEMP_DIRECTORY (default value: /tmp)

The path to the temporary directory that you want to use when persisting temporary data from internally executed queries.

The user running a query must have write permission on this directory, or your connected application may freeze or fail.
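For example, to run queries internally and persist temporary data somewhere other than /tmp, you might build the options map like this; the URL and directory path are illustrative placeholders:

val options = Map(
  JDBCOptions.JDBC_URL -> "jdbc:splice://myhost:1527/splicedb;user=myUserName;password=myPswd",
  SpliceJDBCOptions.JDBC_INTERNAL_QUERIES -> "true",
  // Illustrative path; the user running the query must have write permission here.
  SpliceJDBCOptions.JDBC_TEMP_DIRECTORY -> "/user/myUserName/splice-tmp"
)

val spliceContext = new SplicemachineContext(options)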

Prerequisites and Permissions for Using the DataSource

To use the adapter, you must make sure that each user of the Splice Machine Native Spark DataSource has execute permission on the SYSCS_UTIL.SYSCS_HDFS_OPERATION system procedure.

SYSCS_UTIL.SYSCS_HDFS_OPERATION is a Splice Machine system procedure that is used internally to perform direct HDFS operations efficiently. This procedure is not documented because it is intended only for use by the Splice Machine code itself; however, the Native Spark DataSource uses it, so any user of the adapter must have permission to execute it.

Here’s an example of granting execute permission for two users:

splice> grant execute on procedure SYSCS_UTIL.SYSCS_HDFS_OPERATION to someuser;
0 rows inserted/updated/deleted
splice> grant execute on procedure SYSCS_UTIL.SYSCS_HDFS_OPERATION to anotheruser;
0 rows inserted/updated/deleted

If you’re using the Native Spark DataSource on a Kerberized cluster, you must set this property value in your hbase-site.xml settings file:

splice.authentication.token.enabled=true
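In hbase-site.xml, that setting takes the standard Hadoop configuration form:

<property>
  <name>splice.authentication.token.enabled</name>
  <value>true</value>
</property>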
