[Progress News] [Progress OpenEdge ABL] Configuring Arcplan with Apache Spark SQL

Nishanth Kadiyala
Check out the latest blog from our tutorial rumble and learn how to configure Arcplan with Apache Spark SQL in 12 easy steps!

We recently held a tutorial rumble to discover what problems you, our customers, were having and how we could make your lives easier. Here is the latest solution, courtesy of Rashmi Gupta!

What is Arcplan?


Arcplan is business intelligence software for budgeting, planning, analytics, and collaborative BI. It supports more than 20 data sources and also provides a generic ODBC interface for connecting to any backend that has an ODBC driver.

This tutorial shows the 32-bit Arcplan Application Designer 8.6 working with the 32-bit Progress DataDirect for ODBC for Apache Spark SQL driver. The same steps apply to other ODBC drivers available from Progress.

Step-by-step

  1. Download and install the Arcplan application
  2. Download and install a 15-day evaluation copy of the Progress DataDirect for ODBC for Apache Spark SQL driver
  3. Create an ODBC data source to connect to Apache Spark SQL. Refer to the Progress DataDirect for ODBC for Apache Spark SQL Wire Protocol Driver User's Guide and Reference, under “Getting Started” > “Quick Start Connect” > “Configuring and Connecting on Windows” > “Configuring a Data Source,” for details on setting up a data source. The screenshot below shows a System data source configured in the ODBC Administrator. (A quick way to verify the data source outside Arcplan is sketched after the step list.)
    create-an-odbc-data-source-to-connect-to-apache-spark-sql.png


  4. Open Arcplan and select Windows > Databases from the menu bar
    open-arcplan-and-select-windows-databases-from-the-menu-bar.png


  5. In the Databases window, click on the “New” button to create a new connection
    in-the-databases-window-click-on-the-new-button-to-create-a-new-connection.png


  6. In the “Create new connection” dialog box, select the “external data source” option in the Data Source section
    create-new-connection.png


  7. Select “ODBC” from the drop-down list in the “Interface for” section of the External Data Source
    select-odbc-from-the-drop-down-list.png


  8. Depending on the type of data source created in step three, select an appropriate option in the External Data Source Interface for ODBC section.

    If you set up a User DSN or System DSN, select the Data Sources option. If you set up a File DSN, select the File Data Source option. If you did not set up a data source in step three, you can set one up by clicking the "ODBC …" button, which invokes the ODBC Administrator.
    click-on-the-odbc-button-which-invokes-the-odbc-administrator.png


    In this example, a System DSN named “SparkSQL” was created in step three, so with the Data Sources option selected, choose the “SparkSQL” ODBC DSN from the drop-down list.

    select-the-sparksql-odbc-dsn.png


  9. Save the Arcplan Connection file
    save-the-arcplan-connection-file.png


  10. A login dialog is displayed; enter the database login information for the Apache Spark SQL data source.
    enter-the-database-login-information.png


    NOTE: If you have AuthenticationMethod=-1 (No Authentication) or 1 (KERBEROS) set in the DataDirect Apache Spark SQL data source, you will not be prompted for login information and the driver will connect directly to the Apache Spark SQL database using the specified authentication mechanism.

  11. If the connection is established successfully, it appears in the Connection drop-down list and the Databases window is populated with the tables and views from Apache Spark SQL.
    established-successfully.png


  12. You can then access the data in your Arcplan documents and reports.

access-the-database.png


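If the connection cannot be established from Arcplan, it helps to verify the "SparkSQL" data source outside the application first. The sketch below does this with Python's pyodbc module, which is an assumption about your environment rather than part of the tutorial; the DSN name matches the System DSN from step three, while the user name, password, and table name are placeholders.

    # Minimal sanity check of the "SparkSQL" System DSN outside Arcplan.
    # Assumes pyodbc is installed (pip install pyodbc) and that a 32-bit
    # Python is used with the 32-bit driver, matching Arcplan's bitness.
    import pyodbc

    # UID/PWD are placeholders; they can be omitted if the data source is
    # set to No Authentication or Kerberos, as noted in step ten.
    conn = pyodbc.connect("DSN=SparkSQL;UID=your_user;PWD=your_password")
    cursor = conn.cursor()

    # Run a trivial query against a placeholder table to confirm connectivity.
    cursor.execute("SELECT COUNT(*) FROM your_table")
    print(cursor.fetchone()[0])

    cursor.close()
    conn.close()

If this script connects and returns a count but Arcplan still cannot connect, the problem is more likely in the Arcplan connection file than in the driver or the data source.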


Tips


If your database has a large number of objects, it can be difficult to find the tables and views you need from within the application. Setting the Use Current Schema for Catalog Functions option improves performance by returning only database objects owned by the current user when catalog functions are executed. Refer again to the User's Guide and Reference, under “Reference” > “Connection Option Descriptions for Apache Spark SQL” > “Use Current Schema for Catalog Functions,” for the connection option details and valid values.
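The effect of this option is easiest to see through the ODBC catalog functions themselves. The sketch below again uses Python's pyodbc as a stand-in client (an assumption, not part of the tutorial); cursor.tables() wraps the ODBC SQLTables catalog function, so with Use Current Schema for Catalog Functions enabled in the "SparkSQL" data source the listing should be limited to objects owned by the connecting user, which is what keeps the Databases window in Arcplan small and fast.

    # List what the driver reports through the ODBC SQLTables catalog call.
    # With "Use Current Schema for Catalog Functions" enabled in the DSN,
    # only objects owned by the current user should come back.
    import pyodbc

    conn = pyodbc.connect("DSN=SparkSQL;UID=your_user;PWD=your_password")
    cursor = conn.cursor()

    # Each row describes one table or view visible through the data source.
    for row in cursor.tables():
        print(row.table_schem, row.table_name, row.table_type)

    cursor.close()
    conn.close()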

To try this out yourself, check out our free trial, and don't forget to refer to the Quick Start Guide for other connection option settings to improve performance.


Continue reading...
 