DB V2 executes SELECT, INSERT, DELETE and UPDATE operations and also makes PROCEDURE calls, returning the results as a JSON structure.

Technology

Authentication via Kerberos

You can authenticate via Kerberos in database components. To do that, all you have to do is:

  • inform a KERBEROS-type account

  • configure a Kerberos principal

  • configure a keytab (which must be the base64 encoding of the generated keytab file)
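As a hedged illustration, the base64 string for the keytab field can be produced like this in Python (the bytes below are a stand-in for a real generated keytab file):

```python
import base64

# Stand-in bytes for a generated keytab file (illustrative only).
keytab_bytes = b"fake-keytab-bytes"

# The keytab field expects the base64 encoding of the keytab file's contents.
keytab_b64 = base64.b64encode(keytab_bytes).decode("ascii")
print(keytab_b64)  # -> ZmFrZS1rZXl0YWItYnl0ZXM=
```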

Take a look at the supported usage scenarios:

Scenario 1: QUERY operation SELECT command
Let's say you have a configured database and a data table, and you want to invoke DB V2 with a query using the SELECT command. With it, you'll have access to the operation result in the pipeline through JSON.

Example


1. Create a pipeline and add DB V2
2. Open the configurations of the component
3. Select the QUERY operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: SELECT * FROM CLIENTES)
7. Click on CONFIRM to save the configurations of the component
8. Connect the trigger to DB V2
9. Execute a test in the pipeline (you can use the CTRL + ENTER command)
10. The result of the query will be presented:

{
  "data": [
    {
      "uf": "SP",
      "codigo": 1,
      "cidade": "Osasco",
      "logradouro": "Rua Laranjeiras",
      "name": "João Bosco",
      "due_date": "2019-03-27",
      "email": "joao.bosco@digibee.com.br",
      "cep": "XXXXX-XXX"
    },
    {
      "uf": "SP",
      "codigo": 2,
      "cidade": "São Paulo",
      "logradouro": "Rua Jõao Bosco",
      "name": "Roberto Carlos",
      "due_date": "2019-03-27",
      "email": "roberto.carlos@digibee.com.br",
      "cep": "XXXXX-XXX"
    }
  ],
  "updateCount": 0,
  "rowCount": 2
}

     
The result of the query returns a JSON with three properties at its root: data, updateCount and rowCount.
    

  • data: object array representing the rows returned from the database according to the defined query. Each element of this array has properties that represent the columns defined in the SELECT operation. If the statement were "SELECT name FROM CLIENTES", there'd be only one property in each of the objects in the data array.

  • updateCount: indicates how many rows of the table were affected by the executed query. In this example the value is zero - since SELECT was the command, no table was affected.

  • rowCount: indicates how many rows were returned by the query.
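As a quick sketch, consuming that result JSON downstream might look like this (the payload is an abridged version of the example above):

```python
# Abridged DB V2 SELECT result, following the structure described above.
result = {
    "data": [
        {"codigo": 1, "name": "João Bosco"},
        {"codigo": 2, "name": "Roberto Carlos"},
    ],
    "updateCount": 0,
    "rowCount": 2,
}

# rowCount matches the number of objects in data; each object is one row.
assert result["rowCount"] == len(result["data"])
names = [row["name"] for row in result["data"]]
print(names)  # -> ['João Bosco', 'Roberto Carlos']
```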

      

Scenario 2: QUERY operation UPDATE command
Let's say you have a configured database and a data table, and you want to invoke DB V2 with a query using the UPDATE command. With it, you'll have access to the result of this operation through JSON.
     

   
Example

1. Create a pipeline and add a DB V2
2. Open the configurations of the component
3. Select the QUERY operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: UPDATE CLIENTES SET uf='SP' WHERE uf is null)
7. Click on CONFIRM to save the configurations of the component
8. Connect the trigger to DB V2
9. Execute a test in the pipeline (you can use the CTRL + ENTER command)
10. The result of the query will be presented:

{
  "data": null,
  "updateCount": 5,
  "rowCount": 0
}

   

  • updateCount: in this example, it confirms that 5 rows of the table were updated.

     


Scenario 3: QUERY operation DELETE command
Let's say you have a configured database and a data table, and you want to invoke DB V2 with a query using the DELETE command. With it, you'll have access to the result of this operation in the pipeline through JSON.

   
Example

1. Create a pipeline and add a DB V2
2. Open the configurations of the component
3. Select the QUERY operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: DELETE FROM CLIENTES WHERE codigo='111111')
7. Click on CONFIRM to save the configurations of the component
8. Connect the trigger to DB V2
9. Execute a test in the pipeline (you can use the CTRL + ENTER command)
10. The result of the query will be presented:

{
  "data": null,
  "updateCount": 1,
  "rowCount": 0
}

       

  • updateCount: in this example, it confirms that 1 row of the table was deleted.

     


Scenario 4: QUERY operation INSERT command
Let's say you have a configured database and a data table, and you want to invoke DB V2 with a query using the INSERT command. With it, you'll have access to the result of the operation in the pipeline through JSON.

   
Example

1. Create a pipeline and add a DB V2
2. Open the configurations of the component
3. Select the QUERY operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: INSERT INTO CLIENTES (name, email) VALUES ('My Name', 'myname@email.com'))
7. Click on CONFIRM to save the configurations of the component
8. Connect the trigger to DB V2
9. Execute a test in the pipeline (you can use the CTRL + ENTER command)
10. The result of the query will be presented:

{
  "data": null,
  "updateCount": 1,
  "rowCount": 0
}

     

  • updateCount: in this example, it confirms that 1 row was inserted into the table.

     


Scenario 5: Keep the connection open for older or unreliable drivers
Let's say you have a database that is incompatible with JDBC 4.0 or that fails to keep a reliable connection, and you want to invoke DB V2 with any query 20 minutes after the previous call. With this option, the connection to the database isn't lost.
     

The option ADVANCED > CONNECTION TEST QUERY allows you to define a query that is executed before the query defined in SQL STATEMENT, to make sure the connection to the database is established and to avoid errors.
      

This option allows a new connection to be opened if the connection test fails. It avoids problems with "stuck" connections and timeouts caused by previous queries that hit the error.
     

IMPORTANT: this option should be considered only for older databases (with JDBC driver support < 4.0).

1. Create a pipeline and add a DB V2
2. Open the configurations of the component
3. Select the QUERY operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: INSERT INTO CLIENTES (name, email) VALUES ('My Name', 'myname@email.com'))
7. Connect the trigger to DB V2
8. Select the option ADVANCED (more options will be shown)
9. Define a CONNECTION TEST QUERY (eg.: SELECT version())
10. Click on CONFIRM to save the configurations of the component
11. Execute a test in the pipeline (you can use the CTRL + ENTER command)
12. The result of the query will be presented
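Conceptually, the behavior is close to this sketch (it uses sqlite3 only to have something runnable; the function name and the reopen strategy are illustrative, not the component's internals):

```python
import sqlite3

def healthy_connection(conn, test_query="SELECT 1"):
    """Run the connection test query; on failure, discard the connection
    and open a fresh one (here: a new in-memory SQLite database)."""
    try:
        conn.execute(test_query)
        return conn
    except sqlite3.Error:
        return sqlite3.connect(":memory:")

conn = sqlite3.connect(":memory:")
conn.close()                      # simulate a dropped/"stuck" connection
conn = healthy_connection(conn)   # the probe fails, so a new one is opened
print(conn.execute("SELECT 1").fetchone())  # -> (1,)
```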

       

Scenario 6: Interrupt the execution of the pipeline in case of an error
Let's say you have a configured database and a data table, and you want to invoke DB V2 with a QUERY or PROCEDURE and have the pipeline execution interrupted if the operation fails.
     

The option FAIL ON ERROR allows the pipeline to interrupt its execution in case of an error when invoking DB V2.        

       


Scenario 7: Access and write files into the database
Let's say you have a configured database and a table with BLOB-type data, and you want to invoke DB V2 with a QUERY or PROCEDURE. With it, you can not only write files available in the pipeline, but also read BLOB and/or CLOB data from a database column as if they were files.

The BLOB AS FILE option allows you to inform a file path for operations with BLOB-type columns - which means that invoking the INSERT command and providing the path of a file available in the pipeline will cause this file to be written to the database.
           
If the SELECT command is chosen, the BLOB content becomes a file inside the pipeline.

IMPORTANT: for now, the CLOB AS FILE option can only be used with the SELECT command. Therefore, in the scenario above, CLOB-type data can only be read into the pipeline as files, not written.

 

Example: step 1 (read file)     

1. Create a pipeline and add a DB V2
2. Open the configurations of the component
3. Define the name as 'Read-file'
4. Select the QUERY operation
5. Choose an ACCOUNT to be able to authenticate in the database
6. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
7. Define a SQL STATEMENT (eg.: SELECT blob FROM CLIENTES WHERE id=78)
8. Activate the BLOB AS FILE and/or CLOB AS FILE options (if the option CLOB AS FILE is activated, a new field CHARSET will be displayed to inform the file encoding)
9. Click on CONFIRM to save the configurations of the component
10. Connect the trigger and DB V2 (Read-file)
11. Execute a test in the pipeline (you can use the CTRL + ENTER command)
12. The result of the query will be presented:

{
  "data": [
    {
      "blob": "E4G17F.file",
      "clob": "H2K89G.file"
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

     
A file named 'E4G17F.file' was generated and made available in the pipeline.
    

Example: step 2 (write file)    

     

1. Add a DB V2
2. Open the configurations of the component
3. Define the name as 'Write-file'
4. Select the QUERY option
5. Choose an ACCOUNT to be able to authenticate in the database
6. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
7. Define a SQL STATEMENT (eg.: INSERT INTO CLIENTES (blob) VALUES ( {{ message.data.blob }} ))
     

Keep in mind that we use the Double Braces expression {{ message.data.blob }}.
Through it you can access the result of the previous component - in this case, data, which holds the query result. From it we access blob, which is the name of the generated file. In other words, we're providing the path of the 'E4G17F.file' file to the INSERT command for the blob field.
    

8. Activate the BLOB AS FILE option

9. In the TYPE PROPERTIES field, click on the ADD button

10. A section will be displayed to define the fields:

  • Key as 0 (each Double Braces expression has an index, and 0 refers to the first one)

  • TYPE as BLOB

  • OUT PARAMETER NAME left blank

  • PARAMETER TYPE as IN

11. Click on CONFIRM to save the configurations of the component
12. Connect the DB V2 (Read-file) to DB V2 (Write-file)
13. Execute a test in the pipeline (you can use the CTRL + ENTER command)
14. The result of the query will be presented:

{
  "data": null,
  "updateCount": 1,
  "rowCount": 0
}

       
The result indicates that one row of the table was affected. In this case, the first component (Read-file) read the blob content from the database and created a file. After that, the second component (Write-file) took this file, available in the pipeline, and inserted it into the database as a new record.
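The Key used in TYPE PROPERTIES refers to the zero-based index of the Double Braces expressions in the statement, in order of appearance. A small sketch of that indexing (the regex is illustrative, not how the platform parses statements):

```python
import re

# Each {{ ... }} expression in the statement gets a zero-based index,
# in order of appearance; TYPE PROPERTIES keys refer to those indexes.
statement = "INSERT INTO CLIENTES (blob) VALUES ( {{ message.data.blob }} )"
expressions = re.findall(r"\{\{\s*(.*?)\s*\}\}", statement)
print(dict(enumerate(expressions)))  # -> {0: 'message.data.blob'}
```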

           
Scenario 8: Execute a procedure
Let's say you have a database configured with an existing procedure and that you want to invoke a DB V2 in PROCEDURE mode. With it, you can execute an existing procedure and obtain the result of the operation in the pipeline through JSON.      

   
Example

The following procedure must exist in the database for the mentioned example:

CREATE PROCEDURE `proc_sum_one`(INOUT `num` INTEGER)
BEGIN
  SET `num` = `num` + 1;
END

   
1. Create a pipeline and add a DB V2
2. Open the configuration of the component
3. Select the PROCEDURE operation
4. Choose an ACCOUNT to be able to authenticate in the database
5. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
6. Define a SQL STATEMENT (eg.: call proc_sum_one({{ message.num }}))

Keep in mind we use the Double Braces expression: {{ message.num }}
Through it you can access the result of the last component.

7. On the TYPE PROPERTIES field, click on the ADD button
8. A section will be shown for you to define the fields:

  • Key as 0 (each Double Braces expression has an index, and 0 refers to the first one)

  • TYPE as INTEGER

  • OUT PARAMETER NAME as result (defines the result parameter of the procedure)

  • PARAMETER TYPE as INOUT

9. Click on CONFIRM to save the configurations of the component
10. Connect the trigger to DB V2
11. Open the test-mode and provide the entry value (eg.: { "num": 3 })
12. Execute a test in the pipeline (you can use the CTRL + ENTER command)
13. The result of the query will be presented:

{
  "data": {
    "result": 4
  },
  "success": true
}
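The procedure above only increments its INOUT parameter, so the expected result is easy to check against a plain Python equivalent:

```python
# Python equivalent of proc_sum_one, which increments its INOUT parameter.
def proc_sum_one(num: int) -> int:
    return num + 1

entry = {"num": 3}  # the test-mode input from step 11
result = {"data": {"result": proc_sum_one(entry["num"])}, "success": True}
print(result)  # -> {'data': {'result': 4}, 'success': True}
```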

     
Scenario 9: Define custom connection properties
Let's say you have a configured database and you want to invoke DB V2. With it, you can specify custom connection properties.
           

Example:

1. Open the configurations of the component
2. Select the QUERY option
3. Choose an ACCOUNT to be able to authenticate in the database
4. Define a DATABASE URL (eg.: jdbc:mysql://myserver/mydb)
5. Define a SQL STATEMENT (e.g.: SELECT * FROM CLIENTES)
6. Define the CUSTOM CONNECTION PROPERTIES field (eg.: { "connectTimeout": 1 })
7. Click on CONFIRM to save the configurations of the component
8. Connect the trigger to DB V2
9. Execute a test in the pipeline (you can use the CTRL + ENTER command)
10. The result of the query will be presented:

{
  "success": false,
  "message": "Could not establish connection to the database",
  "error": "java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=xxxxx)(port=XXX)(type=master) : null"
}

     
In this case, the custom connection property connectTimeout was defined as 1 (one second) to demonstrate that it was applied and caused a connection error.
       
        

Scenario 10: Keep the connection active for 5 or 30 minutes
Let's say you have a configured database and you want to invoke DB V2. With it, you can choose whether to keep the connection to the database active for 5 or 30 minutes.

The option KEEP CONNECTION allows you to extend the standard 5 minutes of active connection with the database to 30 minutes.

Scenario 11: Read custom data types from an Oracle database

Let's say you have a configured database with a table that contains a custom type and that you want to invoke a DB V2 to view this data.

Example:

You have a table created with the following structure:

CREATE TABLE project (
  name VARCHAR2(50),
  coordinator OWNER
);

And the OWNER custom type is created like:

CREATE TYPE OWNER AS OBJECT
(
  name VARCHAR2(50),
  email VARCHAR2(30)
);

Executing the query below:

SELECT * FROM project

You’ll have the following result:

{
  "data": [
    {
      "name": "Example Project",
      "owner": [
        "Coordinator Name",
        "Coordinator Email"
      ]
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

If the custom type uses other custom types in its structure, you'd have the result with the data nested in the following way:

{
  "data": [
    {
      "name": "Example Project",
      "owner": [
        "Coordinator Name",
        "Coordinator Email",
        [
          "Field from a 2nd custom type",
          [
            "Field from a 3rd custom type"
          ]
        ]
      ]
    }
  ],
  "updateCount": 0,
  "rowCount": 1
}

IMPORTANT: custom types don't expose the field names configured inside the custom type, only the order of those fields. In the example above, the first element of the "owner" array corresponds to the field "name" and the second element to the field "email", both defined in this respective order inside the type OWNER.

When reading custom types through PROCEDURES, the OUT PARAMETER NAME property in TYPE PROPERTIES must be configured with the custom type name - in the example above, it would be OWNER.
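Since only the field order is exposed, a consumer of this result can restore the names by zipping the positional array with the type's declared field order, as in this sketch:

```python
# Field order as declared in the OWNER type above.
OWNER_FIELDS = ["name", "email"]

row = {
    "name": "Example Project",
    "owner": ["Coordinator Name", "Coordinator Email"],
}

# Zip the positional values with the declared field order to restore names.
coordinator = dict(zip(OWNER_FIELDS, row["owner"]))
print(coordinator)  # -> {'name': 'Coordinator Name', 'email': 'Coordinator Email'}
```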

Scenario 12: Send custom data types to an Oracle database

Let's say you have a configured database with a table that contains a custom type (Struct) and that you want to invoke a DB V2 to insert data into this table.

 

IMPORTANT: the DB V2 component doesn't support Double Braces to resolve custom data (Struct) in the INSERT and UPDATE commands.

 

Example

You have the custom types and the table created with the following structures:

  • CREATE OR REPLACE TYPE CONTACT AS OBJECT

(
  NAME  VARCHAR2(50),
  PHONE VARCHAR2(15)
);

  • CREATE OR REPLACE TYPE PERSON AS OBJECT

(
  ID           NUMBER(10,0),
  NAME         VARCHAR2(50),
  CONTACT_INFO CONTACT
);

 

  • CREATE OR REPLACE TYPE DEPARTMENT AS OBJECT

(
  ID      NUMBER(10,0),
  NAME    VARCHAR2(50)
);

  • CREATE TABLE EMPLOYEE

(
  PERSON PERSON,
  DEPT   DEPARTMENT,
  SALARY NUMBER(12,2)
);

 

To insert data into this table, the following structures can be used:

Data to be inserted:

  {
   "personName": "John A.",
   "contactName": "John",
   "contactPhone": "99999",
   "departmentName": "IT"
  }

 

 

  • Through INSERT from a QUERY result

INSERT INTO EMPLOYEE
SELECT PERSON(DBMS_RANDOM.VALUE(0,999999), {{ message.personName }}, CONTACT({{ message.contactName }}, {{ message.contactPhone }})),
  DEPARTMENT(DBMS_RANDOM.VALUE(0,999999), {{ UPPERCASE(message.departmentName) }}),
  3030.67 SALARY
  FROM DUAL

 

 

  • Through a PROCEDURE (previously created in the database)

CREATE OR REPLACE PROCEDURE MY_PROC(DEPT IN DEPARTMENT, EMP IN OUT PERSON) IS
  SAL EMPLOYEE.SALARY%TYPE;
BEGIN
  SAL := 4567.89 * 1.10;
  INSERT INTO EMPLOYEE VALUES (EMP, DEPT, SAL);
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR(-20001, 'An error was encountered - '||SQLCODE||' - ERROR: '||SQLERRM);
END;

 

  • PROCEDURE call

DECLARE
  EMP  PERSON;
  CONT CONTACT;
  DEPT DEPARTMENT;
BEGIN
  DEPT := DEPARTMENT(TRUNC(DBMS_RANDOM.VALUE(0,999999)), {{ UPPERCASE(message.departmentName) }});
  CONT := CONTACT({{ message.contactName }}, {{ message.contactPhone }});
  EMP  := PERSON(TRUNC(DBMS_RANDOM.VALUE(0,999999)), {{ message.personName }}, CONT);
  MY_PROC(DEPT, EMP);
END;

 

 

  • Through SQL Script

DECLARE
  EMP  PERSON;
  CONT CONTACT;
  DEPT DEPARTMENT;
  SAL  EMPLOYEE.SALARY%TYPE;
BEGIN
  DEPT := DEPARTMENT(TRUNC(DBMS_RANDOM.VALUE(0,999999)), {{ UPPERCASE(message.departmentName) }});
  CONT := CONTACT({{ message.contactName }}, {{ message.contactPhone }});
  EMP  := PERSON(TRUNC(DBMS_RANDOM.VALUE(0,999999)), {{ message.personName }}, CONT);
  SAL  := 4567.89;
  INSERT INTO EMPLOYEE VALUES (EMP, DEPT, SAL);
EXCEPTION
  WHEN OTHERS THEN
    RAISE_APPLICATION_ERROR(-20001, 'An error was encountered - '||SQLCODE||' - ERROR: '||SQLERRM);
END;

DB V2 in Action

Batch mode

When it's necessary to process a set of instructions in batch, you can make calls in batch mode in the queries.


Example

Let's say you need to inform an array of objects in the component, which will be used in the batch execution:

ITEMS

[ { "name": "Mathews", "type":"A"}, { "name": "Jules", "type":"A"}, { "name": "Raphael", "type":"B"} ]

In the SQL instructions, you must inform it in the following way:

SQL

INSERT INTO TABLE VALUES ( {{ item.name }}, {{ item.type }} )

When you use a Double Braces expression such as {{ item.name }}, an iteration is made over the array informed in ITEMS and the corresponding property is looked up inside each object. In this case, the property is "name".

After the execution, 3 records are inserted. The expected return is:

{ "totalSucceeded":3, "totalFailed":0 }

If one of the executions fails, the returned object will contain the "errors" property:

{ "totalSucceeded": 1, "totalFailed": 1, "errors": ["error1", "error2"] }

IMPORTANT: the errors returned in the “errors” property vary according to the database driver. Some drivers don't return all the errors that occurred during the execution in batch mode.
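The aggregation behavior described above can be sketched like this (the run_batch helper is illustrative, not the component's implementation):

```python
def run_batch(items, execute):
    """Execute each item and aggregate a batch summary like the ones above."""
    summary = {"totalSucceeded": 0, "totalFailed": 0, "errors": []}
    for item in items:
        try:
            execute(item)
            summary["totalSucceeded"] += 1
        except Exception as exc:
            summary["totalFailed"] += 1
            summary["errors"].append(str(exc))
    return summary

items = [
    {"name": "Mathews", "type": "A"},
    {"name": "Jules", "type": "A"},
    {"name": "Raphael", "type": "B"},
]

# All three inserts succeed:
print(run_batch(items, lambda item: None))
# -> {'totalSucceeded': 3, 'totalFailed': 0, 'errors': []}
```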

Rollback On Error

If this option is activated, the commits of the operations will be made only if all of them are successful. Otherwise, the rollback of all the batch operations will be made.

If the option is inactive, then the commit and the successful changes by commit will be made even if there's an error between the executions.

IMPORTANT: for some databases, especially Oracle, it's not possible to return the consolidated number of successful or unsuccessful executions. If an error occurs, an object containing all the errors (inside the "errors" property) will be returned, with the consolidated counters set to -1:

{ "totalSucceeded": -1, "totalFailed": -1, "errors": ["error1", "error2"], "success": false }

For other databases, such as Firebird, the occurrence of errors isn't informed. Therefore, an object reporting no errors can be returned even if there was a failure:

{ "totalSucceeded": 0, "totalFailed": 3, "errors": ["error1", "error2"], "success": false }

For these error cases in Batch Mode, don't forget to check the "success" property. If it returns "false", it means at least one error occurred during the execution.

Connection pool

By default, we use a pool based on the configurations of the deployed pipeline. For example, if the pipeline is SMALL, the pool size will be 10; for MEDIUM it will be 20, and for LARGE it will be 40.

It's also possible to manage the pool size at deployment time. To do so, enable the “Pool Size By Actual Consumers” property in the component. With it, the value manually configured in the deployment screen is used.

For example, take a SMALL pipeline configured with 5 consumers. If you want the pool of the database components (DB V2 and Stream DB V3) to use this size, you'll have to enable the “Pool Size By Actual Consumers” property in every existing component.
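As a sketch of those sizing rules (the function and constant names are illustrative):

```python
# Default pool sizes by deployed pipeline size, as described above.
POOL_SIZE = {"SMALL": 10, "MEDIUM": 20, "LARGE": 40}

def pool_size(pipeline_size, by_actual_consumers=False, consumers=None):
    """With the "Pool Size By Actual Consumers" property enabled, the
    consumer count configured on the deployment screen wins; otherwise
    the pipeline size decides."""
    if by_actual_consumers and consumers is not None:
        return consumers
    return POOL_SIZE[pipeline_size]

print(pool_size("SMALL"))                                         # -> 10
print(pool_size("SMALL", by_actual_consumers=True, consumers=5))  # -> 5
```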

IMPORTANT: be careful when manually configuring the pool size so that there's no deadlock in concurrent calls to the same database.

Our pool is shared among the database components that access the same database inside the pipeline. If an exclusive pool for a given component is necessary, enable the “Exclusive Pool” property.
