
Setting Up Automatic Imports in Bytespree Data Lake

Written by Philippe Trussart
Updated yesterday

Automatic Imports in Bytespree’s Data Lake let you schedule recurring file loads into your database. Instead of manually uploading files every time, you define where the file lives, how it’s formatted, and how it should be written into a table, and Bytespree takes care of the rest.

This is ideal for:

  • Nightly or hourly donor and transaction feeds

  • Regular exports from CRMs, email tools, or third-party systems

  • Keeping reporting tables up to date without manual work


Accessing Automatic Imports

To configure an Automatic Import:

  1. Log in to Bytespree.

  2. Go to the Data Lake section.

  3. Locate the database you want to populate.

  4. Click the three-dots menu in the top-right corner of that database card.

  5. Select Manage Tables and Views.

  6. In the left navigation, click Automatic Imports.

  7. Click Add Import to start the setup flow.


After you click Add Import, you’ll see several tabs. The first two are Settings and Data Source.

Settings Tab

Use the Settings tab to define what you’re importing and how the file is formatted.

Import Name

A descriptive name for this automatic import job.

  • Helps you recognize the source and purpose (e.g., “Nightly CRM Donor Export” or “Monthly Giving History Import”).

Select Imported Table

Choose the target table in your database where the data should be loaded.

  • Only tables within the current database are shown.

Import Type

Defines how new data is written into the selected table.

  • Append: Adds new rows to the existing data without removing what’s already there.

  • Replace: Completely replaces the existing data in the table with the new imported data.
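The difference between Append and Replace can be sketched in plain SQL terms. This is a minimal illustration using an in-memory SQLite table with hypothetical donor data, not Bytespree’s actual internals:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE donors (name TEXT, amount REAL)")
conn.execute("INSERT INTO donors VALUES ('Alice', 25.0)")  # pre-existing row

new_rows = [("Bob", 50.0), ("Carol", 10.0)]  # rows arriving in the import file

# Append: new rows are added alongside the existing data.
conn.executemany("INSERT INTO donors VALUES (?, ?)", new_rows)
assert conn.execute("SELECT COUNT(*) FROM donors").fetchone()[0] == 3

# Replace: existing rows are cleared first, then the new data is loaded.
conn.execute("DELETE FROM donors")
conn.executemany("INSERT INTO donors VALUES (?, ?)", new_rows)
assert conn.execute("SELECT COUNT(*) FROM donors").fetchone()[0] == 2
```

With Append the table grows on every run, so it suits incremental feeds; Replace suits full-snapshot exports where the file always contains the complete data set.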

Delimiter

Specifies the character used to separate columns in your source file.

  • Common values:

    • , (comma) – Standard CSV

    • ; (semicolon)

    • \t (tab)

  • This must match the delimiter used in your actual import file, or field parsing will be incorrect.

Enclosed Character

Defines the text qualifier used around field values, especially those that may contain the delimiter.

  • Common value: " (double quote)

  • Example: "John, Jr." uses quotes so the comma inside the name doesn’t split the column.

Escape Character

Character used to “escape” special characters inside a value.

  • Common value: \ (backslash)

  • Example: \" inside a quoted field allows a quote mark to appear as part of the data, not as the end of the field.

File Encoding

Specifies the character encoding of the incoming file.

  • Default is usually UTF-8, which works for most modern exports.

  • Change this only if your source system uses a different encoding (e.g., Latin-1).

Choosing the correct encoding ensures special characters (like accents or symbols) are imported correctly.
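The effect of a wrong encoding is easy to see with a short example. Here the same bytes, written by a hypothetical Latin-1 export, are decoded two ways:

```python
# Bytes as a Latin-1 export would write them.
raw = "Renée Müller".encode("latin-1")

ok = raw.decode("latin-1")                   # correct setting: accents survive
bad = raw.decode("utf-8", errors="replace")  # wrong setting: accents are mangled

assert ok == "Renée Müller"
assert "\ufffd" in bad  # replacement characters where the accents were
```

If imported names or addresses show garbled symbols where accents should be, the File Encoding setting is the first thing to check.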

Ignore errors on import

If checked, the import will continue even if some rows fail, skipping invalid records and logging errors.

  • Recommended for large recurring imports where a few bad lines shouldn’t block the whole job.

  • If unchecked, the import may stop on the first critical error.
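The skip-and-log behavior can be pictured with a small sketch: invalid rows are set aside while valid ones still load. The rows and the numeric check here are invented for illustration:

```python
rows = [
    ["alice@example.org", "25"],
    ["bob@example.org", "not-a-number"],  # a bad line in the feed
    ["carol@example.org", "10"],
]

loaded, errors = [], []
for row in rows:
    try:
        loaded.append((row[0], float(row[1])))  # may raise ValueError
    except ValueError as exc:
        errors.append((row, str(exc)))          # log the failure, keep going

# "Ignore errors" behavior: 2 rows loaded, 1 error logged.
# Without it, the bad line would stop the whole job instead.
```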

First row has column names

Indicates whether the first row of the file contains header names.

  • Checked: first line is treated as column headers and not imported as data.

  • Unchecked: first line is treated as a normal data row.

Use this according to how your file export is structured.

Ignore empty lines on import

When enabled, any blank lines in the file are ignored.

  • Prevents accidental empty rows from being loaded or causing errors.

  • Often used when source systems pad files with trailing newlines.
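Together, these two settings describe the same pre-processing you could do by hand: drop blank lines, then peel off the header. A minimal sketch with hypothetical columns:

```python
import csv
import io

# A feed with a header row and stray blank lines between records.
raw = "email,amount\nalice@example.org,25\n\nbob@example.org,50\n\n"

lines = []
for line in io.StringIO(raw):
    if not line.strip():   # "Ignore empty lines on import"
        continue
    lines.append(line)

reader = csv.reader(lines)
header = next(reader)      # "First row has column names"
data = list(reader)
# header == ['email', 'amount']; data holds only the two real records
```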


Data Source Tab

Use the Data Source tab to define where Bytespree should fetch the file from.

Select File Source

Choose the type of external system where your import files are stored.

  • SFTP
    Secure File Transfer Protocol. Use this when your files are stored on a secure server you access with a host, port, username, and password.

  • S3
    Amazon Simple Storage Service. Use this when your files live in an S3 bucket and are accessed via AWS credentials and bucket/path settings.

Your choice determines which connection fields appear below.


SFTP:

Host

The server hostname or IP address of your file source.

  • Example: sftp.example.org

  • This is where Bytespree will connect to retrieve the file.

Port

The network port used to connect to the host.

  • For SFTP, this is typically 22 (unless your organization uses a custom port).

Username

The account name Bytespree will use to authenticate to the file source.

  • Usually provided by your IT or data team.

  • Must have read access to the directory and files you want to import.

Password

The password for the specified username (or credentials required by the chosen source type).

  • Ensure it’s kept secure and updated if it changes on the source system.

Directory

The full directory path where the files to be imported are located.

  • Example: /exports/nightly

  • Bytespree will look in this location for new files matching the configured pattern or schedule (defined in later steps).


S3:

Select File Source

Choose S3 to load files from an Amazon S3 bucket.

Access Key

Your AWS access key ID used to authenticate to S3. Provided by your AWS admin.

Access Secret

The AWS secret access key paired with the access key. Used to securely authorize Bytespree to read files.

Bucket Name

The name of the S3 bucket where your import files are stored (for example, my-org-data-exports).

Region

The AWS region where the bucket lives (for example, us-east-1 or eu-west-1).

Endpoint

The S3 endpoint URL, if you use a custom or compatible S3 service. Leave blank or default for standard AWS S3.

Directory

The folder path inside the bucket where your files are located (for example, nightly/exports/).


Column Mappings

The Column Mappings tab is where you tell Bytespree how the columns in your incoming file line up with the columns in your database table.

Select How to Map Columns

Choose the method used to match file columns to table columns:

  • CSV Header Name – Map based on the column names in the file header. Use this when your file includes a header row with clear, stable names (e.g., email, amount, created_at).

  • CSV Order – Map based on the position of each column in the file (1st column, 2nd column, etc.). Use this when your file does not have a header row or when column names are inconsistent but their order is fixed.

Mapping each column

For each database column (e.g., subscriptioninitial, subscriptionid, paymenttoken, paymentidentifier):

  • Choose the corresponding CSV header name or column position from your file, depending on the mapping method selected above. If the field is left blank, the CSV header name will be used.

  • Every required column must be mapped for the import to work correctly.

Correct mappings ensure that each value from the file is loaded into the right field in your table.
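The two mapping methods correspond to two ways of reading the same file: by name or by position. A short sketch using Python’s csv module and invented column names:

```python
import csv
import io

raw = "email,amount\nalice@example.org,25\n"

# CSV Header Name: values are looked up by the names in the header row.
by_name = list(csv.DictReader(io.StringIO(raw)))
first = by_name[0]
# first["email"] -> 'alice@example.org' (robust to reordered columns)

# CSV Order: values are looked up by position (1st column, 2nd column, ...).
rows = list(csv.reader(io.StringIO(raw)))
data = rows[1:]  # skip the header row
# data[0][0] -> 1st column, data[0][1] -> 2nd column (robust to renamed headers)
```

Header-name mapping survives column reordering but breaks if the source renames a header; positional mapping is the reverse, which is why it suits headerless files with a fixed layout.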

Reset

Click Reset to clear all current mappings and start over. This is useful if you change files or mapping method and want to configure everything from scratch.


Review & Save

The Review & Save tab is your final checkpoint before creating the automatic import. Here you can verify that all settings are correct:

  • Import Name
    Displays the name you gave this import job in the Settings tab.

  • Import Type
    Shows how data will be written to the table (e.g., Append).

  • Imported Table
    The target table in your database where the data will be loaded.

  • File Source
    Indicates the configured source for your files (e.g., SFTP or S3).

  • Column Mappings
    A summary of how your file columns are mapped to table columns (for example, “Mapped by CSV Order” or “Mapped by CSV Header Name”).

If anything looks incorrect, you can click back into Settings, Data Source, or Column Mappings to make adjustments.
