Enterprise Plus – Compliance and Security setup

Migrating a project to user-managed S3 cloud storage

Written by Serge Gershkovich
Updated over 2 months ago

Enterprise Plus users can host SqlDBM project assets in their own cloud storage for increased security. This guide will provide instructions for migrating your project to self-managed cloud hosting and setting up encryption on your Cloud Data Platform (CDP) of choice.

Private Networking (also known as PrivateLink, offered by all CDPs) can also be enabled for traffic between the SqlDBM cloud and your external storage. This way, traffic never leaves the cloud network, improving security and reducing network latency.

SqlDBM also supports various cloud storage features, such as KMS encryption and object versioning, that further improve the security and performance of your SqlDBM experience.

The features described in this article require an Enterprise Plus license.

Considerations for creating or selecting a cloud storage location

To set up an external bucket/storage for your SqlDBM data, first choose a supported cloud storage provider, then select a region and location that meet your needs.

SqlDBM supports all three leading CDPs: AWS, Azure, and Google Cloud Platform (GCP).

The latest regions and locations can be found on the respective CDP websites.

To start the process, follow the guidelines in this article and contact SqlDBM support once all the configurations have been set.

Considerations for existing projects

Users with existing SqlDBM projects and integrations should do the following in preparation for Enterprise Plus custom storage migration:

  • Remove existing git integrations

    • Go to Dashboard -> Integrations -> Delete all integrations

  • Remove existing Confluence & Jira apps

    • Log in to Confluence/Jira -> Manage apps -> Uninstall the SqlDBM app

  • Follow the steps in this article to migrate your project to custom storage

  • Copy current projects to migrate them to the new storage location

    • Go to Dashboard and select Copy Project from the project options.

Cloud Storage Setup Instructions

AWS setup

Please follow these steps carefully to set up your account for self-hosted AWS storage.

Creating an S3 bucket

After selecting a region, you will need to create and configure the S3 bucket. Refer to AWS documentation for details on creating an S3 bucket.

SqlDBM supports S3 bucket names that match the following template:

sqldbm-external-revisions-*

For example, the following name would be valid:

sqldbm-external-revisions-my-company-bucket

However, the following name would not match the template:

my-company-bucket
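The naming template above can be checked before you create the bucket. The sketch below is illustrative (not an official SqlDBM tool); the suffix rules are an assumption based on general S3 naming conventions (lowercase letters, digits, and hyphens):

```python
import re

# Assumed pattern for the documented template: sqldbm-external-revisions-*
BUCKET_NAME_PATTERN = re.compile(r"^sqldbm-external-revisions-[a-z0-9-]+$")

def is_valid_sqldbm_bucket_name(name: str) -> bool:
    """Return True if the bucket name matches the SqlDBM template."""
    return bool(BUCKET_NAME_PATTERN.match(name))

print(is_valid_sqldbm_bucket_name("sqldbm-external-revisions-my-company-bucket"))  # True
print(is_valid_sqldbm_bucket_name("my-company-bucket"))  # False
```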

Do not corrupt project files!

SqlDBM uses its own file structure and naming conventions to store your data inside your S3 bucket. Please use an empty S3 bucket for this purpose. Once configured, avoid modifying files inside the bucket manually. Doing so will corrupt the file hash and render it unusable in the tool.

Please make sure to block all public access to your S3 bucket to avoid security risks. The AWS S3 Management Console will indicate when public access is blocked as required.

Configuring your external S3 bucket as SqlDBM storage

To use your external S3 storage with the SqlDBM cloud app, you will need to set a bucket policy allowing read and write permissions for the SqlDBM AWS identity.

Here is a sample JSON policy document generated for your bucket:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SqldbmExternalAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::315590678317:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:DeleteObject",
        "s3:DeleteObjectVersion",
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::BUCKET_NAME",
        "arn:aws:s3:::BUCKET_NAME/*"
      ]
    }
  ]
}

  • Replace BUCKET_NAME with the name of the created bucket.
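If you script your bucket setup, the same policy document can be rendered programmatically. This is a sketch: the principal and action list are copied from the sample policy above, and the function name is hypothetical:

```python
import json

def make_sqldbm_bucket_policy(bucket_name: str) -> str:
    """Render the sample SqlDBM bucket policy for the given bucket name."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "SqldbmExternalAccess",
                "Effect": "Allow",
                # SqlDBM AWS identity from the sample policy above
                "Principal": {"AWS": "arn:aws:iam::315590678317:root"},
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:PutObjectTagging",
                    "s3:DeleteObject",
                    "s3:DeleteObjectVersion",
                    "s3:ListBucket",
                    "s3:ListBucketVersions",
                ],
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)

print(make_sqldbm_bucket_policy("sqldbm-external-revisions-my-company-bucket"))
```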

Copy the generated bucket policy to the clipboard.

In the AWS Console, proceed to your bucket's Permissions page.

Scroll to the Bucket policy section and press Edit.

Paste the policy from the clipboard, verify it, and press Save.

Note that while you maintain full control over the permissions you provide to the SqlDBM identity, the generated policy contains minimal permissions for the SqlDBM application to operate correctly.

Enabling versioning on the bucket

The SqlDBM team requires versioning to be enabled on your S3 bucket. This ensures that each revision of an object stored in the bucket can be easily tracked and restored in a disaster recovery scenario.

This can be done in the AWS Console: click the S3 bucket → open the Properties tab → enable Bucket versioning.

Configuring KMS encryption for your S3 bucket

Follow the steps below to create and enable KMS encryption on your S3 bucket.

Creating and configuring KMS symmetric key

You can create AWS KMS customer-managed keys to encrypt objects in your S3 bucket.

We recommend using the following settings for your S3 encryption key:

Make sure to create a key in the same AWS region as your S3 bucket or use the Multi-Region key setting.

Refer to AWS documentation for further details on creating AWS KMS customer-managed symmetric keys.

After your key is created, proceed to its properties in the AWS Console and select the Key policy tab.

Scroll down to the "Other AWS accounts" section and add 315590678317 (the SqlDBM identity assigned for your company) as a trusted AWS account.

Configuring S3 to encrypt objects using a KMS key

Once you’ve created your KMS key, you can proceed to your S3 bucket and configure it to use KMS encryption.

In the AWS Console, proceed to your S3 bucket Properties tab:

Scroll down to the "Default encryption" section and press Edit. SqlDBM recommends encrypting the bucket with the KMS key you created (SSE-KMS).

Once you’ve saved your changes, the AWS S3 service will automatically encrypt all objects in the bucket—improving the security of your data.

Enabling your external S3 bucket as SqlDBM storage

Once you have configured your S3 bucket according to this guide, you will be able to set up your SqlDBM account to target it.

Before proceeding, verify that:

  • you’ve used the correct bucket name

  • you’ve applied the correct policy to your bucket

  • if you’ve configured KMS encryption

    • you’ve ensured that the SqlDBM identity number was added as a trusted AWS account for the KMS key

Once you're ready, open a support ticket with the subject line “Enable external S3 storage”.

In your message, please provide the bucket name and AWS region. The SqlDBM administrator will begin the process of transferring your account to the specified S3 bucket.

In the current version, SqlDBM does not support transferring existing projects and account settings. Please contact SqlDBM support if you need this option.

Your S3 configuration will be verified programmatically. If there are configuration issues, you will receive a corresponding message and will need to fix your S3 settings before the transfer process can start.

Depending on the AWS region, your account will be assigned a corresponding PrivateLink endpoint; all communication between the SqlDBM application and your S3 bucket will go through this endpoint. Once the transfer of your account to the specified S3 bucket is complete, you will receive a notification email.

In your dashboard, you will see the "Custom S3 bucket" indicator, showing that all user data is now retrieved from your external S3 storage.

Note that you can also have the data deleted from the initial SqlDBM S3 location, so that it resides only in your bucket. You can request the deletion via the SqlDBM support channel.

Azure setup

To access the data in external storage, a Private Link connection is established between the SqlDBM Azure tenant and the customer-managed Azure storage. Because SqlDBM compute servers are hosted within an AWS tenant, managed VPN tunneling is used for AWS-to-Azure communication within the SqlDBM network perimeter.

Traffic between the SqlDBM cloud and the customer's external Azure storage never leaves the private networks, greatly improving the security of sensitive data.

Access control to the customer's external Azure storage container is organized using Shared Access Signature (SAS) tokens signed with Azure storage access keys. This way, the customer retains full control over granting and revoking container access for SqlDBM.

Azure storage settings

The following settings are required to support SqlDBM projects:

  • Storage configuration:

    • Account kind: StorageV2 (general purpose v2)

  • Data protection:

    • Enable versioning for blobs: enabled

  • Network configuration:

    • Private endpoint connection with SqlDBM must be approved

SqlDBM uses its own file structure and naming conventions to store data inside the customer’s Azure storage container. Please use empty container storage for these purposes. Once configured, avoid modifying files inside the container manually. Doing so will corrupt the file hash and render it unusable in the tool.

The following settings are recommended to ensure data privacy and consistency:

  • Storage configuration:

    • Secure transfer: enabled

    • Blob anonymous access: disabled

    • Minimum TLS version: TLS v1.2

  • Network configuration:

    • Public network access: Disabled / Enabled from selected virtual networks and IP addresses

    • Network Routing: Microsoft network routing
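The required and recommended settings above can be verified mechanically. The sketch below is not an official SqlDBM tool, and the dictionary keys are hypothetical; map them to however you export your storage-account configuration (for example, from the Azure portal or CLI output):

```python
# Required vs. recommended values, as listed in the sections above.
REQUIRED = {"account_kind": "StorageV2", "blob_versioning": True}
RECOMMENDED = {"secure_transfer": True, "minimum_tls_version": "TLS1_2"}

def check_settings(settings: dict) -> list[str]:
    """Return human-readable issues; an empty list means all checks pass."""
    issues = []
    for key, expected in REQUIRED.items():
        if settings.get(key) != expected:
            issues.append(f"required: {key} should be {expected!r}")
    for key, expected in RECOMMENDED.items():
        if settings.get(key) != expected:
            issues.append(f"recommended: {key} should be {expected!r}")
    return issues

print(check_settings({"account_kind": "StorageV2", "blob_versioning": True,
                      "secure_transfer": True,
                      "minimum_tls_version": "TLS1_2"}))  # []
```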

Enabling external storage for the customer account subscription in SqlDBM

The following steps are required to set up external storage.

Step 1: Initiating external storage setup in SqlDBM

To initiate the external storage setup for your SqlDBM subscription, ensure the required Azure storage settings are configured according to the section above.

If all settings are correct, open a support ticket with the title “Enable external Azure storage”.

In your message, please provide the following information:

  • Storage account Resource ID

    • Proceed to Storage account → Overview → JSON View

    • Copy the Resource ID value to the clipboard

  • Storage account Primary Location

    The Storage account Primary Location value is found at Storage account → Overview

  • Blob service account endpoint URL

    The Blob service endpoint value is found at Storage account → Endpoints

  • Container name value, which can be retrieved at the top left corner of the Azure storage container view or in the list of Azure storage containers at Storage account → Containers

  • SAS token value, details on SAS token requirements, and a token generation process are described in the section below

Upon receiving the support ticket, the SqlDBM administrator will begin the process of transferring your account to the specified Azure storage container.

In the current version, SqlDBM does not support transferring existing projects and SqlDBM account settings. Please contact SqlDBM support if you need this option.

Shared Access Signature generation

A shared access signature (SAS) token is used by the SqlDBM tenant to access the customer's Azure storage. SAS provides cross-subscription access for SqlDBM in a controlled manner. Customers can always revoke access and manage permissions using granular controls during token generation.

SqlDBM supports only account-level shared access signature (SAS) tokens due to the specific permissions required for correct operation:

  • To generate an account-level SAS, proceed to Azure storage account → Security + networking → Shared access signature.

Required permissions

It is recommended to generate SAS tokens with an expiry date between six months and two years out. This reduces the frequency of token-refresh operations while still retaining control over temporary or permanent token revocation.

Below is the scope of the permissions required for SqlDBM to operate correctly using an account-level SAS token:

  • Allowed services:

    • Blob

  • Allowed resource types:

    • Service

    • Container

    • Object

  • Allowed permissions:

    • Read

    • Write

    • Delete

    • List

    • Add

    • Create

    • Immutable storage

    • Permanent delete

  • Blob versioning permissions:

    • Enables deletion of versions

  • Allowed blob index permissions:

    • Read/Write

    • Filter

  • Allowed protocols:

    • HTTPS only

  • Preferred routing tier:

    • Basic (default)
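Before sending a generated SAS token to support, its query parameters can be sanity-checked against the scope above. This sketch is an assumption-laden illustration: the permission-letter mapping (r=Read, w=Write, d=Delete, l=List, a=Add, c=Create) follows common Azure account-SAS conventions but should be verified against current Azure documentation, and the function name is hypothetical:

```python
from urllib.parse import parse_qs

# Assumed minimum permission letters for the "Allowed permissions" list above.
REQUIRED_PERMISSIONS = set("rwdlac")

def check_sas_token(token: str) -> list[str]:
    """Return a list of issues found in an account-level SAS token string."""
    params = parse_qs(token.lstrip("?"))
    issues = []
    if "b" not in params.get("ss", [""])[0]:
        issues.append("Blob service (ss=b) is not allowed")
    if not {"s", "c", "o"} <= set(params.get("srt", [""])[0]):
        issues.append("Service, Container and Object resource types required")
    missing = REQUIRED_PERMISSIONS - set(params.get("sp", [""])[0])
    if missing:
        issues.append(f"missing permissions: {sorted(missing)}")
    if params.get("spr", [""])[0] != "https":
        issues.append("HTTPS-only protocol (spr=https) is recommended")
    return issues
```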

Azure storage account-level SAS configuration

Step 2: Private endpoint connection setup

After providing the necessary information to the SqlDBM support team, a Private endpoint connection will be initiated to your Azure storage from the SqlDBM Azure tenant.

Proceed to Security + networking → Private endpoint connections and approve the pending connection that appears in the list.

Once the Private endpoint connection is approved, contact the SqlDBM support team to activate the configured external Azure storage for your SqlDBM company account.


Step 3: Verification

As soon as you provide all the necessary information to the SqlDBM support team, the process of enabling external storage begins. After the final step of approving the pending Private endpoint connection, the SqlDBM support team can finish the process and notify you.

Once enabled, the custom storage indicator appears at the top of the UI in the SqlDBM Dashboard.

You can then verify that all new projects and settings are created and stored in the configured external storage.

Updating Shared Access Signature token

The SAS token must be renewed in several scenarios:

  • The token has expired per its expiration settings

  • The access key used to sign the token was compromised or rotated and needs to be changed

In the current version of the feature, you will need to manually provide a new SAS token to the SqlDBM administrator by opening a support ticket.

You will be notified 14 days prior to token expiration so that the update process does not interrupt operations for your SqlDBM company account.
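To stay ahead of the 14-day notification window, you can track the token's remaining lifetime yourself from its `se` (signed expiry) parameter. A minimal sketch, assuming the expiry begins with an ISO `YYYY-MM-DD` date:

```python
from datetime import date
from typing import Optional

def days_until_expiry(se_value: str, today: Optional[date] = None) -> int:
    """Days remaining until the SAS expiry date (YYYY-MM-DD prefix of `se`)."""
    expiry = date.fromisoformat(se_value[:10])
    return (expiry - (today or date.today())).days

# Example: renewal ticket is due once this drops below 14.
print(days_until_expiry("2030-06-30T00:00:00Z", today=date(2030, 6, 1)))  # 29
```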

Revoking access

Customers retain full control over SqlDBM tenant access permissions to the external Azure storage. There are two options available to revoke the SAS token permissions.

Please be aware that after revoking SAS token permissions for the SqlDBM tenant, your SqlDBM company account will not be fully operational.

Option 1: Disable storage account key access

This method can be used as a temporary measure to revoke SAS token access to the storage during ongoing security incidents involving SqlDBM or other vendors that use SAS tokens to access your Azure storage account.

Proceed to Storage account → Settings → Configuration and set Allow storage account key access to Disabled.

Option 2: Access key rotation

The Azure access key is used to sign the SAS token during the generation process.

To revoke the token access permanently, you can rotate the access key.

Proceed to Storage account → Security + networking → Access keys and rotate the intended key.

To resume SqlDBM access to your Azure storage, re-generate the SAS token using a new access key and send it to SqlDBM support.

GCP Setup

External customer storage is accessed through a Private Service Connect endpoint established in the SqlDBM Google Cloud tenant. Because SqlDBM compute servers are hosted within an AWS tenant, managed HA VPN tunnels are used for AWS-to-Google communication within the SqlDBM network perimeter.

Traffic between the SqlDBM cloud and the customer's external Google Storage never leaves the private networks, greatly improving the security of sensitive data.

Access control to the customer's external Google Storage bucket is organized using a Service Account. This way, the customer retains full control over granting and revoking bucket access for SqlDBM.

Google Cloud Storage

SqlDBM has no specific requirements for bucket settings, except that the Object versioning setting must be enabled.

To give the SqlDBM application access to the bucket, the Storage Legacy Bucket Owner and Storage Object Admin roles must be assigned to the following SqlDBM service account:

sqldbm-gcp-identity-prod@sqldbm-bc-tenant.iam.gserviceaccount.com

The setup is now complete.

