Redshift connector API guide

Last updated on October 4, 2023

Overview

This guide provides detailed instructions on how to create, configure, and manage a Redshift Connector using the API. With the provided endpoints, you can seamlessly integrate the Redshift Connector into your data streaming workflows.

Create a Redshift connector using the API

  1. Create a Redshift database on your AWS account using the command:

    CREATE DATABASE <database_name>;
  2. Create a Redshift user using the command:

    CREATE USER <user_name> WITH PASSWORD '<user_password>' | DISABLE;
    Note

    The password is optional, as the connector does not use it for the connection; you can specify PASSWORD DISABLE instead of setting one.

  3. Grant Redshift user privileges to the database using the command:

     GRANT CREATE ON DATABASE <database_name> TO <user_name>;
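
    The three statements above can also be run programmatically. Below is a minimal sketch using the psycopg2 driver, connecting to the cluster's default database as an admin user; every hostname, credential, and object name here is a placeholder, not a value from your environment:

    # pip install psycopg2-binary
    import psycopg2

    # Connect to the default database. CREATE DATABASE cannot run inside a
    # transaction block in Redshift, so enable autocommit first.
    conn = psycopg2.connect(
        host="redshift-analytics.xxxxxxx.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="dev",
        user="<admin_user>",
        password="<admin_password>",
    )
    conn.autocommit = True

    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE analytics_db;")
        cur.execute("CREATE USER connector_user WITH PASSWORD DISABLE;")
        cur.execute("GRANT CREATE ON DATABASE analytics_db TO connector_user;")

    conn.close()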
  4. Create an AWS IAM role for Connector with the following tag:

    • Key: accelbyte-service
    • Value: accelbyte-analytics-connector-service
  5. Create a policy based on the template provided at the endpoint /analytics-connector/v1/admin/tools/redshift/policies. This policy grants Connector the necessary permissions to interact with your Redshift resources. Attach this policy to the IAM role created in the previous step.

    Policy Template:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowRedshift",
          "Effect": "Allow",
          "Action": [
            "redshift:GetClusterCredentials"
          ],
          "Resource": "*"
        }
      ]
    }
  6. Implement the trust relationship in your AWS role based on the template provided at the endpoint /analytics-connector/v1/admin/tools/redshift/trustrelationships. This trust relationship allows cross-account access and ensures that Connector can assume the IAM role.

    Trust Relationship Template:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowCrossAccess",
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::<accelbyte-aws-account-id>:root"
          },
          "Action": [
            "sts:AssumeRole"
          ],
          "Condition": {
            "StringEquals": {
              "sts:ExternalId": "<external-id>"
            }
          }
        }
      ]
    }
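
    Both templates can also be fetched directly from the endpoints named in steps 5 and 6. The following is a minimal sketch using Python's requests library; the base URL and admin bearer token are hypothetical placeholders for your environment, and the responses are assumed to be JSON:

    import requests

    BASE_URL = "https://<your-environment-domain>"              # placeholder
    HEADERS = {"Authorization": "Bearer <admin-access-token>"}  # placeholder

    # Fetch the IAM policy template (step 5).
    policy = requests.get(
        f"{BASE_URL}/analytics-connector/v1/admin/tools/redshift/policies",
        headers=HEADERS,
    )
    print(policy.json())

    # Fetch the trust relationship template (step 6).
    trust = requests.get(
        f"{BASE_URL}/analytics-connector/v1/admin/tools/redshift/trustrelationships",
        headers=HEADERS,
    )
    print(trust.json())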
  7. Create the Redshift connector configuration via the API endpoint [POST] /analytics-connector/v1/admin/connectors, providing the configuration structure below. Retrieve the list of available analytics topics to fill out the filter configuration.

    {
      "connectorName": "<redshift-connector-name>",
      "connectorType": "SINK",
      "storageType": "KAFKA_REDSHIFT",
      "config": {
        "redshiftArn": "<redshift-arn>",
        "redshiftHost": "<redshift-host>",
        "redshiftPort": "<redshift-port>",
        "redshiftUsername": "<redshift-user>",
        "redshiftDatabase": "<redshift-database>",
        "flushInterval": "<flush-interval>",
        "flushSize": "<flush-size>",
        "eventType": "<event-type>",
        "model": "<table-model>",
        "isFlatten": "<column-model>"
      },
      "filter": {
        "<namespace>": [
          "<topic>"
        ]
      }
    }
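
    A minimal sketch of this call using Python's requests library; the base URL and bearer token are hypothetical placeholders, and the configuration values are illustrative only:

    import requests

    BASE_URL = "https://<your-environment-domain>"              # placeholder
    HEADERS = {"Authorization": "Bearer <admin-access-token>"}  # placeholder

    payload = {
        "connectorName": "redshift-connector",
        "connectorType": "SINK",
        "storageType": "KAFKA_REDSHIFT",
        "config": {
            "redshiftArn": "arn:aws:iam::12345678910:role/redshift_role",
            "redshiftHost": "redshift-analytics.xxxxxxx.us-west-2.redshift.amazonaws.com",
            "redshiftPort": "5439",
            "redshiftUsername": "connector_user",
            "redshiftDatabase": "analytics_db",
            "flushInterval": "5",
            "flushSize": "100",
            "eventType": "game_telemetry",
            "model": "mapping",
            "isFlatten": "true",
        },
        "filter": {"*": ["*"]},
    }

    resp = requests.post(
        f"{BASE_URL}/analytics-connector/v1/admin/connectors",
        headers=HEADERS,
        json=payload,
    )
    resp.raise_for_status()
    # The field name "id" is an assumption; take the connector id from
    # wherever your response carries it.
    connector_id = resp.json()["id"]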
  8. Activate the Redshift connector configuration via the API endpoint [PUT] /analytics-connector/v1/admin/connectors/{id}/activate. Replace {id} with the connector ID returned in the create response.
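
    Continuing the sketch above, activation is a single PUT; sending no request body is an assumption:

    resp = requests.put(
        f"{BASE_URL}/analytics-connector/v1/admin/connectors/{connector_id}/activate",
        headers=HEADERS,
    )
    resp.raise_for_status()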

Connector configuration description

Config

  • redshift-connector-name: Name of the Redshift connector. Note that a random number is appended to the name after the connector is created.

  • redshift-host: Hostname of the Redshift cluster.

  • redshift-arn: The ARN of the AWS IAM role that you created to grant access to the Redshift cluster. The ARN has the format arn:aws:iam::<your-aws-account-id>:role/<your-role-name>.

  • redshift-port: Port number of the Redshift cluster.

  • redshift-user: Username used to authenticate with the database.

  • redshift-database: Name of the Redshift database.

  • event-type: There are two event types: justice_event and game_telemetry.

    • justice_event: System-generated events from AccelByte services (Service Telemetry).

    • game_telemetry: Custom telemetry events that are sent from game clients (Custom Telemetry).

  • flush-interval: Maximum time interval, in minutes, that data is buffered before it is written into Redshift. The valid range is 1 to 15 minutes.

  • flush-size: Maximum number of events that are buffered before they are written into Redshift. The valid range is 100 to 1,000. Data is sent as soon as either the flush-interval or the flush-size condition is reached, whichever comes first. For example, with a flush-interval of 5 and a flush-size of 100, a batch is written once 100 events accumulate or after 5 minutes, whichever happens first.

  • table-model: Determines how tables are created. There are two table models.

    • single: All events will be inserted into one table based on the event type.

      • Example topics:

        • analytics_game_telemetry.dev.lightfantastic.gameStarted

        • analytics_game_telemetry.dev.lightfantastic.gameEnded

      • Expected table (a single table, in the schema.table_name format):

        • public.game_telemetry_dev
    • mapping: Events will be inserted into multiple tables based on the topics.

      • Example topics:

        • analytics_game_telemetry.dev.lightfantastic.gameStarted

        • analytics_game_telemetry.dev.lightfantastic.gameEnded

      • Expected tables (multiple tables based on topics, in the schema.table_name format):

        • lightfantastic.gameStarted

        • lightfantastic.gameEnded

    Note

    If you opt for the single table model, all events are streamed into the public schema. This provides a straightforward way to manage and query your data in Redshift. The mapping table model, on the other hand, organizes your data into separate tables based on the topics, offering a more structured approach to data storage and retrieval.

  • column-model (isFlatten): Determines how columns are created. There are two column models.

    • false (recommended for better performance): The entire event is stored in a single column.

      • Example event:

        {
          "EventNamespace": "lightfantastic",
          "EventTimestamp": "2023-07-20T03:30:00.036483Z",
          "EventId": "d110582c54804a29ab1d95650ca4c644",
          "Payload": {
            "winning": true,
            "hero": "Captain America",
            "kill": 9,
            "network": 912.27,
            "item": [
              {
                "name": "vibranium shield",
                "defense": 10,
                "attack": 1
              },
              {
                "name": "mjolnir hammer",
                "defense": 1,
                "attack": 9
              }
            ]
          },
          "EventName": "gameEnded"
        }
      • Expected column:

        events
        ------
        {"EventNamespace":"lightfantastic","EventTimestamp":"2023-07-20T03:30:00.036483Z","EventId":"d110582c54804a29ab1d95650ca4c644","Payload":{"winning":true,"hero":"Captain America","kill":9,"network":912.27,"item":[{"name":"vibranium shield","defense":10,"attack":1},{"name":"mjolnir hammer","defense":1,"attack":9}]},"EventName":"gameEnded"}
    • true: Event properties are flattened into separate columns.

      • Expected columns (shown here as column name and value for the example event):

        eventid           d110582c54804a29ab1d95650ca4c644
        eventnamespace    lightfantastic
        eventtimestamp    2023-07-20T03:30:00.036483Z
        eventname         gameEnded
        payload_item      [{"defense":10,"attack":1,"name":"vibranium shield"},{"defense":1,"attack":9,"name":"mjolnir hammer"}]
        payload_kill      9
        payload_winning   true
        payload_network   912.27
        payload_hero      Captain America
    Note

    The column flatten feature cannot be applied to the single table model: each event may have a different payload structure, which could result in a very large number of columns.
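
    To illustrate how the two column models affect queries, here is a minimal sketch using the psycopg2 driver against the example tables above. It assumes the non-flattened events column is stored as a JSON string that Redshift's JSON_EXTRACT_PATH_TEXT function can parse, and that the flattened table keeps the casing shown above; all connection values are placeholders:

    import psycopg2

    conn = psycopg2.connect(
        host="redshift-analytics.xxxxxxx.us-west-2.redshift.amazonaws.com",
        port=5439,
        dbname="<database_name>",
        user="<user_name>",
        password="<user_password>",
    )

    with conn.cursor() as cur:
        # isFlatten false (single table model): extract fields from the JSON column.
        cur.execute(
            """
            SELECT json_extract_path_text(events, 'EventName')       AS event_name,
                   json_extract_path_text(events, 'Payload', 'hero') AS hero
            FROM public.game_telemetry_dev
            LIMIT 10;
            """
        )
        print(cur.fetchall())

        # isFlatten true (mapping model): properties are ordinary columns.
        # The exact table-name casing depends on how the connector created it.
        cur.execute('SELECT eventname, payload_hero FROM lightfantastic."gameEnded" LIMIT 10;')
        print(cur.fetchall())

    conn.close()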

Filter

  • namespace: Filters events by namespace. Use an asterisk (*) to match all namespaces.

  • topic: Filters events by analytics topic. Use an asterisk (*) to match all topics.
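
  For example, using the namespace and topics from the table model examples above, a filter restricted to two topics might look like the snippet below. Whether the topic entries are full Kafka topic names or a shorter form is environment-specific; the full names are used here as an assumption:

    "filter": {
      "lightfantastic": [
        "analytics_game_telemetry.dev.lightfantastic.gameStarted",
        "analytics_game_telemetry.dev.lightfantastic.gameEnded"
      ]
    }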

Example Redshift connector configuration

{
  "connectorName": "redshift-connector",
  "connectorType": "SINK",
  "storageType": "KAFKA_REDSHIFT",
  "config": {
    "redshiftArn": "arn:aws:iam::12345678910:role/redshift_role",
    "redshiftHost": "redshift-analytics.xxxxxxx.us-west-2.redshift.amazonaws.com",
    "redshiftPort": "5439",
    "redshiftUsername": "redshiftUser",
    "redshiftDatabase": "redshiftDb",
    "flushSize": "100",
    "flushInterval": "5",
    "model": "mapping",
    "eventType": "justice_event",
    "isFlatten": "true"
  },
  "filter": {
    "*": [
      "*"
    ]
  }
}