Preparing your Snowflake environment

- [Instructor] In this video, we'll set up the key components within your Snowflake instance that it needs to receive data from MSK: a database and schema to store the incoming data, a virtual warehouse for processing, and a permissioned user that the Kafka connector can use. Make sure you're logged into your Snowflake account; we'll work out of Snowflake's browser-based UI. Also make sure your role is ACCOUNTADMIN, or another role with the privileges to create users, roles, and database objects. Click the plus button at the top right, then create a new SQL worksheet. Let's begin by creating a database and schema that we'll stream our data into. So, CREATE OR REPLACE DATABASE, and we'll call our database EVENTS. Then let's create a schema within that database, which we'll call PRODUCT. We'll then want to create a virtual compute warehouse in order to allocate data processing resources…
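A minimal sketch of the statements narrated above, run from a worksheet as ACCOUNTADMIN. The database name EVENTS and schema name PRODUCT come from the video; the warehouse name INGEST_WH and its sizing settings are assumptions, since the excerpt cuts off before they are specified:

-- Database and schema named in the video
CREATE OR REPLACE DATABASE EVENTS;
CREATE OR REPLACE SCHEMA EVENTS.PRODUCT;

-- Warehouse name and sizing below are assumptions; the excerpt only
-- says a virtual compute warehouse is created for processing
CREATE OR REPLACE WAREHOUSE INGEST_WH
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60    -- suspend after 60 seconds idle to save credits
  AUTO_RESUME = TRUE;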
