From the course: Data Engineering Project: Build Streaming Ingestion Pipelines for Snowflake with AWS
Preparing your Snowflake environment
- [Instructor] In this video, we'll set up the key components your Snowflake instance needs in order to receive data from MSK: a database and schema to store the incoming data, a virtual warehouse for processing, and a permissioned user that the Kafka connector can assume. Make sure you are logged into your Snowflake account; we'll work out of Snowflake's browser-based UI. Also make sure your role is ACCOUNTADMIN, or another role with the privileges to create users, roles, and database objects. Click the plus button at the top-right, then create a new SQL worksheet. Let's begin by creating the database and schema that we'll stream our data into. So, CREATE OR REPLACE DATABASE, and we'll call our database EVENTS. Then let's create a schema within that database, which we'll call PRODUCT. We'll then want to create a virtual compute warehouse in order to allocate data processing resources…
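For reference, here is a minimal sketch of the worksheet statements this video walks through. Only the EVENTS database and PRODUCT schema names come from the transcript above; the warehouse, role, and user names, the sizing options, the specific grants, and the RSA public key placeholder are assumptions, since the transcript is cut off before those steps.

-- Database and schema to receive the streamed data (names from the transcript)
CREATE OR REPLACE DATABASE EVENTS;
CREATE OR REPLACE SCHEMA EVENTS.PRODUCT;

-- Virtual warehouse for processing; the name and sizing here are assumptions
CREATE OR REPLACE WAREHOUSE INGEST_WH
  WAREHOUSE_SIZE = 'XSMALL'
  AUTO_SUSPEND = 60
  AUTO_RESUME = TRUE;

-- Role and user for the Kafka connector to assume; names are assumptions.
-- The schema-level grants (CREATE TABLE, CREATE STAGE, CREATE PIPE) follow
-- Snowflake's documented privilege requirements for the Kafka connector.
CREATE OR REPLACE ROLE KAFKA_CONNECTOR_ROLE;
GRANT USAGE ON DATABASE EVENTS TO ROLE KAFKA_CONNECTOR_ROLE;
GRANT USAGE, CREATE TABLE, CREATE STAGE, CREATE PIPE
  ON SCHEMA EVENTS.PRODUCT TO ROLE KAFKA_CONNECTOR_ROLE;
GRANT USAGE ON WAREHOUSE INGEST_WH TO ROLE KAFKA_CONNECTOR_ROLE;

-- The connector authenticates with key-pair authentication, so the user's
-- RSA_PUBLIC_KEY must be set; the value below is a placeholder.
CREATE OR REPLACE USER KAFKA_CONNECTOR_USER
  DEFAULT_ROLE = KAFKA_CONNECTOR_ROLE
  DEFAULT_WAREHOUSE = INGEST_WH
  RSA_PUBLIC_KEY = '<public key from the key-pair setup video>';
GRANT ROLE KAFKA_CONNECTOR_ROLE TO USER KAFKA_CONNECTOR_USER;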
Contents
- Setting up your MSK cluster and EC2 instance (3m 52s)
- Setting up your keys (5m 8s)
- What is Snowpipe? (1m 42s)
- Installing Snowpipe Kafka connector (3m 26s)
- Set up config for provider-MSK connection and create topic (6m 35s)
- Preparing your Snowflake environment (6m 23s)
- Setting up your Snowpipe Kafka connect config (4m 29s)
- Sending data to Snowflake (3m 25s)
- Final considerations (2m 8s)