How to read from Azure Storage and write to Azure SQL using Databricks
Databricks grew out of the Apache Spark project, which was originally developed at UC Berkeley, and the company was founded by the creators of Spark. It provides a unified platform for analytics and Big Data processing. On Azure it is offered as a first-party service named Azure Databricks, and we use it as a single platform for Big Data processing and Machine Learning. It gives data science and data engineering teams a fast, easy, and collaborative Spark-based platform on Azure.

In this post, I'll walk through how you can connect to Azure Storage, read your files, and write them to Azure SQL: basic data engineering steps. Implementing machine learning intelligence or drill-down level data aggregation is out of scope.

Let's start by creating the Databricks workspace. In the Azure portal, search for Azure Databricks and click Create. Once it is created, you can see it in your Azure portal; click on it to go to the resource home page. On the home page, click the Launch Workspace button.
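Before diving into the portal steps, here is a minimal sketch of where we're headed: a Databricks notebook cell that reads a CSV file from Azure Blob Storage and writes it to an Azure SQL table over JDBC. All of the storage account, container, server, database, table, and credential values are placeholders I've made up for illustration, not values from this walkthrough; replace them with your own.

```python
# Runs inside a Databricks notebook, where the `spark` session is predefined.

# Assumed placeholder: grant Spark access to the storage account via its access key.
spark.conf.set(
    "fs.azure.account.key.<storage-account>.blob.core.windows.net",
    "<storage-account-access-key>",
)

# Read a CSV file from a blob container into a DataFrame.
df = (
    spark.read
    .format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load("wasbs://<container>@<storage-account>.blob.core.windows.net/data/sample.csv")
)

# Write the DataFrame to an Azure SQL table over JDBC.
jdbc_url = "jdbc:sqlserver://<server>.database.windows.net:1433;database=<database>"
(
    df.write
    .format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.SampleTable")   # assumed target table name
    .option("user", "<sql-user>")
    .option("password", "<sql-password>")
    .mode("overwrite")                      # or "append", depending on your needs
    .save()
)
```

We'll build up to something like this step by step once the workspace is ready.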