Minimum of 5 years of overall IT experience.
Working knowledge of Data Warehousing concepts and design, including ETL, data modelling, and visualization.
Should have executed at least one end-to-end Azure Data Lake project (preferably with 1 to 2 years of experience on the Azure data platform).
Experience working in a cloud-based environment (Azure), mainly with Delta Lake implementations.
Hands-on experience with Azure SQL DW and Azure SQL DB: writing, modifying, tuning, and debugging queries, stored procedures, views, indexes, user-defined functions,
and other database objects; creating external tables using PolyBase.
Azure Databricks (Spark SQL, Python, PySpark): setting up Databricks clusters, automating on-demand clusters, scaling Databricks workflows,
and integrating ADF data pipelines with ADB.
Hands-on development exposure to Microsoft Azure cloud services: Spark Structured Streaming on Azure,
Spark on HDInsight, Azure Databricks, Azure Stream Analytics, Azure Data Factory (v1 & v2), Azure Blob Storage, Azur.