Azure Data Engineer

Posted 3 years ago

The job description is given below for your reference.

Role Description:

• Work closely with client technical heads, business heads, and business analysts to understand and document business and technical requirements and constraints

• Implement ETL pipelines using Databricks and Azure Data Factory (ADF)

• Experience migrating batch and streaming data from on-premises Hadoop/Oracle to Azure ADLS

• Drive technical architecture meetings and help clients select the right architecture for their solutions, meeting both functional and non-functional requirements today and in the future

• Develop PoCs and strategies, and provide direction on data security, archiving and retention, high availability, and disaster recovery

Responsibilities:

• Strong hands-on experience with Azure ADF, Databricks pipelines, functions, and ADLS (Gen2)

• Expertise in Hadoop, Hive, Sqoop, Python, PySpark, Spark with Scala, Kafka, and related big data technologies, with relevant project experience

• Develop data standards, best practices, processes, and definitions of project and data requirements in support of strategy and ongoing operations

• Expertise in the design, development, and optimization of large-scale distributed, high-concurrency, high-load systems

• Deliver scalable and reliable Big Data solutions leveraging the Hortonworks HDP Hadoop platform, RDBMS, SaaS platforms, and APIs

• Apply concepts, industry research, best practices, and agile methodologies and tools (Jira, Confluence) to implement Big Data solutions
