V2, Azure Databricks, Azure Function Apps & Logic Apps, Azure Stream Analytics, Azure Resource Manager skills (Terraform, Azure Portal, Az CLI and Az PowerShell). Strong PySpark, Delta Lake, Unity Catalog and Python skills, including the ability to write unit and integration tests in Python with unittest, pytest, etc. Strong understanding of …
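The testing skill above can be illustrated with a minimal sketch: a small data-cleaning function plus pytest-style tests. The function and test names here are illustrative examples, not taken from any of these job descriptions.

```python
# Minimal sketch of unit-testing a data transformation, pytest-style.
# pytest discovers functions named test_*; plain `assert` statements are
# all it needs, so these tests also run under any plain Python interpreter.

def normalise_email(raw: str) -> str:
    """Lower-case and strip whitespace from an email address."""
    if not isinstance(raw, str):
        raise TypeError("expected a string")
    return raw.strip().lower()


def test_normalise_email_strips_and_lowercases():
    assert normalise_email("  Alice@Example.COM ") == "alice@example.com"


def test_normalise_email_rejects_non_strings():
    # Written without pytest.raises so the sketch has no dependencies;
    # under pytest you would normally use `with pytest.raises(TypeError):`.
    try:
        normalise_email(None)
    except TypeError:
        pass
    else:
        raise AssertionError("expected a TypeError for non-string input")
```

Run with `pytest test_normalise.py`; the same pattern scales to integration tests that exercise a whole pipeline stage against a small fixture dataset.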
a complete greenfield re-architecture from the ground up in Microsoft Azure. The tech you'll be playing with: Azure Data Factory, Azure Databricks, PySpark, SQL, DBT. What you need to bring: 1-3 years' experience building data pipelines; SQL experience in data warehousing; Python experience would …
a week in the Liverpool office - rest remote**** Senior Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. ****Informatica Cloud (IICS & IDMC) is essential - NOT PowerCenter**** A top insurance firm are looking for a … e.g. Scrum, SAFe) and tools (e.g. Jira, Azure DevOps). Data Engineer, Data, Data Modelling, Migration, ETL, ETL Tooling (Informatica IICS & IDMC), SQL, Python or PySpark, Agile, migrating data from on-prem to cloud. Seniority Level: Mid-Senior level. Industry: Insurance, Financial Services. Employment Type: Full-time. Job Functions: Information …
Senior Data Engineer. Up to £70k plus bonus. Manchester. Are you looking to take your Data Engineer career to the next level? This company uses extremely modern technologies, and you can be certain you will grow within a technical environment. …
months to begin with & it's extendable. Location: Leeds, UK (min 3 days onsite). Context: Legacy ETL code (for example DataStage) is being refactored into PySpark using Prophecy low-code/no-code and available converters. Converted code is causing failures/performance issues. Skills: Spark Architecture – component understanding around Spark Data Integration (PySpark, scripting, variable setting etc.), Spark SQL, Spark Explain plans. Spark SME – be able to analyse Spark code failures through Spark Plans and make correcting recommendations. Spark SME – be able to review PySpark and Spark SQL jobs and make performance improvement recommendations. Spark SME – be able to understand Data Frames/Resilient Distributed Datasets and understand any … there are Cluster-level failures. Cloudera (CDP) – knowledge of how Cloudera Spark is set up and how the runtime libraries are used by PySpark code. Prophecy – high-level understanding of Low-Code No-Code Prophecy set-up and its use to generate PySpark code.
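The plan-analysis work described above (diagnosing converted PySpark jobs through their explain plans) can be sketched without a running cluster: in practice you would call `df.explain()` and read the physical plan text for operators that are known to be expensive. This is a hedged sketch, not the client's actual tooling; the operator names are real Spark physical operators, but the sample plan text and the `flag_plan_issues` helper are illustrative inventions.

```python
# Sketch: scan the text of a Spark physical plan (what df.explain() prints)
# for operators that commonly explain failures or slow jobs after an
# automated DataStage-to-PySpark conversion. The heuristics and sample
# plan are illustrative, not exhaustive.

EXPENSIVE_OPERATORS = {
    "CartesianProduct": "cross join - often a missing join condition",
    "BroadcastNestedLoopJoin": "non-equi join fallback - check join keys",
    "SortMergeJoin": "shuffle-heavy join - consider broadcasting the small side",
    "Exchange": "shuffle stage - watch partition counts and data skew",
}


def flag_plan_issues(plan_text: str) -> dict:
    """Return the expensive operators found in an explain-plan string,
    mapped to a one-line recommendation."""
    return {op: why for op, why in EXPENSIVE_OPERATORS.items() if op in plan_text}


# Illustrative plan text, shaped like real df.explain() output.
sample_plan = """
== Physical Plan ==
*(5) SortMergeJoin [id#1], [id#7], Inner
+- Exchange hashpartitioning(id#1, 200)
   +- *(1) Scan parquet default.orders
"""

issues = flag_plan_issues(sample_plan)
for operator, recommendation in issues.items():
    print(f"{operator}: {recommendation}")
```

On a live cluster the same check would be driven by `df.explain(mode="formatted")` output or the Spark UI's SQL tab rather than a hard-coded string.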