criteria) Extensive experience leading small teams of data engineers. Experience designing and building Databricks data products. Strong programming skills in languages such as Python (PySpark preferred), Scala, or SQL. Experience in owning, designing and implementing data pipelines ingesting enterprise-level data volumes, with strong knowledge of data engineering more »
experience with data modelling, data warehousing, and ETL/ELT processes. Fluency and development experience in at least one of the following: Java, Python, PySpark or Scala. Experience working with a variety of data formats such as JSON, Parquet, XML etc. Experience with or a developed understanding of the application of ETL more »
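For candidates wondering what "a variety of data formats" means day to day, here is a minimal, illustrative Python sketch (standard library only; the record and its field names are invented for the example, and Parquet is omitted because reading it needs a third-party library such as pyarrow) showing the same record parsed from JSON and XML:

```python
import json
import xml.etree.ElementTree as ET

# The same illustrative record serialised two ways (field names are
# invented for this example).
json_doc = '{"id": 1, "name": "widget", "price": 9.99}'
xml_doc = '<record><id>1</id><name>widget</name><price>9.99</price></record>'

def from_json(text: str) -> dict:
    """Parse a JSON record into a plain dict."""
    return json.loads(text)

def from_xml(text: str) -> dict:
    """Parse the equivalent XML record, coercing types to match."""
    root = ET.fromstring(text)
    return {
        "id": int(root.findtext("id")),
        "name": root.findtext("name"),
        "price": float(root.findtext("price")),
    }

# Both parsers should yield the same normalised record.
assert from_json(json_doc) == from_xml(xml_doc)
```

The normalisation step is the point: pipelines that ingest mixed formats usually converge them to one internal schema before anything downstream runs.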
solutions in a production setting. Knowledge of developing real-time data stream systems (ideally Kafka). Proven track record in developing data systems using PySpark and Apache Spark for batch processing. Capable of managing data intake from various sources, including data streams, unstructured data, relational databases, and NoSQL databases. more »
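The batch-processing pattern these roles revolve around can be sketched in plain Python, with no Spark cluster required; the records and grouping key below are invented for the illustration:

```python
from collections import defaultdict

# Illustrative event records, standing in for rows ingested from a
# stream, relational table, or NoSQL store (values are invented).
events = [
    {"user": "a", "amount": 10},
    {"user": "b", "amount": 5},
    {"user": "a", "amount": 7},
]

def batch_totals(records):
    """Group-and-sum aggregation: the same shape as a PySpark
    groupBy("user") followed by sum("amount") batch job."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["user"]] += rec["amount"]
    return dict(totals)

print(batch_totals(events))  # {'a': 17, 'b': 5}
```

Spark distributes exactly this group-and-aggregate shape across partitions; the logic is the same, only the scale differs.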
SQL Server and relational databases. Solid understanding of the Azure data engineering stack, including Azure Synapse and Azure Data Lake. Programming skills in Python, PySpark, and T-SQL. Nice to haves: Familiarity with broader Azure Data Solutions, such as Azure ML Studio. Previous experience with Azure DevOps and knowledge more »
Azure Cloud platform. Knowledge of orchestrating workloads on the cloud. Ability to set and lead the technical vision while balancing business drivers. Strong experience with PySpark and Python programming. Proficiency with APIs, containerization and orchestration is a plus. Qualifications: Bachelor's and/or master’s degree. About you: You are more »
understand consumers. Hands-on data engineering/development experience, preferably in a cloud/big data environment. Skilled in at least one of Python, PySpark, SQL or similar. Experience in guiding or managing roles in insight or data functions, delivering data projects and insight to inspire action and drive more »
cross-functional teams entrusted with business-critical platforms. Desirable skills & experience: Working to an Agile methodology and familiarity with Azure DevOps. Deep automation knowledge with Python. Skilled in PySpark and Synapse. Experience with data modelling and visualisation in Power BI (or an alternative). A strong understanding of architecting data platforms, BI, MI, or analytics solutions. Strong more »
Banking and Financial Services sector is advantageous. Deep knowledge of, or experience with, as many of the following as possible: Azure Cloud Data Components | Databricks | Python | PySpark | Terraform | APIs | Lakehouse | Data Mesh | NoSQL DBs | GitHub. Oversee the evolution of the data platform, encompassing data pipelines, storage, and processing. Establish and nurture more »
and industry standards for the organization. Strong experience with Azure cloud services such as ADF, ADLS and Synapse. Proficiency in querying languages such as SQL, PySpark and Python, and familiarity with data visualization tools (e.g. Power BI). Strong communication skills to gather business requirements from stakeholders and propose the best more »
Quality and Information Security principles. Experience with Azure and ETL tools such as ADF and Databricks. Advanced database and SQL skills, along with Python, PySpark and Spark SQL. Strong understanding of data model design and implementation principles. Data warehousing design patterns and implementation. Benefits: £50-£60k DOE. Mainly home based more »
tracking. As a Data Engineer, you will be able to demonstrate strong experience in a majority of the following: Extensive experience in Python/PySpark coding and development, as well as SQL. Expertise in Azure - ADF, Synapse, Databricks, Function Apps, Logic Apps. Previous commercial experience in cloud migration projects. You have more »
of Python. Experience developing in the cloud (AWS preferred). Solid understanding of libraries like Pandas and NumPy. Experience with data warehousing tools like Snowflake, PySpark and Databricks. Commercial experience with performant database programming in SQL. Capability to solve complex technical issues, anticipating risks before they arise. Please apply today more »
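A flavour of the "performant database programming in SQL" these ads ask for, sketched against SQLite from Python's standard library (the table, schema and values are invented; a warehouse such as Snowflake would be queried through its own connector instead, but the set-based habit is the same):

```python
import sqlite3

# In-memory database standing in for a warehouse table (schema and
# rows are invented for this illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("north", 100.0), ("south", 50.0), ("north", 25.0)],
)

# Parameterised, set-based aggregation rather than row-by-row loops:
# the core habit behind performant SQL.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('north', 125.0), ('south', 50.0)]
```

Pushing the aggregation into the engine, instead of fetching raw rows and summing in application code, is usually the single biggest performance lever.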
help cement Mace’s position as a provider of a Premium technical service. The ideal candidate must have extensive experience of working with Python/PySpark, data pipelines, Azure Databricks (or similar) and data warehousing platforms, as well as a good working knowledge of T-SQL. You will play a leading … languages. Experience with data structures/algorithms, building Data Platforms, Data Lake and Business Intelligence solutions. Experience as a data engineer: implementing data pipelines (using PySpark, Spark SQL, Scala, etc.), orchestration tools/services (e.g. Airflow, Data Factory) and testing frameworks. 1+ years of experience in a technical leadership role. Experience more »
City of London, London, United Kingdom Hybrid / WFH Options
Develop
Modeling within a cloud-based data platform. Strong experience with SQL Server. Azure data engineering stack, including Azure Synapse and Azure Data Lake. Python, PySpark and T-SQL. In return you will be offered a competitive salary and benefits package, remote working options and an opportunity to work with more »
in retail/marketing but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, AB Testing, MLFlow, APIs. Apply below! CONTACT: If you can’t see what you’re looking for right now, send us your CV anyway – we’re more »
experience-related problems such as workforce management, demand forecasting, or root cause analysis. Strong visualisation skills, including experience with Tableau. Familiarity with Databricks and PySpark for data manipulation and analysis. Familiarity with Git-based source control methodologies, including branching and pull requests. A self-starter, passionate about converting data more »
Mart. Utilize Vector Databases, Cosmos DB, Redis, and Elasticsearch for efficient data storage and retrieval. Demonstrate proficiency in programming languages including Python, Spark, Databricks, PySpark, SQL, and ML algorithms. Implement Machine Learning models and algorithms using PySpark, scikit-learn, and other relevant tools. Manage Azure DevOps, CI/… environments, Azure Data Lake, Azure Data Factory, Microservices architecture. Experience with Vector Databases, Cosmos DB, Redis, Elasticsearch. Strong programming skills in Python, Spark, Databricks, PySpark, SQL, ML algorithms, Gen AI. Knowledge of Azure DevOps, CI/CD pipelines, GitHub, Kubernetes (AKS). Experience with MLOps tools such more »
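The "implement Machine Learning models" bullet can be made concrete with a tiny classifier in plain Python. This is a nearest-centroid sketch (all data invented for the example), the kind of logic that libraries like scikit-learn wrap behind a fit/predict API:

```python
from statistics import mean

# Toy 1-D training data: feature value -> class label (invented).
train = [(1.0, "low"), (2.0, "low"), (10.0, "high"), (12.0, "high")]

def fit(samples):
    """Compute one centroid per class, mirroring a fit() step."""
    by_label = {}
    for x, label in samples:
        by_label.setdefault(label, []).append(x)
    return {label: mean(xs) for label, xs in by_label.items()}

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (predict())."""
    return min(centroids, key=lambda label: abs(centroids[label] - x))

model = fit(train)
print(predict(model, 1.5))   # low
print(predict(model, 11.0))  # high
```

Real work swaps the toy model for a library implementation, but the fit-then-predict contract and the train/score separation are exactly what interviewers probe for.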
requiring 2-3 days onsite) and is paying up to £110,000 per annum. Key Skills: Strong commercial experience with Python/SQL/PySpark. Knowledge of converting business requirements to engineering processes. Azure environment - Databricks & Data Factory. Industry experience with Insurance would be highly desirable. The processing more »
City Of London, England, United Kingdom Hybrid / WFH Options
RJC Group
experience. Data access methods (SQL, GraphQL, APIs). Beneficial Requirements: Experience around data science tools and algorithms. Manipulation technologies (e.g., WebSockets, Kafka, Spark). TensorFlow, Pandas, PySpark and scikit-learn would be great. Salary up to £75K + 20% bonus and benefits package. We have interview slots lined up for later more »
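Of the data access methods named here, GraphQL is the one candidates most often haven't touched; the sketch below builds (but deliberately does not send) a GraphQL request with Python's standard library. The endpoint URL, query and field names are all invented for the illustration:

```python
import json
import urllib.request

# Building (not sending) a GraphQL request: the query, endpoint URL
# and field names are invented for this illustration.
query = """
query ($id: ID!) {
  user(id: $id) { name email }
}
"""
payload = json.dumps({"query": query, "variables": {"id": "42"}}).encode()

req = urllib.request.Request(
    "https://example.com/graphql",   # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# GraphQL commonly travels as an ordinary JSON POST body: a "query"
# string plus a "variables" object. That's the whole trick.
print(req.get_method())  # POST
```

Unlike SQL, the client names the exact fields it wants back, so over-fetching is controlled at the query rather than at the table.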
Greater London, England, United Kingdom Hybrid / WFH Options
Agora Talent
come with scaling a company • The ability to translate complex and sometimes ambiguous business requirements into clean and maintainable data pipelines • Excellent knowledge of PySpark, Python and SQL fundamentals • Experience in contributing to complex shared repositories. What’s nice to have: • Prior early-stage B2B SaaS experience involving client more »
start interviewing ASAP. Responsibilities: Azure Cloud Data Engineering using Azure Databricks. Data Warehousing. Data Engineering. Very strong with the Microsoft Stack. ESSENTIAL knowledge of PySpark clusters. Python & C# scripting experience. Experience of message queues (Kafka). Experience of containerization (Docker). FINANCIAL SERVICES EXPERIENCE (Energy/commodities trading). If you have more »
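The message-queue experience asked for here boils down to the producer/consumer pattern, which can be sketched with Python's standard library as a stand-in for a broker (message names are invented; Kafka adds partitioning, persistence and consumer groups on top of this same shape):

```python
import queue
import threading

# A stdlib stand-in for a message broker: one producer thread pushes
# messages, one consumer drains them until a sentinel arrives.
broker = queue.Queue()
received = []

def producer():
    for i in range(3):
        broker.put(f"event-{i}")   # message names invented
    broker.put(None)               # sentinel: end of stream

def consumer():
    while True:
        msg = broker.get()
        if msg is None:
            break
        received.append(msg)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['event-0', 'event-1', 'event-2']
```

The queue decouples the producer's pace from the consumer's, which is exactly the property a broker like Kafka provides between services.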
process and aggregate data at scale. What you need: Around 5 years of experience in data engineering. Experience with cloud technologies, preferably Azure. Good understanding of Python, PySpark and Pandas. Business stakeholder management and requirements gathering. The salary goes up to 100k plus a discretionary bonus. They also offer a range of other benefits. Location: London more »
customer modelling but not required. Candidates should be looking to work in a fast-paced, startup-feel environment. Tech across: Python, SQL, AWS, Databricks, PySpark, AB Testing, MLFlow, APIs. Apply below more »