a Data Architect or similar role, with a focus on AWS cloud environments. Strong knowledge of AWS services relevant to data architecture, such as Amazon Redshift, Amazon Athena, Amazon S3, AWS Glue, and AWS Lambda. Experience designing and implementing data lakes, data warehouses, and analytics solutions more »
London, England, United Kingdom Hybrid / WFH Options
Annalect
ECS, CloudFront, ALB, API Gateway, RDS, CodeBuild, SSM, Secrets Manager, Lambda, etc. Familiarity with working with data and databases (SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Amazon Redshift, Google BigQuery). Knowledge of database administration. Demonstrated experience using Terraform to provision and configure infrastructure. Scripting more »
building large-scale DW/BI systems for B2B SaaS companies · Experience with open-source tools like Apache Flink and AWS tools like S3, Redshift, EMR and RDS. · Experience with AI/Machine Learning and Predictive Analytics · Experience in developing global products will be a big plus. · Understanding of more »
platforms. We specialise in using the latest frameworks, reference architectures and technologies using AWS, Azure and GCP. Essential Skills and Experience: * AWS (e.g., Athena, Redshift, Glue, EMR) * Strong AWS Data Solution Architect Experience on Data Related Projects * Java, Scala, Python, Spark, SQL * Experience of developing enterprise grade ETL/ more »
cloud hosting provider (AWS (preferred), Google Cloud, Azure or similar). Experience using modern build tools such as Maven, Jenkins, GitHub, etc. Experience with Amazon Web Services a strong plus - CloudFormation, EMR, S3, EC2, Athena etc. Experience with scheduling services such as Airflow, Oozie. Experience with Data ETL and more »
Birmingham, West Midlands, United Kingdom Hybrid / WFH Options
Erin Associates
essential. Core skills for this Data Engineer role: Strong coding skills in Python & SQL Excellent experience in the AWS Cloud platform, including AWS Glue, Amazon DynamoDB for NoSQL database management, Amazon Redshift for data warehousing, and AWS Lambda Designing & developing ETL Pipelines Familiarity with big data technologies more »
experience with data orchestration tools: e.g. Apache Airflow, Dagster. Experience with big data storage and processing technologies: e.g. DBT, Spark, SQL, Athena/Trino, Redshift, Snowflake, RDBMSs (PostgreSQL/MySQL). Knowledge of event-driven architectures and streaming technologies: e.g. Apache Kafka, Kafka Streams, Apache Flink. Experience with public more »
Strong experience of Python Deep knowledge of NLP libraries such as Hugging Face and spaCy AWS - ETL Processes (Glue, Lambda etc.), ECS, S3, Database Solutions (Redshift, RDS) Vector databases a big plus Familiar with graph data structures and algorithms Benefits: Competitive salary and benefits package, including health insurance, life insurance more »
Apache Flink, or Hadoop. > Hands-on experience with cloud platforms such as AWS, Google Cloud, or Azure. > Experience with data warehousing technologies such as Redshift, BigQuery, or Snowflake. > Knowledge of database systems such as MySQL, PostgreSQL, or MongoDB. > Excellent problem-solving skills and attention to detail. > Strong communication and more »
and improve existing data designs and data models, including relational table structures, using Unix Shell Scripting, Oracle, SQL Server, Azure SQL Data Warehouse, or Amazon Redshift Support and learn the associated scheduling logic for data pipelines using scheduling tools such as Autosys or Airflow Read and translate logical more »
skills Containerisation experience (Docker, Kubernetes) Cloud Computing experience ( GCP/AWS/Azure ) Strong preference for a Snowflake background but open to BigQuery or Redshift Unfortunately sponsorship cannot be provided for this role If you are interested please apply here or reach out to morgan.beck@xcede.com for more information more »
and reporting. Specialised in AWS cloud technologies for ETL, data warehouse, and data lake design. Hands-on experience with AWS services like EMR, Glue, Redshift, Kinesis, Lambda, DynamoDB. Capable of processing large volumes of structured and unstructured data on AWS. Familiarity with AWS best practices in data engineering, data more »
database structure systems, data mining, data analysis, and strong software engineering skills. Strong understanding of Data Engineering Proficiency in AWS, data warehousing (Snowflake, Databricks, Redshift), big data frameworks (Spark, Kafka), container orchestration platforms (Kubernetes), and data integration/ETL tools. Strong written and verbal communication skills, with the ability more »
South East London, London, United Kingdom Hybrid / WFH Options
Stepstone UK
able to lend hands-on support to the team working on and using the following technologies: Data Warehousing, Data Modelling, Database Design, ETL, AWS Redshift, SQL Server, Power BI, Cloud/DevOps AWS, Docker, Terraform, Bitbucket, Bamboo, TeamCity, Octopus CI/CD deploy pipeline, Agile, pair programming, code more »
South East London, England, United Kingdom Hybrid / WFH Options
Builder.ai
Python (no data science experience required). Experience working on production-level microservices (Docker/Kubernetes) and cloud infrastructure. Experience working with Databases, Postgres, Redshift, Neo4j would be a plus. Why You Should Join The position will be at the intersection of data science and development operations. The candidate would want more »
Data modelling principles & best practices, plus prior experience leading a data engineering team. Key Tech: - AWS (S3, Glue, EMR, Athena, Lambda) - Snowflake, Redshift - DBT (Data Build Tool) - Programming: Python, Scala, Spark, PySpark or Ab Initio - Data pipeline orchestration (Apache Airflow) - Knowledge of SQL This is a more »
would also be useful. You will be an Engineer with past experience with Java, data, and infrastructure (DevOps). Java, Python, PySpark Mechanisms: MongoDB, Redshift, AWS S3 Environments/Infra: AWS (required), [AWS Lambda, Terraform] (nice to have) Platforms: Creating data pipelines within Databricks or equivalent such as Jupyter more »
programming language such as Python or Java. Experience with big data technologies such as Hadoop and Spark. Experience with data warehousing technologies such as Redshift, Snowflake, or BigQuery. Experience with data pipeline and ETL tools such as Apache NiFi, Airflow, or Glue. Knowledge of data governance and security best more »
pipelines (e.g., Beam, Flink, etc.) Proven expertise in designing and constructing data lakes and data warehouse solutions utilising technologies such as BigQuery, Azure Synapse, Redshift, Oracle, Teradata, and more. Familiarity with Agile methodologies, particularly Scrum, to support dynamic project management. Valuable contributions to open-source projects, reflecting your commitment more »
Reading, England, United Kingdom Hybrid / WFH Options
BJSS
involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient more »
Cardiff, Wales, United Kingdom Hybrid / WFH Options
BJSS
involved in a variety of projects in the cloud (AWS, Azure, GCP), learning about and using data services such as Databricks, Data Factory, Synapse, Kafka, Redshift, Glue, Athena, BigQuery, S3, Cloud Data Fusion etc. About You You're an engineer at heart and enjoy the challenge of building reliable, efficient more »