City of London, London, United Kingdom Hybrid / WFH Options
Oliver Bernard Ltd
Experience with a JVM language (Kotlin, Java, Scala, Clojure). Knowledge of TypeScript and React is beneficial. Exposure to data pipelines using technologies such as Spark and Kafka. Experience with cloud services (ideally AWS). Hybrid working 1-2 days per week in Central London. £110,000 depending on experience. Please …
with JavaScript or Python. Experience deploying software into the cloud and on-premise. Developing software products. Experience with EKS, Kubernetes, OpenSearch/Elasticsearch, MongoDB, Spark or NiFi. Experience with microservices architectures. Experience with AI/ML systems. TO BE CONSIDERED…. Please either apply by clicking online or emailing …
or Rust. Experience in building and enhancing compute, storage, and data platforms, with exposure to open-source products like Kubernetes, Knative, Ceph, Rook, Cassandra, Spark, NATS, etc. Hands-on experience with IaC tools and automation, such as Terraform, Ansible, or Helm. Active engagement or contributions to the open-source …
Flask, Tornado or Django, Docker. Experience working with ETL pipelines is desirable, e.g. Luigi, Airflow or Argo. Experience with big data technologies such as Apache Spark, Hadoop, Kafka, etc. Data acquisition and development of data sets and improving data quality. Preparing data for predictive and prescriptive modelling. Hands …
pipelines. Know your way around Unix-based operating systems. Experience working with any major cloud provider (AWS, GCP, Azure). Fluency in English. Experience using Apache Airflow. Experience using Docker. Experience using Apache Spark. Benefits: Salary £40-50K per annum dependent on skills and experience, 25 days …
encompassing experience in both stream and batch processing. Proficiency in the design and deployment of production data pipelines, involving languages like Java, Python, Scala, Spark, and SQL. You should also have some, if not all, of the following: capability in scripting, data extraction via APIs, and the composition of …
Energy & Utilities, Financial Services, Government & Public Services, Healthcare, Life Sciences, and Transport. Essential Skills & Experience: • Design and deploy data pipelines using Java, Python, Scala, Spark, and SQL in big data architecture. • Execute tasks involving scripting, API data extraction, and SQL queries. • Proficient in data cleaning, wrangling, visualization, and reporting. …
as TensorFlow, PyTorch, or Scikit-learn. Strong knowledge of statistical modelling, data mining, and data visualization techniques. Experience with big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, GCP, Azure). Strong problem-solving skills and the ability to think critically and creatively. Excellent analytical skills with …
value through improved data handling and analysis. Responsibilities: Build predictive models using machine-learning techniques that generate data-driven insights on modern data platforms (Spark, Hadoop and other map-reduce tools); develop and productionise containerised algorithms for deployment in hybrid cloud environments (GCP, Azure); connect and blend data from …
least one cloud platform (preferably GCP). BSc/MSc in computer science, maths, physics or a STEM subject. Basic knowledge of statistics and machine learning. Experience with Spark, Apache services, ETL tools, data visualization and dashboards. Experience with streamed data processing, parallel compute, and/or event-based architectures. Experience with web-scraping …
or hedge fund industry. Technical Skills: Proficiency in Python and SQL. Experience with relational and NoSQL databases. Knowledge of big data frameworks (e.g., Hadoop, Spark, Kafka). Understanding of financial markets and trading systems. Strong analytical, problem-solving, and communication skills. Familiarity with DevOps tools and practices. This is …
Learn, TensorFlow, PyTorch). Solid understanding of ML and data pipeline architectures and best practices. Experience with big data technologies and distributed computing (e.g., Spark, Hadoop) is a plus. Proficiency in SQL and experience with relational databases. Strong analytical and problem-solving skills, with a keen attention to detail. …
data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe, and was founded by the original creators of Lakehouse, Apache Spark, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Our Commitment to Diversity and Inclusion: At Databricks, we are …
Platforms. Must have 8+ years' experience with relational databases like Oracle, NoSQL databases and/or big data technologies (e.g. Oracle, SQL Server, Postgres, Spark, Hadoop, other open source). Must have experience in Data Security Solutions (Identity and Access Management and Data Security Access Management). Must have 3+ …
Engineer, with expertise developing scalable data pipelines. Strong object-oriented programming skills, particularly in Python. Experience with data lakes and data warehousing solutions (Spark, Dataflow, BigQuery). Knowledge of SQL and experience with relational databases, as well as NoSQL databases. Familiarity with cloud services (preferably GCP) and understanding …
quality of data. Key Requirements: Strong experience designing data pipelines/warehouses using AWS and Snowflake. Exposure to big data technologies such as Kafka, Spark, or Hadoop. Solid experience with Snowflake, including performance optimisation and cost management. Strong experience with SQL and data modelling. Excellent understanding of AWS architecture …
to 10%. What Will Help You On The Job: Familiarity with running software services at scale. AWS Infrastructure, Airflow, Kafka and data streaming using Spark/Scala. Understanding of networking fundamentals (OSI layers 2-7). Technical and software engineering background in the areas of cloud computing, enterprise computing, servers and …
in Computer Science, Engineering (or other related STEM subject). 5+ years' experience in data engineering, 2+ years in a leadership role. Experience working with Apache Spark, Azure Data Factory and other data pipeline tools. Strong programming skills. Impeccable communication skills. Precise attention to detail. Pioneering attitude. If you …
they are on the lookout for 2 AWS Data Engineers to come in on a contract basis. Key Skills/Requirements: Must have Python & Spark experience. Must have strong AWS experience. Must have Terraform experience. SQL & NoSQL experience. Have built out Data Warehouses & built Data Pipelines. Strong Databricks & Snowflake …
City of London, England, United Kingdom Hybrid / WFH Options
RJC Group
Azure or AWS experience. Data access methods (SQL, GraphQL, APIs). Beneficial Requirements: Experience around data science tools and algorithms. Manipulation technologies (e.g., WebSockets, Kafka, Spark). TensorFlow, Pandas, PySpark and scikit-learn would be great. Salary up to £75K + 20% bonus and benefits package. We have interview slots lined …
Service, tackling the MLOps loop as a service, an interesting challenge. They are going to enable edge AI, another pioneering aspect. Tech stack: Athena, Spark, ECS, Temporal, Python, Flask, Redis, Postgres, React, Plotly, Docker …
complex issues they are facing. Carry out data-driven analysis, craft solutions to resolve business problems. Artificial Intelligence and data science approaches (Python, R, MATLAB, Spark etc.). Database technologies such as Hadoop. Tools that expand the company's tool kit, advancing their ability to serve clients. Experience needed: Strong consulting …
of databases. Snowflake is widely used, as are Docker and Kubernetes for containerisation. ETL and ELT tech are also used every day, primarily Airflow, Spark, Hive and a lot more. You’ll need to come from a strong academic background with some commercial experience in a data-heavy software …
processes, and data warehousing. - Significant exposure to and hands-on experience with at least 2 of the programming languages - Python, Java, Scala, GoLang. - Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks. - Experience working with open table/storage formats like Delta Lake, Apache Iceberg or Apache Hudi. - Experience of developing and managing real-time data streaming pipelines using change data capture (CDC), Kafka and Apache Spark. - Experience with SQL and database management systems such as Oracle, MySQL or PostgreSQL. - Strong understanding of data governance, data quality, data contracts, and data security best practices. - Exposure …