Full Stack Data Engineer
Skillsets: Experience with the following tools is preferred: Apache Spark, SQL Server, DB2, AWS S3, OpenShift...
tools that enable statisticians to process large datasets and run analytical workloads using GCP, Apache Spark... supporting containerized Spark applications. Architect and enhance the platform used for executing statistical risk models. Drive...
Hands-on experience with Spark, Kafka, and Airflow (or similar). Strong understanding of data modeling and lakehouse...
Management: Deploy, configure, and maintain OpenShift clusters or GCP projects to support containerized Spark applications... Spark. Optimization: Tune Spark jobs for performance, leveraging OpenShift's resource management capabilities (e.g...
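As a concrete illustration of the kind of tuning this role involves, the sketch below assembles `spark-submit` arguments for a containerized job. The configuration property names are standard Spark settings; the values, the container image, and the cluster URL are hypothetical placeholders, not a definitive recommendation.

```python
# Sketch: assemble spark-submit arguments for a containerized Spark job.
# Property names are real Spark settings; all values are illustrative only.

def build_spark_submit_args(app_jar: str) -> list[str]:
    conf = {
        "spark.executor.instances": "4",        # parallelism; size to cluster quota
        "spark.executor.memory": "8g",          # per-executor heap
        "spark.executor.cores": "4",            # cores per executor
        "spark.kubernetes.container.image": "registry.example/spark:latest",  # hypothetical image
        "spark.sql.shuffle.partitions": "200",  # tune for data volume
    }
    # Hypothetical Kubernetes/OpenShift API endpoint as the master URL.
    args = ["spark-submit", "--master", "k8s://https://openshift-api:6443"]
    for key, value in conf.items():
        args += ["--conf", f"{key}={value}"]
    return args + [app_jar]

args = build_spark_submit_args("risk-model-job.jar")
print(args)
```

In practice these values would be derived from the cluster's resource quotas and the job's shuffle volume rather than hard-coded.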
Cloud-native platform running on Google Cloud Platform and supporting distributed data processing using Apache Spark. Experience... with large-scale datasets, GCP services, Spark, and containerized microservices is essential. Responsibilities: Consult...
working with Cloud data solutions: creating/supporting Spark-based ingestion and processing. 3 years of experience with Data..., BigQuery, Dataproc, Cloud Composer. Hands-on experience developing data flows using Kafka, Flink, and Spark streaming...
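The Kafka/Spark-streaming data flows mentioned above typically reduce to windowed aggregation over an event stream. As a dependency-free sketch (plain Python standing in for Spark Structured Streaming, so it runs anywhere; all names and data are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

def windowed_counts(events, window_minutes=5):
    """Count events per key in fixed time windows.

    Mirrors what a Spark Structured Streaming
    groupBy(window(...), key).count() produces, but in plain
    Python for illustration.
    """
    counts = defaultdict(int)
    for ts, key in events:
        # Floor the timestamp to the start of its window.
        start = ts.replace(minute=ts.minute - ts.minute % window_minutes,
                           second=0, microsecond=0)
        counts[(start, key)] += 1
    return dict(counts)

events = [
    (datetime(2024, 1, 1, 12, 1), "click"),
    (datetime(2024, 1, 1, 12, 3), "click"),
    (datetime(2024, 1, 1, 12, 7), "view"),
]
result = windowed_counts(events)
# Two "click" events fall in the 12:00 window; one "view" in the 12:05 window.
```

A real pipeline would read the events from a Kafka topic and maintain these windows incrementally with watermarking, but the aggregation logic is the same shape.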
, snowflake schema). ETL/ELT processes. Big data platforms (Hadoop, Spark). Experience with modeling tools: ERwin, ER/Studio...
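To make the star/snowflake-schema requirement concrete, here is a minimal sketch of an ELT step that splits denormalized records into a fact table plus a dimension table (star-schema style); all field names and data are invented for illustration.

```python
def to_star_schema(rows):
    """Split denormalized sales rows into a product dimension and a fact table.

    This is star-schema style; in a snowflake schema the product
    dimension would be normalized further (e.g. category moved to
    its own table keyed by category_id).
    """
    dim_product = {}   # product_id -> dimension row (deduplicated)
    fact_sales = []    # fact rows referencing the dimension by key
    for r in rows:
        dim_product[r["product_id"]] = {
            "product_id": r["product_id"],
            "product_name": r["product_name"],
            "category": r["category"],
        }
        fact_sales.append({
            "product_id": r["product_id"],  # foreign key into dim_product
            "quantity": r["quantity"],
            "amount": r["amount"],
        })
    return dim_product, fact_sales

rows = [
    {"product_id": 1, "product_name": "widget", "category": "tools",
     "quantity": 2, "amount": 9.98},
    {"product_id": 1, "product_name": "widget", "category": "tools",
     "quantity": 1, "amount": 4.99},
]
dim, facts = to_star_schema(rows)
# Two fact rows share one deduplicated dimension row.
```

The same split is what tools like ERwin or ER/Studio model declaratively before the ETL/ELT code is generated or written.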
(e.g., Hadoop, Spark, Snowflake). Experience with BI tools such as Tableau, ThoughtSpot, or Business Objects. Data Architecture...
Deploy, configure, and maintain OpenShift clusters or GCP projects for Spark workloads. Support platform capabilities for statistical... model execution. Design and implement large-scale data processing workflows using Apache Spark. Tune Spark jobs using...