Sr. Data Engineer - Georgia IT Inc.
Newark, CA
About the Job
Position: Sr. Data Engineer - IoT, AWS, Kafka | Location: Newark, CA
Rate: DOE
Technical/Functional Skills
Bachelor's or Master's degree in Software Engineering or Computer Science
8+ years of experience in Data Engineering and Business Intelligence.
Proficient in IoT tools such as MQTT, Kafka, Spark
Proficient with AWS, S3, Redshift
Experience with Presto and Parquet/ORC
Proficient with Apache Spark and its DataFrame API
Experienced in containerization, including Docker and Kubernetes
Expert in tools such as Apache Spark, Apache Airflow, Presto
Expert in designing and implementing reliable, scalable, and performant distributed systems and data pipelines
Extensive programming and software engineering experience, especially in Java and Python
Experience with columnar databases such as Redshift and Vertica
Great verbal and written communication skills.
Roles & Responsibilities
Hands-on design and development of streaming and IoT data pipelines using MQTT, Kafka, and Spark Structured Streaming
Orchestrate and monitor pipelines using Prometheus and Kubernetes
Deploy and maintain streaming jobs using CI/CD pipelines and related tools
Python scripting for automation and application development
Design and implement workflows with Apache Airflow and other dependency-management and scheduling tools
Hands-on data modeling and data warehousing
Deploy solutions using AWS (S3, Redshift) and Docker/Kubernetes
Develop storage and retrieval systems using Presto and Parquet/ORC
Scripting with Apache Spark and its DataFrame API
Note: If you are interested, please reply with your updated resume, and you will get a call from our manager with more details.
Source : Georgia IT Inc.