Data Engineer IV - Randstad USA
Charlotte, NC 28206
About the Job
Description:
Notes from Engagement Manager:
- Candidates MUST be able to code
- Candidates should be "consultants."
- Architects/Managers are not the right fit. Candidates whose current or previous role is an architect role will not be a fit for this position.
- Must be able to discuss technical topics with a nontechnical audience. Candidates MUST be motivated and good communicators.
Skills:
- Data platforms on AWS. Data Pipelines and Data Lakes.
- Candidates in this role must be able to develop both Data Lakes AND Data Pipelines; please highlight that experience on resumes. Candidates without the must-have skills will not be shortlisted.
Data Engineers who are good coders/analysts.
The client's Enterprise Data Platforms Team is seeking a Subject Matter Expert to help develop the client's Data Fabric as an interconnected network of data capabilities and data products composed to deliver data at speed and scale.
Candidates should be experts in the field and able to demonstrate experience developing and building data platforms, helping our team overcome obstacles and avoid pitfalls.
They should also be able to help us accelerate our rate of production through optimization and automation, using Terraform Enterprise scripts and optimized AWS, Apache, and other tool configurations and architectural design, and should be experienced and flexible with changing demands in an Agile development environment.
Specifically, we are looking for individuals who have at least five years of experience in a Data Engineering or Software Engineering role and are in a position to be a source of knowledge for our existing engineers.
Must have experience with the tech stack below:
- Git
- AWS
- IAM
- API Gateway
- Lambda
- Step Functions
- Lake Formation
- EKS
- Glue: Catalog, ETL, Crawler
- Athena
- S3 (solid grasp of foundational concepts such as object vs. block data stores, encryption/decryption, storage tiers, etc.; see the sketch after this list)
- Apache Hudi
- Flink
- PostgreSQL
- SQL
- RDS (Relational Database Service)
- Python
- Java
- Terraform Enterprise
- Must be able to explain what TF is used for
- Understand and explain basic principles (e.g. modules, providers, functions)
- Must be able to write and debug TF
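To give a sense of the S3 fundamentals called out above, here is a minimal boto3 sketch (the bucket name, key, and local file are hypothetical) showing whole-object writes with server-side encryption and an infrequent-access storage tier:

    import boto3

    s3 = boto3.client("s3")

    # S3 is an object store: each write is a whole object addressed by bucket + key,
    # unlike block storage (e.g. EBS), where the OS manages fixed-size blocks.
    with open("orders.parquet", "rb") as f:
        s3.put_object(
            Bucket="example-data-lake-raw",      # hypothetical bucket
            Key="sales/2024/01/orders.parquet",  # hypothetical key
            Body=f,
            ServerSideEncryption="aws:kms",      # encrypt at rest with a KMS-managed key
            StorageClass="STANDARD_IA",          # cheaper tier for infrequently read data
        )

    # Reads are whole-object GETs; tiering and encryption stay transparent to the
    # caller as long as the KMS key policy allows decryption.
    obj = s3.get_object(Bucket="example-data-lake-raw", Key="sales/2024/01/orders.parquet")
    data = obj["Body"].read()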
Additional helpful experience would include:
- Kafka and Kafka Schema Registry
- AWS Services: CloudTrail, SNS, SQS, CloudWatch, Step Functions, Aurora, EMR, Redshift
- Concourse
- Secrets management platforms: Vault, AWS Secrets Manager
- Experience with Event-Driven Architecture (see the sketch after this list)
- Experience transitioning on-premises big data platforms to cloud-based platforms such as AWS
- Background in Kubernetes, Distributed Systems, Microservice architecture and containers
- Implementation and tuning experience in Streaming use cases in Big Data Ecosystems (such as EMR, EKS, Hadoop, Spark, Hudi, Kafka/Kinesis etc.)
- Experience building scalable data infrastructure and an understanding of distributed systems concepts from a data storage and compute perspective.
- Good understanding of Data Lake and Data Warehouse concepts.
- Ability to define standards and guidelines, with an understanding of various compliance and auditing needs.
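As one illustration of the event-driven pattern mentioned above, the sketch below assumes an SQS-triggered Lambda function with partial-batch failure reporting enabled; the process() helper is hypothetical:

    import json

    def process(payload: dict) -> None:
        # Hypothetical processing step, e.g. landing the event in the data lake.
        print(f"processing event: {payload}")

    def handler(event, context):
        # Each SQS-triggered invocation receives a batch of records.
        failures = []
        for record in event.get("Records", []):
            try:
                payload = json.loads(record["body"])
                process(payload)
            except Exception:
                # Report only failed message IDs so successful records are not retried
                # (requires ReportBatchItemFailures on the event source mapping).
                failures.append({"itemIdentifier": record["messageId"]})
        return {"batchItemFailures": failures}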
location: Charlotte, North Carolina
job type: Contract
salary: $74.45 - $84.45 per hour
work hours: 8am to 5pm
education: Bachelors
responsibilities:
- Provides technical direction, engages the team in discussion on how best to guide/build features on key technical aspects, and is responsible for product tech delivery
- Works closely with the Product Owner and team to align on delivery goals and timing
- C