AI Big Data Engineer - Everest Consultants, Inc.
Troy, MI 48098
About the Job
Job Title: AI Big Data Engineer
Location: Troy, Michigan
Duration: 12 months
Hours: 8:00 am - 5:00 pm
Pay Rate: $65 - $69.50/hr on W-2 (No 1099 or C2C)
Summary
As a Data Engineer, you will focus on creating a Unified Data Platform. You will design, develop, and maintain data pipelines, data lakes, and data platforms that support the analytics and business intelligence needs of our clients. You will work with cutting-edge technologies and tools, such as Spark, Kafka, AWS, Azure, and Kubernetes, to handle large-scale and complex data challenges. You will also collaborate with full stack developers, data scientists, analysts, and stakeholders to ensure data quality, reliability, and usability. You must be comfortable working with huge datasets.
Main Responsibilities:
- Build automated pipelines to extract and process data from a variety of legacy platforms (predominantly SQL Server), e.g., via stored procedures, AWS Glue processing, etc.
- Implement data-related business logic on modern data platforms, such as AWS Glue, Databricks, and Azure using best practices and industry standards.
- Create vector databases, data marts, and the data models to support them.
- Optimize and monitor the performance, reliability, and security of data systems and processes.
- Integrate and transform data from (or to) various sources and formats, such as structured, unstructured, streaming, and batch.
- Develop and maintain data quality checks, tests, and documentation.
- Support data analysis, reporting, and visualization using tools such as SQL, Python, Tableau, and Amazon QuickSight.
- Research and evaluate new data technologies and trends to improve data solutions and existing capabilities.
Requirements:
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field
- At least 5 years of experience in data engineering or a similar role (previous DBA experience is a plus)
- Experience with big data frameworks and tools, such as Spark, Hadoop, Kafka and Hive
- Expert in SQL, including knowledge of efficient query and schema design, DDL, data modeling, and use of stored procedures
- Proficient in at least one programming language, such as Python, Go or Java
- Experience with CI/CD, containerization (e.g., Docker, Kubernetes), and orchestration (e.g., Airflow)
- Experience building production systems with modern ETL, ELT, and data systems, such as AWS Glue, Databricks, Snowflake, Elastic, and Azure Cognitive Search
- Experience deploying data infrastructure on cloud platforms (AWS, Azure, or GCP)
- Strong knowledge of data quality, data governance, and data security principles and practices
- Excellent communication, collaboration, and problem-solving skills
Everest Consultants is an equal opportunity employer and does not discriminate on the basis of race, color, religion, sex, national origin, age, disability, or any other characteristic protected by applicable local, state, or federal civil rights laws. #IND
Source: Everest Consultants, Inc.