Big Data Engineer (AWS) - ASCII Group
Chicago, IL
About the Job
Hi,
Please go through the JD and let me know if you are interested.
Location: Chicago, IL (5 days onsite from day 1)
Job Description:
We need an experienced and seasoned big data engineer with 12-14+ years of experience in AWS and the tech stack listed below:
Understand customer requirements and render them as architectural models that operate at large scale and at high performance.
Create architecture patterns, validate them against non-functional requirements, and finalize the build model.
Should be able to present their point of view on different big data architectural patterns.
Experience in creating architectural proposals.
Must have experience in leading teams and interacting with clients to gather requirements.
Must have led a team of developers with diverse skill sets (Big Data, Reporting, and Database architecture).
Experience with Statistics, Machine Learning and Predictive Modelling.
Work alongside customers to build data management platforms using Elastic MapReduce (EMR), Redshift, Kinesis, Amazon Machine Learning, Amazon Athena, Lake Formation, S3, AWS Glue, DynamoDB, ElastiCache, and the Relational Database Service (RDS).
Strong experience with other AWS services (EC2, EMR, Redshift, S3) and with streaming and storage technologies such as Kafka, Kinesis, and HDFS.
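For illustration, a minimal boto3 sketch of publishing events to a Kinesis data stream, one of the streaming services named above; the region, stream name, and record fields are placeholders, not part of this role's actual environment:

```python
import json

import boto3

# Placeholder region and stream name -- adjust for a real deployment.
kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict, stream: str = "example-clickstream") -> None:
    """Send one JSON-encoded record to a Kinesis data stream."""
    kinesis.put_record(
        StreamName=stream,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("user_id", "anonymous")),  # spreads records across shards
    )

publish_event({"user_id": 42, "action": "page_view"})
```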
Experience with the Starburst (Trino) ecosystem.
Experience developing data models in dbt (data build tool).
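As a sketch of the dbt requirement: besides SQL models, dbt (1.3+) also supports Python models on certain warehouses (Snowflake, Databricks, BigQuery). The model and column names below are hypothetical, and the DataFrame API shown follows the Snowpark style:

```python
# models/marts/completed_orders.py -- a dbt Python model (dbt >= 1.3).
# "stg_orders" and the STATUS column are hypothetical names.

def model(dbt, session):
    dbt.config(materialized="table")
    orders = dbt.ref("stg_orders")          # upstream staging model
    # Keep only completed orders; the exact DataFrame API depends on the warehouse.
    return orders.filter(orders["STATUS"] == "completed")
```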
Experience with configuration-based ETL pipeline development.
Experience with Dagster as an orchestration tool.
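A minimal sketch tying the two previous points together: a Dagster job whose extract step is driven by run-time configuration rather than hard-coded paths. The op names and S3 path are hypothetical:

```python
from dagster import job, op

@op(config_schema={"source_path": str})
def extract(context) -> list:
    """Read raw records from a configured location (stubbed out here)."""
    context.log.info(f"Reading from {context.op_config['source_path']}")
    return [{"id": 1}, {"id": 2}]

@op
def load(context, records: list) -> None:
    context.log.info(f"Loaded {len(records)} records")

@job
def etl_job():
    load(extract())

if __name__ == "__main__":
    etl_job.execute_in_process(
        run_config={"ops": {"extract": {"config": {"source_path": "s3://example-bucket/raw/"}}}}
    )
```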
Render working, high-performance data management solutions as CloudFormation templates and reusable artifacts for implementation by the customer. Bootstrapping EC2 instances with user data scripts is an added advantage.
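As one way to read the CloudFormation point: reusable templates can be generated in code, for example with the troposphere library, with a user data script handling first-boot setup. The AMI ID, instance type, and bootstrap steps are placeholders:

```python
from troposphere import Base64, Template
from troposphere.ec2 import Instance

template = Template()
template.set_description("Data-processing node bootstrapped via user data")

template.add_resource(Instance(
    "DataNode",
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="m5.xlarge",
    UserData=Base64(
        "#!/bin/bash\n"
        "yum install -y python3\n"     # illustrative bootstrap steps
        "pip3 install boto3\n"
    ),
))

print(template.to_yaml())  # emit the CloudFormation template
```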
Implementation and tuning experience across the big data ecosystem (e.g., EMR, Hadoop, Spark, R, Presto, Hive), relational databases (e.g., Oracle, MySQL, PostgreSQL, MS SQL Server), NoSQL stores and their design principles (e.g., DynamoDB, HBase, MongoDB, Cassandra), and data warehousing (e.g., Redshift, Teradata, Vertica, including schema design, query tuning, and optimization), as well as data migration and integration.
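Tuning in this ecosystem is often expressed as configuration. A minimal PySpark sketch of the kinds of knobs involved; the values are illustrative, not recommendations:

```python
from pyspark.sql import SparkSession

# Illustrative settings only -- real values depend on cluster size and workload.
spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    .config("spark.sql.shuffle.partitions", "200")   # shuffle parallelism
    .config("spark.executor.memory", "8g")           # per-executor heap
    .config("spark.sql.adaptive.enabled", "true")    # adaptive query execution
    .getOrCreate()
)
```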
Track record of implementing AWS services in a variety of business environments such as large enterprises and start-ups.
Knowledge of foundational infrastructure requirements such as networking, storage, and hardware optimization.
Prepare architecture and design briefs that outline the key features and decision points of the application built in the Data Lab.
Work with customers to advise on changes as they take these systems live on AWS.
Extract best-practice knowledge, reference architectures, and patterns from these engagements for sharing with the worldwide AWS solution architect community.
AWS Certification, e.g., AWS Solutions Architect, AWS Developer, or AWS Certified Big Data Specialty (now Data Analytics Specialty).
Source: ASCII Group