Data Engineer - Rush University Medical Center
Chicago, IL
About the Job
Job Description
Location: Chicago, IL
Hospital: RUSH University Medical Center
Department: ORA Administration
Work Type: Full Time (Total FTE between 0.9 and 1.0)
Shift: Shift 1
Work Schedule: 8 Hr (8:00:00 AM - 5:00:00 PM)
Summary:
The Data Engineer is responsible for designing and implementing data pipelines for cloud projects. The position requires working with complex data sources and transforming them into datasets that analysts can use. Exemplifies the Rush mission, vision, and values and acts in accordance with Rush policies and procedures.
Other information:
• Bachelor's Degree.
• 5 years of experience.
• Strong background in cloud computing, software engineering and data processing.
• Data management experience.
• Experience with ETL tools such as Pentaho, Talend, Informatica, Azure Data Factory, Apache Kafka, and Apache Camel.
• Experience designing and implementing analysis solutions on Hadoop-based platforms, such as Cloudera Hadoop or Hortonworks Data Platform, or on Spark-based platforms such as Databricks.
• Proficiency with RDBMSs such as Oracle, SQL Server, DB2, and MySQL.
• Strong analytical and problem-solving skills.
• Strong verbal and written communication skills.
• Proficient programming skills in Python, SQL, NoSQL, and Spark.
• Ability to manage multiple projects.
• Ability to work independently or in groups.
• Ability to prioritize time.
• Ability to adapt to a rapidly changing environment.
Preferred Job Qualifications:
Experience, education, licensure(s), specialized certification(s).
Cloud certifications in AWS, Azure, or GCP.
Physical Demands:
Competencies:
Disclaimer: The above is intended to describe the general content of and requirements for the performance of this job. It is not to be construed as an exhaustive statement of duties, responsibilities or requirements.
Responsibilities:
• Designs, develops, and maintains data pipelines to enable data analysis and reporting.
• Builds, evolves, and scales out infrastructure to ingest, process, and extract meaning out of data.
• Writes complex SQL queries or Python code to support analytics needs.
• Manages projects and processes, working independently with limited supervision.
• Work with structured and unstructured data from a variety of data stores, such as data lakes, relational database management systems, and/or data warehouses.
• Combines, optimizes, and manages multiple big data sources.
• Builds data infrastructure and determines proper data formats to ensure data is ready for use.
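To give candidates a concrete feel for the work described above, a minimal extract-transform-load step might look like the sketch below. It uses Python's standard library only, with in-memory SQLite standing in for the RDBMS layer; the feed, table, and column names are purely hypothetical and not part of this posting.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed: visit counts per department (illustrative only).
RAW_CSV = """department,visits
Cardiology,120
Oncology,95
Cardiology,30
"""

def run_pipeline(raw_csv: str) -> list[tuple[str, int]]:
    # Extract: parse the delimited source into records.
    rows = list(csv.DictReader(io.StringIO(raw_csv)))
    # Transform: cast types and normalize the department names.
    cleaned = [(r["department"].strip(), int(r["visits"])) for r in rows]
    # Load: stage the records into an RDBMS table (in-memory SQLite here).
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE visits (department TEXT, visits INTEGER)")
    con.executemany("INSERT INTO visits VALUES (?, ?)", cleaned)
    # Serve analysts: an aggregate query over the staged data.
    result = con.execute(
        "SELECT department, SUM(visits) FROM visits "
        "GROUP BY department ORDER BY department"
    ).fetchall()
    con.close()
    return result

print(run_pipeline(RAW_CSV))  # [('Cardiology', 150), ('Oncology', 95)]
```

In practice the same extract-transform-load shape is carried out at much larger scale with the tools listed in the qualifications (Azure Data Factory, Spark, and similar), but the division into extract, transform, and load stages is the same.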
Rush is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other legally protected characteristics.
Position Data Engineer
Location US:IL:Chicago
Req ID 13533
Source: Rush University Medical Center