Python Developer With Data Engineering Experience - Petadata
Las Vegas, NV
About the Job
Job Title: Python Developer with Data Engineering Experience
Location: Las Vegas, NV (REMOTE)
EXPERIENCE: 12+ YRS
Job Type: W2/C2C
PETADATA is currently looking to hire a Python Developer with Data Engineering experience for one of their clients.
Job Overview:
We are seeking a highly skilled Data Engineer with a strong emphasis on Python programming. The ideal candidate will have extensive experience in object-oriented programming (OOP) and be proficient in building, understanding, and using packages, modules, objects, and methods. This role requires you to develop and maintain APIs using FastAPI and Pydantic models, work with SQLAlchemy's ORM, and manage data workflows with tools such as Airflow, Snowflake, and Spark, applying both ETL and ELT patterns. Strong analytical skills are also essential for this position.
Key Responsibilities:
Design, develop, and maintain robust data pipelines and workflows.
Write clean, efficient, and reusable Python code with a strong emphasis on object-oriented principles.
Develop and manage APIs using FastAPI and Pydantic models.
Work with SQLAlchemy's ORM to interact with databases.
Implement data processing workflows using Apache Airflow.
Manage and optimize data storage and retrieval using Snowflake.
Utilize Apache Spark for large-scale data processing.
Perform ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) operations to ensure data integrity and accessibility.
Collaborate with cross-functional teams to understand data requirements and deliver solutions.
Analyze data to provide actionable insights and support decision-making processes.
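As a rough illustration of the ETL responsibilities above, the sketch below separates extract, transform, and load into plain Python functions. All names and data here are hypothetical; a production pipeline would typically run these steps as Airflow tasks against Snowflake or Spark rather than in-memory structures.

```python
def extract(source: list[dict]) -> list[dict]:
    """Pull raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(rows: list[dict]) -> list[dict]:
    """Clean and reshape records: drop incomplete rows, normalize fields."""
    return [
        {"name": r["name"].strip().title(), "amount": float(r["amount"])}
        for r in rows
        if r.get("name") and r.get("amount") is not None
    ]

def load(rows: list[dict], target: dict) -> None:
    """Write transformed records into a keyed target store."""
    for r in rows:
        target[r["name"]] = r["amount"]

raw = [
    {"name": "  alice ", "amount": "10.5"},
    {"name": None, "amount": "3"},  # dropped by transform: missing name
    {"name": "bob", "amount": "2"},
]
warehouse: dict = {}
load(transform(extract(raw)), warehouse)
print(warehouse)  # {'Alice': 10.5, 'Bob': 2.0}
```

An ELT variant would simply call load before transform, deferring the cleanup to the target system.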
Required Skills:
The ideal candidate will have a strong computer science background.
Must have 10+ years of IT experience, including 8+ years of experience in Data Engineering.
In-depth knowledge of and at least 4 years of experience with Amazon RDS Aurora PostgreSQL, plus advanced Python knowledge.
You should have experience working with Agile and cloud services, SQL/NoSQL databases, and Docker/Kubernetes.
Experience with integration development using AWS or other cloud technologies.
Design, architect, implement, and support key datasets that provide structured and timely access to actionable business insights.
Develop ETL processes that convert data into formats usable by data analysts and dashboard charts.
Extensive knowledge of task creation (e.g., scheduled and triggered tasks) is required.
Experience with pipeline monitoring and troubleshooting.
Must have good written and verbal communication skills.
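The database skills listed above can be sketched with the standard library alone: below, a dataclass stands in for a SQLAlchemy model and sqlite3 for the SQL database. The table, column names, and sample data are illustrative assumptions, not part of the client's actual schema.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class User:
    id: int
    email: str

# In-memory database standing in for a managed service such as Aurora PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT NOT NULL)")
conn.execute("INSERT INTO users (id, email) VALUES (?, ?)", (1, "a@example.com"))
conn.commit()

def get_user(conn: sqlite3.Connection, user_id: int):
    """Map a row back to a User object, much as an ORM session lookup would."""
    row = conn.execute(
        "SELECT id, email FROM users WHERE id = ?", (user_id,)
    ).fetchone()
    return User(*row) if row else None

print(get_user(conn, 1))  # User(id=1, email='a@example.com')
```

With SQLAlchemy, the dataclass would become a declarative model and the query a session call, but the row-to-object mapping shown here is the core idea.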
Educational Qualification:
Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
We offer a professional work environment and every opportunity to grow in the information technology field.
Note:
Candidates must attend phone, video, or in-person interviews; after selection, the candidate will undergo background checks on education and experience.
Please email your resume to: jhansib@petadata.co
After carefully reviewing your experience and skills, one of our HR team members will contact you to discuss the next steps.