Big Data Engineer / Hadoop Developer - Portland, OR - Georgia IT Inc.
About the Job
Job Title: Big Data Engineer / Hadoop Developer
Location: Portland, OR
Position Type: 6-9+ Months Contract
Rate: DOE (W2/1099)
Duties:
* Work closely with data analysts and development stakeholders to identify data operations
* Design, document, and implement data lake and data stream processing
* Support the testing, deployment, and maintenance of data processes
* Understand when to use data streams versus data lakes
* Design and implement support tools for data processes
* Benchmark systems, analyze bottlenecks, and propose solutions to eliminate them
* Communicate data process designs clearly and align partner teams around them
Skills:
* Experience architecting and deploying highly scalable distributed systems
* Experience using Spark, Storm, Hadoop/MapReduce, SQL, and other data processing tools and languages
* Experience building high-performance algorithms in scalable languages such as Scala, Python, and R
* Familiarity with Big Data analysis patterns such as machine learning, MapReduce, and complex event processing
* Experience working on Linux systems
* Software engineering experience (design, coding, testing, deployment, and support)
* Experience using standard SDLC tools such as Jira, Git, and Jenkins
A good fit will:
* Enjoy being challenged by and solving complex problems
* Have good written and verbal communication skills
* Have the patience to "bring others along"
* Be able to assist in documenting requirements
* Be able to identify and resolve conflicts or ambiguities
Education:
* Advanced degree in Computer Science or a related discipline and eight-plus years of experience
Source: Georgia IT Inc.