Data Engineering

Eden Prairie, Minnesota

Posted in Retail

This job has expired.

Job Info

Projects the candidate will be working on:

  • Position will help address a spike in the data acquisition work we routinely perform on the Data Lake (ODL).
  • The spike is due to Clinical Performance (CP) needing data from several new Care Delivery Organizations (CDOs) to be acquired by the end of the year.
  • This resource will augment current staff, providing dedicated focus on the CP demand.

Ideal Background:

  • The healthcare specific background that would be important for us is an understanding of the data that is collected as part of clinical encounters between patients and healthcare providers.
  • There are several data exchange standards for this data including HL7, CDA, FHIR, among others.
  • Familiarity with these data standards would let these resources address business-specific needs rather than spending time learning the data formats.
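To give a flavor of the kind of data involved, here is a minimal sketch of reading a FHIR Observation resource (the JSON content and helper function below are illustrative assumptions, not part of this posting; real clinical payloads carry far more fields and extensions):

```python
import json

# A minimal, hand-written FHIR R4 Observation resource (illustrative only).
observation_json = """
{
  "resourceType": "Observation",
  "status": "final",
  "code": {
    "coding": [
      {"system": "http://loinc.org", "code": "8867-4", "display": "Heart rate"}
    ]
  },
  "subject": {"reference": "Patient/example"},
  "valueQuantity": {"value": 72, "unit": "beats/minute"}
}
"""

def summarize_observation(raw: str) -> str:
    """Pull the coded display name and measured value out of an Observation."""
    obs = json.loads(raw)
    display = obs["code"]["coding"][0]["display"]
    qty = obs["valueQuantity"]
    return f"{display}: {qty['value']} {qty['unit']}"

print(summarize_observation(observation_json))
# → Heart rate: 72 beats/minute
```

Candidates familiar with these structures (coding systems such as LOINC, resource references, quantity types) can start mapping them into the data lake immediately rather than learning the formats on the job.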


  • I will look for initiative and customer engagement.

Top Requirements:

  • Familiarity with clinical data exchange structures
  • SQL expertise

Team and Team size:

  • Would be augmenting an existing team supporting the ODL with data acquisition (ETL) capabilities. The existing team has about 3 developers, 2 Data Analysts, and 2 QE resources.

Function Description:

  • Functions may include database architecture, engineering, design, optimization, security, and administration; as well as data modeling, big data development, Extract, Transform, and Load (ETL) development, storage engineering, data warehousing, data provisioning and other similar roles.
  • Responsibilities may include Platform-as-a-Service and cloud solutions with a focus on data stores and their associated ecosystems.
  • Duties may include management of design services, providing sizing and configuration assistance, ensuring strict data quality, and performing needs assessments.
  • Analyzes current business practices, processes, and procedures, and identifies future business opportunities for leveraging data storage and retrieval system capabilities.
  • Manages relationships with software and hardware vendors to understand the potential architectural impact of different vendor strategies and data acquisition.
  • May design schemas, write SQL or other data scripting, and help support development of analytics and applications that build on top of the data.
  • Selects, develops and evaluates personnel to ensure the efficient operation of the function.

Core Tasks:

  • Maintain, improve, clean, and manipulate data in the business's operational and analytics databases
  • Design and deploy data platforms across multiple domains ensuring operability
  • Transform data for meaningful analyses
  • Improve data efficiency, reliability and quality
  • Create data enrichment processes
  • Build for high performance
  • Ensure data integrity
  • Create and manage data stores at scale
  • Ensure data governance - security, quality, access and compliance

Required Qualifications:

  • Experience with healthcare data
  • Knowledge and understanding of health care privacy and security practices
  • 6+ years of designing, coding and supporting distributed, data intensive systems at scale
  • 3+ years of non-relational (NoSQL, Big Data) delivery
  • 4+ years of development experience with the Big Data technology stack (e.g., Hadoop, MapR, Talend, Pig scripting, HBase, Hive, Spark); relational databases; and test automation in Linux/Windows environments
  • 3+ years with DevOps automation tools (e.g., Oozie, Python)
  • Excellent communication skills, with the ability to describe data/capability stories (not defects) and explain value (not resolution) to customers
  • Experience in delivering Data Platforms
  • 6+ years of relational database delivery
  • 7+ years of experience working within the Software Development Life Cycle (SDLC)
  • 4+ years of experience in Agile delivery
  • BS/BA or equivalent experience

Preferred Qualifications:

  • Hands-on experience with programming languages: Go, Python, Scala, Java
  • Hands-on experience with fully automated testing frameworks (unit and integration): Cucumber, Spock, Go unit tests, JUnit
  • Hands-on experience with big data and streaming frameworks: Kafka, Hadoop, Hive, Spark, HDFS
  • Hands-on experience using a PaaS such as Kubernetes or OpenShift
  • Hands-on experience working with CI/CD platforms and monitoring, namely GitHub, Jenkins, Grafana, Prometheus
  • Experience and working exposure in a cloud environment, preferably Azure

Interview Process:

  • 2 rounds of interviews over the phone. Assessing the technology will be a small portion of the interview.
