Responsibilities & Qualifications
KPMG is currently seeking an Associate for our Data & Analytics Engineer – Big Data Systems for AI practice.
While this requisition may state a specific geographic office, please note that our positions are location-flexible across our major hubs. Opportunities may include, but are not limited to, Atlanta, Chicago, Dallas, Denver, New York City, Orange County, Philadelphia, Seattle, and Washington DC. Please proceed with applying here, and let us know your location preference during the interview phase if applicable.
• Rapidly prototype, implement, and optimize architectures to tackle the Big Data and Data Science needs of a variety of Fortune 1000 corporations and other major organizations; develop a modular code base to solve real-world problems, conducting regular peer code reviews to ensure code quality and compliance with industry best practices.
• Work in cross-disciplinary teams with KPMG industry experts to understand client needs and ingest rich data sources such as social media, news, internal/external documents, emails, financial data, and operational data.
• Develop and maintain D&A solutions on on-premises, cloud, KPMG-hosted, or hybrid infrastructure. Be the team champion of mainstream BI/EDW/Big Data toolsets such as Tableau, Alteryx, Informatica, Pentaho, ERwin, and PowerDesigner.
• Help research and experiment with leading and emerging BI/EDW/Big Data technologies such as serverless data lakes, AWS Redshift, Athena, Glue, GCP BigQuery, and Microsoft Power BI, and apply them to solving real-world client problems.
• Help drive the process of pursuing innovations, target solutions, and extensible platforms for Lighthouse, KPMG, and clients.
• Participate in developing and presenting thought leadership, and assist in ensuring that the Lighthouse technology stack incorporates and is optimized for specific technologies.
• Bachelor’s, Master’s, or PhD degree from an accredited college or university in Computer Science, Statistics, Mathematics, Engineering, Bioinformatics, Physics, Operations Research, or a related field.
• One year of relevant software development experience in multiple programming languages and technologies, preferably in professional services; experience with object-oriented design, coding, and testing patterns, as well as experience engineering (commercial or open source) software platforms and large-scale data infrastructures.
• Ability to pick up and learn new technologies quickly. Experience or knowledge of RDBMS design, data modeling, and MPP EDW system implementation; hands-on experience and knowledge of distributed computing architectures and massively parallel processing big data platforms (Hadoop, MapReduce, HDFS, Spark, Hive/Impala, HBase/MongoDB/Cassandra, Teradata/Netezza/Redshift, etc.).
• Hands-on experience and knowledge of BI/EDW/Big Data toolsets (Tableau, Alteryx, Informatica, Pentaho, ERwin, PowerDesigner); hands-on experience and strong knowledge of mainstream cloud infrastructures (AWS, Microsoft Azure, and GCP, including their D&A-related microservices); and the ability to implement data lakes and serverless data lakes.
• Market-leading fluency in SQL; hands-on experience with Linux/Unix/Windows/.NET; market-leading fluency in several programming languages (Bash/ksh/PowerShell; Python/Perl/R); and an understanding of programming methodologies (version control, testing, QA) and development methodologies (Waterfall and Agile). Full-stack development capability is preferred.
• Ability to travel up to 80% of the time.
• Targeted graduation date between Fall 2018 and Summer 2019.
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.