Responsibilities & Qualifications
KPMG is currently seeking an Associate for our Data & Analytics (Big Data Software Engineer) practice.
While this requisition may state a specific geographic office, please note that our positions are location flexible between our major hubs. Opportunities may include, but are not limited to, Atlanta, Chicago, Dallas, Denver, New York City, Orange County, Philadelphia, Seattle, and Washington, DC. Please proceed with applying here, and let us know your location preference during the interview phase if applicable.
• Work in multi-disciplinary and cross-functional teams to translate business requirements into artificial intelligence goals and solution architecture; rapidly iterate models and results to refine and validate the approach across deployment options, including KPMG-hosted, client-hosted, laptop, cloud, and container environments.
• Work in a fast-paced and dynamic environment with both virtual and face-to-face interactions utilizing structured approaches to solving problems, managing risks, and documenting assumptions; communicate results and educate others through insightful visualizations, reports, and presentations.
• Build ingestion processes to prepare, extract, and annotate a rich variety of unstructured data sources (social media, news, internal or external documents, images, video, voice, emails, financial data, and operational data).
• Design, develop, and maintain artificial intelligence-enabled managed services (APIs) with a team of Data Scientists, Software Engineers, and Project Managers; architect, implement, and test data processing pipelines (e.g., Hadoop and Spark) and data mining and data science algorithms in a variety of hosted settings (cloud providers such as AWS, Azure, and GCP, or KPMG's own clusters).
• Develop automated reporting for API and system health (process, memory, response time) utilizing leading processes for software development and analytics.
• Translate advanced technical architectures into production systems and contribute to the continual maintenance and testing of processes, APIs, and associated user interfaces; build continuous integration and automated deployment environments; develop containers (Docker) to ensure that APIs and processing pipelines can be easily deployed across a variety of hardware and software architectures.
• Bachelor's, Master's, or PhD from an accredited college or university in Computer Science, Computer Engineering, Linguistics, or a related field, with a good understanding of object-oriented design and design patterns; familiarity with agile software development practices, testing strategies, and solid unit testing skills.
• Experience working in teams of data & analytics professionals to deliver business-driven analytics projects using big data methods across multiple programming languages and technologies preferred; direct experience with, or a close working relationship to, DevOps engineering. Multidisciplinary backgrounds preferred.
• Ability to work with local and international teams to understand available resources and constraints around data, architecture, platforms, tools, processes, and security; provide assistance and resolve problems using excellent problem-solving, verbal, and written communication skills.
• Understanding of cloud and distributed systems principles, including load balancing, networks, scaling, and in-memory vs. disk storage; experience with large-scale big data methods, such as MapReduce, Hadoop, Spark, Hive, Impala, or Storm.
• Fluency in several programming languages (Python, Scala, or Java), with the ability to pick up new languages and technologies quickly; ability to work efficiently in a Unix/Linux environment, with experience using source code management systems such as Git; experience with cloud computing and virtualization, persistence technologies (both relational and NoSQL), and multi-layered distributed applications.
• Ability to travel up to 80% of the time.
• Targeted graduation date between Fall 2018 and Summer 2019.
Applicants must be currently authorized to work in the United States without the need for visa sponsorship now or in the future.