Brighthouse Financial is on a mission to help people achieve financial security. As one of the largest providers of annuities and life insurance in the U.S., we specialize in products designed to help people protect what they've earned and ensure it lasts. We are built on a foundation of experience and knowledge, which allows us to keep our promises and provide the value our customers deserve.

At Brighthouse Financial, we're fostering a culture where diverse backgrounds and experiences are celebrated and different ideas are heard and respected. We believe that by creating an inclusive workplace, we're better able to attract and retain our talent, provide valuable solutions that meet the needs of our advisors and their clients, and deliver on our mission of helping more people achieve financial security. We're seeking passionate, high-performing team members to join us. Sound like you? Read on.

Role Value Proposition:
The Data & Analytics team's mission is to use big, alternative, and directly sourced data to describe and predict key metrics of companies and economies, so that we can deliver innovative data solutions and insights to our internal analyst teams and the firm's clients. We are looking for self-starting, innovative data engineers to join and grow our team. You must have a keen interest in financial markets and experience with modern tools and methods, including processing data at large scale for analytical purposes.

Key Responsibilities:
- Build data pipeline frameworks to automate high-volume and real-time data delivery for our Hadoop and research data hub
- Build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers, and partners
- Transform complex analytical models into scalable, production-ready solutions
- Continuously integrate and ship code into on-premise and cloud production environments
- Develop applications from the ground up using a modern technology stack such as Scala, Spark, Postgres, AngularJS, and NoSQL
- Develop sustainable, data-driven solutions with current and next-generation data technologies to meet the needs of our organization and business customers
- Grasp new technologies rapidly as needed to progress varied initiatives
- Break down data issues and resolve them
- Build robust systems with an eye on the long-term maintenance and support of the application
- Leverage reusable code modules to solve problems across the team and organization
- Utilize a working knowledge of multiple development languages

Essential Business Experience and Technical Skills:
- Master's degree and/or equivalent work experience
- Expert-level experience designing, building, and managing applications that process large amounts of data in a Hadoop ecosystem
- Extensive experience performance-tuning applications on Hadoop and configuring Hadoop systems to maximize performance
- Experience building systems that perform real-time data processing using Spark Streaming and Kafka, or similar technologies (see the sketch after this list)
- Experience with common SDLC practices, including SCM, build tools, unit testing, TDD/BDD, continuous delivery, and agile methods
- Experience working in large-scale, multi-tenancy Hadoop environments: Hadoop, HDFS, Avro, MongoDB, or ZooKeeper
- Strong software development experience in the Scala and Python programming languages; other functional languages a plus
- Experience with Unix-based systems, including bash programming
- Experience with other distributed technologies such as Cassandra, Solr/Elasticsearch, Flink, and Flume would also be desirable

Travel: None
Department: Technology
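To give a concrete sense of the real-time data processing this role involves, here is a minimal sketch assuming Spark Structured Streaming with a Kafka source writing to HDFS. The broker address, topic name, paths, and the TradeEventPipeline object are illustrative placeholders, not part of any actual Brighthouse codebase.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TradeEventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-event-pipeline")
      .getOrCreate()
    import spark.implicits._

    // Subscribe to a Kafka topic; Kafka rows arrive as binary key/value pairs.
    // Payload parsing is omitted here for brevity.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "trade-events")              // placeholder topic
      .load()
      .selectExpr("CAST(value AS STRING) AS json", "timestamp")

    // Aggregate event counts per minute as a stand-in for a real metric;
    // the watermark bounds state so late data older than 5 minutes is dropped.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window($"timestamp", "1 minute"))
      .count()

    // Land results on HDFS as Parquet, with checkpointing for fault tolerance.
    val query = counts.writeStream
      .outputMode("append")
      .format("parquet")
      .option("path", "hdfs:///data/hub/trade_counts")             // placeholder path
      .option("checkpointLocation", "hdfs:///checkpoints/trade_counts")
      .start()

    query.awaitTermination()
  }
}
```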