Parkside Securities is simplifying global access to US markets through regulatory innovation and technology. We are a US-based broker-dealer that lets foreign citizens invest in US securities using their local currency, with low fees and no minimum investment amounts.

We are looking for an experienced Data Engineer who can work across multiple teams to own our data processing and programming. The ideal candidate will know functional programming languages such as Clojure or Scala and be willing to roll up their sleeves in a fast-paced startup environment.

Responsibilities

  • Design the architecture for financial data analysis and algorithm-driven products, including, but not limited to, stock brokerage operations optimization, portfolio management, and customer behavior analytics.
  • Create and maintain optimal data pipeline architecture.
  • Work closely with server-side, cloud operations, infrastructure, and front-end engineers.
  • Assemble large, complex data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
  • Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
  • Keep our data segregated and secure across national boundaries using multiple data centers and AWS regions.
  • Create data tools that help our analytics and data science team members build and optimize the product.
  • Work with data and analytics experts to extend the functionality of our data systems.

Requirements

  • 5+ years of experience in data processing and analytics programming
  • Knowledge of Python for translating proofs of concept into production code
  • Experience developing data processing applications in Clojure is ideal, though other functional languages such as Scala are acceptable
  • Knowledge of Spark to design, develop, and maintain data processing jobs
  • Hands-on experience with the AWS analytics stack (Redshift, EMR, Athena, Glue, etc.)
  • Experience developing ETL (Extract, Transform, Load) data pipelines; a brief sketch of this kind of work follows this list
  • Experience with real-time streaming data processing
  • Experience implementing clustered, distributed, and multi-threaded infrastructure to support machine learning workloads
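
To give a flavor of the ETL work described above, here is a minimal, hypothetical sketch in Clojure (the core language of our stack, listed below). The namespace, record shape, and field names are illustrative assumptions rather than Parkside code, and it assumes Clojure 1.11+ for parse-long and parse-double:

  ;; Hypothetical sketch; record shapes and names are illustrative only.
  (ns example.trades-etl
    (:require [clojure.string :as str]))

  ;; Extract: parse one CSV line ("account,symbol,qty,price") into a trade map.
  (defn parse-trade [line]
    (let [[account sym qty price] (str/split line #",")]
      {:account account
       :symbol  sym
       :qty     (parse-long qty)
       :price   (parse-double price)}))

  ;; Transform: compute the notional value of each trade.
  (defn enrich [trade]
    (assoc trade :notional (* (:qty trade) (:price trade))))

  ;; Load: aggregate notional volume per symbol, ready to write downstream.
  (defn notional-by-symbol [lines]
    (->> lines
         (map (comp enrich parse-trade))
         (group-by :symbol)
         (map (fn [[sym trades]] [sym (reduce + (map :notional trades))]))
         (into {})))

  (comment
    ;; Example: total notional per symbol from three trades.
    (notional-by-symbol ["acct-1,AAPL,10,187.5"
                         "acct-2,AAPL,5,188.0"
                         "acct-1,MSFT,3,410.25"])
    ;; => {"AAPL" 2815.0, "MSFT" 1230.75}
    )

The same pattern of small, pure transform functions threaded together carries over naturally to distributed engines such as Spark.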

Technology Stack used in core application development

  • AWS
  • Terraform
  • Kubernetes
  • Docker
  • Clojure
  • DynamoDB
  • Apache Kafka
  • Datomic
  • GitHub
  • macOS/Linux for development