Educational requirements: Bachelor's degree
English requirements: Competent English
Skilled employment experience required: 3-5 years
Required residence status: Temporary visa, Permanent resident
Remote work: not accepted
Essential skills/experience required:
Strong experience as a Big Data Engineer / Developer Programmer working with Hadoop
Solid experience working with Spark, Hive, etc. (see the PySpark sketch after this list)
Programming experience in Scala, Java, or Python
Excellent written and verbal communication skills
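To give a concrete sense of the Spark-plus-Hive work this role centres on, here is a minimal PySpark sketch that reads a Hive table and produces a daily aggregate. The database, table, and column names (analytics.events, event_ts, event_type) are hypothetical placeholders, not details from this posting.

# Minimal sketch: query a Hive table with Spark and aggregate it.
# All table/column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("event-counts")
    .enableHiveSupport()  # lets Spark resolve tables in the Hive metastore
    .getOrCreate()
)

# Daily event counts per event type.
daily_counts = (
    spark.table("analytics.events")
    .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("n_events"))
    .orderBy("event_date")
)

daily_counts.show()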
You will be responsible for:
Develop data analysis tools, analytics reporting, and real-time analysis systems (a streaming sketch follows this list).
Work closely with the team to develop well-architected, well-tested code that supports the business requirements.
Recommend technologies that take advantage of development capabilities for new and existing projects.
Promote and ensure that developed code is well tested and stable.
Work closely and collaboratively with other data platform software engineers within the team and the wider business to ensure integration with their systems.
Liaise with project stakeholders to ensure timely and cost-effective delivery of initiatives.
Develop documentation that communicates effectively to the relevant stakeholders.
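As an illustration of the real-time analysis side of the role, the sketch below counts events per one-minute window with Spark Structured Streaming. It assumes a Kafka source (the broker address broker:9092 and topic clickstream are hypothetical) and requires the spark-sql-kafka connector on the classpath.

# Sketch of a real-time pipeline: windowed counts over a Kafka stream.
# Broker, topic, and sink are hypothetical; the Kafka source needs the
# spark-sql-kafka package available to Spark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("realtime-metrics").getOrCreate()

# Ingest events from Kafka; the source exposes key, value, timestamp, etc.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
)

# Count events per 1-minute window per message key; the watermark bounds
# how late data may arrive so old window state can be dropped.
counts = (
    raw.withColumn("key", F.col("key").cast("string"))
    .withWatermark("timestamp", "5 minutes")
    .groupBy(F.window(F.col("timestamp"), "1 minute"), "key")
    .count()
)

# "update" mode emits only windows whose counts changed since the last batch.
query = (
    counts.writeStream
    .outputMode("update")
    .format("console")
    .start()
)
query.awaitTermination()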
Technical skills:
Hands-on experience with Big Data SQL variants (Hive-QL, Snowflake ANSI SQL, Redshift SQL)
Python and Spark, plus one or more of its APIs (PySpark, Spark SQL, Scala); Bash/shell scripting
Source code control: GitHub, VSTS, etc.
Big Data technologies: the Hadoop stack (HDFS, Hive, Impala, Spark, etc.) and cloud Big Data warehouses (Redshift, Snowflake, etc.); a SQL sketch follows this list
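The SQL variants above can be exercised through Spark itself; the sketch below runs a Hive-QL-style query via Spark SQL and lands the result as Parquet. The schema sales.orders, its columns, and the output path are hypothetical, chosen only to illustrate the skill mix the list describes.

# Sketch: a Hive-QL-style aggregation run through Spark SQL, with the
# result handed back to the DataFrame API. Names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("sql-example")
    .enableHiveSupport()
    .getOrCreate()
)

# Top 10 products by units sold since the start of 2024.
top_products = spark.sql("""
    SELECT product_id,
           SUM(quantity) AS total_sold
    FROM   sales.orders
    WHERE  order_date >= DATE '2024-01-01'
    GROUP  BY product_id
    ORDER  BY total_sold DESC
    LIMIT  10
""")

# Write the result out, e.g. to a warehouse staging area, as Parquet.
top_products.write.mode("overwrite").parquet("/tmp/top_products")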