Develop technical architecture to enable Analytics and Data Science using industry best practices for large-scale processing;
Design, develop and implement data pipelines for batch and streaming solutions;
Research and develop a distributed crawler and data acquisition system; optimize the crawling strategy and improve crawling effectiveness;
Monitor data quality across the data processing lifecycle;
Administer cloud-based data infrastructure and databases as needed.
About your background
Degree holder (Bachelor’s and above) from a renowned university, major in Computer Science, Computer Engineering or other related discipline
Development experience on the Linux platform
Proficient in data pipeline development including batch and streaming processing;
Familiar with crawling concepts and techniques; ability to design and develop a crawler system is an advantage;
Experience with cloud platforms, in particular AWS, is a plus;
Self-motivated, dependable, responsible and results-driven
An excellent communicator and team player
Ability to manage multiple tasks within a fast-paced environment
Strong curiosity and learning agility
Why you should apply
Make an impact
Stay ahead of the market with the latest technology and knowledge sharing
Senior Partner one-on-one mentorship
Highly competitive compensation package
Work in a flat structure where your talent gets noticed and promoted quickly
Work with like-minded people who share common values: high motivation, attention to detail, and systematic thinking
Work with a team of subject matter experts from diverse academic and cultural backgrounds
Privacy Statement
Data collected will be used for recruitment purposes only. Personal data provided will be used strictly in accordance with the relevant data protection law of the Hong Kong Special Administrative Region.