Data Engineer Tech Leader

Porto, Remote

Google Cloud Platform, Big Data, Spark, Hadoop, PowerBI, Azure

Are you a Data Engineer Tech Leader? Become a Rhino and join the #crash!

We are META: more than a company, a TEAM that will shake up your idea of tech recruitment. Based in Évora but working mainly remotely, we partner with national and international clients in several sectors, with offices across the country.

We are a disruptive company, always ‘charging down’ big opportunities for the best professionals. Through a Human 2 Human approach, we are straight to the point and focused on transparency, looking after the well-being and professional development of our Rhinos.

META provides you with:

  • A welcome kit 🎒
  • Career progression 📈
  • Health insurance 💙
  • Netflix and Spotify accounts 📺 🎧
  • Coverflex 💰
  • Protocols and special discounts 🤑

We are looking for a Rhino who will…

  • Act as a technical reference in developing code for processing and analyzing large amounts of data (e.g. unstructured tracking data, large full-text data sets, graph data, etc.);
  • Design and implement highly scalable batch or real-time data processing workflows/components and products that make smart use of data to provide data-driven functionality (e.g. search services, recommender services, classification services, etc.), leveraging machine learning, Big Data technology, and distributed systems;
  • Deploy solutions on cloud technologies, mainly GCP (Google Cloud Platform);
  • Work in a cross-functional team and collaborate closely with Data Scientists and Analysts to support the (technical) design, implementation and evaluation of new algorithms, machine learning models and other data-driven features and services.

The perfect Rhino will have…

  • BSc or higher degree in Computer Science or equivalent Software Engineering experience;
  • Experience in designing and implementing scalable software and data-driven services using Big Data technologies, data stores (e.g. relational databases, key-value stores), data warehouses, and related data processing technologies, as well as programming languages such as Scala / Java / Python (for real-time and batch processing);
  • Experience in building and optimizing data pipelines that process large datasets using Spark in a Hadoop environment;
  • English proficiency (both written and spoken).

And will be skilled in…

  • Working in cloud environments, especially with data-related services such as BigQuery, Dataproc, and GCS on Google Cloud Platform;
  • Analyzing large datasets (e.g. with SQL, Spark, Python, Hive etc.);
  • Workflow management platforms (like Airflow) and collaborative platforms (like Jupyter or Zeppelin);
  • YAML (data serialization);
  • Microsoft Azure and PowerBI are a plus;
  • Professional English (the candidate should be able to conduct the interview in English).

Join the crash! In a world of unicorns, be a Rhino 📩

