Senior Data Engineer
Wind Tre S.p.A. con Socio Unico
Milano, Italy

We are looking for a brilliant and talented Data Engineer to join our Data Engineering Dream-Team within the Big Data & Analytics / Commercial Department.

You'll collaborate in a friendly, young, passionate, fast-paced, challenging and customer-focused environment to drive Wind Tre forward on its data-driven transformation journey.

If you like creating new visions for the future, if you have a natural curiosity and a creative mind, and you're eager to work in a collaborative context, we would like to meet you!

Main responsibilities and activities
- Industrialize highly scalable data pipelines covering the end-to-end data lifecycle, from discovery to production.
- Contribute to standardizing the conceptual and logical data model.
- Process large-scale, multi-structured information (multivariate, heterogeneous sources, large volumes, raw data) and guarantee optimized data sets.
- Automate corporate information content, designing and implementing scalable data architectures.
- Work closely with Data Scientists in an agile and collaborative environment to deliver advanced analytics and AI use cases.

Required Skills
- Demonstrated experience in data pipelining / ETL and data architectures.
- Experience with object-oriented and functional programming, software engineering and testing patterns for large-scale data processing.
- Knowledge of SQL and NoSQL databases (such as MongoDB and Neo4j).
- Familiarity with Java, JavaScript, HTML.
- Fluency in data wrangling and preparation (exploratory analysis, profiling & cleansing, feature engineering).
- Strong working knowledge of Python, common Python libraries (e.g. Pandas, NumPy, Matplotlib, Seaborn, TensorFlow) and DevOps tools (Git, Airflow, Dataflow, Kubernetes); an illustrative pipeline sketch follows this section.
- Working knowledge of cloud and distributed computing with one of the major cloud providers (Google Cloud, AWS, Azure).
- 3+ years of proven experience in Data Engineering roles, in major consulting firms or large international companies.
- Preferred: proficiency with the Hadoop stack (Apache Hadoop, Spark, Beam, Kafka) and the open-source ecosystem.
- Preferred: knowledge of Deep Learning models and open-source frameworks (Keras, TensorFlow, PyTorch).
- Preferred: knowledge of GCP services such as BigQuery, Dataproc, Dataflow, Pub/Sub.
- Preferred: working experience in Telco companies, with data engineering applied to Commercial use cases, data pipelining, cleansing and preparation.

Education
MSc in Computer Engineering, Computer Science or other related disciplines.
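To give a flavour of the day-to-day pipeline work the role describes, here is a minimal sketch of a daily batch job using the tools listed above (Python, Pandas, Airflow). It assumes Apache Airflow 2.x; the DAG id, task names, file paths and column names are hypothetical placeholders for illustration, not Wind Tre's actual pipelines.

from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Hypothetical raw source: load CSV events and persist them as Parquet.
    df = pd.read_csv("/tmp/raw_events.csv")
    df.to_parquet("/tmp/raw_events.parquet")


def transform():
    # Simple cleansing and feature step: deduplicate rows and derive a date
    # column from a (hypothetical) event timestamp field.
    df = pd.read_parquet("/tmp/raw_events.parquet")
    df = df.drop_duplicates()
    df["event_date"] = pd.to_datetime(df["event_ts"]).dt.date
    df.to_parquet("/tmp/clean_events.parquet")


with DAG(
    dag_id="example_commercial_events",   # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run extraction before transformation, once per day.
    extract_task >> transform_task

In a production setting the same pattern would typically write to a warehouse such as BigQuery rather than local files; this sketch only shows the DAG structure.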
