Artificial Intelligence job opening in Madrid
- 100% remote work
Plants of Tomorrow (PoT) is a strategic program from our client, a Swiss multinational, with the goal of empowering its business to increase productivity, reduce costs, and gain insights for new innovation by using Artificial Intelligence and Machine Learning on site and in real time.
The PoT MLOps culture of diversity, intellectual curiosity, problem solving, innovation oriented towards solving business problems, and openness is key to its success. Our team brings together people with a wide variety of backgrounds, experiences, and perspectives.
Responsibilities:
- Design and implement comprehensive data analytics solutions from start to finish, encompassing data modeling, data integration, and automation of processes.
- Design, create, and maintain optimal data pipeline software.
- Extract and assemble large, complex data sets to be consumed directly by machine learning models or analytics applications.
- Proactively enhance and sustain the current data platform and ecosystem through system configuration, performance optimization, monitoring of data pipelines, and operational support to users.
Profile:
- Self-driven, autonomous, results-oriented person
- Enjoys solving business and technical challenges
- Positive attitude, even under stress
- Analytical and practical mindset
- Curiosity to explore, to learn new things, and to challenge existing understandings
- Designs solutions considering the context, the end result, and all the intermediate elements
- Builds solutions to be reliable, secure, sustainable, and performant while remaining pragmatic in achieving intermediate objectives
- Courage to take risks, openness to admit mistakes, and the ability to move forward by learning from them
- Perseverance in the face of setbacks
Requirements:
- Expert knowledge of SQL, with the capacity to efficiently extract and collate meaningful information from high-volume sources and structure it into a comprehensive, usable data set.
- Expertise in programming, with proficiency in at least one major general-purpose language such as Java, C++, C#, or Python (preferred).
- Good understanding of both relational and NoSQL databases, big data platforms, and principles of database and data warehouse modeling.
- Strong analytical skills for working with unstructured datasets.
- Ability to build processes supporting data collection, cleansing and transformation, data structures, metadata, dependency management, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable big data stores.
- Strong project management and organizational skills.
- Experience supporting and working with cross-functional teams in a dynamic environment.
We are looking for a candidate with 5+ years of experience as a Data Engineer, DWH/ETL developer, BI engineer, or in a similar analytics development role, holding a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field. They should also have experience with the following software and tools:
- Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
- Experience with big data tools
- Experience with relational SQL and NoSQL databases
- Experience working in an MLOps setup, deploying and scaling multiple products
- Experience with major cloud data pipeline services, such as AWS (EC2, EMR, RDS, Redshift, etc.) or GCP (Dataflow, Dataprep, BigQuery, GCS, etc.)
Advanced/bilingual English is essential; interviews will be conducted in English. Other languages, such as French or German, are valued.
Nice to have:
- Advanced skills in Python
- Experience with data pipeline and workflow management tools: Luigi, Airflow, etc.
- Experience with stream-processing systems: Storm, Spark Streaming, etc.
- Experience with industrial data protocols such as OPC DA/OPC UA