999 Data Analyst Job Offers in Madrid
Big Data Engineer
Yesterday
Big Data Engineer
Today
BIG DATA ARCHITECT
Posted 24 days ago
Job Description
At IRIUM, we care about you never giving up on your dreams. Get ready to conquer your goals, and always remember to enjoy the journey.
We are looking for a Big Data Architect for a long-term project.
What are we looking for?
- Candidates residing in Spain only
- Big Data architecture with different tools and environments: Cloud (AWS, Azure and GCP), Cloudera, NoSQL databases (Cassandra, MongoDB), Databricks, ELK, Kafka, Snowflake
- Knowledge of Data Engineering and data quality tools (Informatica, Talend, etc.)
- Knowledge of Data Governance and Data Catalog solutions: IBM, Axon, Informatica EDC, Collibra, Purview, etc.
What we offer:
- Workplace: Madrid, with 60% remote work
- Flexible compensation (restaurant, transport and childcare vouchers)
- 23 days of vacation
- A good working environment
- Unlimited, all-you-can-learn access to cutting-edge technology training
- An employee benefits club with direct discounts and thousands of offers on brands, hotels, travel agencies, cinemas and clothing
- Salary range: depending on skills and experience
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: Internet Publishing
Big Data Engineer
Yesterday
Job Description
Overview
Winning Consulting, Madrid, Community of Madrid, Spain. We are looking for a Big Data Engineer to be part of our dynamic team. This role involves working on an exciting project for one of our key clients in the banking sector in Madrid (Hybrid).
Responsibilities
- Designing, developing and optimizing data pipelines using SQL and Databricks (a minimal pipeline sketch appears after the requirements below)
- Implementing data ingestion and transformation workflows using Spark in distributed environments
- Ensuring data governance, lineage, and integrity across all stages of the data architecture
Requirements
- Degree in Engineering, Computer Science or Mathematics with a clear focus on data and analytics
- Additional training or certifications in data platforms or cloud-based data tools is a plus
- Proficiency in Excel, VBA and Microsoft Office
- High level of English, both written and verbal
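To make the first responsibility above concrete, here is a minimal, hypothetical sketch of a SQL-plus-Databricks style pipeline written in PySpark. The table and column names (raw_transactions, curated.daily_totals, amount, event_ts) are illustrative assumptions, not details taken from the posting.

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession is already provided as `spark`;
# building one here just keeps the sketch self-contained.
spark = SparkSession.builder.appName("daily-totals-pipeline").getOrCreate()

# Hypothetical raw layer: one row per transaction, registered as a temp view
# so the transformation can be expressed in SQL.
spark.read.table("raw_transactions").createOrReplaceTempView("raw_transactions_v")

daily_totals = spark.sql("""
    SELECT customer_id,
           CAST(event_ts AS DATE) AS event_date,
           SUM(amount)            AS total_amount
    FROM raw_transactions_v
    WHERE amount IS NOT NULL              -- basic integrity filter
    GROUP BY customer_id, CAST(event_ts AS DATE)
""")

# Curated layer, partitioned by date so downstream reads stay cheap.
(daily_totals.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("curated.daily_totals"))
```

On Databricks the same logic could live in a notebook or a scheduled job; the point is only the shape of a SQL-first pipeline, not a specific implementation.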
We are a consulting firm offering services in consulting, training, recruitment, and headhunting. We support our clients in finding innovative and sustainable solutions, ranging from applying scientific knowledge to solve complex management challenges to driving digital and technological transformation within organizations.
If you want to learn more about us, visit our website at
Employment details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Engineering and Information Technology
- Industries: Internet Publishing
Big Data Architect
Yesterday
Job Description
Big Data Architect
Position Purpose:
We are seeking a highly skilled and experienced Big Data Architect to join our international team. You will play a pivotal role in shaping our Big Data environments and projects, including the Global Data Lake, while enhancing our Sustainable Estimatics offerings. Sustainable Estimatics is a leading suite within the company, recognized for its substantial impact on the industry. With our innovative and certified algorithms, we provide our customers with significant cost savings by minimizing waste and optimizing resource usage. By embedding sustainability principles into our Estimatics practices, we actively contribute to the industry's collective effort to reduce environmental impact. Our commitment to sustainability goes beyond individual projects; we aim to drive industry-wide innovation through the continuous development of new technologies and practices that create a positive ripple effect for both the environment and society.
As a Big Data Architect, you will be responsible for designing the overall architecture of our data systems, ensuring they are robust, scalable, and efficient. You will develop architectural strategies and frameworks that guide our data processing initiatives, enabling the effective management of large volumes of data from diverse sources worldwide.
What You Will Be Doing:
- Design and implement scalable and efficient data architectures that support data processing pipelines using Cloudera, Spark, and other relevant technologies.
- Lead the development of scalable API solutions to facilitate Data as a Service (DaaS), providing seamless access to data for both external and internal customers (a minimal API sketch follows this list).
- Establish best practices for data ingestion, transformation, and storage processes to ensure data quality, integrity, and availability across international locations.
- Collaborate with cross-functional teams to gather business requirements and translate them into comprehensive architectural specifications for data processing and analysis.
- Optimize data workflows and the performance of Spark jobs to ensure they meet stringent latency and throughput requirements while processing massive datasets.
- Conduct troubleshooting and performance tuning of Cloud or On-premises infrastructure to identify performance bottlenecks and enhance resource utilization.
- Leverage tools like New Relic for performance monitoring and Graylog for log analysis.
- Work closely with data scientists and analysts to ensure timely and reliable data sets for advanced analytics and machine learning models.
- Implement data governance practices and ensure compliance with data privacy and security regulations across various regions.
- Stay abreast of emerging technologies and industry trends related to big data processing, Cloudera, and Spark, and propose innovative architectural solutions to enhance data processing capabilities.
- Provide technical leadership, mentorship, and guidance to engineering teams, fostering a collaborative and innovative culture within the international group.
- Participate in agile development practices, including sprint planning, architecture reviews, and continuous integration and deployment, to ensure high-quality software delivery.
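As a purely illustrative aside on the DaaS item above: one common way to expose curated data through an API in Python is a small FastAPI service. FastAPI, the endpoint path and the in-memory stand-in data below are assumptions made for the sketch; the posting itself does not name a framework.

```python
from typing import List
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="daas-sketch")  # hypothetical service name

class DailyTotal(BaseModel):
    customer_id: str
    event_date: str
    total_amount: float

# Stand-in for a real query against the curated layer (Hive/Impala, a warehouse, etc.).
FAKE_STORE = {
    "c-001": [DailyTotal(customer_id="c-001", event_date="2024-01-01", total_amount=42.5)],
}

@app.get("/v1/daily-totals/{customer_id}", response_model=List[DailyTotal])
def daily_totals(customer_id: str) -> List[DailyTotal]:
    # A production DaaS layer would add authentication, pagination and
    # a real query against the governed data store here.
    rows = FAKE_STORE.get(customer_id)
    if rows is None:
        raise HTTPException(status_code=404, detail="unknown customer")
    return rows
```

Served with, for example, `uvicorn daas_sketch:app`, this gives internal and external consumers a versioned, typed contract over the data without exposing the underlying storage.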
What You Need for this Position:
- Bachelor’s or Master’s degree in Computer Science, Data Science, or a related field.
- A minimum of 7 years of working experience in big data architecture, preferably with Cloudera and Spark technologies.
- Strong understanding of API architectures and best practices, with experience in developing APIs for Data as a Service (DaaS) solutions.
- Strong proficiency in programming languages such as Scala, Python, or Java, with the ability to design and implement complex data solutions.
- In-depth knowledge of distributed computing principles and frameworks, including Hadoop and Spark.
- Extensive experience with Cloudera distribution and tools like HDFS, Hive, Impala, and HBase.
- Strong understanding of data modeling and database design principles, including schema design, partitioning, and indexing.
- Solid understanding of SQL and NoSQL databases, data warehousing concepts, and ETL processes.
- Proven expertise in designing, implementing, and optimizing data pipelines using Spark Streaming, Spark SQL, or other Spark modules.
- Familiarity with data ingestion techniques and tools such as Kafka, Flume, Sqoop, or NiFi (a streaming-ingestion sketch follows this list).
- Experience with cloud platforms like AWS or Azure, and knowledge of containerization technologies like Docker or Kubernetes is a plus.
- Understanding of data governance, data privacy, and security practices, particularly in an international context.
- Excellent problem-solving and analytical skills, with the ability to design solutions that optimize data processing workflows.
- English is required, with communication skills to effectively convey complex technical concepts to both technical and non-technical stakeholders.
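For orientation on the Spark Streaming and Kafka items above, here is a minimal sketch of a Kafka-to-data-lake ingestion job using Spark Structured Streaming. The broker address, topic name, payload schema and output paths are placeholder assumptions, and a real deployment also needs the spark-sql-kafka connector on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

# Explicit schema for the JSON payload (assumed fields).
schema = StructType([
    StructField("customer_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Read the Kafka topic as an unbounded stream.
events = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "transactions")                # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*"))

# Land the stream as partitioned Parquet; the checkpoint makes the file sink fault-tolerant.
query = (events.writeStream
    .format("parquet")
    .option("path", "/data/bronze/transactions")         # placeholder path
    .option("checkpointLocation", "/chk/transactions")   # placeholder path
    .partitionBy("customer_id")
    .outputMode("append")
    .start())

query.awaitTermination()
```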
Big Data Architect
Yesterday
Job Description
Big Data Architect role at Solera Holdings, LLC. Posted 2 weeks ago.
Position Purpose:
We are seeking a highly skilled and experienced Big Data Architect to join our international team. You will play a pivotal role in shaping our Big Data environments and projects, including the Global Data Lake, while enhancing our Sustainable Estimatics offerings. Sustainable Estimatics is a leading suite within the company, recognized for its substantial impact on the industry. With our innovative and certified algorithms, we provide our customers with significant cost savings by minimizing waste and optimizing resource usage. By embedding sustainability principles into our Estimatics practices, we actively contribute to the industry's collective effort to reduce environmental impact. Our commitment to sustainability goes beyond individual projects; we aim to drive industry-wide innovation through the continuous development of new technologies and practices that create a positive ripple effect for both the environment and society.
Responsibilities:
- Design and implement scalable and efficient data architectures supporting data processing pipelines using Cloudera, Spark, and other relevant technologies.
- Lead the development of scalable API solutions to facilitate Data as a Service (DaaS), providing seamless access to data for both external and internal customers.
- Establish best practices for data ingestion, transformation, and storage processes to ensure data quality, integrity, and availability across international locations.
- Collaborate with cross-functional teams to gather business requirements and translate them into architectural specifications.
- Optimize data workflows and Spark job performance to meet latency and throughput requirements for large datasets.
- Troubleshoot and tune performance of Cloud or On-premises infrastructure to identify bottlenecks and improve resource utilization.
- Utilize tools like New Relic for performance monitoring and Graylog for log analysis.
- Work with data scientists and analysts to ensure reliable data sets for analytics and machine learning.
- Implement data governance and ensure compliance with data privacy and security regulations globally.
- Stay updated on emerging technologies and propose innovative solutions to enhance data processing capabilities.
- Provide technical leadership, mentorship, and guidance to engineering teams, fostering collaboration and innovation.
- Participate in agile practices, including sprint planning, architecture reviews, and CI/CD processes.
Requirements:
- Bachelor's or Master's in Computer Science, Data Science, or related fields.
- At least 7 years of experience in big data architecture, with preference for Cloudera and Spark expertise.
- Strong understanding of API architectures and experience in DaaS solutions.
- Proficiency in Scala, Python, or Java for complex data solutions.
- Deep knowledge of distributed computing frameworks like Hadoop and Spark.
- Experience with Cloudera tools such as HDFS, Hive, Impala, HBase.
- Knowledge of data modeling, schema design, partitioning, and indexing.
- Experience with SQL, NoSQL, ETL, and data warehousing.
- Proven ability to design, implement, and optimize Spark data pipelines.
- Familiarity with data ingestion tools like Kafka, Flume, Sqoop, NiFi.
- Cloud platform experience (AWS, Azure) and containerization (Docker, Kubernetes) is a plus.
- Understanding of data governance, privacy, and security practices.
- Excellent problem-solving, analytical, and communication skills in English.
Additional Details
- Seniority level: Mid-Senior level
- Employment type: Full-time
- Job function: Engineering and IT
- Industries: IT Services and Consulting
Big Data Architect
Yesterday
Job Description
Overview
Hybrid role, part-time (occasional on-site presence at the customer's office in Madrid may be required).
Location: Madrid, Community of Madrid, Spain
Responsibilities
- Provide a framework that accurately reflects the Big Data needs of a data-driven company.
Requirements
- More than 3 years of pre-sales experience in the design of Big Data and Data Analytics solutions according to customer requirements
- Previous experience preparing high-quality, engaging customer presentations, excellent communication skills, experience in conversations at CxO level, and the ability to adapt the message to customer feedback
- Experience preparing RFP responses: organizing the solution team, defining the solution, and estimating effort and cost
- Past experience dealing with partners, tool vendors, etc.
- Business domain knowledge
- More than 5 years of experience in Big Data implementation projects
- Experience in Big Data architecture with different tools and environments: Cloud (AWS, Azure and GCP), Cloudera, NoSQL databases (Cassandra, MongoDB), ELK, Kafka, Snowflake, etc.
- Past experience with Data Engineering and data quality tools (Informatica, Talend, etc.)
- Previous experience working in a multilingual and multicultural environment
- Proactive, passionate about technology and highly motivated
- Experience in data analysis and visualization solutions: MicroStrategy, Qlik, Power BI, Tableau, Looker
- Background in Data Governance and Data Catalog solutions: Axon, Informatica EDC, Collibra, Purview, etc.
To find out more details, please apply below!
Seniority level: Mid-Senior level
Employment type: Part-time
Job function: Information Technology and Supply Chain
Industries: Staffing and Recruiting
Big Data Engineer
Posted 3 days ago
Job Description
Big Data Engineer role at Plexus Tech.
We are looking for new colleagues with a Big Data Engineer profile and solid knowledge of data warehousing, data modeling and ETL.
Responsibilities
- Analysis, design, development, testing, application, documentation and evaluation.
- Reviewing software code to make it more efficient and detecting errors in the code.
Requirements
- More than 5 years of general experience in data management (on-premise or cloud) and/or programming languages related to data.
- More than 5 years of experience with Python, including data processing, automation and ETL development. Knowledge of other programming languages such as Java, Scala and R will be valued.
- More than 5 years of experience with SQL at an intermediate level, including extensive problem-solving experience.
- Experience using Snowflake, DBT, Bitbucket, or Airflow is highly desirable (a minimal Airflow sketch follows this list).
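Purely as orientation for the Python, ETL and Airflow items above, here is a minimal sketch of a daily ETL DAG, assuming Apache Airflow 2.x. The DAG id, schedule and task logic are illustrative assumptions, not details from the posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull rows from a source system (API, database, files).
    return [{"id": 1, "amount": 10.0}, {"id": 2, "amount": -3.0}]

def transform(**context):
    # Placeholder: clean and reshape the extracted rows.
    rows = context["ti"].xcom_pull(task_ids="extract")
    return [r for r in rows if r["amount"] > 0]

def load(**context):
    # Placeholder: write the transformed rows to the warehouse (e.g. Snowflake).
    rows = context["ti"].xcom_pull(task_ids="transform")
    print(f"loading {len(rows)} rows")

with DAG(
    dag_id="daily_sales_etl",            # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```

In practice the load step would typically use a provider operator or hook (for example for Snowflake) rather than a print, and dbt models could replace hand-written transforms; the sketch only shows the task-dependency shape.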
With our hybrid model, Flexology, you can work from wherever your talent flows best: from any of our 24 work centers in Spain, from home, or a combination of both. Plexus Tech's work ecosystem maintains a collaborative environment across the company.
Benefits
- Work with leading professionals.
- Access to ongoing training.
- Professional development.
- Flexible remuneration for health insurance, restaurant vouchers, childcare and transportation.
Seniority level: Executive
Employment type: Full-time
Job function: Engineering and Information Technology
Industries: IT Services and IT Consulting