Big Data & Machine Learning
Using BigData technologies to build a highly scalable Machine Learning platform
ImpactHub - Atlantic room
18th November, 12:00-13:00
The presentation will describe how we combined technologies from the Big Data ecosystem (Kafka, HBase, HDFS, ELK, Mesos) with a microservices-based approach to development to build a highly scalable Machine Learning platform, and how we built and ran it in our private datacenter before migrating it to AWS.
Daniel leads the Big Data and Machine Translation group at SDL Research Cluj, working on a Hadoop-based data pipeline that processes petabytes of unstructured data; the resulting clean data is used to improve the quality of SDL’s Statistical Machine Translation engines. Daniel is active in the local Big Data community as a meetup organizer and speaker, working to raise awareness of the Big Data and Hadoop field.