In this talk, learn about the new ODPi, how it leverages the expertise of the Linux Foundation in hosting vendor-neutral open source projects, and how you can bring your project to ODPi.
Jon and Neri will share what they have learned about sensible approaches to designing data science projects and present a framework that they have found useful in giving projects the best chance of success.
In this presentation, Daniel will show how on-premises data processing and indexing pipelines can be extended with cloud services to get more out of your unstructured data while bypassing all the above-mentioned challenges, saving time and money.
The space race was a competition between the United States and the Soviet Union to conquer space. This competition drove the development of space technology at an incredible pace, producing many derivative technologies as a side effect.
If we want to use words as a tool for interest analysis, however, we need to analyze large volumes of data, and manual analysis can be expensive and inaccurate. We will go through this topic, starting with how to retrieve data and then how to analyze it with some NLP libraries.
During this talk, Silvan Jongerius, an expert in GDPR compliance for technology, will explain the obvious and not-so-obvious challenges of GDPR in big data and look at different approaches to overcoming them.
In this talk, Marko will show one approach that allows you to write a low-latency, auto-parallelized, and distributed stream processing pipeline in Java that seamlessly integrates with a data scientist's work, taken almost unchanged from their Python development environment.
In this session, Antia Fernandez will talk about different applications of artificial intelligence for Gradiant's clients in the automotive and food sectors.
We are living in an era when everyday developers can build, train, and run their own machine learning models straight from the database query editor by issuing CREATE MODEL statements. In this demo-driven session we will explore logistic regression and k-means models, and run predictions on tabular data straight from your SQL tables using Google BigQuery.
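As a sketch of the kind of statement the session will demo (the dataset, table, and column names below are hypothetical, not taken from the talk), training a logistic regression model and running predictions with BigQuery ML looks roughly like this:

```sql
-- Train a logistic regression model on a (hypothetical) table of labeled rows.
CREATE OR REPLACE MODEL `my_dataset.churn_model`
OPTIONS (
  model_type = 'logistic_reg',
  input_label_cols = ['churned']   -- the column to predict
) AS
SELECT age, plan_type, monthly_spend, churned
FROM `my_dataset.customers`;

-- Run predictions straight from SQL over another table.
SELECT *
FROM ML.PREDICT(
  MODEL `my_dataset.churn_model`,
  (SELECT age, plan_type, monthly_spend FROM `my_dataset.new_customers`)
);
```

A k-means model follows the same pattern with `model_type = 'kmeans'` and no label column.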
As a service provider for insurance companies and pension and healthcare funds, we rolled out a resilient stream processing platform running on Kubernetes that we can scale out horizontally to integrate different microservices developed in different languages such as Java, Scala, or Python.
In this talk, Nicolas will define the context in which the old batch processing model was born, the reasons behind the new stream processing model, how the two compare, their pros and cons, and a list of existing technologies implementing the latter, with their most prominent characteristics.
Achieving true continuous deployment of bytecode on a single JVM instance is possible if one changes one's way of looking at things. What if compilation could be seen as a stream of changes? What if those changes could be stored in a data store, and a listener on this data store could stream them to the running production JVM via the Attach API?