
Themes from JupyterCon 2017
This past August was the first JupyterCon—an O’Reilly-sponsored conference around the Jupyter ecosystem, held in NYC. In this post we look at the major themes from the conference, and some top talks from each theme.
In this tutorial, we will walk you through some of the basics of using Kafka and Spark to ingest data.
By far the most difficult thing in being data-driven is getting the right data in the first place.
In today’s business climate, executives understandably want to see both early results and a long-term direction. A data strategy helps meet business needs, while ordering work in a way that respects constraints and creates future opportunities.
In this post, we’ll look at the components that make up a modern data strategy, and how they work together to bring you business value quickly.
In this post, we’ll start to develop an intuition for how to approach the remaining useful life (RUL) estimation problem and take the first steps in modeling RUL.
In this post, we will cover some of the basics of monitoring and alerting as it relates to data pipelines in general, and Kafka and Spark in particular.
We are seeing evidence of an important pattern: the creation of internal service platforms to meet the data science and analytics needs of organizations.
Interested in how AI is being applied out in the real world? Check out these stories, ranging from fighting food insecurity to a very low-level version of a butler.