
Exploring the Possibilities of Artificial Intelligence
In this interview, Paco Nathan discusses making life more livable, AI fears, and more.
In this post, we discuss how handling small files differs when you are using MapR-FS rather than a traditional HDFS installation.
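For readers on plain HDFS, one common mitigation is to compact many small files into fewer, larger ones. The sketch below shows that idea in PySpark; the paths, the Parquet format, and the partition count are illustrative assumptions, not details from the post.

```python
# Compact a directory of many small files into fewer, larger ones.
# Paths, format, and partition count are placeholder assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("compact-small-files").getOrCreate()

# Read a directory that has accumulated many small Parquet files.
df = spark.read.parquet("/data/events/raw")

# coalesce() reduces the number of output partitions (and thus files)
# without a full shuffle; size the count to your data volume.
df.coalesce(8).write.mode("overwrite").parquet("/data/events/compacted")
```

Running a periodic compaction job like this keeps the NameNode's metadata load manageable on stock HDFS, which is part of the contrast with MapR-FS the post explores.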
This past August saw the first JupyterCon, an O'Reilly-sponsored conference on the Jupyter ecosystem held in NYC. In this post, we look at the major themes from the conference and highlight top talks from each theme.
How can you manage your implementation so that you take maximum advantage of technology innovation as you go, rather than freezing your view of technology at today's state and designing something that will be outdated when it launches? You must start by deciding which pieces are necessary now and which can wait.
In this tutorial, we will walk you through some of the basics of using Kafka and Spark to ingest data.
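To give a taste of what the tutorial covers, here is a minimal sketch of reading a Kafka topic with Spark Structured Streaming. The broker address and topic name are placeholder assumptions, and the spark-sql-kafka connector package must be on Spark's classpath.

```python
# Minimal sketch: ingest a Kafka topic with Spark Structured Streaming.
# Broker address and topic name ("events") are illustrative assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a Kafka topic; each record arrives as key/value byte arrays.
stream = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "events")
          .load())

# Cast the value bytes to strings and write to the console sink
# for quick inspection while developing.
query = (stream.selectExpr("CAST(value AS STRING) AS value")
         .writeStream
         .format("console")
         .start())
query.awaitTermination()
```

In production you would swap the console sink for a durable one (files, a table, or another topic) and add a checkpoint location for fault tolerance.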
In addition to developing familiarity with AI techniques, practitioners must choose their technology platforms wisely.
We use continuous delivery automation tools and techniques that have become available in the last few years. Here we'll walk through the creation of a Maven-based Java project and demonstrate how to incorporate it into our pipeline.
We summarize the objectives and contents of our PyCon tutorial, then provide instructions for following along so you can begin developing your own exploratory data analysis (EDA) skills.
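As a flavor of the first steps, here is a short pandas sketch of the kind of exploratory passes such a tutorial typically builds on; the CSV path is an assumption for illustration.

```python
# First-look EDA on a tabular dataset with pandas.
# The file path is a placeholder; substitute your own dataset.
import pandas as pd

df = pd.read_csv("data.csv")

# Shape, dtypes, and missing values: the usual first pass.
print(df.shape)
print(df.dtypes)
print(df.isna().sum())

# Summary statistics for numeric columns.
print(df.describe())

# Pairwise correlations often suggest which relationships to plot next.
print(df.select_dtypes("number").corr())
```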
In this post, we will cover some of the basics of monitoring and alerting as they relate to data pipelines in general, and to Kafka and Spark in particular.
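As one concrete illustration of the alerting side, here is a sketch of a threshold check on Kafka consumer lag. The threshold value is arbitrary, and the lag source and notification hook are hypothetical stand-ins for whatever metrics and paging systems you run.

```python
# Illustrative threshold alert on consumer lag. The lag value would come
# from your metrics system; the notify hook is a hypothetical stand-in.
import logging

logging.basicConfig(level=logging.INFO)

LAG_THRESHOLD = 10_000  # messages; tune to your pipeline's SLA

def check_consumer_lag(current_lag: int) -> None:
    """Emit an alert when lag crosses the threshold, else log a heartbeat."""
    if current_lag > LAG_THRESHOLD:
        logging.warning("ALERT: consumer lag %d exceeds %d",
                        current_lag, LAG_THRESHOLD)
        # notify(...)  # hypothetical hook into PagerDuty, Slack, etc.
    else:
        logging.info("lag OK: %d", current_lag)

check_consumer_lag(12_500)
```

A fixed threshold is the simplest possible policy; real pipelines often alert on lag growth over time instead, which distinguishes a transient spike from a stalled consumer.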