Big Data

Building a real-time big data pipeline 9: Spark MLlib, Regression, Python

Apache Spark expresses parallelism through three sets of APIs: DataFrames, Datasets and RDDs (Resilient Distributed Datasets). Originally, Spark was designed to read and write data from and to the Hadoop Distributed File System (HDFS). A Hadoop cluster is composed of a network of master, worker and client nodes that orchestrate and execute the various jobs across …
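
As a taste of what the post covers, here is a minimal sketch of fitting a linear regression with Spark MLlib, written against Spark's native Scala API (the post itself uses Python); the toy data, app name and column names are assumptions:

```scala
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

object RegressionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("mllib-regression").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy data: (feature, label); a real pipeline would read from HDFS instead.
    val df = Seq((1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)).toDF("x", "label")

    // MLlib models expect the features packed into a single vector column.
    val assembled = new VectorAssembler()
      .setInputCols(Array("x"))
      .setOutputCol("features")
      .transform(df)

    val model = new LinearRegression().fit(assembled)
    println(s"coefficients=${model.coefficients} intercept=${model.intercept}")
    spark.stop()
  }
}
```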
read more

Building a real-time big data pipeline 10: Spark Streaming, Kafka, Java

Spark Streaming is an extension of the core Apache Spark platform that enables scalable, high-throughput, fault-tolerant processing of data streams; it is written in Scala but offers Scala, Java, R and Python APIs. It ingests data from sources such as Kafka, Flume, Kinesis, HDFS, S3 or Twitter. This data can be further processed using …
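
For a flavour of the API, below is a minimal sketch of a direct Kafka stream using the spark-streaming-kafka-0-10 integration in Scala (the post itself uses Java); the broker address, topic and group id are assumptions:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-stream").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5)) // 5-second micro-batches

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092", // assumed broker address
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "demo-group",
      "auto.offset.reset" -> "latest"
    )
    val stream = KafkaUtils.createDirectStream[String, String](
      ssc,
      LocationStrategies.PreferConsistent,
      ConsumerStrategies.Subscribe[String, String](Seq("events"), kafkaParams))

    // Count records per batch; a real job would transform and persist them.
    stream.map(_.value).count().print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```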
read more

Building a real-time big data pipeline 8: Spark MLlib, Regression, R

Apache Spark MLlib is a distributed framework that provides many utilities useful for machine learning tasks, such as classification, regression, clustering and dimensionality reduction, along with linear algebra, statistics and data handling. R is a popular statistical programming language with a number of packages that support data processing and machine learning tasks. To address R’s scalability issue, the …
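
To illustrate the statistics utilities mentioned above, here is a minimal sketch computing a Pearson correlation matrix with Spark MLlib's Scala API (the post itself works in R); the toy vectors are assumptions:

```scala
import org.apache.spark.ml.linalg.{Matrix, Vectors}
import org.apache.spark.ml.stat.Correlation
import org.apache.spark.sql.{Row, SparkSession}

object CorrelationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("mllib-stats").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy feature vectors; Correlation reads them from a single vector column.
    val df = Seq(
      Vectors.dense(1.0, 10.0),
      Vectors.dense(2.0, 19.5),
      Vectors.dense(3.0, 31.0)
    ).map(Tuple1.apply).toDF("features")

    // Pearson correlation matrix across the feature dimensions.
    val Row(matrix: Matrix) = Correlation.corr(df, "features").head
    println(matrix)
    spark.stop()
  }
}
```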
read more

Building a real-time big data pipeline 7: Spark MLlib, Regression, Java

Apache Spark MLlib is a distributed framework that provides many utilities useful for machine learning tasks, such as classification, regression, clustering and dimensionality reduction, along with linear algebra, statistics and data handling.
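
As a preview, here is a minimal sketch of training a regression model and scoring it with RegressionEvaluator, written in Scala (the post itself uses Java); the toy data and split ratio are assumptions:

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.sql.SparkSession

object EvaluationSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("mllib-eval").master("local[*]").getOrCreate()
    import spark.implicits._

    val assembler = new VectorAssembler().setInputCols(Array("x")).setOutputCol("features")
    val data = assembler.transform(
      Seq((1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2), (5.0, 9.8)).toDF("x", "label"))

    // Hold out roughly 20% of the rows for testing.
    val Array(train, test) = data.randomSplit(Array(0.8, 0.2), seed = 42L)

    val model = new LinearRegression().fit(train)
    val predictions = model.transform(test)

    // Root-mean-square error of the predictions on the held-out set.
    val rmse = new RegressionEvaluator().setMetricName("rmse").evaluate(predictions)
    println(s"RMSE = $rmse")
    spark.stop()
  }
}
```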
read more

Building a real-time big data pipeline 6: Spark Core, Hadoop, SBT

Apache Spark is an open-source cluster computing system that provides high-level APIs in Java, Scala, Python and R. Spark also ships with higher-level libraries for SQL, machine learning (MLlib), streaming, and graphs (GraphX).
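
A minimal build.sbt along the lines the post describes might look as follows; the project name and version numbers are assumptions and should match your cluster:

```scala
// build.sbt — minimal SBT definition for a Spark application.
name := "spark-core-demo"
version := "0.1.0"
scalaVersion := "2.12.15"

libraryDependencies ++= Seq(
  // "provided" because spark-submit supplies these jars at runtime.
  "org.apache.spark" %% "spark-core" % "3.1.2" % "provided",
  "org.apache.spark" %% "spark-sql"  % "3.1.2" % "provided"
)
```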
read more

Building a real-time big data pipeline 5: NoSQL, Java

Apache Cassandra is a distributed NoSQL database used for handling big data and real-time web applications. NoSQL stands for “Not Only SQL” or “Not SQL”. A NoSQL database is a non-relational data management system that does not require a fixed schema.
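
For a taste of the client side, here is a minimal sketch using the DataStax 4.x Java driver from Scala (the post itself uses Java); the keyspace, table and local single-node setup are assumptions:

```scala
import com.datastax.oss.driver.api.core.CqlSession

object CassandraSketch {
  def main(args: Array[String]): Unit = {
    // With no explicit contact points the 4.x driver tries 127.0.0.1:9042.
    val session = CqlSession.builder().build()

    // The keyspace and table here are hypothetical.
    session.execute(
      "CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
        "{'class': 'SimpleStrategy', 'replication_factor': 1}")
    session.execute("CREATE TABLE IF NOT EXISTS demo.users (id int PRIMARY KEY, name text)")
    session.execute("INSERT INTO demo.users (id, name) VALUES (1, 'alice')")

    val row = session.execute("SELECT name FROM demo.users WHERE id = 1").one()
    println(row.getString("name"))
    session.close()
  }
}
```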
read more

Building a real-time big data pipeline 4: Spark Streaming, Kafka, Scala

Apache Kafka is a scalable, high-performance, low-latency platform for handling real-time data feeds. Kafka allows reading and writing streams of data like a messaging system; it is written in Scala and Java. Kafka requires Apache ZooKeeper to run. Kafka v2.5.0 (Scala v2.12 build) and ZooKeeper (v3.4.13) were installed using Docker.
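
As a quick illustration of writing to such a setup, here is a minimal Kafka producer sketch in Scala; the broker address and topic name are assumptions:

```scala
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object ProducerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed dockerised broker
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    val producer = new KafkaProducer[String, String](props)
    // The topic name "events" is hypothetical; create it on the broker first.
    producer.send(new ProducerRecord[String, String]("events", "key-1", "hello kafka"))
    producer.flush()
    producer.close()
  }
}
```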
read more

Building a real-time big data pipeline 3: Spark SQL, Hadoop, Scala

Apache Spark is an open-source cluster computing system that provides high-level APIs in Java, Scala, Python and R. Spark also ships with higher-level libraries for SQL, machine learning, streaming, and graphs. Spark SQL is Spark’s package for working with structured data.
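
For a flavour of Spark SQL, here is a minimal Scala sketch that registers a DataFrame as a temporary view and queries it with SQL; the toy data is an assumption:

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("spark-sql-demo").master("local[*]").getOrCreate()
    import spark.implicits._

    // Toy structured data; a real job might read JSON/Parquet from HDFS instead.
    val people = Seq(("alice", 29), ("bob", 35), ("carol", 19)).toDF("name", "age")

    // Register the DataFrame as a temporary view and query it with SQL.
    people.createOrReplaceTempView("people")
    spark.sql("SELECT name, age FROM people WHERE age > 21").show()
    spark.stop()
  }
}
```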
read more

Building a real-time big data pipeline 2: Spark Core, Hadoop, Scala

Apache Spark is a general-purpose, in-memory cluster computing engine for large-scale data processing. Spark can also work with Hadoop and its modules. Its real-time data processing capability makes Spark a top choice for big data analytics. Spark Core has two parts: 1) the computing engine and 2) the Spark Core APIs.
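
As a preview of the Spark Core APIs, here is a minimal RDD word-count sketch in Scala; the HDFS input path is hypothetical:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("word-count").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // The HDFS path is hypothetical; any text file works.
    val counts = sc.textFile("hdfs:///data/input.txt")
      .flatMap(_.split("\\s+")) // split lines into words
      .map(word => (word, 1))   // pair each word with a count of 1
      .reduceByKey(_ + _)       // sum counts per word across partitions

    counts.take(10).foreach(println)
    sc.stop()
  }
}
```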
read more

Building a real-time big data pipeline 1: Kafka, RESTful, Java

Apache Kafka is used for building real-time data pipelines and streaming apps. Kafka is a message broker, which helps transmit messages from one system to another. ZooKeeper is required to run a Kafka cluster. Apache ZooKeeper is primarily used …
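
For a taste of the consuming side, here is a minimal Kafka consumer sketch in Scala (the post itself uses Java); the broker address, topic and group id are assumptions:

```scala
import java.time.Duration
import java.util.{Collections, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer

object ConsumerSketch {
  def main(args: Array[String]): Unit = {
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092") // assumed broker address
    props.put("group.id", "demo-group")
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    consumer.subscribe(Collections.singletonList("events")) // hypothetical topic name

    // Poll once for demonstration; a real service would loop here.
    val records = consumer.poll(Duration.ofSeconds(1))
    records.forEach(r => println(s"${r.key}: ${r.value}"))
    consumer.close()
  }
}
```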
read more