
Data Engineering with Scala and Spark

Build streaming and batch pipelines that process massive amounts of data using Scala

Eric Tome, Rupam Bhattacharjee, David Radford

$83.95   $71.08

Paperback

English
Packt Publishing Limited
31 January 2024
Take your data engineering skills to the next level by learning how to utilize Scala and functional programming to create continuous and scheduled pipelines that ingest, transform, and aggregate data

Key Features

Transform data into a clean and trusted source of information for your organization using Scala
Build streaming and batch-processing pipelines with step-by-step explanations
Implement and orchestrate your pipelines by following CI/CD best practices and test-driven development (TDD)
Purchase of the print or Kindle book includes a free PDF eBook

Book Description

Most data engineers know that performance issues in a distributed computing environment can easily impact the overall efficiency and effectiveness of data engineering tasks. While Python remains a popular choice for data engineering due to its ease of use, Scala shines in scenarios where the performance of distributed data processing is paramount. This book will teach you how to leverage the Scala programming language on the Spark framework and use the latest cloud technologies to build continuous and triggered data pipelines. You’ll do this by setting up a data engineering environment for local development and scalable distributed cloud deployments, using data engineering best practices, test-driven development, and CI/CD. You’ll also get to grips with the DataFrame API, Dataset API, and Spark SQL API and their use. Data profiling and quality in Scala will also be covered, alongside techniques for orchestrating and performance tuning your end-to-end pipelines to deliver data to your end users.

By the end of this book, you will be able to build streaming and batch data pipelines using Scala while following software engineering best practices.
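
For orientation, here is a minimal sketch, not taken from the book, of the kind of batch job the description refers to: ingest, transform, and aggregate with the Spark DataFrame API in Scala. The file paths, column names, and schema are illustrative placeholders.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object MinimalBatchPipeline {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for development; a real deployment would run on a cluster.
    val spark = SparkSession.builder()
      .appName("minimal-batch-pipeline")
      .master("local[*]")
      .getOrCreate()

    // Ingest: read a CSV file (path and columns are placeholders for this sketch).
    val raw = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data/orders.csv")

    // Transform: keep completed orders and aggregate revenue per customer.
    val revenuePerCustomer = raw
      .filter(col("status") === "completed")
      .groupBy("customer_id")
      .agg(sum("amount").as("total_revenue"))

    // Load: write the result as Parquet for downstream consumers.
    revenuePerCustomer.write.mode("overwrite").parquet("output/revenue_per_customer")

    spark.stop()
  }
}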

What you will learn

Set up your development environment to build pipelines in Scala
Get to grips with polymorphic functions, type parameterization, and Scala implicits
Use Spark DataFrames, Datasets, and Spark SQL with Scala
Read and write data to object stores
Profile and clean your data using Deequ (see the sketch after this list)
Performance tune your data pipelines using Scala
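
As a small illustration of the Deequ-based profiling mentioned above, here is a sketch, not an excerpt from the book, that declares a few data quality checks against a toy DataFrame; the column names and checks are assumptions made for this example.

import com.amazon.deequ.VerificationSuite
import com.amazon.deequ.checks.{Check, CheckLevel, CheckStatus}
import org.apache.spark.sql.SparkSession

object OrderQualityCheck {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("deequ-quality-check")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Illustrative input; a real pipeline would read this from storage.
    val orders = Seq(
      (1L, "alice", 20.0),
      (2L, "bob", 35.5),
      (3L, "carol", 12.25)
    ).toDF("order_id", "customer", "amount")

    // Declare expectations about the data and verify them with Deequ.
    val result = VerificationSuite()
      .onData(orders)
      .addCheck(
        Check(CheckLevel.Error, "order quality checks")
          .isComplete("order_id")     // no nulls
          .isUnique("order_id")       // no duplicates
          .isNonNegative("amount"))   // no negative amounts
      .run()

    if (result.status == CheckStatus.Success)
      println("All checks passed")
    else
      println(s"Checks did not pass: ${result.status}")

    spark.stop()
  }
}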

Who this book is for

This book is for data engineers who have experience working with data and want to understand how to transform raw data into a clean, trusted, and valuable source of information for their organization using Scala and the latest cloud technologies.
By: Eric Tome, Rupam Bhattacharjee, David Radford
Imprint: Packt Publishing Limited
Country of Publication: United Kingdom
Dimensions: Height: 235mm, Width: 191mm
ISBN: 9781804612583
ISBN 10: 1804612588
Pages: 300
Publication Date: 31 January 2024
Audience: Professional and scholarly, Undergraduate
Format: Paperback
Publisher's Status: Active

Eric Tome has over 25 years of experience working with data. He has contributed to and led teams that ingested, cleansed, standardized, and prepared data used by business intelligence, data science, and operations teams. He has a background in mathematics and currently works as a senior solutions architect at Databricks, helping customers solve their data and AI challenges.

Rupam Bhattacharjee works as a lead data engineer at IBM. He has architected and developed data pipelines, processing massive structured and unstructured data using Spark and Scala for on-premises Hadoop and K8s clusters on the public cloud. He has a degree in electrical engineering.

David Radford has worked in big data for over 10 years, with a focus on cloud technologies. He led consulting teams for several years, completing a migration from legacy systems to modern data stacks. He holds a master's degree in computer science and works as a senior solutions architect at Databricks.
