
Goals


- Install the services of a Hadoop node

- Assemble several Hadoop nodes into a cluster

- Deploy a new application on an existing cluster

- Restore data as part of a disaster recovery procedure

Program

What is Big Data?
The size issue
Hadoop’s position in the landscape

Presentation of an existing node
Organization of services and study of sequencing with YARN

Workshop: modifying the HDFS block size to reduce the number of Map/Reduce tasks
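
As a sketch of the workshop's starting point (file names, paths, and the 256 MB size are illustrative), the block size can be set per file at write time, and fsck then shows the resulting block count — fewer blocks means fewer map tasks for a job reading the file:

```shell
# Write a file with a 256 MB block size instead of the cluster default
hdfs dfs -D dfs.blocksize=268435456 -put weblogs.txt /data/weblogs.txt

# Inspect how many blocks the file now occupies
hdfs fsck /data/weblogs.txt -files -blocks
```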

Relationship between the installed platform and the development frameworks
Proposing platform-independent frameworks to ensure compatibility: Spring Data

Workshop: deploying an application that accesses HBase through Spring Data object/relational (O/R) mapping
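
The deployed application maps Java objects onto an HBase table; as an illustrative preparation step outside Spring Data itself (table and column-family names are assumptions), the target table could be created and checked from the HBase shell:

```shell
# Create a 'users' table with one column family, insert a test row, and scan it
echo "create 'users', 'info'
put 'users', 'row1', 'info:name', 'alice'
scan 'users'" | hbase shell
```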

Deploy a Map / Reduce program on a cluster of Hadoop nodes
Search for logs
Report anomalies to developers
Suggest the use of Kafka queues

Workshop: using input/output queues for a Map/Reduce program
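
As a sketch of the queue setup (topic names and the broker address are assumptions), the input and output queues could be created and inspected with the standard Kafka tools:

```shell
# Create one queue (topic) for job input and one for job output
kafka-topics.sh --create --topic logs-in  --bootstrap-server localhost:9092 \
  --partitions 3 --replication-factor 1
kafka-topics.sh --create --topic logs-out --bootstrap-server localhost:9092 \
  --partitions 3 --replication-factor 1

# Read back what the Map/Reduce program wrote to the output queue
kafka-console-consumer.sh --topic logs-out --bootstrap-server localhost:9092 \
  --from-beginning
```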

Defining data routes with Apache Flume
Setting up a processing case in which incoming data triggers the programs

Workshop: routing data from an HDFS directory to a Kafka queue that serves as the input of a Map/Reduce program
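
A minimal Flume agent configuration for the directory-to-Kafka leg of this route (agent name, paths, topic, and broker list are assumptions). Note that Flume's spooling-directory source watches a local directory, so the HDFS side of the workshop is handled separately:

```properties
# agent1: spooling-directory source -> memory channel -> Kafka sink
agent1.sources  = src1
agent1.channels = ch1
agent1.sinks    = sink1

agent1.sources.src1.type     = spooldir
agent1.sources.src1.spoolDir = /data/incoming
agent1.sources.src1.channels = ch1

agent1.channels.ch1.type     = memory
agent1.channels.ch1.capacity = 10000

agent1.sinks.sink1.type                    = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.sink1.kafka.topic             = logs-in
agent1.sinks.sink1.kafka.bootstrap.servers = localhost:9092
agent1.sinks.sink1.channel                 = ch1
```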

Using Ambari views
Viewing the status of nodes in a cluster
Importing and exporting configuration files

Workshop: restarting cluster services using the YARN and Tez views
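
The same service operations available in the Ambari views are also exposed over Ambari's REST API; a sketch of stopping and restarting the YARN service (host, cluster name, and credentials are illustrative):

```shell
# Stop the YARN service (target state INSTALLED)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Stop YARN"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/YARN

# Start it again (target state STARTED)
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"RequestInfo":{"context":"Start YARN"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' \
  http://ambari-host:8080/api/v1/clusters/mycluster/services/YARN
```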

Managing user accounts
Managing file rights on a distributed file system
Using certificates
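
The usual HDFS permission commands mirror their POSIX counterparts, with ACLs for finer-grained rights; a short sketch with illustrative paths, users, and groups:

```shell
# Set owner, group, and basic mode on a distributed directory
hdfs dfs -chown alice:analysts /data/projects/reports
hdfs dfs -chmod 750 /data/projects/reports

# Grant an extra group read access via an HDFS ACL, then verify
hdfs dfs -setfacl -m group:auditors:r-x /data/projects/reports
hdfs dfs -getfacl /data/projects/reports
```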

Workshop: configuring the Knox and Ranger services
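
As an illustration of the resulting setup (gateway host, topology name, and credentials are assumptions), a WebHDFS call routed through the Knox gateway, with Ranger policies deciding whether the user may list the directory:

```shell
# List an HDFS directory through Knox rather than hitting the NameNode directly
curl -ku alice:password \
  "https://knox-host:8443/gateway/default/webhdfs/v1/data?op=LISTSTATUS"
```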

Duration

3 days

Price

£1,994

Audience

System administrators

Prerequisites

Knowledge of system administration; familiarity with Java is a plus

Reference

BUS100612-F

Sessions

Contact us for more information about session dates