- For the default threefold replication, Hadoop’s rack placement policy is to write the first copy of a block on a node in one rack, then the other two copies on two nodes in a different rack. In this scenario the first copy is written to a node on rack2, so the other two copies will be written either to two nodes on rack1 or to two nodes on rack3.
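- Replica placement depends on the cluster's rack awareness. Below is a minimal sketch of a topology script; the hostnames, IP ranges, and rack names are illustrative, and the script is assumed to be registered via the net.topology.script.file.name property in core-site.xml.

#!/bin/bash
# Hadoop passes one or more hostnames/IP addresses as arguments and expects
# one rack path printed per argument.
for node in "$@"; do
  case "$node" in
    node01|node02|10.0.1.*) echo "/rack1" ;;
    node03|node04|10.0.2.*) echo "/rack2" ;;
    *)                      echo "/default-rack" ;;
  esac
done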
- Apache HBase provides random, realtime read/write access to your data. HDFS does not allow random writes. HDFS is built for scalability, fault tolerance, and batch processing.
- Each slave node in a cluster configured to run MapReduce v2 (MRv2) on YARN typically runs a DataNode daemon (for HDFS functions) and a NodeManager daemon (for YARN functions). The NodeManager handles communication with the ResourceManager, oversees application container lifecycles, monitors CPU and memory use of the containers, tracks node health, and handles log management. It also makes a number of auxiliary services available to YARN applications.
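- The NodeManagers registered with the ResourceManager, and the resources in use on each, can be checked from the command line (a sketch; the node address shown is hypothetical):

$ yarn node -list
# Lists the registered NodeManagers with their state and running container counts
$ yarn node -status node03:45454
# Shows memory and vcore usage plus the health report for a single node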
- Further Reading
- Configure YARN Daemons
- You decide to create a cluster which runs HDFS in High Availability mode with automatic failover, using Quorum-based Storage. Which service keeps track of which NameNode is active at any given moment?
- ZooKeeper
- Explanation:
- When the first NameNode is started, it connects to ZooKeeper and registers itself as the Active NameNode. The next NameNode then sees that information and sets itself up in Standby mode (in fact, the ZooKeeper Failover Controller is the software responsible for the actual communication with ZooKeeper). Clients never connect to ZooKeeper to discover anything about the NameNodes. In an HDFS HA scenario, ZooKeeper is not used to keep track of filesystem changes; that is the job of the Quorum Journal Manager daemons.
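- To check which NameNode is currently active from the command line (a sketch; the service IDs nn1 and nn2 are illustrative and depend on the dfs.ha.namenodes.* setting in hdfs-site.xml):

$ hdfs haadmin -getServiceState nn1
active
$ hdfs haadmin -getServiceState nn2
standby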
- Which three describe functions of the ResourceManager in YARN?
- Tracking heartbeats from the NodeManagers
- Running a scheduler to determine how resources are allocated
- Monitoring the status of the ApplicationMaster container and restarting on failure
- Explanation:
- The ResourceManager has two main components: Scheduler and ApplicationsManager.
- The Scheduler is responsible for allocating resources to the various running applications subject to familiar constraints of capacities, queues etc.
- The ResourceManager tracks heartbeats from the NodeManagers to determine the available resources, then schedules those resources based on the scheduler-specific configuration.
- The ApplicationsManager is responsible for accepting job-submissions, negotiating the first container for executing the application specific ApplicationMaster and provides the service for restarting the ApplicationMaster container on failure.
- The per-application ApplicationMaster has the responsibility of negotiating appropriate resource containers from the Scheduler, tracking their status and monitoring for progress. Depending on the type of application, this may include monitoring the map and reduce tasks progress, restarting tasks, and archiving job history and meta-data.
- The NodeManager is the per-machine framework agent who is responsible for containers, monitoring their resource usage (cpu, memory, disk, network) and reporting the same to the ResourceManager/Scheduler.
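- The ResourceManager also exposes this information over its REST API, which is handy for monitoring (a sketch; the host name is illustrative and 8088 is the common default web port, which may differ in your cluster):

$ curl -s http://resourcemanager:8088/ws/v1/cluster/metrics
# Cluster-wide resource totals and NodeManager counts, returned as JSON
$ curl -s http://resourcemanager:8088/ws/v1/cluster/scheduler
# Queue configuration and current usage for the configured scheduler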
- http://hadoop.apache.org/docs/r2.4.0/hadoop-yarn/hadoop-yarn-site/YARN.html
- You are running a Hadoop cluster with a NameNode on host hadoopNN01, a Secondary NameNode on host hadoopNN02 and several DataNodes.
- How can you determine when the last checkpoint happened?
- Connect to the web UI of the Secondary NameNode (http://hadoopNN02:50090/) and look at the “Last Checkpoint” information.
- Explanation
- The Secondary NameNode’s Web UI contains information on when it last performed its checkpoint operation. This is not displayed via the NameNode’s Web UI, and is not available via the hdfs dfsadmin command.
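- The same page can also be fetched from the command line (a sketch using the host and port from the question; the page layout varies by Hadoop version):

$ curl -s http://hadoopNN02:50090/ | grep -i "last checkpoint"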
- A specific node in your cluster appears to be running slower than other nodes with the same hardware configuration. You suspect a RAM failure in the system. Which commands may be used to view the memory seen by the system?
- free
- top
- dmidecode
- Explanation
- dmidecode shows BIOS information on a running system. The amount of installed RAM and the size of the modules in each slot can be found in the output.
- Additionally, memory and swap usage can be viewed with cat /proc/meminfo or vmstat.
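- For example (a sketch; exact output formats vary by distribution, and dmidecode typically requires root):

$ free -m                                       # overall memory and swap usage, in megabytes
$ sudo dmidecode --type memory | grep -i size   # installed module size per slot
$ grep MemTotal /proc/meminfo                   # the kernel's view of total installed memory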
- You are running a Hadoop cluster with a NameNode on host mynamenode. What are two ways you can determine available HDFS space in your cluster?
- Run hdfs dfsadmin -report and locate the DFS Remaining value.
- Connect to http://mynamenode:50070/ and locate the DFS Remaining value.
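- From the command line, for example (a sketch; the grep simply narrows the full report to the relevant line):

$ hdfs dfsadmin -report | grep "DFS Remaining"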
- Explanation:
- When a client wishes to read a file from HDFS, it contacts the NameNode and requests the locations and names of the first few blocks in the file. It then directly contacts the DataNodes containing those blocks to read the data. It would be very wasteful to move blocks around the cluster based on a client’s read request, so that is never done. Similarly, if all data was passed via the NameNode, the NameNode would immediately become a serious bottleneck and would slow down the cluster operation dramatically.
- As well as the block itself, a separate file is written containing checksums for the data in the block. These checksums are created as the data is written and verified when it is read, to ensure that no data corruption has occurred. No metadata regarding the name of the file of which the block is a part, or information about that file, is stored on the DataNode.
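- A client can also ask HDFS for a whole-file checksum derived from those per-block CRCs (a sketch; the path is illustrative):

$ hdfs dfs -checksum /user/example/data.txt
# Prints the file path, the checksum algorithm (an MD5-of-MD5-of-CRC variant), and the checksum value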
- Hadoop: The Definitive Guide, 3rd Edition, Chapter 4, under the section “Data Integrity in HDFS.”
- The NameNode needs to know which DataNodes hold each HDFS block. How is that block location information managed?
- The NameNode stores the block locations in RAM. They are never stored on disk.
- Explanation
- The NameNode never stores the HDFS block locations on disk; it only stores the names of the blocks associated with each file. After the NameNode starts up, each DataNode heartbeats in and sends its block report, which lists all the blocks it holds. The NameNode keeps that information in RAM.
- Hadoop: The Definitive Guide, 3rd Edition, Chapter 3, under the section “Namenodes and Datanodes”
- What are the permissions of a file in HDFS with the following: rw-rw-r-x?
- No one can modify the contents of the file.
- Explanation:
- The permissions show that the file can be read from and written to (appended to) by the owner and anyone in the owner's group, and read from by anyone else (it is 'world readable'). The execute permission on a file in HDFS is ignored.
- The file's existing contents cannot be modified by anyone because HDFS is a write-once filesystem: once a file has been written, its contents cannot be changed (data can only be appended).
- See the Overview section of the HDFS Permissions Guide for a discussion of the capabilities encapsulated by each permission mode.
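- A quick illustration (a sketch; the path, owner, and group are hypothetical):

$ hdfs dfs -ls /data/report.txt
-rw-rw-r-x   3 alice analysts    1048576 2015-05-18 10:00 /data/report.txt
# The owner (alice) and the analysts group may read and append; everyone else may only read.
$ hdfs dfs -appendToFile newrecords.txt /data/report.txt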
- Hadoop performs best when disks on slave nodes are configured as JBOD (Just a Bunch Of Disks). There is no need for RAID on individual nodes: Hadoop replicates each block three times on different nodes for reliability. A single Linux LVM volume will not perform as well as having the disks configured as separate volumes. Just as importantly, the failure of one disk in an LVM volume would result in the loss of all data on that volume, whereas if the disks were configured as separate volumes, the failure of a single disk would result in the loss of data on that one disk only (and, of course, each block on that disk is replicated on two other nodes in the cluster).
- MRv1: $ hadoop fsck /
- The number of DataNodes in the cluster
- The number of under-replicated blocks in the cluster
- In its standard form, hadoop fsck / will return information about the cluster including the number of DataNodes and the number of under-replicated blocks.
- To view a list of all the files in the cluster, the command would be hadoop fsck / -files
- To view a list of all the blocks, and the locations of the blocks, the command would be hadoop fsck / -files -blocks -locations
- Additionally, the -racks option would display the rack topology information for each block.
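- Consolidated as a quick sketch (hdfs fsck is the equivalent entry point on current releases):

$ hadoop fsck /                                    # overall health, DataNode count, under-replicated blocks
$ hadoop fsck / -files -blocks -locations -racks   # every file, its blocks, replica locations, and racks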
- Further Reading
- Hadoop Operations: A Guide for Developers and Administrators, Chapter 8, under the heading “Checking Filesystem Integrity with fsck”
- Hadoop: The Definitive Guide, 3rd Edition, Chapter 10, under the heading “Tools”
- On a cluster which is NOT running HDFS High Availability, which four pieces of information does the NameNode store on disk?
- Names of the files in HDFS
- The directory structure of the files in HDFS
- An edit log of changes that have been made since the last snapshot compaction by the Secondary NameNode
- File permissions of the files in HDFS
- The NameNode holds its metadata in RAM for fast access. However, it also needs to persist that information to disk. The initial metadata on disk is stored in a file called fsimage. Metadata in fsimage includes the names of all the files in HDFS, their locations (the directory structure), and file permissions. Whenever a change is made to the metadata (such as a new file being created, or a file being deleted), that information is written to a file on disk called edits. Periodically, the Secondary NameNode coalesces the edits and fsimage files and writes the result back as a new fsimage file, at which point the NameNode can delete its old edits file.
- The NameNode has no knowledge of when it was last backed up. Heartbeat information from the DataNodes is held in RAM but is not persisted to disk.
- Block size is a client-side parameter; the value used will be whatever value is present on the machine creating the file. Setting the value on the NameNode has no effect unless a process on the NameNode is acting as a client and writing a file to HDFS.
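- The persisted metadata can be inspected on the NameNode host (a sketch; /data/dfs/nn is a hypothetical dfs.namenode.name.dir, and the exact file names vary by version):

$ ls /data/dfs/nn/current/
# Typically contains fsimage_* checkpoint files, edits_* log segments, and a VERSION file
$ hdfs oiv -p XML -i /data/dfs/nn/current/fsimage_0000000000000042 -o /tmp/fsimage.xml
# The Offline Image Viewer dumps the file names, directory structure, and permissions held in the image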
- What is the basis of resource allocation using the Fair Scheduler in YARN?
- Resources are allocated in terms of minimum and maximum memory and cpu usage per queue.
- In a YARN cluster using the Fair Scheduler, resources are allocated based on each queue's usage of either memory alone or a combination of memory and CPU. To be "fair", the queue using the least resources (and having demand for more resources) receives the next allocation of available resources. The allocations file is used to specify the minimum and maximum resources allocated to a queue.
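- A minimal sketch of what the allocations file might contain; the queue name, resource figures, and file path are hypothetical, and the file is located via the yarn.scheduler.fair.allocation.file property:

$ cat > /etc/hadoop/conf/fair-scheduler.xml <<'EOF'
<?xml version="1.0"?>
<allocations>
  <queue name="analytics">
    <minResources>10240 mb,4 vcores</minResources>
    <maxResources>61440 mb,24 vcores</maxResources>
    <weight>2.0</weight>
  </queue>
</allocations>
EOF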
- For more information see http://hadoop.apache.org/docs/r2.4.0/hadoop-yarn/hadoop-yarn-site/FairScheduler.html
- "By default, the Fair Scheduler bases scheduling fairness decisions only on memory. It can be configured to schedule with both memory and CPU, using the notion of Dominant Resource Fairness developed by Ghodsi et al. When there is a single app running, that app uses the entire cluster. When other apps are submitted, resources that free up are assigned to the new apps, so that each app eventually on gets roughly the same amount of resources."
- "The scheduler organizes apps further into "queues", and shares resources fairly between these queues. By default, all users share a single queue, named “default”. If an app specifically lists a queue in a container resource request, the request is submitted to that queue. It is also possible to assign queues based on the user name included with the request through configuration. Within each queue, a scheduling policy is used to share resources between the running apps. The default is memory-based fair sharing, but FIFO and multi-resource with Dominant Resource Fairness can also be configured. Queues can be arranged in a hierarchy to divide resources and configured with weights to share the cluster in specific proportions."
- Also review the architecture of YARN at http://hadoop.apache.org/docs/r2.4.0/hadoop-yarn/hadoop-yarn-site/YARN.html
- "The ResourceManager has two main components: Scheduler and ApplicationsManager."
- "The Scheduler is responsible for allocating resources to the various running applications subject to familiar constraints of capacities, queues etc. The Scheduler is pure scheduler in the sense that it performs no monitoring or tracking of status for the application. Also, it offers no guarantees about restarting failed tasks either due to application failure or hardware failures. The Scheduler performs its scheduling function based the resource requirements of the applications; it does so based on the abstract notion of a resource Container which incorporates elements such as memory, cpu, disk, network etc."
- "The Scheduler has a pluggable policy plug-in, which is responsible for partitioning the cluster resources among the various queues, applications etc. The current Map-Reduce schedulers such as the CapacityScheduler and the FairScheduler would be some examples of the plug-in.The NodeManager is the per-machine framework agent who is responsible for containers, monitoring their resource usage (cpu, memory, disk, network) and reporting the same to the ResourceManager/Scheduler."