The Cloudera CCA-410 exam is one of the internationally recognized IT certification exams. In recent years it has been one of the most popular subjects among IT professionals, and its difficulty is correspondingly high. If you want to secure your own position at work or in the IT industry, earning the certification is essential. If you want to pass the Cloudera CCA-410 exam, try the Pass4Test products.
NO.1 Which best describes a Hadoop cluster's block size storage parameters once you set the
HDFS default block size to 64MB?
A. The block size of files in the cluster can be determined as the block is written.
B. The block size of files in the Cluster will all be multiples of 64MB.
C. The block size of files in the cluster will all be at least 64MB.
D. The block size of files in the cluster will all be exactly 64MB.
Answer: D
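The question turns on the dfs.blocksize setting (historically dfs.block.size), which supplies the default block size for newly written files. A minimal sketch, assuming a reachable HDFS and a hypothetical path /tmp/example.txt, of reading that default through the Java API:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeCheck {
    public static void main(String[] args) throws Exception {
        // Picks up dfs.blocksize from hdfs-site.xml on the classpath;
        // a 64MB default means the property is set to 67108864 bytes.
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Default block size the cluster will use for new files written
        // under the given (hypothetical) path.
        Path p = new Path("/tmp/example.txt");
        System.out.println("Default block size: " + fs.getDefaultBlockSize(p) + " bytes");
        fs.close();
    }
}
```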
NO.2 How does HDFS Federation help HDFS scale horizontally?
A. HDFS Federation improves the resiliency of HDFS in the face of network issues by removing the
NameNode as a single-point-of-failure.
B. HDFS Federation allows the Standby NameNode to automatically resume the services of an active
NameNode.
C. HDFS Federation provides cross-data center (non-local) support for HDFS, allowing a cluster
administrator to split the Block Storage outside the local cluster.
D. HDFS Federation reduces the load on any single NameNode by using multiple, independent
NameNodes to manage individual parts of the filesystem namespace.
Answer: D
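Federation (option D) is driven purely by configuration: each independent NameNode owns its own slice of the namespace. A rough sketch of the properties involved, set programmatically here only for illustration (they would normally live in hdfs-site.xml); the nameservice IDs and hostnames are made-up placeholders:

```java
import org.apache.hadoop.conf.Configuration;

public class FederationConfSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Two independent NameNodes, each managing its own part of the namespace.
        // "ns1"/"ns2" and the hostnames are illustrative placeholders.
        conf.set("dfs.nameservices", "ns1,ns2");
        conf.set("dfs.namenode.rpc-address.ns1", "namenode1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.ns2", "namenode2.example.com:8020");

        System.out.println("Configured nameservices: " + conf.get("dfs.nameservices"));
    }
}
```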
NO.3 How does the NameNode know DataNodes are available on a cluster running MapReduce v1
(MRv1)?
A. The NameNode uses the DataNodes listed in the dfs.hosts file as the definitive list of available
DataNodes.
B. DataNodes send heartbeats to the master on a regular basis.
C. The NameNode broadcasts a heartbeat on the network on a regular basis, and DataNodes
respond.
D. The NameNode sends a broadcast across the network when it first starts, and DataNodes
respond.
Answer: B
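The heartbeat-derived view of live DataNodes (option B) can be inspected from the client side. A minimal sketch, assuming the default filesystem is HDFS and the caller is allowed to query DataNode reports:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class LiveDataNodes {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Only meaningful when the default filesystem is actually HDFS.
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            // The NameNode builds this report from DataNode heartbeats.
            for (DatanodeInfo dn : dfs.getDataNodeStats()) {
                System.out.println(dn.getHostName() + " last heartbeat update: "
                        + dn.getLastUpdate());
            }
        }
        fs.close();
    }
}
```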
NO.4 What action occurs automatically on a cluster when a DataNode is marked as dead?
A. The NameNode forces re-replication of all the blocks which were stored on the dead DataNode.
B. The next time a client submits a job that requires blocks from the dead DataNode, the JobTracker
receives no heartbeats from the DataNode. The JobTracker tells the NameNode that the DataNode
is dead, which triggers block re-replication on the cluster.
C. The replication factor of the files which had blocks stored on the dead DataNode is temporarily
reduced, until the dead DataNode is recovered and returned to the cluster.
D. The NameNode informs the client that wrote the blocks that they are no longer available; the
client then re-writes the blocks to a different DataNode.
Answer: A
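Re-replication (option A) is easiest to observe through block locations: after a DataNode is marked dead, the NameNode schedules new replicas until each block is back at its target replication factor. A small sketch, using a hypothetical file path, that prints where each block's replicas currently live:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockReplicaLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Hypothetical file; replace with a real HDFS path.
        Path file = new Path("/data/sample.txt");
        FileStatus status = fs.getFileStatus(file);

        // One BlockLocation per block; hosts shrink when a DataNode dies
        // and grow back as the NameNode re-replicates the block.
        for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
            System.out.println("Block at offset " + loc.getOffset()
                    + " replicas on: " + String.join(",", loc.getHosts()));
        }
        fs.close();
    }
}
```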
NO.5 For a MapReduce job, on a cluster running MapReduce v1 (MRv1), what is the relationship
between tasks and task attempts?
A. There are always at least as many task attempts as there are tasks.
B. There are always at most as many task attempts as there are tasks.
C. There are always exactly as many task attempts as there are tasks.
D. The developer sets the number of task attempts on job submission.
Answer: A
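Option A reflects that every task runs as one or more task attempts; failures and speculative execution add extra attempts. A hedged MRv1-style sketch of the knobs that bound attempts per task, using the old org.apache.hadoop.mapred API the question targets:

```java
import org.apache.hadoop.mapred.JobConf;

public class TaskAttemptLimits {
    public static void main(String[] args) {
        JobConf conf = new JobConf();

        // Each map/reduce task gets at least one attempt; failures may add
        // more, up to these per-task limits (4 is the usual default).
        conf.setMaxMapAttempts(4);
        conf.setMaxReduceAttempts(4);

        // Speculative execution can also launch extra attempts for slow tasks.
        conf.setMapSpeculativeExecution(true);

        System.out.println("Max map attempts: " + conf.getMaxMapAttempts());
    }
}
```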
NO.6 Which two updates occur when a client application opens a stream to begin a file write on a
cluster running MapReduce v1 (MRv1)?
A. Once the write stream closes on the DataNode, the DataNode immediately initiates a block
report to the NameNode.
B. The change is written to the NameNode disk.
C. The metadata in the RAM on the NameNode is flushed to disk.
D. The metadata in RAM on the NameNode is flushed to disk.
E. The metadata in RAM on the NameNode is updated.
F. The change is written to the edits file.
Answer: E,F
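Options E and F describe the write path: opening the stream updates the NameNode's in-memory namespace and appends the change to the edits log; no fsimage flush happens at that point. A minimal client-side sketch of such a write, using a hypothetical path:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteSketch {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Opening the stream is what triggers the namespace update in
        // NameNode RAM plus an entry in the edits file (answers E and F).
        Path out = new Path("/tmp/write-demo.txt");  // hypothetical path
        try (FSDataOutputStream stream = fs.create(out, true)) {
            stream.writeBytes("hello hdfs\n");
        }
        fs.close();
    }
}
```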
NO.7 Which three distcp features can you utilize on a Hadoop cluster?
A. Use distcp to copy files only between two or more clusters. You cannot use distcp to copy data
between directories inside the same cluster.
B. Use distcp to copy HBase table files.
C. Use distcp to copy physical blocks from the source to the target destination in your cluster.
D. Use distcp to copy data between directories inside the same cluster.
E. Use distcp to run an internal MapReduce job to copy files.
Answer: B,D,E
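DistCp (options D and E) is itself a MapReduce job and works equally well within one cluster or between clusters. A sketch of a programmatic invocation, hedged because the options API differs across Hadoop versions; this form assumes the Hadoop 2.x DistCpOptions constructor, and the paths and hostnames are placeholders:

```java
import java.util.Collections;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.tools.DistCpOptions;

public class DistCpSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Source and target may be in the same cluster or in different clusters.
        Path source = new Path("hdfs://namenode1.example.com:8020/data/src");
        Path target = new Path("hdfs://namenode2.example.com:8020/data/dst");

        // Hadoop 2.x style options; Hadoop 3 uses DistCpOptions.Builder instead.
        DistCpOptions options = new DistCpOptions(
                Collections.singletonList(source), target);

        // execute() submits the copy as an ordinary MapReduce job.
        new DistCp(conf, options).execute();
    }
}
```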