Free CCA175 Practice Questions for Certification
Question 1:
CORRECT TEXT
Problem Scenario 73 : You have been given data in json format as below.
{"first_name":"Ankit", "last_name":"Jain"}
{"first_name":"Amir", "last_name":"Khan"}
{"first_name":"Rajesh", "last_name":"Khanna"}
{"first_name":"Priynka", "last_name":"Chopra"}
{"first_name":"Kareena", "last_name":"Kapoor"}
{"first_name":"Lokesh", "last_name":"Yadav"}
Do the following activities:
1. Create an employee.json file locally.
2. Load this file onto HDFS.
3. Register this data as a temp table in Spark using Python.
4. Write a select query and print this data.
5. Now save this selected data back in JSON format.
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Create the employee.json file locally.
vi employee.json (press Insert) and paste the content.
Step 2 : Upload this file to HDFS (default location).
hadoop fs -put employee.json
Step 3 : Write the Spark script.
# Import SQLContext
from pyspark.sql import SQLContext
# Create an instance of SQLContext
sqlContext = SQLContext(sc)
# Load the JSON file
employee = sqlContext.jsonFile("employee.json")
# Register it as a temp table
employee.registerTempTable("EmployeeTab")
# Select data from the Employee table
employeeInfo = sqlContext.sql("select * from EmployeeTab")
# Iterate over the data and print it
for row in employeeInfo.collect():
    print(row)
Step 4 : Write the data back in JSON format.
employeeInfo.toJSON().saveAsTextFile("employeeJson1")
Step 5 : Check whether the data has been created or not.
hadoop fs -cat employeeJson1/part*
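The steps above use the Spark 1.x SQLContext API (jsonFile, registerTempTable). If you practice on Spark 2.x or later, an equivalent minimal sketch would look like the following, assuming the pyspark shell where spark (the SparkSession) is provided and the same file names; the output directory name is illustrative only.
# Spark 2.x+ equivalent sketch; `spark` (SparkSession) is provided by the pyspark shell
employee = spark.read.json("employee.json")
# Register the DataFrame as a temporary view and query it with Spark SQL
employee.createOrReplaceTempView("EmployeeTab")
employeeInfo = spark.sql("select * from EmployeeTab")
# Print each selected row
for row in employeeInfo.collect():
    print(row)
# Write the result back to HDFS as JSON (output directory name is illustrative)
employeeInfo.write.json("employeeJson1")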
Question 2:
CORRECT TEXT
Problem Scenario 35 : You have been given a file named spark7/EmployeeName.csv
(id,name).
EmployeeName.csv
E01,Lokesh
E02,Bhupesh
E03,Amit
E04,Ratan
E05,Dinesh
E06,Pavan
E07,Tejas
E08,Sheela
E09,Kumar
E10,Venkat
1. Load this file from HDFS, sort it by name, and save it back as (id,name) in the results directory. However, make sure that while saving it writes to a single file.
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution:
Step 1 : Create the file in HDFS (we will do this using Hue). However, you can first create it in the local filesystem and then upload it to HDFS.
Step 2 : Load EmployeeName.csv file from hdfs and create PairRDDs
val name = sc.textFile("spark7/EmployeeName.csv")
val namePairRDD = name.map(x=> (x.split(",")(0),x.split(",")(1)))
Step 3 : Now swap the key and value in namePairRDD.
val swapped = namePairRDD.map(item => item.swap)
Step 4 : Now sort the RDD by key.
val sortedOutput = swapped.sortByKey()
Step 5 : Now swap the result back.
val swappedBack = sortedOutput.map(item => item.swap)
Step 6 : Save the output as a text file; the output must be written to a single file.
swappedBack.repartition(1).saveAsTextFile("spark7/result.txt")
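For those working in pyspark, a rough Python equivalent of the same approach is sketched below, assuming `sc` is available in the pyspark shell; the output directory name spark7/result_py is illustrative. Sorting directly on the name field with sortBy avoids the double swap used in the Scala version.
# PySpark sketch of the same task (assumes the pyspark shell provides `sc`)
name = sc.textFile("spark7/EmployeeName.csv")
# Build (id, name) pairs and sort by the name field
namePairRDD = name.map(lambda line: (line.split(",")[0], line.split(",")[1]))
sortedByName = namePairRDD.sortBy(lambda pair: pair[1])
# repartition(1) forces a single output part file
sortedByName.map(lambda pair: pair[0] + "," + pair[1]) \
            .repartition(1) \
            .saveAsTextFile("spark7/result_py")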
Question 3:
CORRECT TEXT
Problem Scenario 40 : You have been given sample data as below in a file called spark15/file1.txt
3070811,1963,1096,,"US","CA",,1,
3022811,1963,1096,,"US","CA",,1,56
3033811,1963,1096,,"US","CA",,1,23
Below is the code snippet to process this file.
val field = sc.textFile("spark15/file1.txt")
val mapper = field.map(x=> A)
mapper.map(x => x.map(x=> {B})).collect
Please fill in A and B so that it generates the final output below.
Array(Array(3070811,1963,1096, 0, "US", "CA", 0,1, 0)
,Array(3022811,1963,1096, 0, "US", "CA", 0,1, 56)
,Array(3033811,1963,1096, 0, "US", "CA", 0,1, 23)
)
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
A. x.split(",",-1)
B. if (x.isEmpty) 0 else x
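The -1 limit in split(",",-1) keeps trailing empty fields instead of dropping them, and the if/else replaces each empty field with 0. A quick pyspark illustration of the same idea follows, assuming `sc` is available and the same input path; note that Python's str.split already keeps empty fields, so only the empty-to-zero substitution is needed.
# PySpark illustration of the Question 3 logic (assumes the pyspark shell provides `sc`)
field = sc.textFile("spark15/file1.txt")
# Python's split(",") keeps empty trailing fields, unlike Scala's default split
mapper = field.map(lambda x: x.split(","))
# Replace each empty field with 0, leave other fields unchanged
result = mapper.map(lambda row: [0 if col == "" else col for col in row]).collect()
print(result)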
Question 4:
CORRECT TEXT
Problem Scenario 78 : You have been given MySQL DB with following details.
user=retail_dba
password=cloudera
database=retail_db
table=retail_db.orders
table=retail_db.order_items
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Columns of the orders table : (order_id, order_date, order_customer_id, order_status)
Columns of the order_items table : (order_item_id, order_item_order_id, order_item_product_id, order_item_quantity, order_item_subtotal, order_item_product_price)
Please accomplish the following activities.
1. Copy the "retail_db.orders" and "retail_db.order_items" tables to HDFS into the respective directories p92_orders and p92_order_items.
2. Join this data using order_id in Spark with Python.
3. Calculate total revenue per day and per customer.
4. Calculate the maximum-revenue customer.
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Import each table individually.
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=orders --target-dir=p92_orders -m 1
sqoop import --connect jdbc:mysql://quickstart:3306/retail_db --username=retail_dba --password=cloudera --table=order_items --target-dir=p92_order_items -m 1
Note : Please check that you don't have a space before or after the '=' sign. Sqoop uses the MapReduce framework to copy data from the RDBMS to HDFS.
Step 2 : Read the data from one of the partitions created using the above commands.
hadoop fs -cat p92_orders/part-m-00000
hadoop fs -cat p92_order_items/part-m-00000
Step 3 : Load the above two directories as RDDs using Spark and Python (open a pyspark terminal and do the following).
orders = sc.textFile("p92_orders")
orderItems = sc.textFile("p92_order_items")
Step 4 : Convert the RDDs into key-value pairs (order_id as the key and the rest of the line as the value).
# First field is order_id
ordersKeyValue = orders.map(lambda line: (int(line.split(",")[0]), line))
# Second field is order_id
orderItemsKeyValue = orderItems.map(lambda line: (int(line.split(",")[1]), line))
Step 5 : Join both RDDs using order_id.
joinedData = orderItemsKeyValue.join(ordersKeyValue)
# Print the joined data
for line in joinedData.collect():
    print(line)
# Format of joinedData is as below:
# (order_id, ('all columns from orderItemsKeyValue', 'all columns from ordersKeyValue'))
ordersPerDatePerCustomer = joinedData.map(lambda line: ((line[1][1].split(",")[1], line[1][1].split(",")[2]), float(line[1][0].split(",")[4])))
amountCollectedPerDayPerCustomer = ordersPerDatePerCustomer.reduceByKey(lambda runningSum, amount: runningSum + amount)
# Output record format will be ((date, customer_id), totalAmount)
for line in amountCollectedPerDayPerCustomer.collect():
    print(line)
# Now change the format of the record to (date, (customer_id, total_amount))
revenuePerDatePerCustomerRDD = amountCollectedPerDayPerCustomer.map(lambda threeElementTuple: (threeElementTuple[0][0], (threeElementTuple[0][1], threeElementTuple[1])))
for line in revenuePerDatePerCustomerRDD.collect():
    print(line)
# Calculate the maximum amount collected by a customer for each day
perDateMaxAmountCollectedByCustomer = revenuePerDatePerCustomerRDD.reduceByKey(lambda runningAmountTuple, newAmountTuple: (runningAmountTuple if runningAmountTuple[1] >= newAmountTuple[1] else newAmountTuple))
for line in perDateMaxAmountCollectedByCustomer.sortByKey().collect():
    print(line)
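The same aggregations can also be reached with Spark SQL instead of raw RDD transformations. Below is an illustrative sketch under a few assumptions: a Spark 1.x pyspark shell where `sc` exists, the imported files keep the column order listed in the problem, and the table and column aliases (orders, order_items, revenue, total_revenue) are made up for this example.
# Illustrative Spark SQL alternative; table and column aliases are assumptions
from pyspark.sql import SQLContext, Row

sqlContext = SQLContext(sc)

# Parse the imported files into Row objects (column positions follow the problem statement)
ordersRows = sc.textFile("p92_orders") \
    .map(lambda line: line.split(",")) \
    .map(lambda c: Row(order_id=int(c[0]), order_date=c[1], customer_id=int(c[2])))
itemsRows = sc.textFile("p92_order_items") \
    .map(lambda line: line.split(",")) \
    .map(lambda c: Row(order_id=int(c[1]), subtotal=float(c[4])))

sqlContext.createDataFrame(ordersRows).registerTempTable("orders")
sqlContext.createDataFrame(itemsRows).registerTempTable("order_items")

# Total revenue per day and per customer
revenuePerDayPerCustomer = sqlContext.sql(
    "select o.order_date, o.customer_id, sum(i.subtotal) as revenue "
    "from orders o join order_items i on o.order_id = i.order_id "
    "group by o.order_date, o.customer_id")
for row in revenuePerDayPerCustomer.collect():
    print(row)

# Customer with the maximum total revenue
maxRevenueCustomer = sqlContext.sql(
    "select o.customer_id, sum(i.subtotal) as total_revenue "
    "from orders o join order_items i on o.order_id = i.order_id "
    "group by o.customer_id order by total_revenue desc limit 1")
print(maxRevenueCustomer.collect())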
Question 5:
CORRECT TEXT
Problem Scenario 23 : You have been given a log generating service as below.
Start_logs (It will generate continuous logs)
Tail_logs (You can check , what logs are being generated)
Stop_logs (It will stop the log service)
Path where logs are generated using above service : /opt/gen_logs/logs/access.log
Now write a Flume configuration file named flume3.conf; using that configuration file, dump the logs into the HDFS file system in a directory called flume3/%Y/%m/%d/%H/%M
(This means a new directory should be created every minute.) Please use an interceptor to provide timestamp information if the message header does not already have it.
Also note that you have to preserve the existing timestamp if the message contains it. The Flume channel should have the following properties as well: it should commit after every 100 messages, use a non-durable/faster channel, and it should be able to hold a maximum of 1000 events.
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Create the Flume configuration file with the below configuration for the source, sink and channel.
# Define source, sink, channel and agent
agent1.sources = source1
agent1.sinks = sink1
agent1.channels = channel1
# Describe/configure source1
agent1.sources.source1.type = exec
agent1.sources.source1.command = tail -F /opt/gen_logs/logs/access.log
# Define interceptors
agent1.sources.source1.interceptors = i1
agent1.sources.source1.interceptors.i1.type = timestamp
agent1.sources.source1.interceptors.i1.preserveExisting = true
# Describe sink1
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.hdfs.path = flume3/%Y/%m/%d/%H/%M
agent1.sinks.sink1.hdfs.fileType = DataStream
# Now we need to define channel1 properties
agent1.channels.channel1.type = memory
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
# Bind the source and sink to the channel
agent1.sources.source1.channels = channel1
agent1.sinks.sink1.channel = channel1
Step 2 : Run the below commands, which will use this configuration file and append data to HDFS.
Start the log service using : start_logs
Start the Flume agent:
flume-ng agent --conf /home/cloudera/flumeconf --conf-file
/home/cloudera/flumeconf/flume3.conf -Dflume.root.logger=DEBUG,console --name agent1
Wait for a few minutes and then stop the log service.
stop_logs
Question 6:
CORRECT TEXT
Problem Scenario 6 : You have been given the following MySQL database details as well as other info.
user=retail_dba
password=cloudera
database=retail_db
jdbc URL = jdbc:mysql://quickstart:3306/retail_db
Compression Codec : org.apache.hadoop.io.compress.SnappyCodec
Please accomplish the following.
1. Import the entire database such that it can be used as Hive tables; they must be created in the default schema.
2. Also make sure each table's data is split into 3 files, e.g. part-00000, part-00001, part-00002.
3. Store all the generated Java files in a directory called java_output to evaluate them further.
Correct Answer:
See the explanation for Step by Step Solution and configuration.
Explanation:
Solution :
Step 1 : Before implementing the solution, drop all the tables that we created in previous problems.
Log in to Hive and execute the following commands.
show tables;
drop table categories;
drop table customers;
drop table departments;
drop table employee;
drop table order_items;
drop table orders;
drop table products;
show tables;
Check the warehouse directory.
hdfs dfs -ls /user/hive/warehouse
Step 2 : Now we have a clean database. Import the entire retail_db with all the required parameters as the problem statement asks.
sqoop import-all-tables \
-m 3 \
--connect jdbc:mysql://quickstart:3306/retail_db \
--username=retail_dba \
--password=cloudera \
--hive-import \
--hive-overwrite \
--create-hive-table \
--compress \
--compression-codec org.apache.hadoop.io.compress.SnappyCodec \
--outdir java_output
Step 3 : Verify whether the work is accomplished or not.
a. Go to Hive and check all the tables.
hive
show tables;
select count(1) from customers;
b. Check the warehouse directory and the number of part files.
hdfs dfs -ls /user/hive/warehouse
hdfs dfs -ls /user/hive/warehouse/categories
c. Check the output Java directory.
ls -ltr java_output/
We promise a full refund if you fail the exam
We are confident in our CCA175 question set, so we promise to refund your payment if you fail the exam. We believe you can pass the exam by using our Cloudera CCA175 materials. If you fail the exam, we will refund the full amount you paid, reducing the financial loss from your failed exam.
Cloudera CCA175 certification exam topics:
Topic 1: Generate reports by using queries against loaded data; produce ranked or sorted data
Topic 2: Perform standard extract, transform, load (ETL) processes on data using the Spark API; join disparate datasets using Spark
Topic 3: Understand the fundamentals of querying datasets in Spark; write the results back into HDFS using Spark
Topic 4: Write queries that calculate aggregate statistics; load data from HDFS for use in Spark applications
Topic 5: Use Spark SQL to interact with the metastore programmatically in your applications; read and write files in a variety of file formats
Reference: https://www.cloudera.com/about/training/certification/cdhhdp-certification/cca-spark.html
We use a secure payment method
Credit cards are the safest payment method in the world today. Although a small handling fee may apply, the payment is protected. To protect our customers' interests, all purchases of our CCA175 question set can be paid by credit card.
About receipts: if you need a receipt with your company name on it, please email us the company name and we will send you a PDF receipt.
TopExam provides you with the CCA175 question set, helps you review for the exam, and makes it easy to learn difficult specialized knowledge. TopExam looks forward to your passing the exam.
You can pass the exam by using our Cloudera CCA175 materials
Our Cloudera CCA175 is study material that our experts have developed from many years of experience, following the latest syllabus. We guarantee that the questions and answers in the CCA175 question set are correct.
This question set was created by analyzing past data; its coverage is high, so it helps you as a candidate save time and money and raises your pass rate. Our question set has a high hit rate and we guarantee a 100% pass rate. With our high-quality Cloudera CCA175, you can pass the exam on your first attempt.
We provide one year of free updates
After you purchase our Cloudera CCA175, you receive our promised one-year update service free of charge. Our experts check for updates every day, and whenever an update is released during that year, we will send the updated Cloudera CCA175 to your email address. So you will always receive update notifications in a timely manner. We guarantee that during the year after your purchase you will always have the latest version of the Cloudera CCA175.
We provide a free Cloudera CCA175 sample
When purchasing a question set, you may worry about its quality. To address this, we provide a free CCA175 sample so that you can download and try it before buying. You can judge whether this CCA175 question set suits you and then decide whether to purchase it.
CCA175 exam tool: for your training convenience, you can install it on multiple computers and study at your own pace.