MERAL Myanmar Education Research and Learning Portal

  1. University of Information Technology
  2. Faculty of Computer Science

Performance-Aware Data Placement Policy for Hadoop Distributed File System

http://hdl.handle.net/20.500.12678/0000002930
File: Performance-Aware Data Placement Policy for Hadoop Distributed File System.pdf (302 KB, open access)
Publication type: Conference paper
Upload type: Publication
Title: Performance-Aware Data Placement Policy for Hadoop Distributed File System
Language: en
Publication date: 2018-02-23
Authors: Nang Kham Soe; Tin Tin Yee; Ei Chaw Htoon
Description
Apache Hadoop is an open-source software framework for distributed storage and distributed processing of very large data sets on computer clusters built from commodity hardware. The Hadoop Distributed File System (HDFS) is the underlying file system of a Hadoop cluster. The default HDFS data placement strategy works well in homogeneous clusters, but it performs poorly in heterogeneous clusters because node capabilities differ: some computing nodes become overloaded, which reduces Hadoop performance. HDFS therefore has to rely on a load-balancing utility to even out the data distribution across the cluster. Rebalancing, however, can itself incur the overhead of transferring unprocessed data from slow nodes to fast nodes, because each node in a heterogeneous Hadoop cluster has a different computing capacity. To solve these problems, a data/replica placement policy based on the storage utilization and computing capacity of each data node in a heterogeneous Hadoop cluster is proposed. The proposed policy aims to reduce the overload of some computing nodes as well as the overhead of data transmission between computing nodes.
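
The abstract describes the policy only at a high level. As a rough illustration of the general idea, capacity-weighted placement in a heterogeneous cluster, here is a minimal Java sketch (Java being Hadoop's implementation language). It is an assumption-laden toy, not the paper's algorithm: the Node model, the computeWeight field, and the score formula are all invented for illustration. Each candidate DataNode is scored by the product of its relative computing capacity and its free-storage ratio, and the next block replica goes to the highest-scoring node, so faster nodes absorb proportionally more blocks and fewer unprocessed blocks need to be shipped from slow nodes to fast ones later.

import java.util.Comparator;
import java.util.List;

// Hypothetical sketch only: models the general idea of capacity-aware
// placement. Node, computeWeight, and the score formula are assumptions
// made for illustration, not the policy defined in the paper.
public class CapacityAwarePlacement {

    static class Node {
        final String id;
        final long capacityBytes;    // total disk capacity of the DataNode
        final long usedBytes;        // bytes already stored on it
        final double computeWeight;  // assumed relative computing capacity

        Node(String id, long capacityBytes, long usedBytes, double computeWeight) {
            this.id = id;
            this.capacityBytes = capacityBytes;
            this.usedBytes = usedBytes;
            this.computeWeight = computeWeight;
        }

        // Fraction of this node's storage that is still free.
        double freeRatio() {
            return 1.0 - (double) usedBytes / capacityBytes;
        }
    }

    // Score each node by computing capacity weighted by free storage:
    // fast nodes with spare space attract more block replicas.
    static double score(Node n) {
        return n.computeWeight * n.freeRatio();
    }

    // Pick the target DataNode for the next block replica.
    static Node chooseTarget(List<Node> candidates) {
        return candidates.stream()
                .max(Comparator.comparingDouble(CapacityAwarePlacement::score))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Node> cluster = List.of(
                new Node("fast-node", 4_000_000_000L, 1_000_000_000L, 2.0),
                new Node("slow-node", 4_000_000_000L, 1_000_000_000L, 1.0));
        // With equal storage use, the faster node wins the next replica.
        System.out.println("Place next replica on: " + chooseTarget(cluster).id);
    }
}

In a real deployment, logic like this would be integrated through HDFS's pluggable block placement policy mechanism rather than run as standalone code; that integration detail is beyond this sketch.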
Keywords: HDFS, Data Placement Policy
Conference: ICCA 2018, 16th International Conference on Computer Applications
Date: 22-23 February 2018
Place: Sedona Hotel, Yangon, Myanmar
Website: https://www.ucsy.edu.mm/page228.do

Versions

Ver.1 2020-08-06
