MERAL Myanmar Education Research and Learning Portal

Index: University of Information Technology / International Conference on Advanced Information Technologies

Optimum Checkpoint Interval for MapReduce Fault-Tolerance

http://hdl.handle.net/20.500.12678/0000006258
File: Optimum Checkpoint Interval for MapReduce Fault-Tolerance.pdf (1.5 Mb)
License: © 2017 ICAIT
Publication type: Conference paper
Upload type: Publication
Title: Optimum Checkpoint Interval for MapReduce Fault-Tolerance
Language: en
Publication date: 2017-11-02
Authors: Naychi Nway Nway; Julia Myint
Description:
MapReduce is an efficient framework for parallel processing of distributed big data in cluster environments. In such clusters, task failures can degrade application performance. Although MapReduce automatically reschedules failed tasks, completion time suffers because re-executed tasks start from scratch. Checkpointing is a valuable technique for avoiding the re-execution of failed tasks in MapReduce; however, an incorrectly chosen checkpoint interval can still degrade application performance and lengthen job completion time. In this paper, an optimum checkpoint interval is proposed to reduce MapReduce job completion time when failures occur. The proposed system defines the checkpoint interval based on five parameters: expected job completion time without checkpointing, checkpoint overhead time, rework time, down time, and restart time. With the proposed checkpoint interval, MapReduce does not need to re-execute failed tasks, which reduces job completion time under failures. The proposed system keeps job completion time low even as the number of failures increases, and its performance can be up to 4 times better than that of the original MapReduce.
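The record does not include the paper's actual formula, but the five parameters listed in the description are the quantities used in classical checkpoint-interval models. The sketch below is only illustrative: it assumes a Young/Daly-style first-order approximation (interval roughly sqrt(2 * checkpoint overhead * mean time between failures)) rather than the authors' method, and all function and parameter names are invented for this example.

```python
import math

def optimal_checkpoint_interval(mtbf, checkpoint_overhead):
    """Young-style first-order estimate: tau ~= sqrt(2 * C * MTBF).
    Reasonable when the checkpoint overhead C is much smaller than the MTBF."""
    return math.sqrt(2.0 * checkpoint_overhead * mtbf)

def expected_completion_time(work, tau, checkpoint_overhead,
                             mtbf, down_time, restart_time):
    """Expected wall-clock time for `work` seconds of useful computation when
    a checkpoint is written every `tau` seconds.

    Each failure costs, on average, half an interval of rework plus the
    down time and restart time before resuming from the last checkpoint."""
    segments = work / tau                               # number of checkpoint intervals
    failure_free = segments * (tau + checkpoint_overhead)
    expected_failures = failure_free / mtbf             # failures expected during the run
    per_failure_cost = tau / 2.0 + down_time + restart_time
    return failure_free + expected_failures * per_failure_cost

if __name__ == "__main__":
    # Illustrative numbers only: a 2-hour job, 30 s checkpoint overhead,
    # 1-hour MTBF, 60 s down time, 45 s restart time.
    work, c, mtbf, d, r = 7200.0, 30.0, 3600.0, 60.0, 45.0
    tau = optimal_checkpoint_interval(mtbf, c)
    total = expected_completion_time(work, tau, c, mtbf, d, r)
    print(f"checkpoint interval ~= {tau:.0f} s, expected completion ~= {total:.0f} s")
```

Trying different failure rates with this sketch shows the trade-off the description alludes to: a shorter interval wastes time on checkpoint overhead, a longer one loses more rework per failure, and the square-root interval balances the two.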
Keywords: MapReduce, big data, task failures, completion time, checkpoint interval
Conference: 1st International Conference on Advanced Information Technologies (ICAIT-2017)
Conference date: 1-2 November 2017
Place: Yangon, Myanmar
Session: Cloud Computing and Big Data Analytics
Website: https://www.uit.edu.mm/icait-2017/
Version 1: deposited 2020-11-19
