<?xml version='1.0' encoding='UTF-8'?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/ http://www.openarchives.org/OAI/2.0/OAI-PMH.xsd">
  <responseDate>2026-04-30T14:17:17Z</responseDate>
  <request metadataPrefix="oai_dc" identifier="oai:meral.edu.mm:recid/2926" verb="GetRecord">https://meral.edu.mm/oai</request>
  <GetRecord>
    <record>
      <header>
        <identifier>oai:meral.edu.mm:recid/2926</identifier>
        <datestamp>2021-12-13T01:03:35Z</datestamp>
        <setSpec>1582963342780:1596102355557</setSpec>
        <setSpec>user-uit</setSpec>
      </header>
      <metadata>
        <oai_dc:dc xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.openarchives.org/OAI/2.0/oai_dc/ http://www.openarchives.org/OAI/2.0/oai_dc.xsd">
          <dc:title>Ensuring Reliability in Deduplicated Data by Erasure Coded Replication</dc:title>
          <dc:creator>Myat Pwint Phyu</dc:creator>
          <dc:creator>Thandar Thein</dc:creator>
          <dc:description>As computer systems take on more and
more responsibilities in critical processes, the
demand for storage is increasing across
widespread applications. Storing digital
information on a single large disk is expensive
and unreliable: if the disk fails, all the data
is lost. The need for a better understanding of
the system's reliability is therefore ever
increasing. In many storage environments,
deduplication is applied as an effective technique
to optimize storage space utilization. However,
data deduplication usually degrades the
reliability of the storage system because of
information sharing among files.</dc:description>
          <dc:description>In this paper, a reliability-guaranteed
deduplication algorithm is proposed that takes
reliability into account during the deduplication
process. The deduplicated data are distributed
across the storage pool using a consistent hash
ring as the replica placement strategy. The
proposed mechanism is evaluated and the results
are compared with pure replication and erasure-coded
replication. Compared with the existing systems,
the proposed mechanism provides better storage
utilization and fully guarantees the demanded
reliability level.</dc:description>
          <dc:date>2014-02-18</dc:date>
          <dc:identifier>http://hdl.handle.net/20.500.12678/0000002926</dc:identifier>
          <dc:identifier>https://meral.edu.mm/records/2926</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </GetRecord>
</OAI-PMH>
