Over 60 years of research in coding theory, beginning with the works of Shannon and Hamming, have given us nearly optimal ways to add redundancy to messages: bit strings representing messages are encoded into longer bit strings called codewords, in such a way that the message can still be recovered even if a certain fraction of the codeword bits are corrupted. Classical error-correcting codes, however, do not work well when messages are modern massive datasets, because their decoding time grows (at least) linearly with the length of the message. As a result, in typical applications large datasets are first partitioned into small blocks, each of which is then encoded separately. Such encoding allows efficient random-access retrieval of the data, but yields poor noise resilience.

Locally decodable codes address this apparent conflict between efficient retrievability and reliability. They simultaneously provide efficient random-access retrieval and high noise resilience by allowing reliable reconstruction of an arbitrary data bit by looking at only a small number of randomly chosen codeword bits. Beyond the natural applications to data transmission and storage, such codes have important applications in cryptography and computational complexity theory.

This review introduces and motivates locally decodable codes, and discusses the central results of the subject. Locally Decodable Codes assumes basic familiarity with the properties of finite fields and is otherwise self-contained. It will benefit computer scientists, electrical engineers, and mathematicians with an interest in coding theory.
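To make the notion of reconstructing one data bit from a few codeword bits concrete, here is a minimal sketch of the classical 2-query local decoder for the Hadamard code, a standard introductory example in this area (the function names and parameters below are illustrative, not from the book). The codeword lists the parity of the message against every binary vector a; since ⟨x, a⟩ XOR ⟨x, a ⊕ e_i⟩ = x_i, two random queries suffice per trial, and a majority vote over independent trials tolerates a small fraction of corrupted positions.

```python
import random

def hadamard_encode(bits):
    """Hadamard code: entry a of the codeword is the parity <x, a> mod 2,
    so a k-bit message becomes a 2^k-bit codeword."""
    k = len(bits)
    return [sum(bits[j] & ((a >> j) & 1) for j in range(k)) % 2
            for a in range(2 ** k)]

def local_decode_bit(codeword, i, k, trials=51):
    """Recover message bit i with 2 queries per trial:
    codeword[a] XOR codeword[a ^ e_i] equals x_i on uncorrupted positions.
    A majority vote over independent random trials handles a small
    fraction of corrupted codeword bits."""
    votes = 0
    for _ in range(trials):
        a = random.randrange(2 ** k)
        votes += codeword[a] ^ codeword[a ^ (1 << i)]
    return int(2 * votes > trials)

# Demo: corrupt ~5% of a length-256 codeword, then recover every
# message bit from a handful of queries each.
random.seed(0)
msg = [1, 0, 1, 1, 0, 1, 0, 0]          # k = 8
cw = hadamard_encode(msg)                # 2^8 = 256 bits
for p in random.sample(range(256), 12):  # flip 12 positions (~4.7%)
    cw[p] ^= 1
decoded = [local_decode_bit(cw, i, 8) for i in range(8)]
```

The exponential blow-up in codeword length (2^k bits for a k-bit message) is exactly the kind of rate-versus-query-complexity trade-off the survey's central results are about.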