Date: 2026-04-09
Professor Qiu Houming of the School of Software and Internet of Things Engineering has long been dedicated to addressing core challenges in coded distributed computing, including reducing the recovery threshold, enhancing system reliability, and improving decoding flexibility. In recent years, his research findings have been published in top-tier journals in the fields of computer networking and distributed computing, such as IEEE Transactions on Mobile Computing, IEEE Transactions on Cognitive Communications and Networking, IEEE Transactions on Cloud Computing, and IEEE Transactions on Emerging Topics in Computing.
Recently, the team has made critical progress both in coded distributed computing itself and in its application to machine learning. Two research outcomes were accepted for publication in the prestigious international journals IEEE Transactions on Mobile Computing and IEEE Transactions on Emerging Topics in Computing, in November and April 2025, respectively. This marks significant recognition of the team's work on coded distributed computing and system reliability. The underlying technologies are expected to be directly applicable to large-scale computing systems and edge computing scenarios, providing a novel technical pathway for distributed systems handling massive data. The two achievements are described below.

Achievement 1: A Coded Computing Scheme Based on Barycentric Rational Interpolation Enables Arbitrarily Sized Recovery Thresholds. Published in IEEE Transactions on Mobile Computing in November 2025, the study, titled "Barycentric Coded Distributed Computing with Flexible Recovery Threshold for Collaborative Mobile Edge Computing", proposes a coded distributed computing scheme based on barycentric rational interpolation. The scheme can reconstruct the final result from any set of results returned by worker nodes, significantly reducing task latency. It supports computation over both finite fields and the real field while maintaining numerical stability. In addition, the carefully designed encoding and decoding functions guarantee the absence of poles, which improves the approximation accuracy of decoding and enables flexible accuracy adjustment. The team further integrated the proposed coding scheme with distributed machine learning algorithms, significantly reducing training time while preserving convergence and tolerating straggler nodes in the system. This work moves beyond the limitation of traditional coded distributed computing schemes, which are confined to the single task of matrix multiplication, extending the applicable domain to arbitrary polynomial computation tasks.
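To give a feel for how interpolation-based decoding recovers a result from whichever workers respond first, the following minimal sketch decodes a toy coded matrix multiplication using the barycentric form of polynomial interpolation. The block split, evaluation points, and weights here are illustrative assumptions, not the paper's actual construction, which uses barycentric rational interpolation with weights chosen to avoid poles.

```python
import numpy as np

def barycentric_weights(nodes):
    # classical barycentric weights: w_i = 1 / prod_{j != i} (x_i - x_j)
    return np.array([1.0 / np.prod(np.delete(nodes[i] - nodes, i))
                     for i in range(len(nodes))])

def barycentric_eval(x, nodes, values, weights):
    # second barycentric form; with these weights it is exact for
    # polynomials of degree < len(nodes), from ANY distinct nodes
    d = x - nodes
    if np.any(np.isclose(d, 0.0)):
        return values[int(np.argmin(np.abs(d)))]
    t = weights / d
    return np.einsum("i,i...->...", t, values) / t.sum()

rng = np.random.default_rng(1)
A0, A1 = rng.standard_normal((2, 3, 3))   # hypothetical data blocks
B0, B1 = rng.standard_normal((2, 3, 3))

# encode: worker i evaluates C(z_i) = (A0 + z_i*A1) @ (B0 + z_i*B1),
# a degree-2 matrix polynomial in z, so ANY 3 returned results decode it
nodes = np.linspace(-1.0, 1.0, 6)         # 6 workers, distinct points
results = np.stack([(A0 + z * A1) @ (B0 + z * B1) for z in nodes])

survivors = [0, 2, 5]                     # the 3 fastest workers; rest straggle
C0 = barycentric_eval(0.0, nodes[survivors], results[survivors],
                      barycentric_weights(nodes[survivors]))
# C(0) = A0 @ B0 is recovered exactly from the surviving subset
assert np.allclose(C0, A0 @ B0)
```

Because any three distinct evaluation points determine the degree-2 polynomial, the master never waits for a fixed set of workers, which is the flexibility the recovery threshold refers to.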

Achievement 2: Approximated Coded Computing: Towards Fast, Private and Secure Distributed Machine Learning. Published in IEEE Transactions on Emerging Topics in Computing in April 2025, the study, titled "Approximated Coded Computing: Towards Fast, Private and Secure Distributed Machine Learning", proposes a fast, secure, and privacy-preserving approximated coded distributed computing scheme. The scheme employs a novel encryption algorithm based on elliptic curve cryptography to ensure data security during transmission. In particular, it does not impose a strict lower bound on the number of results that must be awaited. The scheme also effectively overcomes straggler nodes, ensuring that tasks execute without interference. Meanwhile, by injecting random data during the encoding phase, it guarantees that the original data is never disclosed, while the decoding phase still recovers the original results. Finally, deep neural networks built on the proposed coding scheme achieve a significant improvement in convergence speed over baseline methods. This research not only removes the strict recovery-threshold constraint of traditional coding schemes, but also exhibits strong resilience against straggler nodes, colluding nodes, and eavesdroppers in distributed systems. Its significant advantages in distributed machine learning are expected to see widespread application.
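As a rough illustration of how random data injected at encoding time can hide the inputs while the exact result is still decodable, the sketch below follows a Lagrange-coded-computing-style construction. The masking polynomial, evaluation points, and task function are hypothetical, and the paper's elliptic-curve-based encryption layer is not modeled here.

```python
import numpy as np

def lagrange_at_zero(nodes, values):
    # Lagrange interpolation evaluated at z = 0:
    # sum_i values[i] * prod_{j != i} (0 - z_j) / (z_i - z_j)
    out = np.zeros_like(values[0])
    for i, zi in enumerate(nodes):
        li = 1.0
        for j, zj in enumerate(nodes):
            if j != i:
                li *= (0.0 - zj) / (zi - zj)
        out = out + li * values[i]
    return out

rng = np.random.default_rng(2)
D = rng.standard_normal(4)        # private input data
R = rng.standard_normal(4)        # random mask drawn at encoding time

nodes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # 5 workers, all nonzero points
shares = [D + z * R for z in nodes]          # each worker only sees D masked by z*R

# every worker applies the (public) polynomial task h(x) = x**2 + x elementwise
outputs = np.stack([s**2 + s for s in shares])

# h(D + z*R) is a degree-2 polynomial in z, so ANY 3 results decode h(D) at z = 0
survivors = [1, 3, 4]                        # two stragglers never answered
decoded = lagrange_at_zero(nodes[survivors], outputs[survivors])
assert np.allclose(decoded, D**2 + D)
```

No worker evaluates at z = 0, so no single node ever observes the unmasked data D, yet interpolating any three outputs back to z = 0 yields the exact task result.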