Efficient maliciously secure two-party mixed-protocol framework for data-driven computation tasks
Computer Standards and Interfaces
In the artificial intelligence era, data-driven computation tasks such as machine learning play an essential role as decision-makers that unlock the value of big data in many fields, and the pursuit of better accuracy and efficiency to promote their application has never ceased. Since the main way to improve the accuracy of these tasks (e.g., training machine learning models) is to increase the diversity of the datasets, multiple data providers must share their data. However, data providers, e.g., private companies, are reluctant to share their datasets directly, out of concern for protecting user privacy and preventing the leakage of their business secrets. How to perform data-driven computation tasks over joint datasets securely and efficiently has therefore become the central problem. In this work, without the aid of any trusted third party (e.g., a cloud server), we construct an efficient maliciously secure two-party mixed-protocol framework for data-driven computation tasks. In particular, we construct a new cryptographic gadget called committed oblivious linear evaluation (C-OLE), built from homomorphic commitments in the malicious model. We then use this C-OLE gadget to construct two types of share conversion protocols in the malicious model, which together yield the two-party mixed-protocol framework for data-driven computation tasks. Since it does not rely on the random oracle, our framework provides a stronger security guarantee than the other two-party mixed-protocol frameworks in the literature. Furthermore, we carefully evaluate the theoretical efficiency of the two share conversion protocols and provide the results as a reference for future developers who intend to instantiate data-driven computation tasks (e.g., privacy-preserving machine learning applications) securely and efficiently in the malicious model.
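To make the role of the OLE building block concrete, the sketch below shows the *ideal* oblivious linear evaluation functionality and how a single OLE call re-shares a cross-term product additively, which is the kind of step share conversion protocols perform. This is an illustrative toy only, not the paper's C-OLE construction: it omits the homomorphic commitments and malicious-model checks, and the field size and function names are assumptions made for the demonstration.

```python
import secrets

P = 2**61 - 1  # an assumed Mersenne prime defining the field for arithmetic shares

def ideal_ole(a, b, x):
    """Ideal OLE functionality: the sender holds (a, b), the receiver holds x.
    The receiver learns y = a*x + b mod P and nothing about (a, b);
    the sender learns nothing about x. A real protocol realizes this
    cryptographically; here it is a trusted-oracle stand-in."""
    return (a * x + b) % P

def share_product(u, v):
    """Re-share the cross-term u*v (u held by party P1, v held by party P2)
    as additive shares s1 + s2 = u*v mod P, using one OLE call."""
    b = secrets.randbelow(P)   # P1's fresh random mask
    s2 = ideal_ole(u, b, v)    # P2 obtains u*v + b, which hides u*v from P2
    s1 = (-b) % P              # P1 keeps the negated mask as its share
    return s1, s2

# The shares reconstruct the product while neither party sees it in the clear.
u, v = secrets.randbelow(P), secrets.randbelow(P)
s1, s2 = share_product(u, v)
assert (s1 + s2) % P == (u * v) % P
```

In the malicious model, the paper's C-OLE additionally commits the parties to their inputs so that deviations from the protocol are detectable; the sketch above captures only the semi-honest functionality.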
Open Access Status
This publication is not available as open access
Australian Research Council