
The cost of protecting big data for machine learning has dropped by a factor of a hundred


A hashing-based approach sharply reduces the cost of providing differential privacy.


When utilizing or sharing huge datasets for machine learning, Rice University computer scientists have identified an affordable approach for tech businesses to adopt a stringent form of personal data protection.

“If data privacy can be assured, there are numerous scenarios where machine learning could benefit society,” said Anshumali Shrivastava, an associate professor of computer science at Rice. “If we could teach machine learning algorithms to search for patterns in big datasets of medical or financial information, we might improve medical treatments and identify discriminatory tendencies, for example. This is currently impossible because data privacy methods cannot scale.”

With a new approach they’ll present this week at CCS 2021, the Association for Computing Machinery’s annual flagship conference on computer and communications security, Shrivastava and Rice graduate student Ben Coleman aim to change that. Shrivastava and Coleman found they could construct a brief summary of an extensive database of sensitive records using a technique called locality sensitive hashing. Their method, dubbed RACE, takes its name from these summaries, which are known as “repeated array of count estimators” sketches.
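To make the idea concrete, here is a minimal, illustrative sketch of a repeated array of count estimators, not the authors’ implementation. It assumes a SimHash-style (signed random projection) LSH family; the class name, parameters, and shapes are hypothetical choices for the example:

```python
import numpy as np

class RaceSketch:
    """Toy repeated array of count estimators (RACE) over an LSH family.

    Each of the `rows` repetitions hashes a vector into one of 2**bits
    buckets via signed random projections and increments a counter there.
    Querying averages the counters a point hashes to, which estimates the
    average LSH collision probability (a kernel sum) against the data.
    """

    def __init__(self, dim, rows=50, bits=4, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.bits = rows, bits
        # One set of `bits` random hyperplanes per repetition.
        self.planes = rng.standard_normal((rows, bits, dim))
        self.counts = np.zeros((rows, 2 ** bits))
        self.n = 0

    def _bucket(self, x):
        # Sign pattern of the projections -> integer bucket index per row.
        signs = (np.einsum('rbd,d->rb', self.planes, x) > 0).astype(int)
        return signs @ (1 << np.arange(self.bits))

    def add(self, x):
        # Only counters are updated; the raw vector is never stored.
        self.counts[np.arange(self.rows), self._bucket(x)] += 1
        self.n += 1

    def query(self, x):
        # Mean counter value, normalized by dataset size.
        return self.counts[np.arange(self.rows), self._bucket(x)].mean() / self.n
```

A query point close to the ingested data collides with it in most repetitions and scores high; a distant point scores near zero, all without retaining any individual record.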

RACE sketches, according to Coleman, are both safe to make public and useful for algorithms that employ kernel sums, one of machine learning’s core building blocks, and for machine learning systems that perform common tasks such as classification, ranking, and regression. RACE, he added, could enable businesses to reap the benefits of large-scale, distributed machine learning while maintaining a rigorous form of data protection known as differential privacy.
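A kernel sum is simply the average similarity between a query point and every record in a dataset. The sketch below computes one exactly, using a Gaussian kernel as an illustration (RACE itself works with LSH-collision kernels); this is the quantity a RACE sketch approximates without exposing the raw rows:

```python
import numpy as np

def kernel_sum(query, data, bandwidth=1.0):
    """Exact kernel sum: mean Gaussian similarity of `query` to `data`.

    Computing this directly requires access to every raw record; a RACE
    sketch approximates the same quantity from counters alone.
    """
    d2 = np.sum((data - query) ** 2, axis=1)  # squared distances to each row
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean()
```

Classification, ranking, and density estimation can all be phrased in terms of such sums, which is why a private estimator for them is broadly useful.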


Differential privacy, which is employed by a number of tech giants, is based on the idea of obscuring individual information with random noise.

“There are elegant and powerful methods for meeting differential privacy standards today, but none of them scale,” Coleman said. “As data becomes more high-dimensional, the computational overhead and memory requirements climb exponentially.”

Data is becoming increasingly high-dimensional, meaning it contains both a large number of observations and a large number of distinct features about each observation.

RACE, he said, scales to high-dimensional data. The sketches are compact and easy to distribute, as are the computational and memory requirements for creating them.
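One reason the sketches distribute well is that counter arrays built with the same hash functions merge by simple element-wise addition. A small illustrative example (the shapes and values here are hypothetical):

```python
import numpy as np

# Two parties each hold a RACE-style counter array built with the SAME
# hash functions (hypothetical shape: 100 repetitions x 16 buckets).
sketch_a = np.random.default_rng(0).integers(0, 50, size=(100, 16))
sketch_b = np.random.default_rng(1).integers(0, 50, size=(100, 16))

# Merging distributed sketches is element-wise addition, so parties can
# pool their summaries without ever exchanging raw records.
merged = sketch_a + sketch_b
```

The merged array behaves as if it had been built over the union of both datasets, which is what makes large-scale distributed use practical.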

“If engineers want to employ kernel sums today, they must either forfeit their budget or their customers’ privacy,” Shrivastava added. “RACE alters the economics of disclosing high-dimensional data with varying levels of privacy. It’s easy to use, quick, and costs a fraction of what traditional techniques do.”

Shrivastava and his students have previously created a number of algorithmic techniques to make machine learning and data science faster and more scalable. With their collaborators, they have found a more efficient way for social media companies to keep misinformation from spreading online, trained large-scale deep learning systems up to 10 times faster for “extreme classification” problems, estimated the number of identified victims killed in the Syrian civil war more accurately and efficiently, and shown that deep neural networks can be trained up to 15 times faster on general-purpose CPUs.

