Training Generative AI Models to Harness Unlabeled Data and Reduce Bias


During his Time Machine Interactive discussion on the future of AI technology, University of Texas at Austin professor Alex Dimakis shared insights into how biases originating from narrow and incomplete training data can corrupt results when AI works with unlabeled information such as blurry images.

The work Dimakis and his students have done using neural network filters to improve and clarify compromised images shows both the power and potential of AI and how important it is to train on data broad enough to weed out the biases that lead to flawed results.

When properly trained, Dimakis said, the networks can speed up magnetic resonance imaging by as much as 12 times, or improve security processes by efficiently scanning video for designated anomalies. But when networks are poorly trained from the outset, on limited data and rules, the results can go badly wrong: a highly pixelated photograph of former U.S. President Barack Obama, for example, was reconstructed as a white man's face, a result that went viral in 2020 and prompted wide discussion of bias in AI research.

[Image: the viral example, in which an AI upscaler reconstructs a pixelated photo of Barack Obama as a white man. Credit: Twitter/@Chicken3gg]

“Everybody was going there and uploading images, and it was basically making everybody white,” said Dimakis, co-director of the NSF-funded Institute for Foundations of Machine Learning. “Is the problem because the generator has only seen white people in the training set? Is it a dataset problem or an algorithms problem? There is bias in the dataset, but there is also bias amplification because of the algorithm.”

How generative models can learn

During his presentation, Dimakis broke down the process networks use to “imagine” what’s missing from incomplete or poorly rendered images. Generative models work alongside classifiers and filters that help determine what kind of image is being presented, drawing on patterns learned from training data to produce a plausible final result.
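
As a rough illustration of that process, here is a minimal sketch in the spirit of the inverse-problem work Dimakis’s group has published: given a degraded observation, search the generator’s latent space for an image whose degraded version matches it. The tiny generator `G`, the `degrade` operator, and all parameters below are illustrative stand-ins, not code from the talk; a real system would use a large pretrained generator.

```python
# Sketch: reconstruction with a generative prior. We never see the full
# image, only a degraded measurement y, and we search the generator's
# latent space for an image consistent with it.
import torch

torch.manual_seed(0)

LATENT_DIM, IMG_PIXELS = 64, 32 * 32

# Stand-in "pretrained" generator: latent vector -> flattened 32x32 image.
# (Illustrative only; in practice this would be a large pretrained model.)
G = torch.nn.Sequential(
    torch.nn.Linear(LATENT_DIM, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, IMG_PIXELS),
    torch.nn.Tanh(),
)

def degrade(img_flat):
    """Forward operator A: keep every 4th pixel, a crude stand-in for pixelation."""
    return img_flat[..., ::4]

# The measurement y = A(x_true); x_true itself is never shown to the optimizer.
x_true = torch.rand(IMG_PIXELS) * 2 - 1
y = degrade(x_true)

# Search the latent space for an image whose degraded version matches y.
z = torch.zeros(LATENT_DIM, requires_grad=True)
opt = torch.optim.Adam([z], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = torch.mean((degrade(G(z)) - y) ** 2)
    loss.backward()
    opt.step()

x_hat = G(z).detach()  # the full image the model "imagines"
print(f"measurement error: {torch.mean((degrade(x_hat) - y) ** 2).item():.4f}")
```

The key point for bias: every pixel the reconstruction “fills in” comes from what the generator learned to produce, so a generator trained only on white faces can only imagine white faces.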


One of the problems, he said, comes when the classifiers and filters don’t have a wide enough data library to correctly interpret an initial image. For example, if a network has learned to spot a camel in a photograph by learning that camels always appear in sandy environments, it will be unable to spot a camel in a cityscape.

And if a network has only been trained on portraits of white people, then a blurry or incomplete photo of the 44th U.S. president will be rendered as a white man.


The results will be similarly frustrating, Dimakis joked, if an algorithm used to monitor kidney health and detect tumors is trained mainly on the vast supply of cat photos and videos from social media.

The root cause of such problematic results is that these algorithms are built to produce a “right” answer as often as possible, based on the data they have already seen.

To illustrate how quickly poor use of unlabeled data can go astray, Dimakis pointed out that an algorithm trained on coin flips that come up heads 60% of the time will predict heads on every future flip. That constant answer lets the algorithm expect to be correct most of the time, even though it badly misrepresents the process it is modeling.
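
A few lines of Python (our simulation, not from the talk) make the arithmetic concrete: a constant “heads” guess is right about 60% of the time, while randomizing predictions to match the coin’s own 60/40 odds is right only about 52% of the time (0.6 × 0.6 + 0.4 × 0.4), so an accuracy-maximizing learner collapses to the constant answer.

```python
# Why an accuracy-driven predictor always calls "heads" on a 60/40 coin.
import random

random.seed(0)
P_HEADS = 0.6
N = 100_000
flips = [random.random() < P_HEADS for _ in range(N)]  # True = heads

# Strategy 1: always predict heads.
acc_constant = sum(flips) / N

# Strategy 2: randomize predictions to match the coin's 60/40 odds.
acc_matched = sum(f == (random.random() < P_HEADS) for f in flips) / N

print(f"always predict heads: {acc_constant:.3f}")  # ~0.600
print(f"match the 60/40 odds: {acc_matched:.3f}")   # ~0.520 = 0.6^2 + 0.4^2
```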


Huge potential, problems using unlabeled data

One of the main issues with unlabeled data, Dimakis said, is that the AI generators needed to correctly interpret the data are difficult and time-consuming to train. Studying and improving this technology is part of the work Dimakis and his students at UT are currently conducting.


“As you train these weights internally, it becomes better and better,” he said. “Very few companies are able to train generators, unlike your good old classifiers, where anybody can train one with two weeks of machine learning research. Training generators requires PhDs and experienced researchers.”

Once properly trained, the models from the imaging research Dimakis has led can fill in missing data (inpainting), compress images and other data, perform super-resolution to sharpen noisy or blurry images, and add color to grayscale ones.
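
One way to see how those tasks fit a single framework, sketched below with illustrative operator names of our own choosing: each task amounts to a different forward operator describing how the observation was produced, and the same latent-space search shown earlier can be reused by swapping the operator in.

```python
# Each imaging task is just a different forward operator A; the latent-space
# search stays the same. These operators are illustrative stand-ins.
import torch
import torch.nn.functional as F

def inpaint_op(img, mask):
    """Inpainting: only the unmasked pixels are observed."""
    return img[mask]

def superres_op(img, factor=4):
    """Super-resolution: a low-resolution, average-pooled image is observed."""
    return F.avg_pool2d(img, factor)

def colorize_op(img_rgb):
    """Colorization: only the grayscale luminance of the image is observed."""
    weights = torch.tensor([0.299, 0.587, 0.114]).view(1, 3, 1, 1)
    return (img_rgb * weights).sum(dim=1, keepdim=True)

x = torch.rand(1, 3, 32, 32)         # a toy RGB image batch
mask = torch.rand_like(x) > 0.5      # keep roughly half the pixels
print(inpaint_op(x, mask).shape)     # 1-D tensor of observed pixels
print(superres_op(x).shape)          # torch.Size([1, 3, 8, 8])
print(colorize_op(x).shape)          # torch.Size([1, 1, 32, 32])
```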


The benefits of these capabilities are many, including improved seismic imaging and faster MRI scans, which make the procedure practical for babies and young children who couldn’t otherwise remain still long enough for it to be effective.

Dimakis said that properly trained generators can use unlabeled data in new and powerful ways, though he hopes their applications reach well beyond the creation of fake profiles on LinkedIn and other social media sites.

“We think it’s a very interesting direction that can have a lot of applications. Generative models are neural networks that can imagine things and … can be used for all kinds of problems, and they can be combined with pre-trained classifiers to guide generation,” he said. “Medical imaging, computational photography, object recognition, noise imaging, free-text video search, and seismic imaging. These are the things that we’re working on. The future of AI will be unsupervised in general, because everybody has a lot of data, but not many people have a lot of labeled data.”

In addition to supporting Dimakis’ AI research at UT, SparkCognition partners with the university’s Texas Robotics program to advance artificial intelligence in robotics and practical industry applications. Students and researchers in that program have used SparkCognition’s HyperWerx facility in north Austin, a 50-acre proving ground for robotics and unmanned aerial vehicles (UAVs). The first-of-its-kind facility offers interactive research environments that reflect SparkCognition’s commitment to innovation and discovery across all areas of AI.
