Better AI Training with HPC

By Megan Johnson, NCSA
Image: An AI-generated hand with fingers bent at odd angles and extra joints, creating an unsettling image of a hand.

By now, most people have probably seen those Lovecraftian AI-generated images in which people have eight fingers on one hand or too many legs. The AI-generated image of a hand above is a good example, with six fingers that bend in inexplicable ways. An international team of researchers led by Carnegie Mellon University (CMU) is attempting to remedy some of these issues with artificial intelligence (AI). Using ACCESS resources from the Pittsburgh Supercomputing Center (PSC), they are fine-tuning AI image generation, focusing in particular on cases where AI-generated images are downright offensive.

To understand the problem, it helps to know how AI creates these images in the first place. AI doesn’t inherently know what a hand or a leg is; it has to learn what these things are, and how to draw them, by looking at examples. The fewer examples a machine learning algorithm has to reference, the more likely the AI is to get confused. Hands aren’t special, but weird AI hands may be the most noticeable sign of how easily AI can miss the target. Hands appear in a lot of images, but they are rarely posed in exactly the same way, so the AI can only approximate what you want. It has been trained on the general idea of a hand, based only on pictures whose captions say hands are present, and so it confuses, for instance, clasped hands with a single hand. The AI has to guess, and sometimes those guesses are very bad and many-fingered.

Online image databases used to train these AI algorithms are chock full of stock images of people and things. But those images are dominated by the same Western cultures that dominate most of the available internet data, which means underrepresented cultures offer few examples for the AI to learn from. With so little data to go on, the AI will sometimes create offensive images when depicting certain cultures. Because it scrapes the web at large, it treats any image with a description as an authoritative source, even one on a page filled with offensive material.

One way to improve AI-generated images is through careful curation of the images fed to a learning algorithm. When experts from the cultures being depicted choose the images the AI learns from, the culture-specific results are far better than those produced from random images pulled off the internet.
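The curation step can be illustrated with a minimal sketch. This is not the CMU team's actual pipeline; the record fields and file names below are hypothetical. The idea is simply that only examples vetted by a reviewer from the depicted culture make it into the training set, rather than every captioned image scraped from the web.

```python
# Minimal sketch of dataset curation before fine-tuning (hypothetical fields).
from dataclasses import dataclass


@dataclass
class TrainingExample:
    image_path: str        # path to the image file
    caption: str           # text description used for training
    expert_approved: bool  # set by a reviewer from the depicted culture


def curate(examples):
    """Return only the examples an expert has vetted for training."""
    return [ex for ex in examples if ex.expert_approved]


scraped = [
    TrainingExample("img/dance_01.jpg", "traditional dance attire", True),
    TrainingExample("img/dance_02.jpg", "costume party outfit", False),  # mislabeled
    TrainingExample("img/dance_03.jpg", "ceremonial headdress", True),
]

training_set = curate(scraped)
print(len(training_set))  # 2 of the 3 scraped examples pass curation
```

The filtering itself is trivial; the expensive and valuable part is the human review that sets the approval flag, which is exactly where the cultural expertise described above comes in.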

Read the original article on PSC’s site to see some striking examples of how much better AI generation can be when care is taken in teaching: Representation Matters in AI-Generated Images

Project Details

Resource Provider Institution(s): Pittsburgh Supercomputing Center (PSC)
Affiliations: Carnegie Mellon University
Funding Agency: NSF
Grant or Allocation Number(s): CIS240084

The science story featured here was enabled by the U.S. National Science Foundation’s ACCESS program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
