#ai #artificialintelligence #bias #ethicalai #responsibleai #diversity | Avi Hakhamanesh | 10 comments
🔥 AI has a Hotness Problem.
Have you ever noticed how AI-generated images of people tend to be... well, unusually good looking?
Every time I ask AI image generators to create women, they tend to have big boobs, small waists, perfect skin and long hair.
But why is AI so fixated on beauty?
The inner workings of these AI models aren’t yet fully understood, even by experts. But there seem to be three main theories:
💠 Hotness In = Hotness Out (Data Bias): AI is trained on datasets of our “best selves” - edited, airbrushed images of models, celebrities and enhanced selfies. So, it inherently adopts heightened beauty standards, generating images that are ‘too good to be true.’
💠 The Midpoint Hottie Effect (Averageness Effect): AI generates attractive faces as a by-product of its process. When combining multiple features, it leans towards more symmetrical faces and blemish-free complexions, which we perceive as more universally attractive.
💠 Hot by Design (Feedback Loop): Some AI image generators learn from user interactions and feedback data, noting which outputs are preferred. Adobe, for example, noted a "drift toward hotness" in its image-generating tool Firefly, based on which images were most frequently downloaded by users. If we prefer attractive faces, AI learns to generate more of them.
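The second theory above (the averageness effect) can be sketched in a few lines of toy code. This is a hypothetical illustration, not how any real image generator works: we treat each "face" as a grid of pixel values with random individual quirks (blemishes, asymmetries), and show that averaging many of them washes those quirks out, leaving a smoother, more "ideal" composite.

```python
import numpy as np

# Hypothetical sketch of the averageness effect: each face is modeled as an
# idealized symmetric base plus random individual variation (blemishes etc.).
rng = np.random.default_rng(0)
base = np.full((8, 8), 128.0)                     # idealized "average" face region
faces = [base + rng.normal(0, 20, (8, 8))         # 100 individual faces,
         for _ in range(100)]                     # each with its own quirks

composite = np.mean(faces, axis=0)                # average them together

# The composite sits much closer to the idealized base than any one face:
individual_dev = np.abs(faces[0] - base).mean()
composite_dev = np.abs(composite - base).mean()
print(composite_dev < individual_dev)             # True
```

The same logic applies when a model blends features learned from many training faces: individual irregularities cancel out, and what remains is the smooth, symmetric average we tend to read as attractive.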
We know that AI carries various biases ranging from gender and race to political and ideological ones. This beauty bias is another concerning limitation.
Just as magazines and influencers have been criticized for setting unrealistic beauty standards, AI’s inclination towards attractiveness could distort our perception of what's 'normal.'
As we continuously integrate AI into our lives, we need to proactively examine and address its biases.
Increasing diverse training data, auditing algorithms and having inclusive teams build AI can help.
But broader change is needed in how we value beauty and diversity.
For a deeper dive, check out the link to the article in The Atlantic in the comments 👇.