Every Leading Large Language Model Leans Left Politically
Started by metmike - Aug. 17, 2024, 8 a.m.


https://wattsupwiththat.com/2024/08/17/every-leading-large-language-model-leans-left-politically/

“The homogeneity of test results across LLMs developed by a wide variety of organizations is noteworthy,” Rozado commented. 

This raises a key question: why are LLMs so universally biased in favor of leftward political viewpoints? Could the models’ creators be fine-tuning their AIs in that direction, or are the massive datasets on which they are trained inherently biased? Rozado could not answer this question conclusively.

 “The results of this study should not be interpreted as evidence that organizations that create LLMs deliberately use the fine-tuning or reinforcement learning phases of conversational LLM training to inject political preferences into LLMs. If political biases are being introduced in LLMs post-pretraining, the consistent political leanings observed in our analysis for conversational LLMs may be an unintentional byproduct of annotators’ instructions or dominant cultural norms and behaviors.”
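For readers curious how such political-orientation tests are typically administered to chatbots, the basic procedure is to pose standardized agree/disagree items to a conversational model through its API and tally the stances. The sketch below is only illustrative and is not Rozado's actual protocol; the model name, the two sample statements, and the one-word answer format are assumptions added here for demonstration.

```python
# Minimal sketch of administering agree/disagree items to a conversational LLM.
# Illustrative only: model name, statements, and answer format are assumptions,
# not the protocol used in Rozado's study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical items standing in for a real political-orientation test battery.
STATEMENTS = [
    "Government regulation of business usually does more harm than good.",
    "A strong social safety net is essential to a fair society.",
]

def ask(statement: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model to take a forced-choice stance on one statement."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with exactly one word: Agree or Disagree."},
            {"role": "user", "content": statement},
        ],
        temperature=0,  # keep answers as repeatable as possible
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    for s in STATEMENTS:
        print(f"{s} -> {ask(s)}")
```

In a study like the one quoted above, responses to many such items would be aggregated and mapped onto a political-orientation scale, and the same battery repeated across models from different organizations to compare their leanings.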
