How does a computer decide what's "extreme", "propaganda", or "racist"? These terms are taken for granted in everyday conversation, but under scrutiny it becomes obvious that they lack objective, non-circular definitions. Rather, they rest on after-the-fact rationalizations that a computer has no way of recognizing or distinguishing without, ironically, purposely inserted biases (and often poorly implemented ones at that). You can't build a "convincing" or "charismatic" AI, because persuasion and charm are qualities that human beings (supposedly) comprehend and respond to, not machines. AI "charisma" is just a model built on positive reinforcement.
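To make that last point concrete, here is a minimal sketch of what a preference-trained "charisma" score amounts to. Everything in it (the vocabulary, the preference pairs, the training loop) is hypothetical and invented for illustration, not any particular system: a scorer fit to human approval labels, with no concept of charm anywhere inside it.

```python
# Minimal sketch (hypothetical data and features): a "charisma" reward model
# is just a scorer fit to human thumbs-up/thumbs-down comparisons. It never
# represents charm; it only learns which surface features correlate with
# approval in its training set.

import math

# Toy bag-of-words features; a real system would use learned embeddings.
VOCAB = ["sorry", "great", "actually", "!", "friend", "wrong"]

def features(text: str) -> list[float]:
    tokens = text.lower().split()
    return [float(tokens.count(w)) for w in VOCAB]

def score(weights: list[float], text: str) -> float:
    return sum(w * x for w, x in zip(weights, features(text)))

# Pairwise preferences (preferred, rejected), as collected from human raters.
# "Charisma" is defined by nothing other than these labels.
PREFS = [
    ("great to see you friend !", "you are wrong actually"),
    ("sorry , great question !", "actually that is wrong"),
]

def train(prefs, epochs=200, lr=0.1):
    """Bradley-Terry-style update: push preferred scores above rejected ones."""
    w = [0.0] * len(VOCAB)
    for _ in range(epochs):
        for good, bad in prefs:
            margin = score(w, good) - score(w, bad)
            p = 1.0 / (1.0 + math.exp(-margin))  # P(model agrees with rater)
            grad = 1.0 - p  # gradient of the log-likelihood w.r.t. the margin
            for i, (xg, xb) in enumerate(zip(features(good), features(bad))):
                w[i] += lr * grad * (xg - xb)
    return w

weights = train(PREFS)
# The resulting "charisma" score is reinforced correlation, nothing more:
print(score(weights, "great , friend !"))       # high: matches approved patterns
print(score(weights, "actually you are wrong")) # low: matches rejected patterns
```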
> These terms are taken for granted in everyday conversation, but under scrutiny it becomes obvious that they lack objective, non-circular definitions
This is false. A simple dictionary check shows that the definitions are in fact not circular.
In general, dictionaries are useful for providing a history, and sometimes an origin, of a term's usage. However, they don't provide a comprehensive or absolute meaning. Unlike scientific laws, words aren't discovered but manufactured. They are subsequently adopted by a larger public, delimited by experts, and at times recontextualized by an academic or philosophical discipline, or something of that nature.
Even in the best case, when a term is clearly defined and well mapped to its referent, popular usage creates a connotation that then supplants the earlier meaning. Dictionaries sometimes retain the older meanings and usages, and in doing so build up a roster of "dated", "rare", "antiquated", or "alternative" senses over a term's memetic lifecycle.
It's an issue of correlating semantics with preconceived value judgements (i.e. the is-ought problem). While this may affect language as a whole, some terms and ideas (often abstract and controversial ones) are more likely than others to acquire, or have already acquired, inconsistent presumptions and interpretations. The questionable need to weight certain responses, and the odd, uncanny results that follow, should be proof enough that what other members of "society" expect a human being to "just get" (an event I'm unconvinced happens as often as desired or claimed) is unfalsifiable or meaningless to a generative model.
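To illustrate what "weighting certain responses" cashes out to in practice, here is a deliberately crude sketch. The term list, weights, and threshold below are all invented for illustration: a moderation filter whose entire notion of "extreme" is a curated weight table. The inserted bias isn't a side effect of the definition; it *is* the definition.

```python
# Sketch (hypothetical weights): the filter's notion of "extreme" is whatever
# its curators weighted it to be. The term list and threshold are the inserted
# bias; nothing in the model could ground the category independently.

INSERTED_BIAS = {
    # term: penalty weight, chosen by a human annotation team, not derived
    # from any prior definition of "extreme".
    "destroy": 2.0,
    "traitor": 3.0,
    "purge":   4.0,
}
THRESHOLD = 3.5  # also a policy choice, not a measurement

def is_extreme(text: str) -> bool:
    tokens = text.lower().split()
    total = sum(INSERTED_BIAS.get(t, 0.0) for t in tokens)
    return total >= THRESHOLD

# The "definition" is circular: a text is extreme because it scores high,
# and it scores high because someone decided those words mark extremity.
print(is_extreme("purge the old cache files"))   # True: a false positive
print(is_extreme("calm measured policy essay"))  # False
```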