AI cannot know the objective truth

We expect AIs to be unbiased, to provide us with objective truths. This is why we want to eliminate hallucinations. But existing LLMs cannot deliver this, and even with RAG, RLHF, or online training the goal remains out of reach. Human language encodes biases, and the available content contains half-truths and lies that are hard to detect. Even academic journals do not contain only objective, unobjectionable truths: low-quality studies, one-sided reviews, publication bias, and questionable research practices are rampant and hard to detect automatically.

We must remain critical thinkers.
