Our first AI alignment paper proposes a solution to LLM hallucinations and mode collapse in maintaining conversational context between humans and AI.