Tag: llm

# | Post Title | Date | User
1 | Contradictory responses observed | 4 days ago | Geoff Garabedian
2 | Zero-shot not reliable | 4 days ago | Dulce Martinez
3 | Handling long context still feels unreliable: responses lose track | 6 days ago | Aleecia Centeno
4 | Outputs are technically right but not helpful in real scenarios | 6 days ago | Chris Brown
5 | Same query, different answers every time; makes it hard to trust outputs | 6 days ago | Andrew Day
6 | Evaluating AI outputs feels subjective; team members disagree on what is correct | 6 days ago | Jason Nejezchleb
7 | Not getting consistent responses from the same prompt across sessions. Any fix? | 6 days ago | Shirley Evans-Wofford
8 | Is anyone else seeing random hallucinations even after adding RAG, or is it just us? | 6 days ago | Matthew Basile
9 | LLM output looked perfect in testing but broke badly with real users; not sure what we missed | 6 days ago | Mack Silvertooth
