Using a novel set of questions compiled by practicing physicians, the Stanford-built Almanac outperformed plain-vanilla ChatGPT-4, Microsoft’s Bing, and Google’s Bard.
