Benchmarking framework reveals major safety risks of using AI in lab experiments

While artificial intelligence (AI) models have proved useful in some areas of science, such as predicting 3D protein structures, a new study shows that they should not yet be trusted in many lab experiments. The study, published in Nature Machine Intelligence, revealed that all of the large language models (LLMs) and vision-language models (VLMs) tested fell short on lab safety knowledge. Placing too much trust in these AI models for help with lab experiments can put researchers at risk.

Published by Dr. Durant. Please credit the source when reposting: https://robotalks.cn/benchmarking-framework-reveals-major-safety-risks-of-using-ai-in-lab-experiments/
