
Anthropic's Claude Clamps Down on Biological and Nuclear Weapon Risks


Though no examples have fortunately surfaced in the wild yet, academic studies have demonstrated that large language models could, in theory, be used to help create biological, chemical, radiological, or nuclear weapons.
With new chatbot safety controversies cropping up seemingly every month, AI start-up Anthropic has updated the usage policy of its Claude chatbot to clamp down on one potentially disastrous use case.
The policy now explicitly forbids using the chatbot to “synthesize, or otherwise develop, high-yield explosives or biological, chemical, radiological, or nuclear weapons or their precursors.” Though Anthropic’s terms and conditions had previously contained a clause forbidding the design of “weapons, explosives, dangerous materials or other systems designed to cause harm,” this is the first time they have included that level of granular detail, as The Verge points out.
In contrast, Anthropic has loosened Claude’s restrictions in some other areas.
