In 2021, the Maryland Department of Health and the state police were confronting a crisis: Fatal drug overdoses in the state were at an all-time high, and authorities didn't know why.
Seeking answers, Maryland officials turned to scientists at the National Institute of Standards and Technology, the national metrology institute for the United States, which defines and maintains standards of measurement essential to a wide range of industrial sectors and health and security applications.
There, a research chemist named Ed Sisco and his team had developed methods for detecting trace amounts of drugs, explosives, and other dangerous materials, techniques that could protect law enforcement officers and others who had to collect these samples. And a pilot program uncovered new, critical information almost immediately. Read the full story.
—Adam Bluestein
This story is from the next issue of our print magazine. Subscribe now to read it and get a copy of the magazine when it lands!
Phase two of military AI has arrived
—James O’Donnell
Last week, I spoke with two US Marines who spent much of last year deployed in the Pacific, conducting training exercises from South Korea to the Philippines. Both were responsible for analyzing surveillance to warn their superiors about possible threats to the unit. But this deployment was unique: For the first time, they were using generative AI to scour intelligence, through a chatbot interface similar to ChatGPT.
As I wrote in my new story, this experiment is the latest evidence of the Pentagon's push to use generative AI (tools that can engage in humanlike conversation) throughout its ranks, for tasks including surveillance. This push raises alarms from some AI safety experts about whether large language models are fit to analyze subtle pieces of intelligence in situations with high geopolitical stakes.