• Does AI Enable and Enhance Biorisks?

    The diversity of the biorisk landscape highlights the need to clearly identify which scenarios and actors are of concern. It is important to consider AI-enhanced risk within the current biorisk landscape, in which both experts and non-experts can cause biological harm without the need for AI tools, thus highlighting the need for layered safeguards throughout the biorisk chain.

  • Generative AI and Weapons of Mass Destruction: Will AI Lead to Proliferation?

    Large Language Models (LLMs) caught popular attention in 2023 through their ability to generate text based on prompts entered by the user. Ian J. Stewart writes that “some have raised concerns about the ability of LLMs to contribute to nuclear, chemical and biological weapons proliferation (CBRN). Put simply, could a person learn enough through an interaction with an LLM to produce a weapon? And if so, would this differ from what the individual could learn by scouring the internet?”

  • Evaluating the Truthfulness of Fake News Through Online Searches Increases the Chances of Believing Misinformation

    Conventional wisdom suggests that searching online to evaluate the veracity of misinformation would reduce belief in it. But a new study by a team of researchers shows the opposite occurs: Searching to evaluate the truthfulness of false news articles actually increases the probability of believing misinformation.

  • New Nuclear Deflection Simulations Advance Planetary Defense Against Asteroid Threats

    As part of an effort to test different technologies to protect Earth from asteroids, a kinetic impactor was deliberately crashed into an asteroid to alter its trajectory. However, given limits on the mass that can be lifted to space, scientists continue to explore nuclear deflection as a viable alternative to kinetic impact missions. Nuclear devices have the highest energy density per unit mass of any human technology, making them an invaluable tool for mitigating asteroid threats.

  • Planning for an Uncertain Future: What Climate-Related Conflict Could Mean for U.S. Central Command

    As a result of climate change, the Middle East and Central Asia are projected to become hotter and drier, with reduced access to fresh water. These changes could lead to greater conflict in U.S. Central Command’s (CENTCOM) area of responsibility.

  • Artificial Intelligence Systems Excel at Imitation, but Not Innovation

    Artificial intelligence (AI) systems are often depicted as sentient agents poised to overshadow the human mind. But AI lacks the crucial human ability of innovation. While children and adults alike can solve problems by finding novel uses for everyday objects, AI systems often lack the ability to view tools in a new way.

  • “Energy Droughts” in Wind and Solar Can Last Nearly a Week, Research Shows

    Understanding the risk of compound energy droughts—times when the sun doesn’t shine and the wind doesn’t blow—will help grid planners understand where energy storage is needed most.
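A compound energy drought has a simple computational definition: a stretch of time in which wind and solar output are simultaneously low. The sketch below (not from the study; thresholds and data are made up for illustration) shows one way to find the longest such stretch in hourly capacity-factor series:

```python
# Illustrative sketch: detecting "compound energy droughts" -- consecutive
# hours where wind AND solar capacity factors are both below a threshold.
# The 0.1 threshold and the toy data are assumptions for demonstration only.

def longest_compound_drought(wind, solar, threshold=0.1):
    """Return the longest run of hours in which both wind and solar
    capacity factors fall below `threshold`."""
    longest = current = 0
    for w, s in zip(wind, solar):
        if w < threshold and s < threshold:
            current += 1
            longest = max(longest, current)
        else:
            current = 0
    return longest

# Toy hourly capacity factors with a 3-hour calm, dark stretch
wind  = [0.4, 0.05, 0.02, 0.08, 0.5, 0.3]
solar = [0.6, 0.00, 0.00, 0.05, 0.2, 0.0]
print(longest_compound_drought(wind, solar))  # -> 3
```

On real data the same scan, run over multi-year weather records, tells grid planners how many hours of storage a region would need to ride through its worst observed drought.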

  • Taking Illinois’ Center for Digital Agriculture into the Future

    The Center for Digital Agriculture (CDA) at the University of Illinois Urbana-Champaign has a new executive director, John Reid, who plans to support CDA’s growth across all dimensions of use-inspired research, translation of research into practice, and education and workforce development.

  • Earth Had Its Warmest November on Record

    November 2023 was the warmest November in NOAA’s 174-year global climate record, and 2023 is still on track to be the globe’s warmest year on record.

  • Why Federal Efforts to Protect Schools from Cybersecurity Threats Fall Short

    In August 2023, the White House announced a plan to bolster cybersecurity in K-12 schools, and with good reason. Between 2018 and mid-September 2023, there were 386 recorded cyberattacks in the U.S. education sector, costing those schools $35.1 billion; K-12 schools were the primary target. While the steps taken by the White House are positive, as someone who teaches and conducts research about cybersecurity, I don’t believe the proposed measures are enough to protect schools from cyberthreats.

  • ChatGPT Could Help First Responders During Natural Disasters

    A little over a year since its launch, ChatGPT’s abilities are well known. The machine learning model can write a decent college-level essay and hold a conversation in an almost human-like way. But could its language skills also help first responders find those in distress during a natural disaster?

  • Innovative Long-Duration Energy Storage Project

    Argonne and Idaho National Laboratories have been selected by the U.S. Department of Energy for a project to validate CMBlu Energy’s battery technology for microgrid resilience and electric vehicle charging.

  • Costs of the Climate Crisis: An Insurance Umbrella for Nations at Risk

    International study in the run-up to COP28: Public-private partnerships may help protect developing countries from the financial consequences of climate change.

  • AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought

    Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.

  • Smart Microgrids Can Restore Power More Efficiently and Reliably in an Outage

    It’s a story that’s become all too familiar — high winds knock out a power line, and a community can go without power for hours to days, an inconvenience at best and a dangerous situation at worst. Engineers developed an AI model that optimizes the use of renewables and other energy sources to restore power when a main utility fails.