• Global cybersecurity experts gather at Israel’s Cyber Week

    The magnitude of Israel’s cybersecurity industry was on full display this week at the 9th Annual Cyber Week Conference at Tel Aviv University. The largest cyber tech conference outside the United States, Cyber Week drew 8,000 attendees from 80 countries, who heard from more than 400 speakers across more than 50 panels and sessions.

  • Any single hair from the human body can be used for identification

    Any single hair from anywhere on the human body can be used to identify a person. This conclusion is one of the key findings from a nearly year-long study by a team of researchers. The study could provide an important new avenue of evidence for law enforcement authorities in sexual assault cases.

  • Confirmed: Global warming attributable to human activity, external factors

    Researchers have confirmed that human activity and other external factors are responsible for the rise in global temperature. While this has long been the consensus of the scientific community, uncertainty remained about how much natural ocean cycles might be influencing global warming over the course of multiple decades. The answer researchers can now give: very little to none.

  • Rectifying a wrong nuclear fuel decision

    In the old days, new members of Congress knew they had much to learn. They would defer to veteran lawmakers before sponsoring legislation. But in the Twitter era, the newly elected are instant experts. That is how Washington on 12 June witnessed the remarkable phenomenon of freshman Rep. Elaine Luria (D-Norfolk) successfully spearheading an amendment that may help Islamist radicals get nuclear weapons. The issue is whether the U.S. Navy should explore modifying the reactor fuel in its nuclear-powered vessels — as France already has done — to reduce the risk of nuclear material falling into the hands of terrorists such as al-Qaida or rogue states such as Iran. Luria says no. Alan J. Kuperman writes in the Pilot Online that more seasoned legislators have started to rectify the situation by passing a spending bill on 19 June that includes funding for naval fuel research. They will have the chance to fully reverse Luria in July on the House floor by restoring the authorization. Doing so would not only promote U.S. national security but also teach an important lesson: enthusiasm is no substitute for experience.

  • Deepfake detection algorithms will never be enough

    You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone! James Vincent writes in The Verge, however, that experts say software that can spot AI-manipulated videos will only ever provide a partial fix to this problem. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape. And although it’s arguable whether or not deepfakes are a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.
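
    One reason detection alone falls short is simple base-rate arithmetic: when a detector screens content at platform scale, even a small error rate translates into a flood of mistakes. The back-of-the-envelope Python calculation below uses assumed volumes and error rates purely for illustration; apart from the roughly 90 percent accuracy figure, none of the numbers come from the research.

      # Illustrative arithmetic with assumed numbers, not figures from the article:
      # even a detector with 90% sensitivity and 90% specificity makes a large
      # absolute number of mistakes when screening uploads at platform scale.
      uploads_per_day = 10_000_000   # assumed number of videos screened daily
      deepfake_rate = 0.001          # assumed fraction of uploads that are actually fake
      sensitivity = 0.90             # true-positive rate of the detector
      specificity = 0.90             # true-negative rate (i.e. 10% false-positive rate)

      actual_fakes = uploads_per_day * deepfake_rate
      actual_real = uploads_per_day - actual_fakes

      missed_fakes = actual_fakes * (1 - sensitivity)   # deepfakes that slip through
      false_alarms = actual_real * (1 - specificity)    # genuine videos flagged as fake
      flagged_total = actual_fakes * sensitivity + false_alarms

      print(f"Deepfakes missed per day:       {missed_fakes:,.0f}")
      print(f"Genuine videos flagged per day: {false_alarms:,.0f}")
      print(f"Share of flagged items that are actually fake: "
            f"{actual_fakes * sensitivity / flagged_total:.1%}")

    With these assumed numbers, roughly a thousand fakes still slip through each day while about a million genuine videos are flagged, and under one percent of flagged items are actually fake, which is why experts treat detection as only one layer of a response.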

  • The history of cellular network security doesn’t bode well for 5G

    There’s been quite a bit of media hype about the improvements 5G is set to supposedly bring to users, many of which are no more than telecom talking points. One aspect of the conversation that’s especially important to get right is whether or not 5G will bring much-needed security fixes to cell networks. Unfortunately, we will still need to be concerned about these issues—and more—in 5G.

  • Deepfakes: Forensic techniques to identify tampered videos

    Computer scientists have developed a method that identifies deepfakes with 96 percent accuracy when evaluated on a large-scale deepfake dataset.
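
    The report does not describe the architecture behind that result, so the Python sketch below (using PyTorch and torchvision as assumed dependencies) is not the researchers’ method; it only illustrates a common baseline pattern for video deepfake detection: a frame-level image classifier whose per-frame “fake” scores are averaged over a clip. The model, weights, and dummy input are placeholders.

      # Minimal illustrative sketch, not the researchers' actual method: a frame-level
      # CNN classifier whose per-frame real/fake scores are averaged over a video clip.
      import torch
      import torch.nn as nn
      from torchvision import models

      class FrameDeepfakeClassifier(nn.Module):
          def __init__(self):
              super().__init__()
              # Reuse a standard image backbone; swap its final layer for a 2-class head.
              self.backbone = models.resnet18(weights=None)
              self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

          def forward(self, frames):              # frames: (num_frames, 3, H, W)
              logits = self.backbone(frames)      # per-frame real/fake logits
              fake_probs = logits.softmax(dim=1)[:, 1]
              return fake_probs.mean()            # clip-level "fake" probability

      model = FrameDeepfakeClassifier()
      clip = torch.randn(16, 3, 224, 224)         # 16 dummy frames standing in for a video
      with torch.no_grad():
          fake_score = model(clip)
      print(f"Clip-level fake probability: {fake_score:.2f}")

    A real system would train such a model on labeled genuine and manipulated frames, and published detectors typically add temporal modeling as well, since inconsistencies between frames are often what gives a deepfake away.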

  • AI helps protect emergency personnel in hazardous environments

    Whether it’s rescue and firefighting operations or deep-sea inspections, mobile robots that use artificial intelligence (AI) to find their way through unfamiliar environments can effectively support people carrying out work in hazardous conditions.
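
    The article does not detail the navigation methods involved. As a rough, hypothetical illustration of one small piece of the problem, the Python sketch below plans a collision-free route across a partially mapped occupancy grid using breadth-first search; real systems layer mapping, localization, and learned perception on top of planning like this, and every name and value here is invented.

      # Hypothetical illustration (grid and names invented, not from the article):
      # plan a collision-free path through an occupancy grid with breadth-first search.
      from collections import deque

      def plan_path(grid, start, goal):
          """Return a list of grid cells from start to goal, avoiding cells marked 1."""
          rows, cols = len(grid), len(grid[0])
          frontier = deque([start])
          came_from = {start: None}
          while frontier:
              cell = frontier.popleft()
              if cell == goal:                      # goal reached: walk the path backwards
                  path = []
                  while cell is not None:
                      path.append(cell)
                      cell = came_from[cell]
                  return path[::-1]
              r, c = cell
              for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                  nr, nc = nxt
                  if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                          and nxt not in came_from:
                      came_from[nxt] = cell
                      frontier.append(nxt)
          return None                               # no route around the obstacles

      occupancy = [            # 0 = free space, 1 = obstacle (e.g. rubble or debris)
          [0, 0, 0, 1, 0],
          [1, 1, 0, 1, 0],
          [0, 0, 0, 0, 0],
      ]
      print(plan_path(occupancy, start=(0, 0), goal=(2, 4)))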

  • Geoengineer the planet? More scientists now say it must be an option

    Once seen as spooky sci-fi, geoengineering to halt runaway climate change is now being looked at with growing urgency. A spate of dire scientific warnings that the world community can no longer delay major cuts in carbon emissions, coupled with a recent surge in atmospheric concentrations of CO2, has left a growing number of scientists saying that it’s time to give the controversial technologies a serious look. Fred Pearce writes in Yale Environment 360 that among the technologies being considered are a range of efforts to restrict solar radiation from reaching the lower atmosphere, including spraying aerosols of sulphate particles into the stratosphere, and refreezing rapidly warming parts of the polar regions by deploying tall ships to pump salt particles from the ocean into polar clouds to make them brighter.

  • Truth prevails: Sandy Hook father’s victory over conspiracy theory crackpots

    Noah Pozner, then six years old, was the youngest of the 20 children and six staff members killed at Sandy Hook Elementary School in Connecticut. Last week, his father, Lenny Pozner, won an important court victory against conspiracy theorists who claimed the massacre had been staged by the Obama administration to promote gun control measures. The crackpots who wrote a book advancing this preposterous theory also claimed that Pozner had faked his son’s death certificate as part of this plot.

  • Identifying a fake picture online is harder than you might think

    Research has shown that manipulated images can distort viewers’ memory and even influence their decision-making. So the harm that can be done by fake images is real and significant. Our findings suggest that to reduce the potential harm of fake images, the most effective strategy is to offer more people experiences with online media and digital image editing – including by investing in education. Then they’ll know more about how to evaluate online images and be less likely to fall for a fake.

  • International community unprepared to deal with catastrophic biological event

    The risks of a global catastrophic biological event are growing, intensified by an increasingly interconnected world, terrorist and state interest in weapons of mass destruction, global political instability, and rapid advances in biotechnology. International leaders and organizations today are unprepared to react with the kind of effective, coordinated response needed to investigate and identify the pathogen, prevent the spread of disease, and, most importantly, save lives.

  • How climate change impacts the economy

    Warmer temperatures, sea level rise and extreme weather will be deleterious to the U.S. economy: Rising temperatures damage property and critical infrastructure, impact human health and productivity, and negatively affect sectors such as agriculture, forestry, fisheries, and tourism. The demand for energy will increase as power generation becomes less reliable, and water supplies will be stressed. Damage to other countries around the globe will also affect U.S. business through disruption in trade and supply chains.

  • Conspiracy theories and the people who believe in them: Book review

    In Conspiracy Theories and the People Who Believe in Them, Joseph Uscinski presents an edited collection that brings together contributors to offer a wide-ranging take on conspiracy theories, examining them as historical phenomena, psychological quirks, expressions of power relations, and political instruments. While this is an interesting and expansive volume, it overlooks the conundrum posed by conspiracy theories that succeed in capturing the epistemological authorities.

  • “Vaccinating” algorithms against attacks on machine learning

    Algorithms “learn” from the data they are trained on to build a machine learning model that can carry out a given task, such as making predictions or classifying images and emails, without needing task-specific instructions. Researchers have developed a world-first set of techniques to effectively “vaccinate” algorithms against adversarial attacks, a significant advancement in machine learning research.
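
    The summary does not spell out how the “vaccination” works. A common way to harden a model against adversarial examples, and a plausible analogue, is adversarial training: deliberately perturbing training inputs and teaching the model to classify them correctly anyway. The Python sketch below shows that idea with a fast-gradient-sign perturbation on a toy model and random data; it illustrates the general technique, not the researchers’ specific method.

      # Illustrative sketch of adversarial training (a common hardening technique),
      # not the specific approach reported by the researchers. Toy model, random data.
      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
      loss_fn = nn.CrossEntropyLoss()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      epsilon = 0.1                                # perturbation strength (assumed value)

      def fgsm_perturb(x, y):
          """Craft a fast-gradient-sign perturbation of the inputs."""
          x = x.clone().detach().requires_grad_(True)
          loss_fn(model(x), y).backward()
          return (x + epsilon * x.grad.sign()).detach()

      for step in range(100):                      # toy training loop
          x = torch.randn(32, 20)                  # stand-in features
          y = torch.randint(0, 2, (32,))           # stand-in labels
          x_adv = fgsm_perturb(x, y)               # "vaccinate": also train on perturbed inputs
          optimizer.zero_grad()
          loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
          loss.backward()
          optimizer.step()

    The key design choice is that the loss combines clean and perturbed batches, so the model keeps its accuracy on ordinary inputs while becoming less sensitive to small, adversarially chosen changes.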