• Russian Twitter propaganda predicted 2016 U.S. election polls

    There is one irrefutable, unequivocal conclusion on which both the U.S. intelligence community and Robert Mueller’s thorough investigation agree: Russia unleashed an extensive campaign of fake news and disinformation on social media with the aim of distorting U.S. public opinion, sowing discord, and swinging the election in favor of the Republican candidate Donald Trump. But was the Kremlin successful in its effort to put Trump in the White House? Statistical analysis of the Kremlin’s social media trolls on Twitter in the run-up to the 2016 election suggests that the answer is “yes.”
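
    The headline’s claim, that troll activity predicted subsequent polls, is a time-series relationship: spikes in troll-account output precede movements in Trump’s polling numbers. As a rough, purely illustrative sketch of what such a lagged-correlation analysis can look like, the Python snippet below uses entirely synthetic data and hypothetical variable names (troll_volume, trump_poll_share); it does not reproduce the researchers’ actual method or data.

      # Illustrative sketch only: synthetic data, hypothetical variable names.
      import numpy as np

      rng = np.random.default_rng(0)
      days = 120                                     # hypothetical daily observations before the election

      troll_volume = rng.poisson(lam=500, size=days).astype(float)   # synthetic daily troll-tweet counts
      # Synthetic poll series that loosely tracks troll activity from one week earlier
      # (wrap-around at the start of the series is ignored for this toy example).
      trump_poll_share = 42 + 0.01 * np.roll(troll_volume, 7) + rng.normal(scale=0.3, size=days)

      def lagged_correlation(x, y, lag):
          # Pearson correlation between x and y, with y shifted `lag` days into the future.
          if lag > 0:
              x, y = x[:-lag], y[lag:]
          return np.corrcoef(x, y)[0, 1]

      # If troll activity leads the polls, the correlation should peak at a positive lag (here, 7 days).
      for lag in range(15):
          print(f"lag={lag:2d} days  r={lagged_correlation(troll_volume, trump_poll_share, lag):+.2f}")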

  • Personalized medicine software vulnerability uncovered

    A weakness in a widely used open-source software package for genomic analysis left DNA-based medical diagnostics vulnerable to cyberattacks. Researchers at Sandia National Laboratories identified the weakness and notified the software’s developers, who issued a patch to fix the problem.

  • How Content Removal Might Help Terrorists

    In recent years, counterterrorism policy has focused on making social media platforms hostile environments for terrorists and their sympathizers. From the German NetzDG law to the U.K.’s Online Harms White Paper, governments are making it clear that such content will not be tolerated. Platforms—and maybe even specific individuals—will be held accountable using a variety of carrot-and-stick approaches. Joe Whittaker writes in Lawfare that most social media platforms are complying, even if they are sometimes criticized for not being proactive enough. On its face, removal of terrorist content is an obvious policy goal—there is no place for videos of the Christchurch attack or those depicting beheadings. However, stopping online terrorist content is not the same as stopping terrorism. In fact, the two goals may be at odds.

  • Second Florida city pays ransom to hackers

    A second small city in Florida has agreed to pay hundreds of thousands of dollars in ransom to cybercriminals who disabled its computer system. Days after ransomware crippled the city of about 12,000 residents, officials of Lake City agreed this week to meet the hackers’ ransom demand: 42 Bitcoin, or about $460,000.

  • U.S. House passes election security bill after Russian hacking

    The U.S. House of Representatives, mostly along partisan lines, has passed legislation designed to enhance election security following outrage over Russian cyberinterference in the 2016 presidential election. The Democratic-sponsored bill would mandate paper-ballot voting and postelection audits, as well as replace outdated and vulnerable voting equipment. The House bill faces strong opposition in the Republican-controlled Senate.

  • Global cybersecurity experts gather at Israel’s Cyber Week

    The magnitude of Israel’s cybersecurity industry was on full display this week at the 9th Annual Cyber Week Conference at Tel Aviv University. The largest conference on cyber tech outside of the United States, Cyber Week saw 8,000 attendees from 80 countries hear from more than 400 speakers on more than 50 panels and sessions.

  • We must prepare for the next pandemic

    When the next pandemic strikes, it will likely be accompanied by a deluge of rumors, misinformation and flat-out lies that will appear on the internet. Bruce Schneier writes that “Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in twenty-five years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.”

  • Deepfake detection algorithms will never be enough

    You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone! James Vincent writes in The Verge that, according to experts, software that can spot AI-manipulated videos will only ever provide a partial fix to this problem. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape. And although it’s arguable whether or not deepfakes are a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.

  • Monitoring Russia’s and China’s disinformation campaigns in Latin America and the Caribbean

    Propaganda has taken on a different form. Social media and multiple sources of information have obviated the traditional heavy-handed tactics of misinformation. Today, governments and state media exploit multiple platforms to shade the truth or report untruths that exploit pre-existing divisions and prejudices to advance their political and geostrategic agendas. Global Americans monitors four state news sources that have quickly gained influence in the region—Russia Today and Sputnik from Russia, and Xinhua and People’s Daily from China—to understand how they portray events for readers in Latin America and the Caribbean. Global Americans says it will flag articles that clearly intend to advance a partial view, an agenda, or an out-and-out mistruth, labeling them either False or Misleading, explaining why its team has made that determination, and including, where relevant, a reference that disproves the article’s content.

  • The history of cellular network security doesn’t bode well for 5G

    There’s been quite a bit of media hype about the improvements 5G is supposedly set to bring to users, many of which are no more than telecom talking points. One aspect of the conversation that’s especially important to get right is whether 5G will bring much-needed security fixes to cell networks. Unfortunately, we will still need to be concerned about these issues—and more—in 5G.

  • Deepfakes: Forensic techniques to identify tampered videos

    Computer scientists have developed a method that identifies deepfakes with 96 percent accuracy when evaluated on a large-scale deepfake dataset.
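
    The 96 percent figure is a standard evaluation metric: the share of videos in a labeled test set, real and fake alike, that the detector classifies correctly. As a purely illustrative sketch of how such a figure is computed, the Python snippet below uses synthetic detector scores and a hypothetical variable name (fake_score); it does not implement the researchers’ actual detection method.

      # Illustrative sketch only: synthetic scores, not the published detector.
      import numpy as np

      rng = np.random.default_rng(1)
      n_real, n_fake = 500, 500
      labels = np.concatenate([np.zeros(n_real), np.ones(n_fake)])    # 0 = real video, 1 = deepfake

      # Hypothetical per-video scores from some detector; higher means "more likely fake".
      fake_score = np.concatenate([
          rng.normal(0.2, 0.17, n_real),    # real videos tend to score low
          rng.normal(0.8, 0.17, n_fake),    # deepfakes tend to score high
      ])

      predictions = (fake_score >= 0.5).astype(int)       # threshold the score at 0.5
      accuracy = (predictions == labels).mean()           # fraction of correct calls
      print(f"accuracy on the evaluation set: {accuracy:.1%}")   # roughly 96% with these synthetic scores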

  • Russian trolls are coming for 2020, smarter than ever, Clemson researchers warn

    Many Americans think they know what a Russian troll looks like. After the 2016 election, voters are more aware of bad actors on social media who might be trying to influence their opinion and their vote on behalf of a foreign government. Bristow Marchant writes in The State that Clemson University professors Darren Linvill and Patrick Warren warn, however, that this picture may not be accurate. “People I know — smart, educated people — send me something all the time and say ‘Is this a Russian? Is this foreign disinformation?’” said Linvill, a communications professor at the Upstate university. “And it’s just someone saying something they disagree with. It’s just someone being racist. That’s not what disinformation looks like.”

  • Top takes: Suspected Russian intelligence operation

    A Russian-based information operation used fake accounts, forged documents, and dozens of online platforms to spread stories that attacked Western interests and unity. Its size and complexity indicated that it was conducted by a persistent, sophisticated, and well-resourced actor, possibly an intelligence operation. Operators worked across platforms to spread lies and impersonate political figures, and the operation shows online platforms’ ongoing vulnerability to disinformation campaigns.

  • Truth prevails: Sandy Hook father’s victory over conspiracy theory crackpots

    Noah Pozner, then six years old, was the youngest of the twenty children and six staff members killed at Sandy Hook Elementary School in Connecticut. Last week, his father, Lenny Pozner, won an important court victory against conspiracy theorists who claimed the massacre had been staged by the Obama administration to promote gun control measures. The crackpots who wrote a book advancing this preposterous theory also claimed that Pozner had faked his son’s death certificate as part of this plot.

  • Identifying a fake picture online is harder than you might think

    Research has shown that manipulated images can distort viewers’ memory and even influence their decision-making. So the harm that can be done by fake images is real and significant. Our findings suggest that to reduce the potential harm of fake images, the most effective strategy is to offer more people experiences with online media and digital image editing – including by investing in education. Then they’ll know more about how to evaluate online images and be less likely to fall for a fake.