  • DHS Chief Orders Probe of Agents' Offensive Facebook Posts

    The DHS secretary on Wednesday ordered an immediate investigation into a report that current and former U.S. Border Patrol agents are part of a Facebook group that posts racist, sexist and violent comments about migrants and Latina lawmakers.

  • The Russian Submarine that Caught Fire and Killed 14 May Have Been Designed to Cut Undersea Internet Cables

    A Russian navy submarine caught fire on Monday, killing 14 sailors on board. Two independent Russian news outlets reported that the submarine was the AS-12 “Losharik,” a nuclear-powered vessel that U.S. officials have said is designed to cut undersea cables that keep the world’s internet running. Alexandra Ma and Ryan Pickrell write in Business Insider that Moscow officials have remained secretive about the type of vessel and whether it was nuclear-powered, prompting accusations of a cover-up. President Vladimir Putin canceled a scheduled event on Tuesday and told his defense minister to “personally receive reports” on the investigation into the accident, Radio Free Europe/Radio Liberty reported.

  • Bipartisan, Bicameral Legislation to Tackle Rising Threat of Deepfakes

    A new bipartisan bill would require the DHS secretary to publish an annual report on the state of digital content forgery. “Deepfakes pose a serious threat to our national security, homeland security, and the integrity of our elections,” said Rep. Derek Kilmer (D-Washington), one of the bill’s sponsors.

  • Before Connecting an IoT Device, Heed Cybersecurity Advice

    Seemingly every appliance we use comes in a version that can be connected to a computer network. But each gizmo we add brings another risk to our security and privacy. So before linking your office’s new printer or coffee maker to the internet of things (IoT), have a look at an informational report from NIST outlining these risks and some considerations for mitigating them.

  • No, Russian Twitter Trolls Did Not Demonstrably Push Trump’s Poll Numbers Higher

    We should note at the outset that it’s clear that Russia’s interference in the election had a tangible effect. The information stolen from the Democratic National Committee and Hillary Clinton’s campaign chairman that was later released by WikiLeaks was a staple of media coverage around the conventions in July 2016 and during the last month of the campaign. While measuring the effect of that leaked information is tricky, it’s clear that it had influence. The Russian social media push, though? Philip Bump writes in the Washington Post: “[A]s I’ve written before, there’s very little evidence that Russia effectively targeted American voters with messages that powered Trump’s victory.” He adds: “We certainly can’t definitively say that no votes were changed as a result of Russian disinformation on Twitter or that no one’s political views were influenced by it. We can say, though, that [a recent University of Tennessee] study is worth a great deal of skepticism — especially among those who are looking for evidence that Russia’s trolling handed the election to Trump.”

  • Defending democracy from cyberwarfare

    Foreign meddling in democratic elections, the proliferation of fake news and threats to national security through the “weaponization of social media” will be tackled by a new research center launched last week at Australia’s Flinders University.

  • Russian Twitter propaganda predicted 2016 U.S. election polls

    There is one irrefutable, unequivocal conclusion that both the U.S. intelligence community and the thorough investigation by Robert Mueller share: Russia unleashed an extensive campaign of fake news and disinformation on social media with the aim of distorting U.S. public opinion, sowing discord, and swinging the election in favor of the Republican candidate Donald Trump. But was the Kremlin successful in its effort to put Trump in the White House? Statistical analysis of the Kremlin’s social media trolls on Twitter in the run-up to the 2016 election suggests that the answer is “yes.”

  • Personalized medicine software vulnerability uncovered

    A weakness in a commonly used open-source program for genomic analysis left DNA-based medical diagnostics vulnerable to cyberattacks. Researchers at Sandia National Laboratories identified the weakness and notified the software developers, who issued a patch to fix the problem.

  • How Content Removal Might Help Terrorists

    In recent years, counterterrorism policy has focused on making social media platforms hostile environments for terrorists and their sympathizers. From the German NetzDG law to the U.K.’s Online Harms White Paper, governments are making it clear that such content will not be tolerated. Platforms—and maybe even specific individuals—will be held accountable using a variety of carrot-and-stick approaches. Joe Whittaker writes in Lawfare that most social media platforms are complying, even if they are sometimes criticized for not being proactive enough. On its face, removal of terrorist content is an obvious policy goal—there is no place for videos of the Christchurch attack or those depicting beheadings. However, stopping online terrorist content is not the same as stopping terrorism. In fact, the two goals may be at odds.

  • Second Florida city pays ransom to hackers

    A second small city in Florida has agreed to pay hundreds of thousands of dollars in ransom to cybercriminals who disabled its computer system. Days after ransomware crippled the city of about 12,000 residents, officials of Lake City agreed this week to meet the hackers’ ransom demand: 42 Bitcoin, or about $460,000.

  • U.S. House passes election security bill after Russian hacking

    The U.S. House of Representatives, mostly along partisan lines, has passed legislation designed to enhance election security following outrage over Russian cyberinterference in the 2016 presidential election. The Democratic-sponsored bill would mandate paper-ballot voting and postelection audits, as well as replace outdated and vulnerable voting equipment. The House bill faces strong opposition in the Republican-controlled Senate.

  • Global cybersecurity experts gather at Israel’s Cyber Week

    The magnitude of Israel’s cybersecurity industry was on full show this week at the 9th Annual Cyber Week Conference at Tel Aviv University. The largest conference on cyber tech outside of the United States, Cyber Week saw 8,000 attendees from 80 countries hear from more than 400 speakers on more than 50 panels and sessions.

  • We must prepare for the next pandemic

    When the next pandemic strikes, it will likely be accompanied by a deluge of rumors, misinformation and flat-out lies that will appear on the internet. Bruce Schneier writes that “Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in twenty-five years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.”

  • Deepfake detection algorithms will never be enough

    You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone! James Vincent writes in The Verge that software that can spot AI-manipulated videos, however, will only ever provide a partial fix to this problem, say experts. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape. And although it’s arguable whether or not deepfakes are a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.

  • Monitoring Russia’s and China’s disinformation campaigns in Latin America and the Caribbean

    Propaganda has taken on a different form. Social media and multiple sources of information have obviated the traditional heavy-handed tactics of misinformation. Today, governments and state media exploit multiple platforms to shade the truth or report untruths that exploit pre-existing divisions and prejudices to advance their political and geo-strategic agendas. Global Americans monitors four state news sources that have quickly gained influence in the region—Russia Today and Sputnik from Russia, and Xinhua and People’s Daily from China—to understand how they portray events for readers in Latin America and the Caribbean. Global Americans says it will feature articles that clearly intend to advance a partial view, an agenda, or an out-and-out mistruth, labeling them either False or Misleading, explaining why the Global Americans team has determined them so, and including, where relevant, a reference that disproves the article’s content.