  • U.S. House passes election security bill after Russian hacking

    The U.S. House of Representatives, voting mostly along partisan lines, has passed legislation designed to enhance election security following outrage over Russian cyberinterference in the 2016 presidential election. The Democratic-sponsored bill would mandate paper ballot voting and postelection audits, as well as replace outdated and vulnerable voting equipment. The House bill faces strong opposition in the Republican-controlled Senate.

  • Global cybersecurity experts gather at Israel’s Cyber Week

    The magnitude of Israel’s cybersecurity industry was on full display this week at the 9th Annual Cyber Week Conference at Tel Aviv University. The largest conference on cyber tech outside of the United States, Cyber Week saw 8,000 attendees from 80 countries hear from more than 400 speakers on more than 50 panels and sessions.

  • We must prepare for the next pandemic

    When the next pandemic strikes, it will likely be accompanied by a deluge of rumors, misinformation and flat-out lies that will appear on the internet. Bruce Schneier writes that “Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in twenty-five years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.”

  • Deepfake detection algorithms will never be enough

    You may have seen news stories last week about researchers developing tools that can detect deepfakes with greater than 90 percent accuracy. It’s comforting to think that with research like this, the harm caused by AI-generated fakes will be limited. Simply run your content through a deepfake detector and bang, the misinformation is gone! James Vincent writes in The Verge, however, that according to experts, software that can spot AI-manipulated videos will only ever provide a partial fix to this problem. As with computer viruses or biological weapons, the threat from deepfakes is now a permanent feature on the landscape. And although it’s arguable whether or not deepfakes are a huge danger from a political perspective, they’re certainly damaging the lives of women here and now through the spread of fake nudes and pornography.

  • Monitoring Russia’s and China’s disinformation campaigns in Latin America and the Caribbean

    Propaganda has taken on a different form. Social media and multiple sources of information have obviated the traditional heavy-handed tactics of misinformation. Today, governments and state media exploit multiple platforms to shade the truth or report untruths that exploit pre-existing divisions and prejudices to advance their political and geostrategic agendas. Global Americans monitors four state news sources that have quickly gained influence in the region—Russia Today and Sputnik from Russia, and Xinhua and People’s Daily from China—to understand how they portray events for readers in Latin America and the Caribbean. Global Americans says it will feature articles that clearly intend to advance a partial view, an agenda, or an out-and-out mistruth, labeling them either False or Misleading, explaining why the Global Americans team has determined them so, and including a reference, if relevant, that disproves the article’s content.

  • The history of cellular network security doesn’t bode well for 5G

    There’s been quite a bit of media hype about the improvements 5G is supposedly set to bring to users, much of it no more than telecom talking points. One aspect of the conversation that’s especially important to get right is whether or not 5G will bring much-needed security fixes to cell networks. Unfortunately, we will still need to be concerned about these issues—and more—in 5G.

  • Deepfakes: Forensic techniques to identify tampered videos

    Computer scientists have developed a method that identifies deepfakes with 96 percent accuracy when evaluated on a large-scale deepfake dataset.

  • Russian trolls are coming for 2020, smarter than ever, Clemson researchers warn

    Many Americans think they know what a Russian troll looks like. After the 2016 election, voters are more aware of bad actors on social media who might be trying to influence their opinion and their vote on behalf of a foreign government. Bristow Marchant writes in The State that Clemson University professors Darren Linvill and Patrick Warren warn, however, that this picture may not be accurate. “People I know — smart, educated people — send me something all the time and say ‘Is this a Russian? Is this foreign disinformation?’” said Linvill, a communications professor at the Upstate university. “And it’s just someone saying something they disagree with. It’s just someone being racist. That’s not what disinformation looks like.”

  • Top takes: Suspected Russian intelligence operation

    A Russian-based information operation used fake accounts, forged documents, and dozens of online platforms to spread stories that attacked Western interests and unity. Its size and complexity indicated that it was conducted by a persistent, sophisticated, and well-resourced actor, possibly an intelligence operation. Operators worked across platforms to spread lies and impersonate political figures, and the operation shows online platforms’ ongoing vulnerability to disinformation campaigns.

  • Truth prevails: Sandy Hook father’s victory over conspiracy theory crackpots

    Noah Pozner, then six years old, was the youngest of the twenty children and six staff members killed at Sandy Hook Elementary School in Connecticut. Last week, his father, Lenny Pozner, won an important court victory against conspiracy theorists who claimed the massacre had been staged by the Obama administration to promote gun control measures. The crackpots who wrote a book advancing this preposterous theory also claimed that Pozner had faked his son’s death certificate as part of this plot.

  • Identifying a fake picture online is harder than you might think

    Research has shown that manipulated images can distort viewers’ memory and even influence their decision-making. So the harm that can be done by fake images is real and significant. Our findings suggest that to reduce the potential harm of fake images, the most effective strategy is to offer more people experiences with online media and digital image editing – including by investing in education. Then they’ll know more about how to evaluate online images and be less likely to fall for a fake.

  • National emergency alerts potentially vulnerable to spoofing

    On 3 October 2018, cell phones across the United States received a text message labeled “Presidential Alert.” It was the first trial run for a new national alert system, developed by several U.S. government agencies as a way to warn as many people across the United States as possible if a disaster were imminent. Now, a new study raises a red flag around these alerts—namely, that such emergency alerts authorized by the President of the United States can, theoretically, be spoofed.

  • The Budapest Convention offers an opportunity for modernizing crimes in cyberspace

    Governments worldwide are in the process of updating the Budapest Convention, also known as the Convention on Cybercrime, which serves as the only major international treaty focused on cybercrime. This negotiation of an additional protocol to the convention provides lawmakers an opportunity the information security community has long been waiting for: modernizing how crimes are defined in cyberspace. Specifically, the Computer Fraud and Abuse Act (CFAA), codified at 18 U.S.C. § 1030, dictates what constitutes illegal acts in cyberspace in the United States. Andrew Burt and Dan Geer write in Lawfare that without changing the CFAA—and other cybercrime laws like it—we’re collectively headed for trouble.

  • What a U.S. operation in Russia shows about the limits of coercion in cyber space

    The New York Times recently reported that the United States planted computer code in the Russian energy grid last year. The operation was part of a broader campaign to signal to Moscow the risks of interfering in the 2018 midterm elections as it did in 2016. According to unnamed officials, the effort to hold Russian power plants at risk accompanied disruption operations targeting the Internet Research Agency, the “troll farm” behind some of the 2016 election disinformation efforts. The operations made use of new authorities U.S. Cyber Command received to support its persistent engagement strategy, a concept for using preemptive actions to compel adversaries and, over time, establish new norms in cyberspace. Benjamin Jensen writes in War on the Rocks that the character of cyber competition appears to be shifting from political warfare waged in the shadows to active military disruption campaigns. Yet the recently disclosed Russia case raises questions about the logic of cyber strategy. Will escalatory actions such as targeting adversaries’ critical infrastructure actually achieve the desired strategic effect?

  • New U.S. visa rules may push foreigners to censor their social-media posts

    Foreigners who decry American imperialism while seeking to relax on Miami’s sandy beaches or play poker at Las Vegas’s casinos may seek to soften their tone on Twitter. The reason? The U.S. State Department is now demanding visa applicants provide their social-media profiles on nearly two dozen platforms, including Facebook and Twitter.